package (string, 1-122 characters)
package-description (string, 0-1.3M characters)
adcm-version
ADCM version

This package is intended to compare versions of the ADCM product.

Installation

    pip install adcm-version

Usage

compare_adcm_versions(version_a, version_b) - Compare two ADCM version strings; returns 1 (if a is newer), 0 (if the versions are equal), or -1 (if b is newer).

    >>> from adcm_version import compare_adcm_versions
    >>> compare_adcm_versions("2021.11.22.15", "2023.11.28.07")
    -1

compare_prototype_versions(version_a, version_b) - Compare two prototype version strings for ADCM objects; returns 1 (if a is newer), 0 (if the versions are equal), or -1 (if b is newer).

    >>> from adcm_version import compare_prototype_versions
    >>> compare_prototype_versions("2.1.10_b1", "2.1.6_b4")
    1

is_legacy(version) - Returns True if the ADCM version format is old (for example 2023.11.28.07), else False.

    >>> from adcm_version import is_legacy
    >>> is_legacy("2021.11.22.15")
    True
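A minimal combined-usage sketch, limited to the three documented functions; the version strings below are illustrative only:

```python
from adcm_version import compare_adcm_versions, is_legacy

installed = "2021.11.22.15"   # illustrative version strings, not real releases
latest = "2023.11.28.07"

if is_legacy(installed):
    print("Installed ADCM uses the legacy (calendar) version format")

# compare_adcm_versions returns -1 when the second argument is newer
if compare_adcm_versions(installed, latest) == -1:
    print("A newer ADCM version is available")
```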
adcobalt-test123
testtest
adcom
class standart():
    def wait()  # wait until ENTER is pressed

class Windows():
    def notify(title, message, app_name='Python app', app_icon='none')  # create a notification message
    def confirm(text)  # create a notification message with a selection (OK or CANCEL)
    def alert(text)  # create an alert message
    def question(text)  # create a question message

class cfg():
    def read(file_name)  # read a cfg file into a dict of the form {what} = {value}
    def write(file_name, what)  # write a cfg file from a dict of the form {what} = {value}

class table():
    def reservation(text, x, mode=3)  # reserve x symbols for the text with alignment (1 - LEFT, 2 - MIDDLE, 3 - RIGHT)
    def table(column1, column2, mode=1)  # create a table from the lists column1 and column2
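A usage sketch based only on the class listing above; the import path (adcom) and the treatment of the methods as class-level calls are assumptions, not verified against the package:

```python
# Hypothetical usage; names are taken from the listing above, the calling style is assumed.
from adcom import Windows, cfg, table

Windows.notify("Backup", "Backup finished", app_name="My app")   # desktop notification
settings = cfg.read("settings.cfg")              # read "{what} = {value}" pairs into a dict
cfg.write("settings.cfg", {"volume": "80"})      # write the dict back to the file
print(table.table(["a", "b"], ["1", "2"]))       # render a simple two-column table
```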
ad-components
Accelerated Discovery Reusable Components

The central implementation of Accelerated Discovery Reusable Components. It serves as a wrapper around client libraries we use locally, such as Dapr and MLflow.

1. Installation

All components will be available using

    pip install ad-components

CLI

Here's an example usage of the CLI:

    usage: adc [-h] [--verbose] [--version] {<component>} ...

    Accelerated Discovery reusable components.

    positional arguments:
      <component>    the component that you want to trigger.

    optional arguments:
      -h, --help     show this help message and exit.
      --version      show program's version number and exit.

2. Usage

2.0. In your pipeline

To use a component in your pipeline, you need to run it in a Step context:

    from ad.step import DaprStep
    from ad.storage import download, upload

    with DaprStep():
        resp = download(download_src, download_dest, binding_name=binding)
        print(f"download resp: {resp}")
        resp = upload(upload_src, upload_dest, binding_name=binding)
        print(f"upload resp: {resp}")

Running the components inside a step will make sure the client dependencies are handled correctly.

2.1. Storage

2.1.2. Python module

You can invoke the manager using native Python. Please note that the package must be present in your Python environment.

    from ad.storage import download, upload

    download_resp = download(
        src, dest,
        # binding_name="s3-state",  # Or any other binding
    )
    upload_resp = upload(
        src, dest,
        # binding_name="s3-state",  # Or any other binding
    )

2.1.3. CLI

    usage: adc storage [-h] --src PATH --dest PATH [--binding NAME] [--timeout SEC] {download,upload}

    positional arguments:
      {download,upload}        action to be performed on data.

    optional arguments:
      -h, --help               show this help message and exit

    action arguments:
      --src PATH, -r PATH      path of file to perform action on.
      --dest PATH, -d PATH     object's desired full path in the destination.
      --binding NAME, -b NAME  the name of the binding as defined in the components.

    dapr arguments:
      --timeout SEC, -t SEC    value in seconds we should wait for the sidecar to come up.

Note: You can replace adc with python ad/main.py ... if you don't have the package installed in your Python environment.

Examples

To download an object from S3 run:

    adc storage download \
        --src test.txt \
        --dest tmp/downloaded.txt

To upload an object to S3 run:

    adc storage upload \
        --src tmp/downloaded.txt \
        --dest local/uploaded.txt

3. Supported components

3.1. Storage

3.1.1. Supported operations

Below is a list of the operations you might intend to perform in your component.

Upload: uploads data from a file to an object in a bucket. Arguments: src (name of the file to upload), dest (object name in the bucket), binding (the name of the binding to perform the operation).

Download: downloads the data of an object to a file. Arguments: src (object name in the bucket), dest (name of the file to download to), binding (the name of the binding to perform the operation).

Dapr configuration: address (Dapr Runtime gRPC endpoint address), timeout (value in seconds we should wait for the sidecar to come up).

4. Publishing

Every change to the Python code requires a new version to be pushed to the PyPI registry. If you have the right (write) permissions and a correctly configured $HOME/.pypirc file, run the following command to publish the package:

    make

4.1. Increment the version

To increment the version, go to ad/storage/version.py and increment the version there. Both setup.py and the CLI will read the new version correctly.

4.2. Configure the PyPI registry

To be able to push the package to our private registry, you need to tell PyPI about it. This one-liner command will take care of it for you:

    cat << EOF > $HOME/.pypirc
    [distutils]
    index-servers =
        pypi

    [pypi]
    repository: https://upload.pypi.org/legacy/
    username: __token__
    password: $PYPI_TOKEN
    EOF

Note: The pip package will fetch the version from the ad/version.py file, so make sure to increment it before pushing.
adcorr
This package provides a set of pure Python functions for performing corrections on area detector data.

Install via PyPI with:

    pip install adcorr

Useful Links

    PyPI:           https://pypi.org/project/adcorr/
    Source code:    https://github.com/DiamondLightSource/adcorr
    Documentation:  https://DiamondLightSource.github.io/adcorr
    Releases:       https://github.com/DiamondLightSource/adcorr/releases

Brief Example

A brief example of performing corrections using the library is presented below:

    frames = load_my_frames()
    mask = load_my_mask()
    count_times = load_count_times()

    frames = mask_frames(frames, mask)
    frames = correct_deadtime(
        frames,
        count_times,
        DETECTOR_MINIMUM_PULSE_SEPARATION,
        DETECTOR_MINIMUM_ARRIVAL_SEPARATION,
    )
    frames = correct_dark_current(
        frames,
        count_times,
        BASE_DARK_CURRENT,
        TEMPORAL_DARK_CURRENT,
        FLUX_DEPENDANT_DARK_CURRENT,
    )
    ...

Library Compatibility

The original README lists library compatibility (tests and coverage badges) for numcertain and Pint. See https://DiamondLightSource.github.io/adcorr for more detailed documentation.
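The brief example above relies on user-supplied loaders. Below is a self-contained sketch with synthetic numpy data; the top-level import path, the detector constants, and the mask convention are assumptions, so check the adcorr documentation for the exact module layout and semantics:

```python
import numpy as np
# Assumed import path; the function names and argument order come from the example above.
from adcorr import mask_frames, correct_deadtime, correct_dark_current

frames = np.random.poisson(100, size=(10, 256, 256)).astype(float)  # 10 synthetic frames
mask = np.zeros((256, 256), dtype=bool)   # trivial mask; check the docs for the masked/unmasked convention
count_times = np.full(10, 0.1)            # 100 ms exposure per frame (illustrative)

frames = mask_frames(frames, mask)
frames = correct_deadtime(frames, count_times, 1e-9, 1e-9)          # illustrative detector constants
frames = correct_dark_current(frames, count_times, 0.0, 0.0, 0.0)   # illustrative dark-current terms
```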
adcpipeline
ADC PipelineThere are a lot of different steps involved in data science projects. For example: fixing data quality issues, feature engineering, parameter tuning and reporting your results. Although these steps are often approximately the same, the exact approach per step isn't. The data and the goal of the project determine the way you manipulate your data and what model you need. In turn, your model choice determines what kind of parameter tuning you need to do and how you are going to present your results. In other words, there are a lot of data-science-paths to walk. The more you program, the more you might get drowned in an ever increasing amount of if-statements, giving the feeling that you lose grip on the structure in your project. This package aims to solve that problem, by offering a more structured way of working.InstallationYou can install with pip:pip install adcpipelineBasic principlesTo structure your project, you need to follow three steps:Build your ownPipelineclass.Loading your (run) configuration.Running the pipeline.Below, each step will be explained.1. Build your ownPipelineclassFrom theadcpipelinepackage import thePipelineBaseclass:from adcpipeline import PipelineBaseAnd build your ownPipelineclass by inheriting from PipelineBase:class Pipeline(PipelineBase): passThis doesn't do anything yet, so let's add a few steps in ourPipeline. We do this by adding methods we want to execute when we run the Pipeline. The example below adds three methods to this specificPipeline:class Pipeline(PipelineBase): def print_text_from_argument(self, text='asfd'): print(text) def print_predefined_text(self): print('Predefined text') def n_times_squared(self, value: int, n: int): result = value for i in range(0, n): result = result ** 2 print(f'Squaring the number {value} for {n} times in a row gives = {result}')2. Loading your (run) configuration.When we want to instantiate thePipeline, we need to pass the data as an argument (df) and we need to pass our run configuration as an argument (method_settings):p = Pipeline(df=data, method_settings=method_settings)The variabledatacan be any data, as long as it is a Pandas DataFrame. Themethod_settingsvariable is a list containing dictionaries, which define (in order) how all the methods are going to be executed once ourPipelineruns. Each dictionary contains the method (name) that needs to be called. The values are a dictionary of arguments (names) with their corresponding argument value that is going to be passed to the method. An example will make things clear:method_settings = [ {'print_text_from_argument': {'text': 'This is the text passed to the method'}}, {'print_text_from_argument': {'text': 1}}, {'print_predefined_text': None}, {'n_times_squared': {'value': 2, 'n': 2}}, {'print_text_from_argument': {'text': 'Same method is called again, but later in the pipeline'}} ]Here we see that the methodprint_text_from_argumentis called two times with atextargument. Thistextargument is different each time. After that the other two methods are called and lastly,print_text_from_argumentis called one last time.Themethod_settingsas defined in the example above takes up a lot of lines and every time we make an additionalmethod_settings, we get more lines of code. It is therefore recommended to load themethod_settingsfrom a configuration file instead. 
You can define your pipeline settings in a.yamlfile and let the pipeline class load this file:p = Pipeline.from_yaml_file(df=data, path=f'{root_dir}/configs/<YOUR_METHOD_SETTINGS>.yaml')The.yamlfile would then look like this:pipeline: - print_text_from_argument: {text: 'This is the text passed to the method'} - print_text_from_argument: {text: 1} - print_predefined_text: - n_times_squared: {value: 2, n: 2} - print_text_from_argument: {text: 'Same method is called again, but later in the pipeline'}3. Running the pipeline.Withmethod_settingsdefined in step 2, we can now run ourPipeline:p.run()And that's it! By making multiplemethod_settingswe can define several ways to run ourPipeline, without altering any of our code or writing any if statement. For example, during exploratory data analysis, it might be nice to try different things without constantly changing our code. We could then do something along the following lines:p1 = Pipeline.from_yaml_file(df=data, path=f'{root_dir}/configs/<YOUR_METHOD_SETTINGS_1>.yaml') p2 = Pipeline.from_yaml_file(df=data, path=f'{root_dir}/configs/<YOUR_METHOD_SETTINGS_2>.yaml') p3 = Pipeline.from_yaml_file(df=data, path=f'{root_dir}/configs/<YOUR_METHOD_SETTINGS_3>.yaml') p1.run() p2.run() p3.run()Where eachYOUR_METHOD_SETTINGS_<N>.yamldefines themethod_settingsperPipeline. Alternatively the pipeline ships with arun_or_load()method, which can save and load the result of a pipeline from a .pkl file. This can be useful if you did not change the content of the pipeline, but need to rerun your script.method_settings = [ {'print_text_from_argument': {'text': 'This is the text passed to the method'}}, {'print_text_from_argument': {'text': 1}}, {'print_predefined_text': None}, {'n_times_squared': {'value': 2, 'n': 2}}, {'print_text_from_argument': {'text': 'Same method is called again, but later in the pipeline'}} ] p = Pipeline(df=data, method_settings=method_settings, filename='my_pipeline') p.run() # Executes the pipeline, saves the results in cache/my_pipeline.pkl # Some other code p.run_or_load() # Does not execute the pipeline but loads the content of cache/my_pipeline.pkl # Loads the result of the first function from a pkl file and executes the remaining 4 functions p.run_or_load(load_cache_from_step=1)Advanced usageThemethod_settingsdictionary is converted to actual methods with their corresponding arguments. These are saved as lambda's in the propertymethod_list, which are called in order by therunmethod. You can call the methods from this list directly if you want.ThePipelineBaseclass contains several magic methods, so that it can be used as a list. For instance,p[0]will return the first item in themethod_settingsproperty. For more info, see the magic methods in thePipelineBaseclass.If you have (mostly) the same data manipulations for eachPipeline, you can probably use just a single class as described above. However, if this class becomes extremly large and large portions of the code are evident to be only applicable to certain types of pipelines, you might consider multiple inheritance. For example, you might have completely different methods in yourPipelinefor classification models and regression models. So you might build aPipelineclass as above, but make two extra classes -PipelineClassificationandPipelineRegression- that inherit from yourPipelineclass. Another example is that you maybe have timeseries and non-timeseries data. 
Here, too, you might consider using multiple inheritance if that seems logical.Other codeThere is some other code in this repository used directly byPipelineBaseor that might be useful to you. To name a few: there is a DatabaseConnection class which is a small wrapper around sqlalchemy and there is a method to load loggers. This is not explicitly explained in the README, but can be used.
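Pulling the snippets above together, here is a minimal end-to-end sketch that assumes only what the README shows: PipelineBase, the df and method_settings constructor arguments, and run():

```python
import pandas as pd
from adcpipeline import PipelineBase


class Pipeline(PipelineBase):
    def print_text_from_argument(self, text='asfd'):
        print(text)

    def n_times_squared(self, value: int, n: int):
        result = value
        for _ in range(n):
            result = result ** 2
        print(f'Squaring the number {value} for {n} times in a row gives = {result}')


data = pd.DataFrame({'a': [1, 2, 3]})  # any DataFrame will do
method_settings = [
    {'print_text_from_argument': {'text': 'Hello from the pipeline'}},
    {'n_times_squared': {'value': 2, 'n': 2}},
]

p = Pipeline(df=data, method_settings=method_settings)
p.run()
```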
adcpreader
ADCPREADER - A python3 module for reading RDI's ADCP binary data files.

Change log

Version 0.2.1
- Changed name to adcpreader
- Updated documentation
- Documentation on readthedocs
- Glider-specific parts of the code have been removed (including unpublished python dependencies)

Version 0.1.0
- Initial release

Synopsis

This python module is primarily intended to read the binary data files created by RDI's 600 kHz Doppler Velocity Log mounted on Slocum ocean gliders. The module can, however, also be used to read binary data from other stand-alone ADCPs that adhere to RDI's binary data format.

The philosophy behind the implementation of adcpreader is that acoustic pings (ensembles) are processed according to a user-defined pipeline. Since binary data files can be huge, and the total amount of data of a deployment even larger, possible issues with limited memory are dealt with by pushing the data through the pipeline ensemble by ensemble, making extensive use of coroutines.

Installation

The python module adcpreader can be installed from source, using the standard method to install python code. Alternatively, adcpreader can also be installed from PyPI, using

    pip install adcpreader

Documentation

Comprehensive documentation is provided at https://adcpreader.readthedocs.io/en/latest/

Quick-start

For the impatient: the module adcpreader implements a class PD0(), which returns an object that serves as the source of the pipeline. Usually the end of the pipeline will be some sink that either writes the data into a file, or into an object that allows access to the data during an interactive python session.

In the simplest case we can construct a pipeline with a source and a sink only:

    >>> from adcpreader.rdi_reader import PD0
    >>> from adcpreader.rdi_writer import DataStructure
    >>> source = PD0()
    >>> sink = DataStructure()
    >>> pipeline = source | sink

In the code example above, we create a source operation and a sink operation, and construct a pipeline using the pipe symbol "|". Now we can push the data of the file sample.PD0 through the pipeline:

    >>> pipeline.process("sample.PD0")

which results in the sink containing the data of this file. You can use sink.keys() to list all variables that are accessible through this object. For example, the ensemble numbers can be accessed as:

    >>> sink.data['Ensnum']

or, more compactly:

    >>> sink.Ensnum

In this example we processed a single file. We could also provide a list of filenames as the argument to pipeline.process. However, we can use the pipeline only once. That is, this will fail:

    >>> pipeline.process("sample.PD0")
    >>> pipeline.process("another_sample.PD0")

This is because, under the hood, generators and coroutines are used. When the generator (source) is exhausted, the coroutines are closed and cannot be used anymore. Either all data files are processed when supplied as a list to pipeline.process(), or the pipeline is defined again.

A third way (not recommended) is to leave the coroutines open by supplying the optional keyword close_coroutines_at_exit=False. Then it is the user's responsibility to close the coroutines when the pipeline is invoked for the last time.

An extensive number of operations are defined that can be placed in the pipeline. Some are for information purposes only, but most will in some way modify the data. You could define an operator:

    >>> info = adcpreader.rdi_writer.Info(pause=True)

and create a new pipeline:

    >>> pipeline = source | info | sink

This will have no effect on the contents of sink, but it will display some information to the terminal (and pause before continuing). Other operations will affect the data: examples are corrections, rotations, coordinate transforms, and quality checks. See the documentation at https://adcpreader.readthedocs.io/en/latest/ for further information.
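A compact sketch that combines the pieces above and processes several files in one pass. Everything here (PD0, DataStructure, Info, the pipe operator, and passing a list to process()) comes from the quick-start; the file names are placeholders and the pause=False argument is an assumption:

```python
from adcpreader.rdi_reader import PD0
from adcpreader.rdi_writer import DataStructure, Info

source = PD0()
info = Info(pause=False)     # prints summary information as ensembles flow through
sink = DataStructure()

# A pipeline can be consumed only once, so pass all files in a single call.
pipeline = source | info | sink
pipeline.process(["sample.PD0", "another_sample.PD0"])   # placeholder file names

print(sink.keys())     # variables captured by the sink
print(sink.Ensnum)     # ensemble numbers
```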
ad-cs107
team28: CS107 Final Project Repository
adc-sdk
Argonne Discovery Cloud SDK & CLI

Docs: https://stage.discoverycloud.anl.gov/docs/sdk/

Installation

When positioned in this repo's root directory:

    pip install adc-sdk

CLI Commands

    adc create-datafile
    adc create-investigation
    adc create-job
    adc create-sample
    adc create-study
    adc create-token
    adc current-user
    adc datafile
    adc delete-token
    adc investigation
    adc job
    adc remove-permissions
    adc sample
    adc set-permissions
    adc studies
    adc study
    adc subscribe-to-investigation
    adc subscribe-to-job
    adc subscribe-to-study
    adc tokens
    adc update-job

You can run adc <command> --help for more information.

License

This project is licensed under the Apache License - see the LICENSE file for details.
adc-socketx
adcx v0.1.1 - ADC socket data eXchange py-script. GTLAB Diagnostic LLC, 2023

Usage:

    $ adcx [OPTIONS] COMMAND [ARGS]...

Options:
    --help: Show this message and exit.

Commands:
    get-info: Get the ADC information (s/n, frequency, channels, calibration).
    get-lan: Get lan configuration (port, ip, netmask, gateway, MAC).
    get-wav: Record a signal from the ADC to a .wav file.
    reboot: Reboot the ADC.
    set-lan: Set lan (ipv4) settings (ip/netmask, gateway).

adcx get-info

Get the ADC information (s/n, frequency, channels, calibration).

    $ adcx get-info [OPTIONS]

Options:
    --ip TEXT: IPv4 connection address [default: 192.168.0.50]
    --help: Show this message and exit.

adcx get-lan

Get lan configuration (port, ip, netmask, gateway, MAC).

    $ adcx get-lan [OPTIONS]

Options:
    --ip TEXT: IPv4 connection address [default: 192.168.0.50]
    --help: Show this message and exit.

adcx get-wav

Record a signal from the ADC to a .wav file.

    $ adcx get-wav [OPTIONS] SECONDS

Arguments:
    SECONDS: [required]

Options:
    --ch INTEGER RANGE: Set the number of channels to record [default: 1; 1<=x<=2]
    --iepe / --no-iepe: [default: iepe]
    --ip TEXT: IPv4 connection address [default: 192.168.0.50]
    --out FILE: Path and name (only .wav extension!) of the file to be written
    --help: Show this message and exit.

adcx reboot

Reboot the ADC.

    $ adcx reboot [OPTIONS]

Options:
    --ip TEXT: IPv4 connection address [default: 192.168.0.50]
    --help: Show this message and exit.

adcx set-lan

Set lan (ipv4) settings (ip/netmask, gateway).

    $ adcx set-lan [OPTIONS] IPV4 [GATEWAY]

Arguments:
    IPV4: New IPv4 address/netmask. Example: 192.168.0.50/24 [required]
    [GATEWAY]: New IPv4 Gateway [default: 192.168.0.1]

Options:
    --ip TEXT: IPv4 connection address [default: 192.168.0.50]
    --help: Show this message and exit.

Contacts
adc-streaming
Astronomy Data Commons Streaming Client Libraries

Libraries making it easy to access astronomy data commons resources.

Developer notes

Setup

To prepare for development, run pip install --editable ".[dev]" from within the repo directory. This will install all dependencies, including those used during development workflows.

This project expects you to use a pip-centric workflow for development on the project itself. If you're using conda, then use the conda environment's pip to install development dependencies, as described above.

Integration tests require Docker to run a Kafka broker. The broker might have network problems on OSX if you use Docker Desktop; run the tests in a Linux virtual machine (like with VirtualBox) to get around this.

Code Workflow

1. Write code, making changes.
2. Use make format to reformat your code to comply with PEP8.
3. Use make lint to catch common mistakes.
4. Use make test-quick to run fast unit tests.
5. Use make test to run the full slow test suite, including integration tests.
6. Once satisfied with all four of those, push your changes and open a PR.

Tag, build, and upload to PyPI and Conda

Tag a new version:

    git tag -s -a v0.x.x

Build and release:

    make pypi-dist
    make pypi-dist-check
    make pypi-upload
    make conda-build
    make conda-upload
ad-ctf-paas-lib
No description available on PyPI.
adctools
Failed to fetch description. HTTP Status Code: 404
add
UNKNOWN
add1
Failed to fetch description. HTTP Status Code: 404
add-15-05-23
ADD TWO NUMBERS

This is an example project demonstrating how to publish a python module to PyPI.

Installation

Run the following to install:

    pip install add_15_05_23

Usage

    from add import add_numbers

    # Generates 25
    add_numbers(15, 10)

Developing add_numbers

To install add_numbers, along with the tools you need to develop and run tests, run the following in your virtualenv:

    pip install -e .[dev]
add1mine
No description available on PyPI.
add1-pkg
Hi. This is a trial.
add2
Failed to fetch description. HTTP Status Code: 404
add2-mokha
No description available on PyPI.
add2numbers-jamesbond
No description available on PyPI.
add2numbers-mokha
No description available on PyPI.
add2numbers-TomRiddle
My first Python package with a slightly longer description
add2winpath
add2winpath

Adds/removes folders to/from the PATH on Windows (Current User/All Users). It doesn't spoil paths containing variables (e.g. %windir%\system32).

    pip install add2winpath

Tested against Windows 10 / Python 3.10 / Anaconda

Usage

    # Adds "folders" to the path (beginning) and removes "remove_from_path";
    # the function doesn't mess around with file paths containing variables like "%windir%\system32"
    from add2winpath import add_to_path_all_users, add_to_path_current_user, get_all_subfolders_from_folder

    cva0 = add_to_path_current_user(
        folders=["c:\\cygwin\\bin", r"C:\baba ''bubu"],
        remove_from_path=["c:\\cygwin"],
        beginning=True,
    )
    cva1 = add_to_path_all_users(
        folders=["c:\\cygwin\\bin", "c:\\cygwin"],
        remove_from_path=["c:\cygwin3"],
        beginning=True,
    )
    allsubfolders = get_all_subfolders_from_folder(
        folders=[r"C:\cygwin\var\lib\rebase"]
    )  # list of all subfolders

add_to_path_all_users

    add_to_path_all_users(folders: Union[str, List[str]], remove_from_path: Union[str, List[str]], beginning: bool = False) -> str

Adds the specified folders to the PATH environment variable for all users on the system.

Args:
    folders (Union[str, List[str]]): A string or list of strings representing the folders to be added to the PATH.
    remove_from_path (Union[str, List[str]]): A string or list of strings representing the folders to be removed from the PATH.
    beginning (bool, optional): A boolean indicating whether the folders should be added to the beginning or end of the PATH. Defaults to False.

Returns:
    str: A string representing the registry script that was executed to update the PATH environment variable.

add_to_path_current_user

    add_to_path_current_user(folders: Union[str, List[str]], remove_from_path: Union[str, List[str]], beginning: bool = False) -> str

Adds the specified folders to the PATH environment variable of the current user.

Args:
    folders (Union[str, List[str]]): The folder(s) to be added to the PATH environment variable.
    remove_from_path (Union[str, List[str]]): The folder(s) to be removed from the PATH environment variable.
    beginning (bool, optional): If True, the specified folders will be added to the beginning of the PATH variable. If False, they will be added to the end. Defaults to False.

Returns:
    str: The string representation of the registry key that was added to the Windows registry.

get_all_subfolders_from_folder

    get_all_subfolders_from_folder(folders: Union[str, List[str]]) -> List[str]

Returns a sorted list of unique subfolder names contained within the specified folder(s).

Args:
    folders (Union[str, List[str]]): A string representing the path(s) to the folder(s) to search for subfolders.

Returns:
    List[str]: A sorted list of unique subfolder names contained within the specified folder(s).
add5302
No description available on PyPI.
add-abedelnabi
No description available on PyPI.
addana
addanaaddressFree software: MIT licenseDocumentation:https://advancehs.github.io/addana中文地址提取工具,支持中国三级区划地址(省、市、区)提取和级联映射,支持地址目的地热力图绘制。适配python2和python3。Feature地址提取["徐汇区虹漕路461号58号楼5楼", "福建泉州市洛江区万安塘西工业区"] ↓ 转换 |省 |市 |区 |地名 | |上海市|上海市|徐汇区|虹漕路461号58号楼5楼 | |福建省|泉州市|洛江区|万安塘西工业区 |注:“地名”列代表去除了省市区之后的具体地名数据集:中国行政区划地名数据源:爬取自国家统计局,中华人民共和国民政局全国行政区划查询平台数据文件存储在:addressparser2/resources/pca.csv,数据为2021年统计用区划代码和城乡划分代码(截止时间:2021-10-31,发布时间:2021-12-30)Demohttp://42.193.145.218/product/address_extraction/Installpip install addressparser2orgit clone https://github.com/advancehs/addressparser2.git cd addressparser2 python3 setup.py installUsage省市区提取示例base_demo.pylocation_str=["徐汇区虹漕路461号58号楼5楼","泉州市洛江区万安塘西工业区","朝阳区北苑华贸城"]importaddressparser2df=addressparser2.transform(location_str)print(df)output:省 市 区 地名 0 上海市 上海市 徐汇区 虹漕路461号58号楼5楼 1 福建省 泉州市 洛江区 万安塘西工业区 2 北京市 北京市 朝阳区 北苑华贸城程序的此处输入location_str可以是任意的可迭代类型,如list,tuple,set,pandas的Series类型等;输出的df是一个Pandas的DataFrame类型变量,DataFrame可以非常轻易地转化为csv或者excel文件,Pandas的官方文档:http://pandas.pydata.org/pandas-docs/version/0.20/dsintro.html#dataframe带位置索引的省市县提取示例pos_sensitive_demo.pylocation_str=["徐汇区虹漕路461号58号楼5楼","泉州市洛江区万安塘西工业区","朝阳区北苑华贸城"]importaddressparser2df=addressparser2.transform(location_str,pos_sensitive=True)print(df)output:省 市 区 地名 省_pos 市_pos 区_pos 0 上海市 上海市 徐汇区 虹漕路461号58号楼5楼 -1 -1 0 1 福建省 泉州市 洛江区 万安塘西工业区 -1 0 3 2 北京市 北京市 朝阳区 北苑华贸城 -1 -1 0切词模式的省市区提取默认采用全文匹配模式,不进行分词,直接全文匹配,这样速度慢,准确率高。示例enable_cut_demo.pylocation_str=["浙江省杭州市下城区青云街40号3楼"]importaddressparser2df=addressparser2.transform(location_str)print(df)output:省 市 区 地名 0 浙江省 杭州市 下城区 青云街40号3楼可以先通过jieba分词,之后做省市区提取及映射,所以我们引入了切词模式,速度很快,使用方法如下:location_str=["徐汇区虹漕路461号58号楼5楼","泉州市洛江区万安塘西工业区","朝阳区北苑华贸城"]importaddressparser2df=addressparser2.transform(location_str,cut=True)print(df)output:省 市 区 地名 0 上海市 上海市 徐汇区 虹漕路461号58号楼5楼 1 福建省 泉州市 洛江区 万安塘西工业区 2 北京市 北京市 朝阳区 北苑华贸城但可能会出错,如下所示,这种错误的结果是因为jieba本身就将词给分错了:location_str=["浙江省杭州市下城区青云街40号3楼"]importaddressparser2df=addressparser2.transform(location_str,cut=True)print(df)output:省 市 区 地名 0 浙江省 杭州市 城区 下城区青云街40号3楼默认情况下transform方法的cut参数为False,即采用全文匹配的方式,这种方式准确率高,但是速度可能会有慢一点; 如果追求速度而不追求准确率的话,建议将cut设为True,使用切词模式。地址经纬度、省市县级联关系查询示例find_place_demo.py## 查询经纬度信息fromaddressparser2importlatlnglatlng[('北京市','北京市','朝阳区')]#输出('39.95895316640668', '116.52169489108084')## 查询含有"鼓楼区"的全部地址fromaddressparser2importarea_maparea_map.get_relational_addrs('鼓楼区')#[('江苏省', '南京市', '鼓楼区'), ('江苏省', '徐州市', '鼓楼区'), ('福建省', '福州市', '鼓楼区'), ('河南省', '开封市', '鼓楼区')]#### 注: city_map可以用来查询含有某个市的全部地址, province_map可以用来查询含有某个省的全部地址## 查询含有"江苏省", "鼓楼区"的全部地址fromaddressparser2importprovince_area_mapprovince_area_map.get_relational_addrs(('江苏省','鼓楼区'))# [('江苏省', '南京市', '鼓楼区'), ('江苏省', '徐州市', '鼓楼区')]绘制echarts热力图使用echarts的热力图绘图函数之前需要先用如下命令安装它的依赖(为了减少本模块的体积,所以这些依赖不会被自动安装):pip install pyecharts==0.5.11 pip install echarts-countries-pypkg pip install pyecharts-snapshot使用本仓库提供的一万多条地址数据tests/addr.csv测试。示例draw_demo.py#读取数据importpandasaspdorigin=pd.read_csv("tests/addr.csv")#转换importaddressparser2addr_df=addressparser2.transform(origin["原始地址"])#输出processed=pd.concat([origin,addr_df],axis=1)processed.to_csv("processed.csv",index=False,encoding="utf-8")fromaddressparser2importdrawerdrawer.echarts_draw(processed,"echarts.html")output:1) processed.csv:1万多地址的省市县提取结果 
2)echarts.html:echarts热力图浏览器打开echarts.html后:绘制分类信息图样本分类绘制函数,通过额外传入一个样本的分类信息,能够在地图上以不同的颜色画出属于不同分类的样本散点图,以下代码以“省”作为类别信息绘制分类散点图(可以看到,属于不同省的样本被以不同的颜色标记了出来,这里以“省”作为分类标准只是举个例子,实际应用中可以选取更加有实际意义的分类指标):示例draw_demo.py,接上面示例代码:fromaddressparser2importdrawerdrawer.echarts_cate_draw(processed,processed["省"],"echarts_cate.html")浏览器打开输出的echarts_cate.html后:Command line usage命令行模式支持批量提取地址的省市区信息:示例cmd_demo.pypython3 -m addressparser2 address_input.csv -o out.csv usage: python3 -m addressparser2 [-h] -o OUTPUT [-c] input @description: positional arguments: input the input file path, file encode need utf-8. optional arguments: -h, --help show this help message and exit -o OUTPUT, --output OUTPUT the output file path. -c, --cut use cut mode.输入文件:address_input.csv;输出文件:out.csv,省市县地址以\t间隔,-c表示使用切词Todobug修复,吉林省、广西省部分地址和上海浦东新区等三级区划地址匹配错误增加定期从民政局官网,统计局官网爬取最新省市县镇村划分的功能,延后,原因是2018年后官网未更新解决路名被误识别为省市名的问题,eg"天津空港经济区环河北路80号空港商务园"添加省市区提取后的级联校验逻辑大批量地址数据,准召率效果评估补充香港、澳门、台湾三级区划地址信息License授权协议为The Apache License 2.0,可免费用做商业用途。请在产品说明中附加addressparser2的链接和授权协议。Contribute项目代码还很粗糙,如果大家对代码有所改进,欢迎提交回本项目,在提交之前,注意以下两点:在tests添加相应的单元测试使用python -m pytest来运行所有单元测试,确保所有单测都是通过的之后即可提交PR。Referencechinese_province_city_area_mapperaddressparser
addanase
No description available on PyPI.
add-and-multiple
No description available on PyPI.
add_Arschloch
UNKNOWN
add_asts
UNKNOWN
addata
No description available on PyPI.
addbioschemas
Adding bioschemas to mkdocs

A small markdown extension to add bioschemas to mkdocs. It requires a yaml file with bioschemas markup, and adds it to the rendered html.

Installation

    pip install addbioschemas

Usage

A markdown snippet of a page where you want to add bioschemas to:

    # awesome title

    [add-bioschemas]

    I started with some YAML and turned it into JSON-LD

A yaml file with the bioschemas markup:

    "@context": https://schema.org/
    "@type": LearningResource
    "@id": https://elixir-europe-training.github.io/ELIXIR-TrP-LessonTemplate-MkDocs/
    http://purl.org/dc/terms/conformsTo:
      "@type": CreativeWork
      "@id": https://bioschemas.org/profiles/TrainingMaterial/1.0-RELEASE
    description: Template for ELIXIR lessons
    keywords: FAIR, OPEN, Bioinformatics, Teaching
    name: ELIXIR Training Lesson template
    # lookup at https://spdx.org/licenses/
    license: CC-BY-4.0
    author:
      - "@type": Person
        name: Geert van Geest
        email: [email protected]
        github: GeertvanGeest
        orcid: 0000-0002-1561-078X
      - "@type": Person
        name: Elin Kronander
        github: elinkronander
        orcid: 0000-0003-0280-6318

Add to mkdocs.yml:

    markdown_extensions:
      - addbioschemas:
          yaml: 'path/to/yaml/metadata.yaml'
addbuffer
Failed to fetch description. HTTP Status Code: 404
add-by-cpp
No description available on PyPI.
addc
No description available on PyPI.
addcalculate
No description available on PyPI.
addci
No description available on PyPI.
addCode
Very basic function for adding two numbers.
addCodeReact
Package to add dynamic react components import code
add-colorprint
Each item of a list against all others

    $ pip install add-colorprint

    from add_colorprint import add_color_print_to_regedit
    add_color_print_to_regedit()
addcomb
No description available on PyPI.
addcomments
django_database_column_comment

Automatically adds comments for the columns of MySQL or PostgreSQL tables: the verbose_name or help_text property of the Django model is used as the comment for the column.

Database Supported
- MySQL
- PostgreSQL

How to use

In the model, note the priority order: verbose_name is used first, then help_text.

    name = models.CharField(max_length=200, verbose_name="名称", blank=True, default=None)
    age = models.SmallIntegerField(help_text="年龄", blank=True, default=None)

Then install the package:

    pip install addcomments

In settings.py add the app:

    INSTALLED_APPS += [
        'addcomments',
    ]

Next, type the command:

    python manage.py addcolumncomments

Finally, the info will be printed; all the models created will be processed.

    ## MySQL
    -- FOR test_student.name
    ALTER TABLE test_student MODIFY COLUMN `name` varchar(200) COLLATE utf8mb4_bin NOT NULL COMMENT '名称'
    -- FOR test_student.age
    ALTER TABLE test_student MODIFY COLUMN `age` smallint(6) NOT NULL COMMENT '年龄'

    ## PostgreSQL
    -- FOR test_student.name
    COMMENT ON COLUMN test_student.name IS '名称'
    -- FOR test_student.age
    COMMENT ON COLUMN test_student.age IS '年龄'

If any bug

You can fix it yourself or submit your issue here; I will fix it.
addcopyfighandler
addcopyfighandler: Add a Ctrl+C / Cmd+C handler to matplotlib figures for copying the figure to the clipboardImporting this module (after importing matplotlib or pyplot) will add a handler to all subsequently-created matplotlib figures so that pressing Ctrl+C (or Cmd+C on MacOS) with a matplotlib figure window selected will copy the figure to the clipboard as an image. The copied image is generated through matplotlib.pyplot.savefig(), and thus is affected by the relevant rcParams settings (savefig.dpi, savefig.format, etc.).Uses code & concepts from:https://stackoverflow.com/questions/31607458/how-to-add-clipboard-support-to-matplotlib-figureshttps://stackoverflow.com/questions/34322132/copy-image-to-clipboard-in-python3Windows-specific behavior:addcopyfighandler should work regardless of which graphical backend is being used by matplotlib (tkagg, gtk3agg, qtagg, etc.).Ifmatplotlib.rcParams['savefig.format']is'svg', the figure will be copied to the clipboard as an SVG.If Pillow is installed, all non-SVG format specifiers will be overridden, and the figure will be copied to the clipboard as a Device-Independant Bitmap.If Pillow is not installed, the supported format specifiers are'png','jpg','jpeg', and'svg'. All other format specifiers will be overridden, and the figure will be copied to the clipboard as PNG data.Linux-specific behavior:Requires either Qt or GTK libraries for clipboard interaction. Automatically detects which is being used frommatplotlib.get_backend().Qt support requiresPyQt5,PyQt6,PySide2orPySide6.GTK support requirespycairo,PyGObjectandPILorpillowto be installed.Only GTK 3 is supported, as GTK 4 has totally changed the way clipboard data is handled and I can't figure it out. I'm totally open to someone else solving this and submitting a PR if they want. I don't use GTK.The figure will be copied to the clipboard as a PNG, regardless ofmatplotlib.rcParams['savefig.format']. Alas, SVG output is not currently supported. Pull requests that enable SVG support would be welcomed.MacOS-specific behavior:- Requires Qt, whether PyQt5/6 or PySide2/6. - The figure will be copied to the clipboard as a PNG, regardless of matplotlib.rcParams['savefig.format'].Releases3.2.0: 2024-02-13Added MacOS support (thanks @orlp!). No SVG support, same as Linux.3.1.1: 2024-02-13Wrap matplotlib.pyploy.figure appropriately to maintain docstring (thanks @eendebakpt!)3.1.0: 2024-02-13Add support for PyQt6 and PySide6 on Linux (already supported on Windows)3.0.0: 2021-03-28Add Linux support (tested on Ubuntu). Requires PyQt5, PySide2, or PyObject libraries; relevant library chosen based on matplotlib graphical backend in use. No SVG support.On Windows, non SVG-formats will now use the Pillow library if installed, storing the figure to the clipboard as a device-indepenent bitmap (as previously handled in v2.0). This is compatible with a wider range of Windows applications.2.1.0: 2020-08-27Remove Pillow.Add support for png & svg file formats.2.0.0: 2019-06-07Remove Qt requirement. Now use Pillow to grab the figure image, and win32clipboard to manage the Windows clipboard.1.0.2: 2018-11-27Force use of Qt4Agg or Qt5Agg. Some installs will default to TkAgg backend, which this module doesn't support. Forcing the backend to switch when loading this module saves the user from having to manually specify one of the Qt backends in every analysis.1.0.1: 2018-11-27Improve setup.py: remove need for importing module, add proper installation dependenciesChange readme from ReST to Markdown1.0: 2017-08-09Initial release
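Since the handler is registered simply by importing the module, a minimal sketch looks like this (the plotted data is arbitrary):

```python
import matplotlib.pyplot as plt
import addcopyfighandler  # noqa: F401 -- importing is enough to register the Ctrl+C / Cmd+C handler

# Because the copied image goes through savefig(), the usual rcParams apply.
plt.rcParams["savefig.dpi"] = 200

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4], marker="o")   # arbitrary demo data
ax.set_title("Press Ctrl+C (Cmd+C on macOS) with this window focused to copy the figure")
plt.show()
```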
add-custom-key-binding
Adding custom key binding in Gnome environment.

Usage

    # args : <name> <binding> <command>
    add_keybinding "MyShortcut" "<super>w" "/usr/bin/gedit"

Installation

Requirements

Compatibility

Licence

Authors

add_custom_key_binding was written by fx-kirin.
add-data
Example Package

This is a simple example package. You can use Github-flavored Markdown to write your content.
add-decor
Failed to fetch description. HTTP Status Code: 404
addDemo
Add_Demo

A Python package upload test.

Usage

The following command will give the output:

    addDemo
add-demo
No description available on PyPI.
adddi-test
No description available on PyPI.
added-value
A Sphinx extension for embedding Python objectvaluesinto documentation as text, lists or tables.This is achieved by adding new roles and directives which refer to Python objects which contain the values to be represented. The extension provides roles for embedded single and lists of values, and a sophisticated directive for rendering complex data structures like lists of dictionaries as tables.StatusBuild status:InstallationTheadded-valuepackage is available on the Python Package Index (PyPI):The package supports Python 3 only. To install:$ pip install added-valueUsageAdded Valueprovides a number of roles and directives for embedding Python values into your documentation. For example, sing added value we can extract the value ofpifrom the Python Standard Librarymathmodule, and embed it in a sentence, using theformatrole provided byAdded Values, like this:The ratio of the circumference to the diameter of a circle is :format:`math.pi, .3f` to three decimal places.Which gives:The ratio of the circumference to the diameter of a circle is 3.142 to three decimal places.Powerful toolsAdded value is powerful, and allows lists, dictionaries, and even complex data structures such as lists of lists of dictionaries to be rendered into tables in various ways. Consult thedocumentationfor more details.Copyright 2023 Sixty North ASRedistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
addend
addend

This 'addend.py' code is a python utility intended for BLIND python programmers.

For Visually Impaired (VI) programmers it is NOT easy to listen to and explore large python code files, where the number of indent spaces on each line of code has a syntactical meaning for python interpreters. 'addend' adds #comments, like '# @end: if', '# @end: for' etc., for easier sound recognition when a syntax block structure has ended as defined by python's space-indentation rules. Those added '# @end:' comment lines are placed whenever a syntax block has ended. Hopefully, these #comments provide guidance for VI folks while they simultaneously decipher python code and intently listen to unfamiliar python code with their favorite reader like JAWS or NVDA.

Python started in 1991 and is now a popular programming language because of its syntax simplicity and its success in Data Science & ML projects. With the advent of python-based LLMs the use of python has increased dramatically, leaving blind python programmers at a disadvantage in that field. Python is an "offside" language, depending solely on space indentation to close syntax blocks. This makes it difficult for blind folks to study unfamiliar python code and to listen at the same time to readers like JAWS or NVDA.

The following highlights the blind python programmer's dilemma. Advice given by a python developer to folks working in python, "Generally, if you're used to a curly braces {} language, just indent the code like them and you'll be fine.", should be expanded to "except when you are blind and depend on a reader". Blind folks are disadvantaged in today's fast-paced AI work environments when it comes to programming in python. 'addend' is hopefully a simple step to make it easier for VI folks to use JAWS or NVDA readers while working on large & unfamiliar python files.

The basics for working with 'addend':

[1] Use it in conjunction with VScode or any other editor:
    ==> 'code my_large.py'      launch the python file in VScode
    ==> 'addend my_large.py'    run in a cmd or terminal window
    A backup 'my_large.MMDD.hhmmss.py' is automatically created and VScode updates, showing the '# @end:' comments.

[2] Use it to keep the original file untouched and use the newly generated file:
    ==> 'addend my_large.py new_large.py'
        'code new_large.py'
    VScode shows the added '# @end:' comments.

[3] To remove any added '# @end:' comments use:
    ==> 'addend -r new_large.py'                 (output to the same file)
    ==> 'addend -r new_large.py final_large.py'  (output to a different file)
    ==> 'addend -r -d new_large.py'              (also lists all ignored lines)

Python version: >=3.7 is required.

Optional requirement: to run the [-b] option, the 'black' python syntax checker is needed; to install it use 'pip install black'.

To install 'addend' on a MacOS or Windows platform use: 'pip install addend'

Run 'addend' from the command line without a leading 'python' or trailing '.py':

    % addend --help
    usage: addend [-h] [-b] [-r] [-d] [-v] [inFilename] [outFilename]

    positional arguments:
      inFilename     process input inFilename.py to add '# @end:' lines based on python indent rules.
      outFilename    specify OPTIONAL outFilename.py ; DEFAULT output is same inFilename.

    optional arguments:
      -h, --help     show this help message and exit
      -b, --black    run 'black' python syntax checker before & after 'addend' processing.
      -r, --remove   ONLY remove ALL '# @end:' comment lines from input filename.
      -d, --debug    print debugging lines.
      -v, --version  print version number.

Default output is 'inFilename.py' with '# @end:' comments added. The unchanged input file is SAVED as 'inFilename.MMDD-hhmmss.py'. NOTE: all previously added "# @end:" comment lines are removed during the next addend run and are re-added accordingly.
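An illustrative before/after sketch of the kind of output described above; the exact labels and placement of the added comments may differ from what addend actually produces, this only mirrors the documented '# @end: if' / '# @end: for' style:

```python
# Input: plain Python relying on indentation alone.
def count_positive(values):
    total = 0
    for v in values:
        if v > 0:
            total += 1
    return total


# Output after running 'addend' (illustrative): block ends are announced for screen readers.
def count_positive(values):
    total = 0
    for v in values:
        if v > 0:
            total += 1
        # @end: if
    # @end: for
    return total
# @end: def
```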
addepar
No description available on PyPI.
adder
UNKNOWN
adderall
a miniKanren implementation in Hy.
addereq
addereq(地震前兆数据自动分析框架)地震前兆分析手段长期积弱,数据源是一个很大的原因,没有文件存储标准,数据库接入门槛也比较高。为了突破数据源的壁垒,助力地震前兆科研发展,开发上线了该框架。主要功能从地震系统的Oracle前兆数据库中提取数据,生成可视化图形,并无缝集成各种地球物理分析方法,以便实现自动化操作。安装Python环境安装建议安装Anaconda或者Miniconda,Anaconda安装参考官网链接,Miniconda安装参考官网链接,入门建议安装Anaconda,不需要太多配置,开箱即用。addereq包安装安装好Python环境后,执行以下命令安装addereq。pipinstalladdereq由于cx_Oracle在Windows系统下的安装需要Visual C++编译环境,配置起来比较复杂,建议先使用conda安装cx_Oracle,然后再安装addereq,安装命令如下:condainstallcx_Oracle安装 Oracle 即时客户端下载以及安装参见 Oracle Instant Client官网链接数据库配置文件需要将常用的数据库配置到default.conf文件中,该文件可以存放在和脚本相同目录中,也可以存放在系统用户目录中,建议存放在系统用户目录中,目录需为~/.adder/default.conf。 配置文件格式为:[db1]HOST=192.168.181.12PORT=1521USERNAME=testPASSWORD=testTNSNAME=pdbqz建议将常用数据库全部配置进去,一劳永逸。主要模块功能说明fetching 模块该模块为数据下载模块,可以提供快速批量的数据下载功能。连接数据库参数只需要输入default.conf文件中配置的数据库名称即可。fromaddereqimportfetchingastsfconn=tsf.conn_to_Oracle('db1')数据下载fromaddereqimportfetchingastsfdf=tsf.fetching_data(conn,'20230416','20230416','地电场','北京','分钟值','原始库',gzip_flag=False)plotting 模块该模块为批量绘图模块,提供类MapSIS的功能,可以批量绘制多个台站或者多个测向的曲线。df变量中可以包含多个台站、多个测向的数据,可以一次性批量绘制,输出文件名自动生成。按台站绘图fromaddereqimportplottingastsptsp.plot_by_stations(df,conn)按测向代码绘图fromaddereqimportplottingastsptsp.plot_by_items(df,conn)联系作者[email protected]
adderlib
adderlib

adderlib is an unofficial python implementation of the Adder API, for use with Adderlink KVM systems.

With adderlib, you can:
- Log in or out as an existing KVM user
- Query lists of transmitters, receivers, and channels available to the user
- Access many properties of the KVM devices
- Connect receivers to channels
- Manage presets
- ...and so much more! Well, a little bit more.

Getting Started

The best way to get started is to check out the examples, and then the official documentation on ReadTheDocs. But in general, it's four easy steps:

    from adderlib import adder

    # Step 1: Create a handle to the API by passing the IP address or hostname of the AIM (the KVM server)
    api = adder.AdderAPI("192.168.1.10")

    # Step 2: Log in using an existing KVM account
    api.login("username", "password")

    # Step 3: Do some stuff
    for tx in api.getTransmitters():
        do_some_stuff(tx)

    # Step 4: Don't forget to log out!
    api.logout()

Customizable

Boy oh boy is this customizable! An UrlHandler abstract class is provided. Subclass this and override the api_call() method to communicate with the server however you wish! I sure spent a lot of time on the default RequestsHandler class, which uses the requests library, but that's ok, I'm sure you have your reasons.
addertestcode456
testing file: just a quick test file, delete later
addetect
addetect

Status / Compatibilities / Contact

Installation

Usage

This package finds the outliers of a series using different methods. It also provides a lot of information about the series.

    import pandas as pd
    from addetect.detector import Detector

    serie = pd.Series([1, 2, 3], index=pd.date_range(start='2022-01-01', end="2022-01-03"))
    detector = Detector(serie)
    outliers = detector._standard_deviation()
addext
No description available on PyPI.
addfips
AddFIPSAddFIPS is a tool for adding state or county FIPS codes to files that contain just the names of those geographies.FIPS codes are the official ID numbers of places in the US. They're invaluable for matching data from different sources.Say you have a CSV file like this:state,county,statistic IL,Cook,123 California,Los Angeles County,321 New York,Kings,137 LA,Orleans,99 Alaska,Kusilvak,12AddFIPS lets you do this:> addfips --county-field=county input.csv countyfp,state,county,statistic 17031,IL,Cook,123 06037,California,Los Angeles County,321 36047,New York,Kings,137 22071,LA,Orleans,99 02270,Alaska,Kusilvak,12InstallingAddFIPS is a Python package compatible with versions 3.7+.If you've used Python packages before:pip install addfips # or pip install --user addfipsIf you haven't used Python packages before,get pip, then come back.FeaturesUse full names or postal abbrevations for statesWorks with all states, territories, and the District of ColumbiaSlightly fuzzy matching allows for missing diacretic marks and different name formats ("Nye County" or "Nye', "Saint Louis" or "St. Louis", "Prince George's" or "Prince Georges")Includes up-to-date 2015 geographies (shout out to Kusilvak Census Area, AK, and Oglala Lakota Co., SD)Note that some states have counties and county-equivalent independent cities with the same names (e.g. Baltimore city & County, MD, Richmond city & County, VA). AddFIPS's behavior may pick the wrong geography if just the name ("Baltimore") is passed.Command line toolusage: addfips [-h] [-V] [-d CHAR] (-s FIELD | -n NAME) [-c FIELD] [-v VINTAGE] [--no-header] [input] AddFIPS codes to a CSV with state and/or county names positional arguments: input Input file. default: stdin optional arguments: -h, --help show this help message and exit -V, --version show program's version number and exit -d CHAR, --delimiter CHAR field delimiter. default: , -s FIELD, --state-field FIELD Read state name or FIPS code from this field -n NAME, --state-name NAME Use this state for all rows -c FIELD, --county-field FIELD Read county name from this field. If blank, only state FIPS code will be added -v VINTAGE, --vintage VINTAGE 2000, 2010, or 2015. default: 2015 --no-header Input has no header now, interpret fields as integers -u, --err-unmatched Print rows that addfips cannot match to stderrOptions and flags:input: (positional argument) The name of the file. If blank,addfipsreads from stdin.--delimiter: Field delimiter, defaults to ','.--state-field: Name of the field containing state name--state-name: Name, postal abbreviation or state FIPS code to use for all rows.--county-field: Name of the field containing county name. If this is blank, the output will contain the two-character state FIPS code.--vintage: Use earlier county names and FIPS codes. For instance, Clifton Forge city, VA, is not included in 2010 or later vintages.--no-header: Indicates that the input file has no header.--state-fieldand--county-fieldare parsed as field indices.--err-unmatched: Rows thataddfipscannot match will be printed to stderr, rather than stdoutThe output is a CSV with a new column, "fips", appended to the front. 
Whenaddfipscannot make a match, the fips column will have an empty value.ExamplesAdd state FIPS codes:addfips data.csv --state-field fieldName > data_with_fips.csvAdd state and county FIPS codes:addfips data.csv --state-field fieldName --county-field countyName > data_with_fips.csvFor files with no header row, use a number to refer to the columns with state and/or county names:addfips --no-header-row --state-field 1 --county-field 2 data_no_header.csv > data_no_header_fips.csvColumn numbers are one-indexed.AddFIPS for counties from a specific state. These are equivalent:addfips ny_data.csv -c county --state-name NY > ny_data_fips.csv addfips ny_data.csv -c county --state-name 'New York' > ny_data_fips.csv addfips ny_data.csv -c county --state-name 36 > ny_data_fips.csvUse an alternate delimiter:addfips -d'|' -s state pipe_delimited.dsv > result.csv addfips -d';' -s state semicolon_delimited.dsv > result.csvPrint unmatched rows to another file:addfips --err-unmatched -s state state_data.csv > state_data_fips.csv 2> state_unmatched.csv addfips -u -s STATE -c COUNTY county_data.csv > county_data_fips.csv 2> county_unmatched.csvPipe from other programs:curl http://example.com/data.csv | addfips -s stateFieldName -c countyField > data_with_fips.csv csvkit -c state,county,important huge_file.csv | addfips -s state -c county > small_file.csvPipe to other programs. In files with extensive text, filtering with the FIPS code is safer than using county names, which may be common words (e.g. cook):addfips culinary_data.csv -s stateFieldName -c countyField | grep -e "^17031" > culinary_data_cook_county.csv addfips -s StateName -c CountyName data.csv | csvsort -c fips > sorted_by_fips.csvAPIAddFIPS is available for use in your Python scripts:>>>importaddfips>>>af=addfips.AddFIPS()>>>af.get_state_fips('Puerto Rico')'72'>>>af.get_county_fips('Nye',state='Nevada')'32023'>>>row={'county':'Cook County','state':'IL'}>>>af.add_county_fips(row,county_field="county",state_field="state"){'county':'Cook County','state':'IL','fips':'17031'}The results ofAddFIPS.get_state_fipsandAddFIPS.get_county_fipsare strings, since FIPS codes may have leading zeros.ClassesAddFIPS(vintage=None)The AddFIPS class takes one keyword argument,vintage, which may be either2000,2010or2015. Any other value will use the most recent vintage. Other vintages may be added in the future.get_state_fips(self, state)Returns two-digit FIPS code based on a state name or postal code.get_county_fips(self, county, state)Returns five-digit FIPS code based on county name and state name/abbreviation/FIPS.add_state_fips(self, row, state_field='state')Returns the input row with a two-figit state FIPS code added. Input row may be either adictor alist. If adict, the 'fips' key is added. If alist, the FIPS code is added at the start of the list.add_county_fips(self, row, county_field='county', state_field='state', state=None)Returns the input row with a five-figit county FIPS code added. Input row may be either adictor alist. If adict, the 'fips' key is added. If alist, the FIPS code is added at the start of the list.LicenseDistributed under the GNU General Public License, version 3. See LICENSE for more information.
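A short sketch of the Python API applied to a CSV file, using only the documented AddFIPS methods; the input file name and column names are placeholders:

```python
import csv
import addfips

af = addfips.AddFIPS()  # optionally addfips.AddFIPS(vintage=2010) for older geographies

# Placeholder file with 'state' and 'county' columns.
with open("input.csv", newline="") as f:
    rows = [af.add_county_fips(row, county_field="county", state_field="state")
            for row in csv.DictReader(f)]

for row in rows:
    print(row["fips"], row["state"], row["county"])   # 'fips' key added by add_county_fips
```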
add-fonction
This is a simple example package. You can use [Github-flavored Markdown](https://guides.github.com/features/mastering-markdown/) to write your content.
addfunclib
Failed to fetch description. HTTP Status Code: 404
addfunctions
Python package with additional functions. Module: addfunctions.py (python functions).
addfunctool
This is a test project.
add-funtion-sum-and-sub
A package to perform arithmetic operations
add-gitignore
No description available on PyPI.
addheader
addheader - add headers to filesThis repository contains a single command to manage a header section, e.g. copyright, for a source code tree.Using UNIX glob patterns, addheader modifies an entire tree of source code at once. The program replaces existing headers with an updated version, and places the header after any shell magic at the top of the file.As of version 0.3.0, Jupyter notebooks can also be handled. See Usage -> Adding headers to Jupyter Notebooks.Installationaddheaderis written in Python and can be simply installed from the PyPI package:pip install addheaderIf you want Jupyter Notebook support, add "jupyter" in square brackets after the name of the package (use the quotes unless you know your shell doesn't need them):pip install 'addheader[jupyter]'UsageUse the commandaddheader. Invokingaddheader -hshows a detailed help message for the command arguments and options. Below are some examples and comments on usage.Basic usageIf you have the header file in "copyright.txt", and your source tree is a Python package located at "./mypackage", then you would invoke the program like this:adddheadermypackage--textcopyright.txtBy default, the header will not be added to "init.py" files.Additional actionsIf you want to see which files would be changed without modifying them, add-nor--dry-runto the command line arguments. If this argument is given, any arguments related to modifying or removing headers will be ignored.If you want to remove existing headers instead of adding or updating them, add-ror--removeto the command line arguments.Specifying file patternsYou can customize the files that are modified with the-por--patternargument, which takes a UNIX glob-style pattern and can be repeated as many times as you like. To help exclude files, if the '~' is the first letter of the pattern, then the rest of the pattern is used to exclude (not include) files. So, for example, if you provide the following source code tree:mypackage __init__.py foo.py bar.py tests/ __init__.py test_foo.py test_bar.pyThe following commands would match the following lists of files:addheader mypackage -t header.txt -p *.pymypackage/{init.py, foo.py, bar.py}, mypackage/tests/{init.py, test_foo.py, test_bar.py}addheader mypackage -t header.txt -p *.py -p ~__init__.pymypackage/{foo.py, bar.py}, mypackage/tests/{test_foo.py, test_bar.py}addheader mypackage -t header.txt -p *.py -p ~__init__.py -p ~test_*.pymypackage/{foo.py, bar.py}Header delimitersThe header itself is, by default, delimited by a line of 78 '#' characters. Whiledetectingan existing header, the program will look for any separator of 10 or more '#' characters. For example, if you have a file that looks like this:##########myheaderwith10hashesaboveandbelow##########helloand a header text file containing simply "Hello, world!", then the modified header will be:############################################################################### Hello, world!##############################################################################helloThe comment character and separator character, as well as the width of the separator, can be modified with command-line options. 
For example, to add a C/C++ style comment as a header, use these options:addheadermypackage--comment"//"--sep"="--sep-len40-tmyheader.txtThis will insert a header that looks like this://======================================== // my text goes here //========================================Keep in mind that subsequent operations on files with this header, including--remove, will need the same--commentand--separguments so that the header can be properly identified. For example, runningaddheader mypackage --removeafter the above command will not remove anything, andaddheader mypackage -t myheader.txtwill insert a second header (using the default comment character and separator).You can control whether the final line has a newline character appended with the--final-linesepcommand-line option (or thefinal_linesepconfiguration option). This is True by default for text files, but False for Jupyter notebooks. The logic is that Jupyter notebook headers are in their own cell -- and also, this avoids spurious modifications by the Black code reformatter.To avoid passing command-line arguments every time, use the configuration file. See the "Configuration" section for more details.Adding headers to Jupyter notebooksStarting in version 0.3.0, you can add headers to Jupyter Notebooks as well.To enable Jupyter notebooks, you must install the 'jupyter' optional dependencies, e.g.,pip install addheader[jupyter].To enable this, add a-j {suffix}or--jupyter {suffix}argument to the command-line, or similarly add ajupyter: {suffix}argument in the configuration file. The{suffix}indicates an alternate file suffix to use for identifying whether a file is a Jupyter Notebook, where the default is ".ipynb". In the configuration file, usejupyter: trueto use the default. On the command-line, omit the value to use the default.To set the Jupyter notebook format version, add--notebook-version {value}to the command-line or, equivalently,notebook_version: {value}to the configuration file. Values can be from 1 to 4. The default value is 4.The file pattern arguments (seeSpecifying file patterns, above) are still honored, but if Jupyter notebooks are enabled, the pattern*{suffix}will be automatically added to the patterns to match. Thus, by default*.ipynbwill be added to the files to match.If there is no existing header, the Jupyter notebook header will be inserted as the first 'cell', i.e. the first item, in the notebook. An existing header will be found anywhere in the notebook (by itsheadertag, see below).Currently the header cell is of type "code", with every line of the cell commented (using a 'markdown' cell is another possibility, but the code cell is friendler to the Jupyterbook machinery, and also retains the header in exported versions of the notebook without markdown cells). The content of the header is the same as for text files. 
Two, optionally three, tags will be added to the cell metadata:header- Indicates this is the header cell, so it can be modified or removed later.hide-cell- If you build documentation with Jupyterbook, this will hide the cell in the generated documentation behind a toggle button (seehttps://jupyterbook.org/interactive/hiding.html).Just as for text files, Jupyter notebook headers can be updated or removed.For reference, below is the form of the generated Jupyter notebook cell JSON (with the 'id' field):{"id":"1234567890abcdef1234567890abcdef","cell_type":"code","metadata":{"tags":["header","hide-cell"]},"source":["# Copyright info\n","# is placed here.\n"],"outputs":[]}ConfigurationTo avoid passing commandline arguments every time, you can create a configuration file that simply lists them as key/value pairs (using the long-option name as the key). By default, the program will look for a fileaddheader.cfgin the current directory, but this can also be specified on the command-line with-c/--config. For example:addheader# looks for addheader.cfg, ok if not presentaddheader-cmyoptions.conf# uses myoptions.conf, fails if not presentThe configuration file is in YAML format. For example:text:myheader.txtpattern:-"*.py"-"~__init__.py"# C/Java style comment blocksep:"-"comment:"//"sep-len:40# Verbosity as a number instead of -vvverbose:2Command-line arguments will override configuration arguments, even if the configuration file is explicitly provided with-c/--config. The "action" arguments,-r/--removeand-n/--dry-run, will be ignored in the configuration file.Using in testsTo test your package for files missing headers, use the following formula:importosimportmypackagefromaddheader.addimportFileFinder,detect_filesdeftest_headers():root=os.path.dirname(mypackage.__file__)# modify patterns to match the files that should have headersff=FileFinder(root,glob_pat=["*.py","~__init__.py"])has_header,missing_header=detect_files(ff)assertlen(missing_header)==0CreditsTheaddheaderprogram was developed for use in theIDAESproject and is maintained in the IDAES organization in Github athttps://github.com/IDAES/addheader. The primary author and maintainer is Dan Gunter (dkgunter at lbl dot gov).LicensePlease see the COPYRIGHT.md and LICENSE.md files in the repository for limitations on use and distribution of this software.
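Beyond the test-suite formula above, the same helpers can be used for a quick report of which files still need a header. The sketch below is a minimal example reusing the documented FileFinder and detect_files functions; the directory path and patterns are placeholders, and it assumes both return values are list-like collections of file paths:

```python
from addheader.add import FileFinder, detect_files

# Point FileFinder at the package root you want to audit (placeholder path).
finder = FileFinder("src/mypackage", glob_pat=["*.py", "~__init__.py"])

# detect_files() splits the matched files into those with and without a header.
has_header, missing_header = detect_files(finder)

print(f"{len(has_header)} file(s) already have a header")
for path in missing_header:
    print(f"missing header: {path}")
```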
add-header-mv
Hammal

Install:

$ pip install add-header-mv

Usage:

$ add-header-mv <arg1> <arg2> <arg3>

arg1 is the file postfix, e.g. ".py" or ".java".
arg2 is what you want to add to the head of each file, e.g. `import os` or `package what.you.want;`.
arg3 is the target directory name, e.g. ../otherDir or /User/xxx/doc.
addhrefs
Turns plain text into HTML with links on URLs and email addresses. Recommended: Python 2.3 or later.
addhundred
UNKNOWN
addic7ed
Addic7ed Scraper================Requirements------------This scraper is made to work with Python 3 only. It is pre-installed onmany linux distribution.If it's not your case, install it :pInstall-------Using python-pip:::$ sudo pip install addic7edUsing Git repository:::$ git clone https://github.com/Jesus-21/addic7ed.git addic7edor download/unzip`archive <https://github.com/Jesus-21/addic7ed/archive/master.zip>`__then (from download/clone path):::$ sudo pip install -r requirements.txtor use python `Pythonvirtualenv <http://docs.python-guide.org/en/latest/dev/virtualenvs/>`__and install requirements within.Create ~/.addic7edrc file containing language you want (english for instance):::[addic7ed]lang = enYou can find language codes `here <https://github.com/Jesus-21/addic7ed/blob/master/addic7ed/constants.py>`__Usage-----If you installed using python-pip, just run *addic7ed* (otherwise *addic7ed.py* file should be excutable) from the folder where your video files are,::$ addic7edor::$ /git/clone/path/addic7ed.pyfollowing command line arguments can be provided:::positional arguments:PATH path of file to search subtitles for (default: allvideo from current dir).optional arguments:-h, --help show this help message and exit--list-lang list supported languages.-n, --dry-run do not ask or download subtitlejust output availableones and leave.-l LANG, --lang LANG language to search subs for (default: en).-k, --keep-lang suffix subtitle file with language code.-e EXTENSIONS [EXTENSIONS ...], --extensions EXTENSIONS [EXTENSIONS ...]Find subtitles for files matching given extensions(space separated values)--names-from-file NAMES_FROM_FILEread file names from a file.--paths-from-file PATHS_FROM_FILEread file paths from a file.-r {none,sub,video}, --rename {none,sub,video}rename sub/video to match video/sub or none at all(default: none).then it will prompt which file you want to download. If download issuccessful, it will rename the video file to match subtitle file.|Example|TODO List---------- Error management/reporting- Intelligent auto-download (using comment + completion +popularity)- Better file crawling (recursivity mainly)Suggestions and/or pull requests are more than welcome!.. |Example| image:: https://raw.githubusercontent.com/Jesus-21/addic7ed/master/readme/capture.jpg
addic7ed-cli
This is a little command-line utility to fetch subtitles from addic7ed.InstallFrom pypiInstall latest stable version with:$ pip install addic7ed-cliUse--upgradeto upgrade.LatestInstall latest development version with:$ pip install https://github.com/BenoitZugmeyer/addic7ed-cli/archive/master.zipArchLinuxAnAUR packageis waiting for you.Nix/NixOSaddic7ed-cli is available from nixpkgs unstable channel:$ nix-env -iA nixpkgs.pythonPackages.addic7ed-cliUsageExample, if you speak french and english:$ addic7ed -l french -l english The.Serie.S02E23.MDR.mkvHelp:$ addic7ed --helpAuthentificationYou can login with your addic7ed.com identifiers to increase your daily download quota:Anonymous users are limited to 15 downloads per 24 hours on their IP addressRegistered users are limited to 40VIPs get 80 downloads (please consider donating)Configuration fileYou can store frequently used options in a configuration file. Create a file at~/.config/addic7ed(Linux, OSX) or%APPDATA%/Addic7ed Configuration.txt(Windows), and it will be parsed using the Python ConfigParser (see example below). Hint: use the--verboseargument to print the full path of the configuration file when running a command. It can contain three sections:[flags], to set a flag (verbose, hearing-impaired, overwrite, ignore, batch or brute-batch, seeaddic7ed search --helpfor informations about those flags)[languages], to list prefered languages[session], the session to use for authentification (this is automatically populated when usingaddic7ed login)Example:[flags] hearing-impaired = no batch [languages] french english [session] abcdefVideo organizervideo-organizerformat is supported. If a “filelist” file is next to an episode, it will use it to extract its real name and forge the good query.
addict
Addict is a module that exposes a dictionary subclass that allows items to be set like attributes. Values are gettable and settable using both attribute and item syntax. For more info check out the README at ‘github.com/mewwts/addict’.
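A minimal sketch of the attribute-style access described above (the key names are illustrative):

```python
from addict import Dict

config = Dict()
config.database.host = 'localhost'    # intermediate keys are created on the fly
config.database.port = 5432
config['database']['user'] = 'admin'  # item syntax works on the same structure

print(config.database.host)           # attribute access works for reads, too
print(config.to_dict())               # convert back to a plain nested dict
```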
addicted
addicted = addict ExtendeDThis library comes from ‘mewwts/addict‘ with some more features.DictSame as Dict from ‘mewwts/addict‘ except that update() method accept list,tuple and kwargs like usual python dict. The use of ‘inspect‘ module has been removed for performance reason.AddDictDict with these additional methods :pprint() find(pattern,**kwargs) count_some_values(pattern,ignore_case=False) count_some_keys(pattern,ignore_case=False) count_some_items(filter) iter_some_items(pattern,ignore_case=False) iter_some_values(pattern,ignore_case=False) iter_some_keys(pattern,ignore_case=False) get_some_items(pattern,ignore_case=False) get_some_values(pattern,ignore_case=False) get_some_keys(pattern,ignore_case=False) mget(*key_list) extract(key_list) parse_booleans(key_list) parse_numbers(key_list) update_dict(*args, **kwargs)NoAttrDictWorks like AddDict, except that it returns a ‘NoAttr‘ value when an attribute is missing. Please readnoattrpackage notes for explaination about ‘NoAttr‘from addicted import Dict,NoAttrDict d1 = AddDict() d2 = NoAttrDict() print type(d1.a.b.c.d) >>> <class 'addicted.AddDict'> print type(d2.a.b.c.d) >>> <class 'noattr.NoAttrType'>News0.0.9 (2017-05-01)switch from pprint property to pprint() method, add pformat()0.0.8 (2016-08-26)add update_dict() for not recursive update()0.0.6 (2015-08-06)Flag to Beta.0.0.4 (2015-08-06)add __all__ list0.0.3 (2015-08-04)mewwts/addict source code has been included directly into elapouya/addicted to remove the use of ‘inspect’ module for performance reason the isgenerator() function has been coded another way. update() method has been changed to accept list,tuple and kwargs0.0.2 (2015-07-31)Add some methods0.0.1 (2015-07-30)First version
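To illustrate the extensions described above, here is a small hedged sketch (the key names are illustrative; it assumes nested values are themselves AddDict instances, as in the original addict) using the kwargs-friendly update() together with AddDict's pprint():

```python
from addicted import AddDict

d = AddDict()
d.server.host = 'localhost'
d.server.port = 8080

# update() accepts kwargs (and lists/tuples of pairs) like a regular Python dict
d.server.update(user='admin', timeout=30)

# pprint() is one of the extra helpers AddDict provides on top of Dict
d.pprint()
```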
addicted3
This library is a port to python 3 of the package ‘addicted’DictSame as Dict from ‘mewwts/addict‘ except that update() method accept list,tuple and kwargs like usual python dict. The use of ‘inspect‘ module has been removed for performance reason.AddDictDict with these additional methods :pprint() find(pattern,**kwargs) count_some_values(pattern,ignore_case=False) count_some_keys(pattern,ignore_case=False) count_some_items(filter) iter_some_items(pattern,ignore_case=False) iter_some_values(pattern,ignore_case=False) iter_some_keys(pattern,ignore_case=False) get_some_items(pattern,ignore_case=False) get_some_values(pattern,ignore_case=False) get_some_keys(pattern,ignore_case=False) mget(*key_list) extract(key_list) parse_booleans(key_list) parse_numbers(key_list) update_dict(*args, **kwargs)NoAttrDictWorks like AddDict, except that it returns a ‘NoAttr‘ value when an attribute is missing. Please readnoattrpackage notes for explaination about ‘NoAttr‘from addicted import Dict,NoAttrDict d1 = AddDict() d2 = NoAttrDict() print type(d1.a.b.c.d) >>> <class 'addicted.AddDict'> print type(d2.a.b.c.d) >>> <class 'noattr.NoAttrType'>News3.0.1 (2021-11-30)Fix basestring -> str3.0.0 (2018-11-07)First version python 3
addict-tracking-changes
addict-tracking-changes

Introduction

HEADS UP: Before using the library, carefully read the "Known bugs and Caveats" section below.

Originally, this repository was a fork of addict by Mats Julian Olsen. Over time, it has diverged so substantially in functionality and codebase that it made sense to break it out as its own repository.

The original addict provides an alternative and succinct interface for manipulating a dictionary. This is especially useful when dealing with heavily nested dictionaries. As an example (taken from https://github.com/mewwts/addict), a dictionary created using the standard Python dictionary interface looks as follows:

```python
body = {
    'query': {
        'filtered': {
            'query': {'match': {'description': 'addictive'}},
            'filter': {'term': {'created_by': 'Mats'}}
        }
    }
}
```

It can be summarized in the following three lines:

```python
body = Dict()
body.query.filtered.query.match.description = 'addictive'
body.query.filtered.filter.term.created_by = 'Mats'
```

This repo builds on the original addict and adds machinery to track key additions in the dictionary. This feature comes in quite handy when building reactive web apps, where one has to respond to all the changes made in the browser. Addict-tracking-changes is the underpinning data structure in ofjustpy, a Python-based web framework built from justpy.

The functions relevant to tracking changed history are get_changed_history and clear_changed_history. get_changed_history returns an iterator of XPath-style paths such as /a/b/c/e (see the demo example).

Usage and examples

Installation

This project is not on PyPI. It is a simple package with no third-party dependency. Simply clone it from GitHub and include the source directory in your PYTHONPATH.

Demo example

```python
from addict import Dict

body = Dict()
body.query.filtered.query.match.description = 'addictive'
body.query.filtered.filter.term.created_by = 'Mats'

for changed_path in body.get_changed_history():
    ...  # <work with changed_path>
body.clear_changed_history()
```

Behaviour when values are instances of container types

addict works as expected when the values of keys are simple data types (such as string, int, float, etc.). However, for container types such as dict, list, tuple, etc., the behaviour differs somewhat.

dicts are treated as opaque objects, just like simple data types:

```python
from addict import Dict

mydict = Dict()
mydict.a.b.c = {'kk': 1}
mydict.a.b.e = {'dd': 1}
for _ in mydict.get_changed_history():
    print(_)
```

will print

/a/b/c /a/b/e

and not

/a/b/c/kk /a/b/e/dd

lists are seen as containers, i.e., get_changed_history will report a path for each element of the list:

```python
from addict import Dict

mydict = Dict()
mydict.a.b = [1, [1]]
mydict.a.c = [2, [2, [3]]]
```

get_changed_history will report the following paths:

/a/b/0, /a/b/1/0, /a/c/0, /a/c/1/0, /a/c/1/1/

tuple, namedtuple, and set behave the same as dict and are treated as opaque data structures.

Known bugs and Caveats

- Only tracks field additions. Deletions and updates are not tracked.
- freeze doesn't guard against deletions.
- Building a Dict from another dict, as shown in the following expression, won't work:

  ```python
  cjs_cfg = Dict(other_dict, track_changes=True)
  ```

  instead use

  ```python
  cjs_cfg = Dict(track_changes=True)
  with open("other_dict.pickle", "rb") as fh:
      x = pickle.load(fh)
  for _ in oj.dictWalker(x):
      oj.dnew(cjs_cfg, _[0], _[1])
  ```

- Use TrackedList for tracking nested lists:

  ```python
  from addict_tracking_changes import Dict, TrackedList

  trackerprop = Dict(track_changes=True)
  trackerprop.a.b = [1, TrackedList([1], track_changes=True)]
  trackerprop.a.c = [2, TrackedList([2, TrackedList([3], track_changes=True)], track_changes=True)]
  ```

  which, when asked for its changed history, will output the following:

  ```python
  print(list(trackerprop.get_changed_history()))
  ```

  output:

  ['/a/b/0', '/a/b/1/0', '/a/c/0', '/a/c/1/0', '/a/c/1/1/0']

APIs

| API | Description |
| --- | --- |
| Dict(*args, **kwargs) | Initializes a new Dict object |
| to_dict(self) | Converts the Dict object to a regular dictionary |
| freeze(self, shouldFreeze=True) | Freezes the Dict object, making it immutable |
| unfreeze(self) | Unfreezes the Dict object, making it mutable again |
| get_changed_history(self, prefix="", path_guards=None) | Returns an iterator with the changed history of keys |
| clear_changed_history(self) | Clears the changed history of the Dict object |
| set_tracker(self, track_changes=False) | Sets or resets the change tracker for the Dict object |

EndNotes

Docs (in readthedocs format): https://monallabs-org.github.io/addict-tracking-changes/#introduction, https://webworks.monallabs.in/addict-tracking-changes

Developed By: webworks.monallabs.in
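To tie the API table back to practice, here is a minimal sketch of the tracker lifecycle, assuming the addict_tracking_changes import name used in the TrackedList example above (the key names are illustrative):

```python
from addict_tracking_changes import Dict

cfg = Dict(track_changes=True)
cfg.ui.theme = 'dark'
cfg.ui.font.size = 12

# XPath-style paths for every key added since the last clear
print(list(cfg.get_changed_history()))   # e.g. ['/ui/theme', '/ui/font/size']

# After reacting to the changes, reset the tracker...
cfg.clear_changed_history()

# ...so only new additions are reported from here on
cfg.ui.language = 'en'
print(list(cfg.get_changed_history()))   # e.g. ['/ui/language']

# to_dict() returns a plain nested dictionary when tracking is no longer needed
plain = cfg.to_dict()
```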
addict-tracking-changes-fixed-attributes
IntroductionSpecialized addict with tracking changes. Used as optimization enhancements. Provides dicts that can have either no or at most one key.EndNotesDeveloped By: webworks.monallabs.in
addicty
No description available on PyPI.
addignore
addignoreAdds .ignore files to current directory.
adding
No description available on PyPI.
addit
Add the .gitignore file via command line
addition
No description available on PyPI.
additional-data
additional data

introduction

The additional data library offers a means to extend metadata structures via an external XML file referencing additional information for referenced elements. It is made available under the MIT license.

installation

Getting to use the additional data package is as simple as:

pip install additional-data

how to

Visit the project's homepage to learn how to use the package properly.
additional-functions
https://github.com/s0urcedev/AdditionalFunctions- main resource
additional-resources-menu
additional_resources_menuA JupyterLab extension.This extension adds an Additional Resources submenu to the Help Menu in JupyterLab. This submenu provides links to outside documentation as set in theoverrides.jsonfile (see Configuration).RequirementsJupyterLab >= 3.0InstallTo install the extension, enter the root repository folder and execute:pipinstalladditional-resources-menuUninstallTo remove the extension, execute:pipuninstalladditional-resources-menuIf that does not work, you can directly delete the extension folder from Jupyter. Seethis linkto find where extensions are installed.ConfigurationThe additional resources menu is populated with links that are specified in theoverrides.jsonfile. Clickthis linkto see where your overrides.json file is installed. This will set the additional resources for all users who access Jupyter through the shared install (so this file must be set for all virtual environments or installs).View the example_overrides.json file above to see how to format this fileContributingDevelopment installNote: You will need NodeJS to build the extension package.Thejlpmcommand is JupyterLab's pinned version ofyarnthat is installed with JupyterLab. You may useyarnornpmin lieu ofjlpmbelow.# Clone the repo to your local environment# Change directory to the additional_resources_menu directory# Install package in development modepipinstall-e.# Link your development version of the extension with JupyterLabjupyterlabextensiondevelop.--overwrite# Rebuild extension Typescript source after making changesjlpmrunbuildYou can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension.# Watch the source directory in one terminal, automatically rebuilding when neededjlpmrunwatch# Run JupyterLab in another terminaljupyterlabWith the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).By default, thejlpm run buildcommand generates the source maps for this extension to make it easier to debug using the browser dev tools. To also generate source maps for the JupyterLab core extensions, you can run the following command:jupyterlabbuild--minimize=FalseDevelopment uninstallpipuninstalladditional_resources_menuIn development mode, you will also need to remove the symlink created byjupyter labextension developcommand. To find its location, you can runjupyter labextension listto figure out where thelabextensionsfolder is located. Then you can remove the symlink namedadditional-resources-menuwithin that folder.
additional-urwid-widgets
additional_urwid_widgets

The Python library urwid contains many basic widgets, but lacks (in my opinion) "more specialized" widgets, such as a date picker. This project provides such "more specialized" widgets.

Installation

The project can be installed via pip.

Options

There are several approaches to installing a package via the terminal (as described here):

Set up a virtual env to install the package (recommended):

python3 -m venv env
source ./env/bin/activate
python3 -m pip install additional-urwid-widgets

Install the package to the user folder:

python3 -m pip install --user additional-urwid-widgets

Install to the system folder (not recommended):

python3 -m pip install additional-urwid-widgets

Widgets

See the corresponding wiki entries of the widgets for more information.

DatePicker: A (rudimentary) date picker.
IndicativeListBox: An urwid.ListBox with additional bars to indicate hidden list items.
IntegerPicker: A selector for integer numbers.
MessageDialog: Wraps urwid.Overlay to show a message and expects a reaction from the user.
SelectableRow: Wraps urwid.Columns to make it selectable and adds behavior.
addition-de-deux-nombres
No description available on PyPI.
addition-lpkapil
Example Package

This is a simple two-number addition package. Fixed addition module issue.
addition-maz
No description available on PyPI.
addition-package
No description available on PyPI.
additionpkg
No description available on PyPI.
additions
No description available on PyPI.
addition-shantanu
This is a program for addition, funcn(a, b).
addition-subtraction
Python Class for Dynamically Switching Between Addition and Subtraction Operations: AddSubIntroductionIn this tutorial, we will create a Python class namedAddSubthat allows us to dynamically switch between addition and subtraction operations. This class provides a flexible way to perform different operations on numbers based on a specified method.Code ImplementationclassAddSub:def__init__(self,method_val:int=1)->None:"""This is the constructor of the AddSub class.It takes an optional parameter method_val, which specifies the default method to be used for operations.If method_val is not an integer, it is set to 1 (addition) by default.If method_val is less than 0 or greater than 1, it is also set to 1 by default."""iftype(method_val)!=type(1):self.method=1print('only integer elements accepted')elif((method_val<=1)and(method_val>=0)):self.method=method_valelse:self.method=1print("Please enter a valid method by default addition operation has set")defoperation(self,first_number:int=0,second_number:int=0)->int:"""This method performs the operation specified by the method attribute on the two given numbers.If the method attribute is 0, subtraction is performed.If the method attribute is 1, addition is performed."""if(self.method==0):returnfirst_number-second_numberelif(self.method==1):returnfirst_number+second_numberdefchange_method(self,method_val:int)->None:"""This method changes the method attribute to the given value.If the given value is not an integer, the method attribute is not changed.If the given value is less than 0 or greater than 1, the method attribute is also not changed."""if(type(method_val)!=type(1)):print('only integer elements accepted, previous method has unchanged')elif((method_val<=1)and(method_val>=0)):self.method=method_valelse:print("Please enter a valid method, previous method has unchanged")# Usage:# Creating an instance of the AddSub classadd_sub=AddSub()# Performing addition using the default methodresult=add_sub.operation(10,20)print("Addition result:",result)# Output: 30# Changing the method to subtractionadd_sub.change_method(0)# Performing subtraction using the new methodresult=add_sub.operation(20,10)print("Subtraction result:",result)# Output: 10# Changing the method to an invalid valueadd_sub.change_method(2)# Performing addition using the previous method (addition)result=add_sub.operation(100,200)print("Addition result (using previous method):",result)# Output: 300ExplanationThe__init__method is the constructor of theAddSubclass. It takes an optional parametermethod_valthat specifies the default method to be used for operations.In the__init__method, we check ifmethod_valis an integer. If it is not, we print an error message and setmethodto 1 (addition).We also check ifmethod_valis within the valid range of 0 (subtraction) and 1 (addition). 
If it is not, we print an error message and setmethodto 1.Theoperationmethod takes two integer parametersfirst_numberandsecond_numberand performs the operation specified by themethodattribute on the two given numbers.Thechange_methodmethod takes an integer parametermethod_valand changes themethodattribute to the given value, provided that it is a valid value.In the usage section, we create an instance of theAddSubclass and perform addition and subtraction operations using the default method.We then change the method to subtraction using thechange_methodmethod and perform a subtraction operation.We try to change the method to an invalid value, but thechange_methodmethod prevents this and prints an error message.Finally, we perform an addition operation using the previous method (addition) and demonstrate that the method has not changed.ConclusionTheAddSubclass provides a flexible way to dynamically switch between addition and subtraction operations. This can be useful in various scenarios where you need to perform different operations based on certain conditions or user input.
additive
Data structure for representing additive secret shares of integers, designed for use within secure multi-party computation (MPC) protocol implementations.PurposeThis library provides a data structure and methods that make it possible work withn-out-of-nadditive secret sharesof integers within secure multi-party computation (MPC) protocol implementations. Secret shares of signed and unsigned integers can be represented using elements from finite fields, with support currently limited to fields having a power-of-two order.Installation and UsageThis library is available as apackage on PyPI:python-mpipinstalladditiveThe library can be imported in the usual ways:importadditivefromadditiveimport*ExamplesThis library makes it possible to concisely construct multiple secret shares from an integer:>>>fromadditiveimportshares>>>(a,b)=shares(123)>>>(c,d)=shares(456)>>>((a+c)+(b+d)).to_int()579It is possible to specify the exponent in the order of the finite field used to represent secret shares, as well as whether the encoding of the integer should support signed integers:>>>(s,t)=shares(-123,exponent=8,signed=True)>>>(s+t).to_int()-123The number of shares can be specified explicitly (the default is two shares):>>>(r,s,t)=shares(123,quantity=3)Thesharedata structure supports Python’s built-in addition operators, enabling both operations on shares and concise reconstruction of values from a collection of shares:>>>(r+s+t).to_int()123>>>sum([r,s,t]).to_int()123In addition, conversion methods for Base64 strings and bytes-like objects are included to support encoding and decoding ofshareobjects:>>>fromadditiveimportshare>>>share.from_base64('HgEA').to_bytes().hex()'1e0100'>>>[s.to_base64()forsinshares(123)]['PvmKMG8=','PoJ1z5A=']DevelopmentAll installation and development dependencies are fully specified inpyproject.toml. Theproject.optional-dependenciesobject is used tospecify optional requirementsfor various development tasks. This makes it possible to specify additional options (such asdocs,lint, and so on) when performing installation usingpip:python-mpipinstall.[docs,lint]DocumentationThe documentation can be generated automatically from the source files usingSphinx:python-mpipinstall.[docs]cddocssphinx-apidoc-f-E--templatedir=_templates-o_source..&&makehtmlTesting and ConventionsAll unit tests are executed and their coverage is measured when usingpytest(see thepyproject.tomlfile for configuration details):python-mpipinstall.[test]python-mpytestAlternatively, all unit tests are included in the module itself and can be executed usingdoctest:pythonsrc/additive/additive.py-vStyle conventions are enforced usingPylint:python-mpipinstall.[lint]python-mpylintsrc/additiveContributionsIn order to contribute to the source code, open an issue or submit a pull request on theGitHub pagefor this library.VersioningThe version number format for this library and the changes to the library associated with version number increments conform withSemantic Versioning 2.0.0.PublishingThis library can be published as apackage on PyPIby a package maintainer. First, install the dependencies required for packaging and publishing:python-mpipinstall.[publish]Ensure that the correct version number appears inpyproject.toml, and that any links in this README document to the Read the Docs documentation of this package (or its dependencies) have appropriate version numbers. Also ensure that the Read the Docs project for this library has anautomation rulethat activates and sets as the default all tagged versions. 
Create and push a tag for this version (replacing?.?.?with the version number):gittag?.?.?gitpushorigin?.?.?Remove any old build/distribution files. Then, package the source into a distribution archive:rm-rfbuilddistsrc/*.egg-infopython-mbuild--sdist--wheel.Finally, upload the package distribution archive toPyPI:python-mtwineuploaddist/*
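Returning to the library's API for a moment, the documented pieces above can be combined into a small end-to-end sketch: split a value into three shares, serialize each share (for example, to send one to each party), and reconstruct the value from the serialized forms. It is assumed here that the Base64 round-trip preserves the default encoding parameters:

```python
from additive import shares, share

# Split a value into three shares using the documented `quantity` parameter.
(r, s, t) = shares(42, quantity=3)

# Serialize each share, e.g. to hand one to each participating party.
encoded = [x.to_base64() for x in (r, s, t)]

# Any party that later gathers all serialized shares can rebuild the value.
recovered = sum(share.from_base64(e) for e in encoded)
print(recovered.to_int())  # 42
```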
add-juniper-software
Welcome to add_juniper_software!InstallationThis package is installed using pip. Pip should come pre-installed with all versions of Python for which this package is compatible. Nonetheless, if you wish to install pip, you can do so by downloadingget-pip.pyand running that python file (Windows/MacOS/Linux/BSD), or you can run the following command in terminal (Linux/BSD):sudoaptinstallpython3-pipIf you’re using brew (most likely for MacOS), you can install pip (along with the rest of Python 3) using brew:brewinstallpython3Note: The creator of this software does not recommend the installation of python or pip using brew, and instead recommends that Python 3.7+ be installed using the installation candidates found onpython.org,which include pip by default.Using Pip to install from PyPiFetching this repository from PyPi is the recommended way to install this package. From your terminal, run the following command:pip3installadd_juniper_softwareAnd that’s it! Now you can go right ahead to the quick-start guide!Install from GitHubNot a big fan of pip? Well, you’re weird, but weird is OK! I’ve written a separate script to help make installation from GitHub releases as easy as possible. To start, download the installation script and run it:wgethttps://raw.githubusercontent.com/EGuthrieWasTaken/add_juniper_software/main/source_install.pypython3source_install.pyAfter completing, the script will have downloaded the latest tarball release and extracted it into the working directory. Now, all you have to do is switch into the newly-extracted directory and run the install command:cdEGuthrieWasTaken-add_juniper_software-[commit_id]/python3setup.pyinstallIf you get a permission denied error, you may have to re-run usingsudoor equivalent. Congratulations, you just installed add_juniper_software from GitHub releases! Feel free to check out the quick-start guide!Alternatively, you can download the latest code from GitHub to install from source.This is not recommended:ghrepocloneEGuthrieWasTaken/add_juniper_softwarecdadd_juniper_software/python3setup.pyinstallAnd with a little bit of luck, you should have just installed from source!Quick-Start GuideGetting started with this package is easy! Just runadd-juniper-softwarefrom your machine! Use the-hflag to see the help menu!add-juniper-software-hDocumentationDocumentation for this project can be found onRead the Docs. Otherwise, feel free to browse the source code within the repository! It is (hopefully) well-documented…
addletterboxcv
Add Letterbox CVAdd a letterbox to the video and scale it to the specified size.Note: this program does not support audio.Usageusage: addletterboxcv [-h] --dimension WIDTH HEIGHT [--interpolation {NEAREST,LINEAR,AREA,CUBIC,LANCZOS4}] [--codec CODEC] video output
addlib
No description available on PyPI.
addlicense
addlicenseaddlicenseis a simple utility that automatically inserts a specified license file or copyright message at the top of one or more source code filesusage: addlicense.py [-h] [--licensefile LICENSEFILE] [--commentblock COMMENTBLOCK] [--comment COMMENT] [-s] [--backup] sourcefiles [sourcefiles ...] Automatically inserts a specified license file or copyright message at the top of one or more source code files positional arguments: sourcefiles a list of files to update with the license or copyright message optional arguments: -h, --help show this help message and exit --licensefile LICENSEFILE a file containing the license or copyright text, defaulting to LICENSE.txt --commentblock COMMENTBLOCK a space-separated string indicating the characters to use at the beginning and end of the license message to demark them as a comment block --comment COMMENT a string indicating the characters to use at the beginning of each line of the license message to demark them as comments -s, --skip-shebang-executable skip the initial shebang executable command: if the source file starts with a comment symbol (identified via the --comment option) followed by a shebang, to indicate an executable script on a POSIX system, then the license text will be inserted AFTER this initial line --backup keep a copy of the original source-file with a .bak extensionInstallationaddlicenseis written in Python, and you can use the pip installer to install it thus:$ pip install addlicenseHomepageYou can find the homepage ofaddlicenseathttps://github.com/hossg/addlicense