acheeve
acheeve
The repository coordinates multiple chembees. A reminder: a chembee object manifests itself through the SOLID design pattern shown below. The acheeve object thus handles multiple chembees, for example to evaluate an algorithm. This pattern lets us quickly implement a swarm of chembees for a wide array of endpoints, while needing only one API to stick them together. The concept is currently being evaluated by comparing three different datasets (chembees). If the method proves valuable, the pattern will be used for our SaaS products.

Data sets
It turns out data is easily abstracted. However, the chembee_datasets module implements its classes in a way that violates the original software pattern. Testing the new pattern showed that the imagined SOLID pattern is more useful than intuition, and that data operations should indeed not be part of any data class. This aligns with established knowledge about data modelling.

Commercial usage
Currently, we license the software under AGPL 3.0 or later. According to the software pattern, you have to open-source your data when using the package. You can easily do so using veritas-archive.

Cite
When using the package for your scientific work, please cite the repo and the connected papers and thesis work.

References
Publication quality RDKit by proteinsandwavefunctions
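The README's pattern figure is not reproduced here. As a purely illustrative sketch of the coordination idea described above (all class and method names are hypothetical, not the package's actual API), one chembee per endpoint behind a single acheeve-style coordinator might look like:

import abc

class Chembee(abc.ABC):
    """One hypothetical chembee: a dataset plus an evaluation routine for one endpoint."""

    @abc.abstractmethod
    def evaluate(self, algorithm) -> float:
        """Score the given algorithm on this chembee's dataset."""

class Acheeve:
    """Hypothetical coordinator: handles multiple chembees behind a single API."""

    def __init__(self, chembees):
        self.chembees = chembees

    def evaluate(self, algorithm):
        # Run the same algorithm against every chembee and collect the scores.
        return {type(c).__name__: c.evaluate(algorithm) for c in self.chembees}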
acheron
Acheron

About
Acheron is a plotting application and data recorder for USB and TCP devices communicating via the Asphodel protocol. The Asphodel communication protocol was developed by Suprock Technologies (http://www.suprocktech.com).

License
Acheron is licensed under the ISC license. The ISC license is a streamlined version of the BSD license, and permits usage in both open source and proprietary projects.
ach-file
ach-file
Generates ACH files. Highly configurable and permissive enough to be able to generate a valid ACH file for any originating bank.

Example
Here is an example of how to use the file builder:

from ach.files import ACHFileBuilder
from ach.constants import AutoDateInput, BatchStandardEntryClassCode, TransactionCode

b = ACHFileBuilder(
    destination_routing='012345678',
    origin_routing='102345678',
    destination_name='YOUR BANK',
    origin_name='YOUR FINANCIAL INSTITUTION',
)
b.add_batch(
    company_name='YOUR COMPANY',
    company_identification='1234567890',
    company_entry_description='Test',
    effective_entry_date=AutoDateInput.TOMORROW,
    standard_entry_class_code=BatchStandardEntryClassCode.CCD,
)
b.add_entries_and_addendas([
    {
        'transaction_code': TransactionCode.CHECKING_CREDIT,
        'rdfi_routing': '123456789',
        'rdfi_account_number': '65656565',
        'amount': '300',
        'individual_name': 'Janey Test',
    },
    {
        'transaction_code': TransactionCode.CHECKING_DEBIT,
        'rdfi_routing': '123456789',
        'rdfi_account_number': '65656565',
        'amount': '300',
        'individual_name': 'Janey Test',
        'addendas': [
            {'payment_related_information': 'Reversing the last transaction pls and thx'},
        ]
    },
    {
        'transaction_code': TransactionCode.CHECKING_CREDIT,
        'rdfi_routing': '023456789',
        'rdfi_account_number': '45656565',
        'amount': '7000',
        'individual_name': 'Mackey Shawnderson',
        'addendas': [
            {'payment_related_information': 'Where\'s my money'},
        ]
    },
])

It has this result:

101 012345678 102345678221110 A094101YOUR BANK YOUR FINANCIAL INSTITUT
5200YOUR COMPANY 1234567890CCDTest 221111 1102345670000001
62212345678965656565 0000000300 Janey Test 0102345670000001
62712345678965656565 0000000300 Janey Test 1102345670000002
705Reversing the last transaction pls and thx 00010000002
62202345678945656565 0000007000 Mackey Shawnderson 1102345670000003
705Where's my money 00010000003
820000000500270370340000000003000000000073001234567890 102345670000001
9000001000001000000050027037034000000000300000000007300
9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999

Valid Keyword Arguments
Below are tables describing all of the valid input arguments for the callables in the above example.

File Setting Fields (ACHFileBuilder(...))

| Key Name | FieldType | Required | Default |
|---|---|---|---|
| record_type_code | IntegerFieldType | True | 1 |
| priority_code | IntegerFieldType | True | 1 |
| destination_routing | BlankPaddedRoutingNumberFieldType | True | None |
| origin_routing | BlankPaddedRoutingNumberFieldType | True | None |
| file_creation_date | DateFieldType | True | "NOW" |
| file_creation_time | TimeFieldType | False | None |
| file_id_modifier | AlphaNumFieldType | True | "A" |
| record_size | IntegerFieldType | True | 94 |
| blocking_factor | IntegerFieldType | True | 10 |
| format_code | IntegerFieldType | True | 1 |
| destination_name | AlphaNumFieldType | True | None |
| origin_name | AlphaNumFieldType | True | None |
| reference_code | AlphaNumFieldType | True | "" |

Batch Fields (b.add_batch(...))

| Key Name | FieldType | Required | Default |
|---|---|---|---|
| record_type_code | IntegerFieldType | True | 5 |
| service_class_code | IntegerFieldType | True | 200 |
| company_name | AlphaNumFieldType | True | None |
| company_discretionary_data | AlphaNumFieldType | False | None |
| company_identification | IntegerFieldType | True | None |
| standard_entry_class_code | AlphaNumFieldType | True | "PPD" |
| company_entry_description | AlphaNumFieldType | True | None |
| company_descriptive_date | DateFieldType | False | None |
| effective_entry_date | DateFieldType | True | "TOMORROW" |
| settlement_date | AlphaNumFieldType | False | None |
| originator_status_code | IntegerFieldType | True | 1 |
| odfi_identification | IntegerFieldType | True | (auto-set) |
| batch_number | IntegerFieldType | True | (auto-set) |

Entry Fields (b.add_entries_and_addendas([{...}]))

| Key Name | FieldType | Required | Default |
|---|---|---|---|
| record_type_code | IntegerFieldType | True | 6 |
| transaction_code | IntegerFieldType | True | None |
| rdfi_routing | IntegerFieldType | True | None |
| rdfi_account_number | AlphaNumFieldType | True | None |
| amount | IntegerFieldType | True | None |
| individual_identification_number | AlphaNumFieldType | False | None |
| individual_name | AlphaNumFieldType | True | None |
| discretionary_data | AlphaNumFieldType | False | None |
| addenda_record_indicator | IntegerFieldType | True | (auto-set) |
| trace_odfi_identifier | IntegerFieldType | True | (auto-set) |
| trace_sequence_number | IntegerFieldType | True | (auto-set) |
| addendas | List[Dict[Addenda Fields]] | False | None |

Addenda Fields (b.add_entries_and_addendas([{'addendas': [{...}]}]))

| Key Name | FieldType | Required | Default |
|---|---|---|---|
| record_type_code | IntegerFieldType | True | 7 |
| addenda_type_code | IntegerFieldType | True | 5 |
| payment_related_information | AlphaNumFieldType | False | None |
| addenda_sequence_number | IntegerFieldType | True | (auto-set) |
| entry_detail_sequence_number | IntegerFieldType | True | (auto-set) |
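For illustration, here is a sketch of a batch that also sets some of the optional keys from the tables above. Only field names documented in the tables are used; whether PPD exists as a member of BatchStandardEntryClassCode is an assumption (the tables list "PPD" as the default SEC code, and only CCD is shown in the example), and the date string format is likewise assumed.

b.add_batch(
    company_name='YOUR COMPANY',
    company_identification='1234567890',
    company_entry_description='Payroll',
    company_discretionary_data='OPTIONAL NOTE',  # optional per the table (Required=False)
    company_descriptive_date='221111',           # optional DateFieldType; format is an assumption
    effective_entry_date=AutoDateInput.TOMORROW,
    standard_entry_class_code=BatchStandardEntryClassCode.PPD,  # assumed member; CCD is shown above
)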
achievements
# achievements
achilles
achilles
Distributed/parallel computing in modern Python based on the multiprocessing.Pool API (map, imap, imap_unordered).

What/why is it?
The purpose of achilles is to make distributed/parallel computing as easy as possible by limiting the required configuration, hiding the details (server/node/controller architecture) and exposing a simple interface based on the popular multiprocessing.Pool API. achilles provides developers with entry-level capabilities for concurrency across a network of machines (see PEP 371 on the intent behind adding multiprocessing to the standard library -> https://www.python.org/dev/peps/pep-0371/) using a server/node/controller architecture. The achilles_server, achilles_node and achilles_controller are designed to run cross-platform/cross-architecture. The server/node/controller may be hosted on a single machine (for development) or deployed across heterogeneous resources.

achilles is comparable to excellent Python packages like pathos/pyina, Parallel Python and SCOOP, but different in certain ways:
- Designed for developers familiar with the multiprocessing module in the standard library, with simplicity and ease of use in mind.
- In addition to the blocking map API, which requires that developers wait for all computation to be finished before accessing results (common in such packages), imap/imap_unordered allow developers to process results as they are returned to the achilles_controller by the achilles_server.
- achilles allows for composable scalability and novel design patterns, such as: iterables including lists, lists of lists and generator functions (as first-class objects - generator expressions will not work, as generators cannot be serialized by pickle/dill) are accepted as arguments. TIP: Use generator functions together with imap or imap_unordered to perform distributed computation on arbitrarily large data.
- The dill serializer is used to transfer data between the server/node/controller, and multiprocess (a fork of multiprocessing that uses the dill serializer instead of pickle) is used to perform Pool.map on the achilles_nodes, so developers are freed from some of the constraints of the pickle serializer.

Install

pip install achilles

Quick Start
Start an achilles_server listening for connections from achilles_nodes at a certain endpoint, specified as arguments or in an .env file in the achilles package's directory. Then simply import map, imap, and/or imap_unordered from achilles_main and use them dynamically in your own code (under the hood they create and close achilles_controllers). map, imap and imap_unordered will distribute your function to each achilles_node connected to the achilles_server. Then, the achilles_server will distribute arguments to each achilles_node (load balanced and made into a list of arguments if the arguments' type is not already a list), which will then perform your function on the arguments using multiprocess.Pool.map. Each achilles_node finishes its work, returns the results to the achilles_server and waits to receive another argument. This process is repeated until all of the arguments have been exhausted.

runAchillesServer(host=None, port=None, username=None, secret_key=None) -> run on your local machine or on another machine connected to your network

in:

from achilles.lineReceiver.achilles_server import runAchillesServer

# host = IP address of the achilles_server
# port = port to listen on for connections from achilles_nodes (must be an int)
# username, secret_key used for authentication with achilles_controller
runAchillesServer(host='127.0.0.1', port=9999, username='foo', secret_key='bar')

# OR generate an .env file with a default configuration so that
# arguments are no longer required to runAchillesServer()
# use genConfig() to overwrite
from achilles.lineReceiver.achilles_server import runAchillesServer, genConfig

genConfig(host='127.0.0.1', port=9999, username='foo', secret_key='bar')
runAchillesServer()

out:

ALERT: achilles_server initiated at 127.0.0.1:9999
Listening for connections...

runAchillesNode(host=None, port=None) -> run on your local machine or on another machine connected to your network

in:

from achilles.lineReceiver.achilles_node import runAchillesNode

# genConfig() is also available in achilles_node, but only expects host and port arguments
runAchillesNode(host='127.0.0.1', port=9999)

out:

GREETING: Welcome! There are currently 1 open connections.
Connected to achilles_server running at 127.0.0.1:9999
CLIENT_ID: 0

Examples of how to use the 3 most commonly used multiprocessing.Pool methods in achilles:

Note: map, imap and imap_unordered currently accept iterables including - but not limited to - lists, lists of lists, and generator functions as achilles_args.

Also note: if there isn't already an .env configuration file in the achilles package directory, you must use genConfig(host, port, username, secret_key) beforehand, or include host, port, username and secret_key as arguments when using map, imap or imap_unordered.

map(func, args, callback=None, chunksize=1, host=None, port=None, username=None, secret_key=None)

in:

from achilles.lineReceiver.achilles_main import map

def achilles_function(arg):
    return arg ** 2

def achilles_callback(result):
    return result ** 2

if __name__ == "__main__":
    results = map(achilles_function, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], achilles_callback, chunksize=1)
    print(results)

out:

ALERT: Connection to achilles_server at 127.0.0.1:9999 and authentication successful.
[[1, 16, 81, 256, 625, 1296, 2401, 4096], [6561, 10000]]

imap(func, args, callback=None, chunksize=1, host=None, port=None, username=None, secret_key=None)

in:

from achilles.lineReceiver.achilles_main import imap

def achilles_function(arg):
    return arg ** 2

def achilles_callback(result):
    return result ** 2

if __name__ == "__main__":
    for result in imap(achilles_function, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], achilles_callback, chunksize=1):
        print(result)

out:

ALERT: Connection to achilles_server at 127.0.0.1:9999 and authentication successful.
{'ARGS_COUNTER': 0, 'RESULT': [1, 16, 81, 256, 625, 1296, 2401, 4096]}
{'ARGS_COUNTER': 8, 'RESULT': [6561, 10000]}

imap_unordered(func, args, callback=None, chunksize=1, host=None, port=None, username=None, secret_key=None)

in:

from achilles.lineReceiver.achilles_main import imap_unordered

def achilles_function(arg):
    return arg ** 2

def achilles_callback(result):
    return result ** 2

if __name__ == "__main__":
    for result in imap_unordered(achilles_function, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], achilles_callback, chunksize=1):
        print(result)

out:

ALERT: Connection to achilles_server at 127.0.0.1:9999 and authentication successful.
{'ARGS_COUNTER': 8, 'RESULT': [6561, 10000]}
{'ARGS_COUNTER': 0, 'RESULT': [1, 16, 81, 256, 625, 1296, 2401, 4096]}

How achilles works

Under the hood
- Twisted: an event-driven networking engine written in Python and MIT licensed.
- dill: extends Python's pickle module for serializing and de-serializing Python objects to the majority of the built-in Python types.
- multiprocess: a fork of multiprocessing that uses dill instead of pickle for serialization. multiprocessing is a package for the Python language which supports the spawning of processes using the API of the standard library's threading module.

Examples
See the examples directory for tutorials on various use cases, including:
- Square numbers / run multiple jobs sequentially
- Word count (TO DO)

How to kill a cluster

from achilles.lineReceiver.achilles_main import killCluster

# simply use the killCluster() command and verify your intent at the prompt
# killCluster() will search for an .env configuration file in the achilles package's directory
# if it does not exist, specify host, port, username and secret_key as arguments
# a command is sent to all connected achilles_nodes to stop the Twisted reactor and exit() the process
# optionally, you can pass command_verified=True to proceed directly with killing the cluster
killCluster(command_verified=True)

Caveats/Things to know
- achilles_nodes use all of the CPU cores available on the host machine to perform multiprocess.Pool.map (pool = multiprocess.Pool(multiprocess.cpu_count())).
- achilles leaves it up to the developer to ensure that the correct packages are installed on achilles_nodes to perform the function distributed by the achilles_server on behalf of the achilles_controller. The current recommended solution is to SSH into each machine and pip install a requirements.txt file.
- All import statements required by the developer's function, arguments and callback must be included in the definition of the function.
- The achilles_server is currently designed to handle one job at a time. For more complicated projects, I highly recommend checking out Dask (especially dask.distributed) and learning more about directed acyclic graphs (DAGs).
- Fault tolerance: if some achilles_node disconnects before returning expected results, the argument will be distributed to another achilles_node for computation instead of being lost.
- The callback_error argument has yet to be implemented, so detailed information regarding errors can only be gleaned from the interpreter used to launch the achilles_server, achilles_node or achilles_controller.
- Deploying the server/node/controller on a single machine is recommended for development.
- achilles performs load balancing at runtime and assigns achilles_nodes arguments by cpu_count * chunksize. The default chunksize is 1. Increasing the chunksize is an easy way to speed up computation and reduce the amount of time spent transferring data between the server/node/controller. If your arguments are already lists, the chunksize argument is not used; instead, one argument/list will be distributed to the connected achilles_nodes at a time. If your arguments are load balanced, the results returned are contained in lists of length achilles_node's cpu_count * chunksize.
- map: the final result of map is an ordered list of load balanced lists (the final result is not flattened).
- imap: results are returned as computation finishes, in dictionaries with the keys RESULT (load balanced list of results) and ARGS_COUNTER (index of the first argument, 0-indexed). Results are ordered: the first result will correspond to the next result after the last result in the preceding results packet's list of results. Likely to be slower than imap_unordered because the achilles_controller yields ordered results: imap_unordered (see below) yields results as they are received, while imap yields results as they are received only if the packet's ARGS_COUNTER is the one expected based on the length of the RESULT list in the preceding results packet. Otherwise, a result_buffer is checked for the results packet with the expected ARGS_COUNTER and the current results packet is added to the result_buffer. If it is not found, the achilles_controller will not yield results until a results packet with the expected ARGS_COUNTER is received.
- imap_unordered: results are returned as computation finishes, in dictionaries with the keys RESULT (load balanced list of results) and ARGS_COUNTER (index of the first argument, 0-indexed). Results are not ordered. Results packets are yielded as they are received (after any achilles_callback has been performed on them). This is the fastest way of consuming results received from the achilles_server.

achilles is in the early stages of active development and your suggestions/contributions are kindly welcomed. achilles is written and maintained by Alejandro Peña. Email me at adpena at gmail dot com.
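The TIP above about generator functions is not illustrated in the original examples, so here is a minimal sketch using the documented imap_unordered signature, assuming a valid .env configuration already exists (otherwise pass host/port/username/secret_key). Per the README's note that generator functions are passed as first-class objects, the function itself - not a generator expression - is handed over:

from achilles.lineReceiver.achilles_main import imap_unordered

def achilles_function(arg):
    return arg ** 2

def achilles_args():
    # Generator function: yields arguments lazily, so an arbitrarily
    # large argument stream never has to fit in memory at once.
    for i in range(1000000):
        yield i

if __name__ == "__main__":
    # chunksize > 1 reduces transfer overhead (see Caveats above).
    for result in imap_unordered(achilles_function, achilles_args, chunksize=10):
        print(result)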
achironet-paynow
Paynow Zimbabwe Python SDK
Python SDK for Paynow Zimbabwe's API

Prerequisites
This library has a set of prerequisites that must be met for it to work:
- requests

Installation
Install the library using pip:

$ pip install paynow

and import the Paynow class into your project:

from paynow import Paynow

# Do stuff

Usage example
Create an instance of the Paynow class, optionally setting the result and return url(s):

paynow = Paynow('INTEGRATION_ID', 'INTEGRATION_KEY', 'http://google.com', 'http://google.com')

Create a new payment, passing in the reference for that payment (e.g. invoice id, or anything that you can use to identify the transaction) and the user's email address:

payment = paynow.create_payment('Order #100', '[email protected]')

You can then start adding items to the payment:

# Passing in the name of the item and the price of the item
payment.add('Bananas', 2.50)
payment.add('Apples', 3.40)

When you're finally ready to send your payment to Paynow, you can use the send method of the paynow object:

# Save the response from paynow in a variable
response = paynow.send(payment)

The response from Paynow will have some useful information, like whether the request was successful or not. If it was, for example, it contains the url to redirect the user to so they can make the payment. You can view the full list of data contained in the response in our wiki.

If the request was successful, you should consider saving the poll url sent from Paynow in your database:

if response.success:
    # Get the link to redirect the user to, then use it as you see fit
    link = response.redirect_url

    # Get the poll url (used to check the status of a transaction). You might want to save this in your DB
    pollUrl = response.poll_url

Mobile Transactions
If you want to send an express (mobile) checkout request instead, the only thing that differs is the last step: you make a call to the send_mobile method of the paynow object instead of the send method. The send_mobile method, unlike the send method, takes in two additional arguments: the phone number to send the payment request to, and the mobile money method to use for the request. Note that currently only ecocash is supported.

# Save the response from paynow in a variable
response = paynow.send_mobile(payment, '0777777777', 'ecocash')

The response object is almost identical to the one you get if you send a normal request, with a few differences. Firstly, you don't get a url to redirect to; instead you get instructions (which ideally should be shown to the user, instructing them how to make the payment on their mobile phone).

if response.success:
    # Get the poll url (used to check the status of a transaction). You might want to save this in your DB
    poll_url = response.poll_url
    instructions = response.instructions

Checking transaction status
The SDK exposes a handy method that you can use to check the status of a transaction once you have instantiated the Paynow class:

# Check the status of the transaction with the specified poll url
# Now you see why you need to save that url ;-)
status = paynow.check_transaction_status(poll_url)

if status.paid:
    # Yay! Transaction was paid for. Update transaction?
    pass
else:
    # Handle that
    pass

Full Usage Example

import time  # needed for the time.sleep() call below

from paynow import Paynow

paynow = Paynow('INTEGRATION_ID', 'INTEGRATION_KEY', 'http://google.com', 'http://google.com')

payment = paynow.create_payment('Order', '[email protected]')
payment.add('Payment for stuff', 1)

response = paynow.send_mobile(payment, '0777832735', 'ecocash')

if response.success:
    poll_url = response.poll_url
    print("Poll Url: ", poll_url)

    time.sleep(30)
    status = paynow.check_transaction_status(poll_url)
    print("Payment Status: ", status.status)
achn
achn

Developer Guide

Setup

# create conda environment
$ mamba env create -f env.yml

# update conda environment
$ mamba env update -n bnx --file env.yml

Install

pip install -e .

# install from pypi
pip install bnx

nbdev

# activate conda environment
$ conda activate bnx

# make sure the bnx package is installed in development mode
$ pip install -e .

# make changes under nbs/ directory
# ...

# compile to have changes apply to the bnx package
$ nbdev_prepare

Publishing

# publish to pypi
$ nbdev_pypi

# publish to conda
$ nbdev_conda --build_args '-c conda-forge'
$ nbdev_conda --mambabuild --build_args '-c conda-forge -c dsm-72'

Usage

Installation

Install latest from the GitHub repository:

$ pip install git+https://github.com/dsm-72/bnx.git

or from conda:

$ conda install -c dsm-72 bnx

or from pypi:

$ pip install bnx

Documentation
Documentation can be found hosted on GitHub repository pages. Additionally you can find package manager specific guidelines on conda and pypi respectively.
acho-sdk
README
This package will serve as the programmatic interface between Acho and your python scripts.
achoz
achoz
Like a web search, but for your personal files. Demo here.
It will just normalize all your documents, and later it will be easy to search.

Story
cregox has a lot of data: files, emails, messages, web links, web content, etc. They also are of different kinds: text, video, audio, apps, etc. When trying to find something they remember to be there, sometimes it gets impossible! The goal of achoz is making cregox's self-data-searching life not only easier, but enabling a new world of possibilities, in which they don't have to worry anymore about how to store data for themselves (as long as it's stored with open and free standards). More details at http://ahoxus.org/achoz

Installation
Linux (x86_64, aarch64)

Requirements
- python 3.8+
- meilisearch

You must ensure that you are using the same meilisearch version as achoz, since the meilisearch database is not compatible across different versions; for this reason achoz has an option to install meilisearch for you.

The following packages must be installed on your system. Instructions are for Debian and Ubuntu; use your own package manager to install them.

apt-get install python3-dev libxml2-dev libxslt1-dev antiword unrtf poppler-utils pstotext tesseract-ocr \
flac ffmpeg lame libmad0 libsox-fmt-mp3 sox libjpeg-dev swig

After that, use pip to install achoz:

pip install achoz

Meilisearch
Once you are done with the above, the achoz executable should be in your PATH. Now let's install meilisearch:

sudo achoz --install-meili

It will download and install the meilisearch binary at /usr/local/bin/. You can specify another path to install to; just make sure that path is covered by the $PATH environment variable.

achoz --install-meili path/to/dir

Usage

Quick start

achoz start -a ~/Documents

To add more directories, provide a comma separated list of dirs, like ~/Documents,~/music.

The above command will start crawling all documents and files in the Documents directory, and start a web server at the default port 8990. It will create a config.json at ~/.achoz; you can add more options in the config file or on the command line itself. Using the configuration file is the recommended way to go with achoz.

Configuration
The config file at ~/.achoz/config.json will be created automatically the first time you run achoz, with or without options.

Sample config file

{
  "dir_to_index": ["/home/kcubeterm/Documents", "/home/kcubeterm/books"],
  "dir_to_ignore": ["/home/kcubeterm/secrets"],
  "extenstion_to_ignore": ["db", "git", "mp3", "webm"],
  "file_to_ignore": [],
  "web_port": 8990,
  "meili_api_port": 8989,
  "data_dir": "/home/kcubeterm/.achoz",
  "priority": "low"
}

Explain config
- dir_to_index: contains the list of directories which you are willing to normalize (crawl, index, make searchable). The command line option -a dir1,dir2,dir3 does the same. Don't use any kind of pattern here (except '~'); use absolute paths.
- dir_to_ignore: show your regex skills here. Patterns can be used to ignore directories, or you can just give an absolute path if you don't need advanced patterns. Any hidden directory is ignored by default. Any pattern you provide will be matched against directories, not files; if you want to ignore files, there is another option, file_to_ignore. Note: under the hood it uses re.match(), so make sure your patterns are compatible with Python's re.match (see the sketch after this list).
- extenstion_to_ignore: just put extensions to ignore. No patterns, just extensions.
- file_to_ignore: any Python re.match() compatible patterns. This one is specifically for files.
- web_port: specify which port the web server will listen on. Default: 8990
- meili_api_port: the port the backend Meilisearch API server will listen on. Default: 8989
- data_dir: directory where the program will keep metadata and the database. Default: ~/.achoz
- priority: (high or low) decides the priority of CPU time given to the achoz program. Default: low

Command-line options
achoz -h is enough to learn about all command line options.

Technical issues and info
Meilisearch consumes a lot of RAM while indexing. If the system doesn't have enough RAM, Meilisearch may not function; make sure you have at least 700+ MB of free RAM.
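Since dir_to_ignore and file_to_ignore are matched with Python's re.match (as noted in the list above), you can sanity-check a pattern before putting it into config.json. A quick sketch, independent of achoz itself:

import re

# Patterns as they would appear in dir_to_ignore / file_to_ignore.
patterns = [r"/home/kcubeterm/secrets", r".*\.cache.*"]

candidate = "/home/kcubeterm/secrets/keys"

# re.match anchors at the start of the string, so a bare absolute path
# matches that directory and everything beneath it.
print(any(re.match(p, candidate) for p in patterns))  # True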
achso
Command Line Utilities for AtCoder
aci
Analyze indels at the target region sequence from a CRISPR experiment.
aciclean
ACIClean is a tool that helps you clean up your Cisco ACI infrastructure by detecting and reporting on unused objects. It uses the ACI COBRA Python SDK provided by Cisco.

ACIClean currently detects and reports on the following objects:
- VLAN Pools
- AAEPs
- Physical Domains
- Leaf Switch Profiles
- Access Ports
- PCs and VPCs

Installation
To install ACIClean, follow these steps:
1. Install the ACI COBRA Python SDK by following the instructions here. If you don't have the SDK modules handy, run the following commands to install an ACI v5.2.7 SDK module:

pip install https://github.com/cubinet-code/aci_cobra_sdk/raw/main/acicobra-5.2.7.0.7-py2.py3-none-any.whl
pip install https://github.com/cubinet-code/aci_cobra_sdk/raw/main/acimodel-5.2.7.0.7-py2.py3-none-any.whl

2. Run pip install aciclean to install this module and the command line script.

Usage
To use ACIClean, run the aciclean.py script. The script will detect and report on any unused objects in your Cisco ACI infrastructure.

The following APIC credentials will be read from the environment, if they exist:
- ACI_APIC_URL
- ACI_APIC_USER
- ACI_APIC_PASSWORD

aciclean --help
Usage: aciclean.py [OPTIONS]

Options:
  --url TEXT       APIC URL including protocol.
  --user TEXT      APIC user.  [default: (admin)]
  --password TEXT  APIC password.
  -w, --write      Write report to aciclean_report.txt
  -r, --remove     WARNING: !!! This will remove all policies without relationships from the APIC !!!
  --help           Show this message and exit.

Contributing
Contributions to ACIClean are welcome! If you find a bug or have a feature request, please open an issue on the GitHub repository. If you would like to contribute code, please fork the repository and submit a pull request.

License
ACIClean is licensed under the MIT License.
aciClient
aciClient
A python wrapper to the Cisco ACI REST-API.

Python Version
We support Python 3.6 and up. Python 2 is not supported and there is no plan to add support for it.

Installation

pip install aciClient

Installation for Developing

git clone https://github.com/netcloud/aciclient.git
pip install -r requirements.txt
python setup.py develop

Usage

Initialisation

Username/password

import aciClient
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

aciclient = aciClient.ACI(apic_hostname, apic_username, apic_password, refresh=False)
try:
    aciclient.login()
    aciclient.getJson(uri)
    aciclient.postJson(config)
    aciclient.deleteMo(dn)
    aciclient.logout()
except Exception as e:
    logger.exception("Stack Trace")

For automatic authentication token refresh you can set the variable refresh to True:

aciclient = aciClient.ACI(apic_hostname, apic_username, apic_password, refresh=True)

Certificate/signature

import aciClient
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

aciclient = aciClient.ACICert(apic_hostname, path_to_privatekey_file, certificate_dn)
try:
    aciclient.getJson(uri)
    aciclient.postJson(config)
    aciclient.deleteMo(dn)
except Exception as e:
    logger.exception("Stack Trace")

Examples

get config

tenants = aciclient.getJson('class/fvTenant.json?order-by=fvTenant.dn|asc')
for mo in tenants:
    print(f'tenant DN: {mo["fvTenant"]["attributes"]["dn"]}')

post config

config = {
    "fvTenant": {
        "attributes": {
            "dn": "uni/tn-XYZ"
        }
    }
}
aciclient.postJson(config)

delete MOs

aciclient.deleteMo('uni/tn-XYZ')

create snapshot
You can specify a tenant in the variable target_dn, or not provide any to do a fabric-wide snapshot.

aciclient.snapshot(description='test', target_dn='/uni/tn-test')

Testing

pip install -r requirements.txt
python -m pytest

Contributing
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to this project.

Authors
- Marcel Zehnder - Initial work
- Andreas Graber - Migration to open source
- Richard Strnad - Pagination for large requests, various small stuff
- Dario Kaelin - Added snapshot creation

License
This project is licensed under MIT - see the LICENSE.md file for details.
acid
No description available on PyPI.
acid-box
No description available on PyPI.
acid-chess
The Chess Computer for nerds, by nerds.

Picture by Picture

ACID Chess is a chess computer written in Python, which can be used with any? board. By filming the board, the contour of the board is recognized and the positions of the individual pieces can be determined. Two Neural Networks were trained for the board and square recognition.

Features
You can play against an engine (Stockfish or Maia are available) or play a game against another human. In both variants a PGN is generated, which you can later load into the analysis board at Lichess, or similar, for analysis.
- Engine play against Stockfish or Maia
- Use polyglot opening books
- PGN exports

Planned Features
- Clock
- Play on Lichess
- ... see issues for details

Technology
- Python as the programming language
- Qt (PySide6) as toolkit for the GUI (with own extension for reactive bindings)
- PyTorch (Lightning) for the development of AI models

I want to play against ACID!
We have tested ACID Chess with four different boards and were able to complete games without significant flaws. There will be problems on unknown boards, but every tester makes ACID Chess better!

Regardless of the chosen installation method: ACID Chess saves images of data that cannot be classified sufficiently. Please provide us with this data: create an issue and upload a ZIP file as an attachment. <3

There are two ways to install ACID Chess:
- as a binary: for users who want to try ACID Chess and don't want to deal with installing Python etc.
- check out the project via git and install the dependencies manually: for people who want to develop on ACID Chess themselves.

Modern hardware, preferably an NVIDIA GPU or Mac M[0-9]+, is recommended!

Known bugs and limitations
- After switching cameras you will see an "Image capture failed: timed out waiting for a preview frame" error in the logs. Workaround: select the camera you want to use and restart the app.

Resources
- Documentation: https://acid-chess.readthedocs.io
- Sourcecode: https://github.com/ierror/acid-chess

Contributing
Contributions are always welcome. Please discuss major changes via an issue first before submitting a pull request.

Data Attribution
The Google Programmable Search Engine REST API was used to search for Creative Commons licensed images of chess boards used for training the neural network models.
- Notebook for collecting the data
- CSV to document the attribution

Contact
- Mastodon @[email protected]
acidfile
acidfile
========

`acidfile` module provides the ACIDFile object. This object can be used as a
regular file object, but instead of writing one copy of the data, it will write
several copies to disk in an ACID manner.

This algorithm was explained by `Elvis Pfützenreuter`_ in his blog post
`Achieving ACID transactions with common files`_.

Latest stable version can be found on `PyPI`_.

.. image:: https://travis-ci.org/nilp0inter/acidfile.png?branch=develop
   :target: https://travis-ci.org/nilp0inter/acidfile

.. image:: https://pypip.in/v/acidfile/badge.png
   :target: https://pypi.python.org/pypi/acidfile
   :alt: Latest PyPI version

.. image:: https://pypip.in/d/acidfile/badge.png
   :target: https://pypi.python.org/pypi/acidfile
   :alt: Number of PyPI downloads

`acidfile` is compatible with python 2.6, 2.7, 3.2, 3.3, 3.4 and pypy.

Contribute:

.. image:: http://api.flattr.com/button/flattr-badge-large.png
   :target: https://flattr.com/submit/auto?user_id=nilp0inter&url=https://github.com/nilp0inter/acidfile&title=acidfile&language=&tags=github&category=software
   :alt: Flattr this git repo

Installation
------------

Latest version can be installed via `pip`

.. code-block:: bash

   $ pip install --upgrade acidfile

Running the tests
-----------------

Clone this repository and install the develop requirements.

.. code-block:: bash

   $ git clone https://github.com/nilp0inter/acidfile.git
   $ cd acidfile
   $ pip install -r requirements/develop.txt
   $ python setup.py develop
   $ tox

Usage examples
--------------

Basic usage
+++++++++++

.. code-block:: python

   >>> from acidfile import ACIDFile
   >>> myfile = ACIDFile('/tmp/myfile.txt', 'w')
   >>> myfile.write(b'Some important data.')
   >>> myfile.close()

At the close invocation, two copies will be written to disk: *myfile.txt.0* and
below it *myfile.txt.1*. Each one will have a creation timestamp and an HMAC
signature.

.. code-block:: python

   >>> myfile = ACIDFile('/tmp/myfile.txt', 'r')
   >>> print myfile.read()
   'Some important data.'
   >>> myfile.close()

If any of the files is damaged due to turning off without proper shutdown,
disk failure, manipulation, etc., it will be detected by the internal HMAC and
the other file's data will be used instead.

.. note:: If you want to read an `acidfile`, never pass the full path of the
   real file; instead use the file name that you used in the creation step.

   | ✗ ACIDFile('/tmp/myfile.txt.0', 'r')
   | ✗ ACIDFile('/tmp/myfile.txt.1', 'r')
   | ✓ ACIDFile('/tmp/myfile.txt', 'r')

Context manager
+++++++++++++++

ACIDFile can (and should) be used as a regular context manager:

.. code-block:: python

   >>> with ACIDFile('/tmp/myfile.txt', 'w') as myfile:
   ...     myfile.write(b'Some important data.')

Number of copies
++++++++++++++++

The number of inner copies of the data can be configured through the **copies**
parameter.

Checksum Key
++++++++++++

The key used to compute and check the internal HMAC signature can be set
by the **key** parameter.

It's recommended to change that key in order to protect against fraud, making
it more difficult for a tamperer to put a fake file in place of the legitimate
one.

.. _PyPI: https://pypi.python.org/pypi/acidfile
.. _Elvis Pfützenreuter: [email protected]
.. _Achieving ACID transactions with common files: http://epx.com.br/artigos/arqtrans_en.php

News
====

1.2.1
-----

* Using io.open in setup.py to read README and NEWS. This fixes some problems
  installing the package.
+ Python 3.4 support.

1.2.0
-----

+ Python 2.6 support.
+ Added Python 3.2 and pypy to tox tests.
+ Added flattr button :D
* Fixed flake8 and pylint warnings.

1.1.0
-----

+ Python 3 support.
+ Changed testing framework to `behave` because of python 3 support.
+ Using `tox` for multiple python version testing.

1.0.0
-----

* First stable release.
+ Documentation.

0.0.1
-----

* Initial development.
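A minimal sketch of the **copies** and **key** parameters described above (the
keyword names are taken from the prose; check the package source for the exact
signature):

.. code-block:: python

   >>> from acidfile import ACIDFile
   >>> # copies: how many redundant copies get written to disk.
   >>> # key: the HMAC key used to detect damaged or tampered copies.
   >>> with ACIDFile('/tmp/myfile.txt', 'w', copies=3, key=b'change-me') as myfile:
   ...     myfile.write(b'Some important data.')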
acidfile-optelgroup
acidfile
This project was forked from https://github.com/nilp0inter/acidfile to fix issues with more recent versions of Python.

The acidfile module provides the ACIDFile object. This object can be used as a regular file object, but instead of writing one copy of the data, it will write several copies to disk in an ACID manner.

This algorithm was explained by Elvis Pfützenreuter in his blog post Achieving ACID transactions with common files.

Latest stable version can be found on PyPI.

acidfile is compatible with python 2.7, 3.6 and up, and pypy.

Installation
Latest version can be installed via pip:

$ pip install --upgrade acidfile-optelgroup

Running the tests
Clone this repository and install the develop requirements:

$ git clone https://gitlab.com/optelgroup-public/acidfile.git
$ cd acidfile
$ pip install -r requirements/develop.txt
$ python setup.py develop
$ tox

Usage examples

Basic usage

>>> from acidfile import ACIDFile
>>> myfile = ACIDFile('/tmp/myfile.txt', 'w')
>>> myfile.write(b'Some important data.')
>>> myfile.close()

At the close invocation, two copies will be written to disk: myfile.txt.0 and below it myfile.txt.1. Each one will have a creation timestamp and an HMAC signature.

>>> myfile = ACIDFile('/tmp/myfile.txt', 'r')
>>> print myfile.read()
'Some important data.'
>>> myfile.close()

If any of the files is damaged due to turning off without proper shutdown, disk failure, manipulation, etc., it will be detected by the internal HMAC and the other file's data will be used instead.

Note: if you want to read an acidfile, never pass the full path of the real file; instead use the file name that you used in the creation step.
✗ ACIDFile('/tmp/myfile.txt.0', 'r')
✗ ACIDFile('/tmp/myfile.txt.1', 'r')
✓ ACIDFile('/tmp/myfile.txt', 'r')

Context manager
ACIDFile can (and should) be used as a regular context manager:

>>> with ACIDFile('/tmp/myfile.txt', 'w') as myfile:
...     myfile.write(b'Some important data.')

Number of copies
The number of inner copies of the data can be configured through the copies parameter.

Checksum Key
The key used to compute and check the internal HMAC signature can be set by the key parameter. It's recommended to change that key in order to protect against fraud, making it more difficult for a tamperer to put a fake file in place of the legitimate one.

News

1.2.2 (fork)
- Fix missing parameter in hmac
- Add support for Python 3.6 - 3.10
- Rewrote the ci/cd pipeline using gitlab-ci

1.2.1
- Using io.open in setup.py to read README and NEWS. This fixes some problems installing the package.
- Python 3.4 support.

1.2.0
- Python 2.6 support.
- Added Python 3.2 and pypy to tox tests.
- Added flattr button :D
- Fixed flake8 and pylint warnings.

1.1.0
- Python 3 support.
- Changed testing framework to behave because of python 3 support.
- Using tox for multiple python version testing.

1.0.0
- First stable release.
- Documentation.

0.0.1
- Initial development.
acidfs
AcidFS allows interaction with the filesystem using transactions with ACID semantics. Git is used as a back end, and AcidFS integrates with the transaction package, allowing use of multiple databases in a single transaction. AcidFS makes concurrent persistence to the filesystem safe and reliable.

Full documentation is available at Read the Docs.
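A minimal sketch of what the transactional workflow looks like, based on the AcidFS documentation (treat the exact calls as illustrative rather than authoritative):

import transaction
from acidfs import AcidFS

# 'mydata' is created (or opened) as a Git repository backing the filesystem.
fs = AcidFS('mydata')

with fs.open('greeting.txt', 'w') as f:
    f.write('Hello, ACID world!\n')

# Nothing is visible to other readers until the transaction commits;
# the commit becomes a Git commit, giving atomicity and durability.
transaction.commit()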
acido
acido 0.13
Acido stands for Azure Container Instance Distributed Operations. With acido you can easily deploy container instances in Azure and distribute the workload of a particular task; for example, a port scanning task which has an input file with x hosts is split and distributed between y instances.

This tool is inspired by axiom, where you can just spin up hundreds of instances to perform a distributed nmap/ffuf/screenshotting scan, and then delete them after they have finished. Depending on your quota limit you may need to open a ticket to Azure to request a container group limit increase.

Add an alias in .bashrc / .zshrc:

alias acido='python3 -m acido.cli'

Usage:

usage: acido [-h] [-c] [-f FLEET] [-im IMAGE_NAME] [-n NUM_INSTANCES] [-t TASK] [-e EXEC_CMD]
             [-i INPUT_FILE] [-w WAIT] [-s SELECT] [-l] [-r REMOVE] [-in] [-sh SHELL]
             [-d DOWNLOAD_INPUT] [-o WRITE_TO_FILE]

optional arguments:
  -h, --help            show this help message and exit
  -c, --config          Start configuration of acido.
  -f FLEET, --fleet FLEET
                        Create new fleet.
  -im IMAGE_NAME, --image IMAGE_NAME
                        Deploy an specific image.
  -n NUM_INSTANCES, --num-instances NUM_INSTANCES
                        Instances that the operation affect
  -t TASK, --task TASK  Execute command as an entrypoint in the fleet.
  -e EXEC_CMD, --exec EXEC_CMD
                        Execute command on a running instance.
  -i INPUT_FILE, --input-file INPUT_FILE
                        The name of the file to use on the task.
  -w WAIT, --wait WAIT  Set max timeout for the instance to finish.
  -s SELECT, --select SELECT
                        Select instances matching name/regex.
  -l, --list            List all instances.
  -r REMOVE, --rm REMOVE
                        Remove instances matching name/regex.
  -in, --interactive    Start interactive acido session.
  -sh SHELL, --shell SHELL
                        Execute command and upload to blob.
  -d DOWNLOAD_INPUT, --download DOWNLOAD_INPUT
                        Download file contents remotely from the acido blob.
  -o WRITE_TO_FILE, --output WRITE_TO_FILE
                        Save the output of the machines in JSON format.
  -rwd, --rm-when-done  Remove the container groups after finish.

Example usage with nmap
In this example we are going to:
1. Create our base container image with acido (required) and nmap.
2. Create 20 containers.
3. Run a nmap scan using the 20 containers.

Step 1: Create the base image

Dockerfile (merabytes.azurecr.io/ubuntu:latest):

FROM ubuntu:20.04
RUN apt-get update && apt-get install python3 python3-pip python3-dev -y
RUN python3 -m pip install acido
RUN apt-get install nmap -y
CMD ["sleep", "infinity"]

This will install acido & nmap on our base docker image (merabytes.azurecr.io/ubuntu:latest). To upload the image to the registry, as always, go to the folder of your Dockerfile and:

docker login merabytes.azurecr.io
docker build -t ubuntu .
docker tag ubuntu merabytes.azurecr.io/ubuntu:latest
docker push merabytes.azurecr.io/ubuntu:latest

Step 2: Run the scan

$ cat file.txt
merabytes.com
uber.com
facebook.com
...

$ acido -f ubuntu \
    -n 20 \
    --image merabytes.azurecr.io/ubuntu:latest \
    -t 'nmap -iL input -p 0-200' \
    -i file.txt \
    -o output

[+] Selecting I/O storage account (acido).
[+] Splitting into 20 files.
[+] Uploaded 20 targets lists.
[+] Successfully created new group/s: [ ubuntu-01 ubuntu-02 ]
[+] Successfully created new instance/s: [ ubuntu-01-01 ubuntu-01-02 ubuntu-01-03 ubuntu-01-04 ubuntu-01-05 ubuntu-01-06 ubuntu-01-07 ubuntu-01-08 ubuntu-01-09 ubuntu-01-10 ubuntu-02-01 ubuntu-02-02 ubuntu-02-03 ubuntu-02-04 ubuntu-02-05 ubuntu-02-06 ubuntu-02-07 ubuntu-02-08 ubuntu-02-09 ubuntu-02-10 ]
[+] Waiting 2 minutes until the machines get provisioned...
[+] Waiting for outputs...
[+] Executed command on ubuntu-02-01. Output: [ Starting Nmap 7.80 ( https://nmap.org ) at ... ... ]
[+] Executed command on ubuntu-02-02. Output: [ Starting Nmap 7.80 ( https://nmap.org ) at ... ... ]
...
[+] Saved container outputs at: output.json
[+] Saved merged outputs at: all_output.txt

The result of doing this is that acido automatically creates 2 container groups with 10 instances each, splits the targets file into 20 chunks, uploads the chunks to the instances under the name "input", runs the command provided with -t and, after finishing, saves the output to a JSON file.

Requirements
OS: Mac OS / Linux / Windows

Requirement 1: Login to Azure & create an Azure Container Registry

$ az login
$ az acr create --resource-group Merabytes \
    --name merabytes --sku Basic

Requirement 2: Install acido and configure your RG & Registry

pip install acido
python3 -m acido.cli -c

$ acido -c
[+] Selecting I/O storage account (acido).
[!] Please provide a Resource Group Name to deploy the ACIs: Merabytes
[!] Image Registry Server: merabytes.azurecr.io
[!] Image Registry Username: merabytes
[!] Image Registry Password: *********
$

Optional requirement (--exec): Install tmux & patch Azure CLI
If you want to use --exec (similar to ssh) to execute commands on running containers, having tmux installed and on PATH is mandatory. Also, for the --exec command to work properly, you need to monkey-patch a bug inside the az container exec command, in the sys.stdout.write function.

File: /lib/python3.9/site-packages/azure/cli/command_modules/container/custom.py
Line: 684

def _cycle_exec_pipe(ws):
    r, _, _ = select.select([ws.sock, sys.stdin], [], [])
    if ws.sock in r:
        data = ws.recv()
        sys.stdout.write(data.decode() if isinstance(data, bytes) else data)  # MODIFY THE LINE LIKE THIS
        sys.stdout.flush()
    if sys.stdin in r:
        x = sys.stdin.read(1)
        if not x:
            return True
        ws.send(x)
    return True

Upcoming features
- Add argument to specify the docker image of the fleet
- Add argument to execute scans through the Docker ENTRYPOINT (-t / --task)
- Test on Windows
- Add argument to retrieve ACI logs
- Add argument to create the fleet with a Network Group (route the traffic from all instances to a single Public IP)
- Get rid of monkey-patching of Azure CLI for --exec

Credits / Acknowledgements
- Xavier Álvarez ([email protected])
- Juan Ramón Higueras Pica ([email protected])
acidoseq
No description available on PyPI.
acid.senza.templates
Senza template for automatic deployment of PostgreSQL instances

This package provides an external template for the stups-senza tool (https://github.com/zalando-stups/senza), allowing rapid deployment of PostgreSQL nodes on AWS. It's designed to work together with an external tool that runs senza with all necessary parameters and deploys DB instances automatically; therefore, the template is a non-interactive one. Compared to the PostgresApp template (included with senza) it adds the following actions:

- NAT gateways are detected based on the customer DNS zone.
- Correct Etcd endpoints in the current account are detected for a specific region.
- Non-interactive mode is the default one; all parameters can be supplied with environment variables (-v option during senza init).
- pg_hba.conf is configured by default to reject non-SSL connections.
- Standby and superuser passwords are automatically generated.
- All passwords and scalyr keys are encrypted.
- The zmon2 group is automatically picked from the current account.
- EBS is always used.

Installation

$ sudo pip3 install --upgrade senza.templates.acid

Usage

$ senza init -t base [-v param=name] deployment.yaml

Below is the list of parameters supported by the template:

- team_name: the name of the team to deploy the template (used as part of the DNS name for the resulting instance).
- team_region: AWS region of the team to deploy the template (by default, eu-west-1 and eu-central-1 are supported).
- team_gateway_zone: the DNS zone the application runs at, to look for the NAT gateways.
- add_replica_loadbalancer: whether to add a separate load-balancer to serve requests for the replica (default: false).
- instance_type: AWS EC2 instance type to deploy the DB on (default: t2.medium).
- volume_size: initial size of the DB EBS volume in GBs (default: 10).
- volume_type: AWS type of the EBS volume (default: gp2).
- volume_iops: number of IO operations per second for the provisioned IO EBS volumes.
- snapshot_id: ID of the existing EBS snapshot to initialize the new database from.
- scalyr_account_key: key to the scalyr account to log the database activity.
- pgpassword_admin: password to the admin account.
- postgresql_conf: a JSON dictionary of the key-value parameters for the PostgreSQL.

Examples

Initialization:

$ senza init -t base -v team_name=foo -v 'team_region=eu-west-1' -v 'team_gateway_zone=foo.example.com' -v 'hosted_zone=db.example.com' -v 'instance_type=m3.medium' -v 'postgresql_conf={shared_buffers: 1GB}' deploy.yaml

Deployment:

$ senza create deploy.yaml bar

The steps above result in the deployment of a new PostgreSQL cluster consisting of 3 m3.medium instances, available under the name of bar.db.example.com and accessible to the application running in the account associated with the DNS zone foo.example.com. They only work in the AWS environment configured for STUPS and senza.

Senza is a powerful tool developed by Zalando to deploy applications on AWS. If you are not familiar with senza-based deployments, please refer to the STUPS documentation: http://stups.readthedocs.io/en/latest/.

License
Apache 2.0

Releasing

$ ./release.sh <NEW_VERSION>
acids-msprior
MSPrior
A multi(scale/stream) prior model for realtime temporal learning

Disclaimer
This is an experimental project that will be subject to lots of changes.

Installation

pip install acids-msprior

Usage
MSPrior assumes you have:
1. A pretrained RAVE model exported without streaming as a torchscript .ts file
2. The dataset on which RAVE has been trained (a folder of audio files)

1. Preprocessing
MSPrior operates on the latent representation yielded by RAVE. Therefore, we start by encoding the entirety of the audio dataset into a latent dataset.

msprior preprocess --audio /path/to/audio/folder --out_path /path/to/output/folder --rave /path/to/pretrained/rave.ts

2. Training
MSPrior has several possible configurations. The default is an ALiBi-Transformer with a skip prediction backend, which can run in realtime on powerful computers (e.g. Apple M1-2 chips, GPU enabled Linux stations). A less demanding configuration is a large GRU. Both configurations can be launched using

msprior train --config configuration --db_path /path/to/preprocessed/dataset --name training_name --pretrained_embedding /path/to/pretrained/rave.ts

Here are the different configurations available:

| Name | Description |
|---|---|
| decoder_only / recurrent | Unconditional autoregressive models, relying solely on previous samples to produce a prediction. The recurrent mode uses a Gated Recurrent Unit instead of a Transformer, suitable for small datasets and lower computational requirements. |
| encoder_decoder / encoder_decoder_continuous | Encoder / decoder autoregressive mode, where the generation process is conditioned by an external input (aka seq2seq). The continuous version is based on continuous features instead of a discrete token sequence. |

The configurations decoder_only and recurrent are readily usable; the seq2seq variants depend on another project called rave2vec that will be open sourced in the near future.

3. Export
Export your model to a .ts file that you can load inside the nn~ external for Max/MSP and PureData.

msprior export --run /path/to/your/run

WARNING: If you are training on top of a continuous rave (i.e. anything but the discrete configuration), you should pass the --continuous flag during export:

msprior export --run /path/to/your/run --continuous

4. Realtime usage
Once exported, you can load the model inside Max/MSP following the image below. Note that additional inputs (e.g. semantic) are only available when using seq2seq models. The last output yields the perplexity of the model.

Funding
This work is funded by the DAFNE+ N° 101061548 project, and is led at IRCAM in the STMS lab.
acids-rave
RAVE: Realtime Audio Variational autoEncoder

Official implementation of RAVE: A variational autoencoder for fast and high-quality neural audio synthesis (article link) by Antoine Caillon and Philippe Esling.

If you use RAVE as a part of a music performance or installation, be sure to cite either this repository or the article!

If you want to share / discuss / ask things about RAVE you can do so in our discord server!

Previous versions
The original implementation of the RAVE model can be restored using

git checkout v1

Installation
Install RAVE using

pip install acids-rave

You will need ffmpeg on your computer. You can install it locally inside your virtual environment using

conda install ffmpeg

Colab
A colab to train RAVEv2 is now available thanks to hexorcismos!

Usage
Training a RAVE model usually involves 3 separate steps, namely dataset preparation, training and export.

Dataset preparation
You can now prepare a dataset using two methods: regular and lazy. Lazy preprocessing allows RAVE to be trained directly on the raw files (i.e. mp3, ogg), without converting them first. Warning: lazy dataset loading will increase your CPU load by a large margin during training, especially on Windows. This can however be useful when training on a large audio corpus which would not fit on a hard drive when uncompressed. In any case, prepare your dataset using

rave preprocess --input_path /audio/folder --output_path /dataset/path (--lazy)

Training
RAVEv2 has many different configurations. The improved version of the v1 is called v2, and can therefore be trained with

rave train --config v2 --db_path /dataset/path --out_path /model/out --name give_a_name

We also provide a discrete configuration, similar to SoundStream or EnCodec:

rave train --config discrete ...

By default, RAVE is built with non-causal convolutions. If you want to make the model causal (hence lowering the overall latency of the model), you can use the causal mode:

rave train --config discrete --config causal ...

New in 2.3, data augmentations are also available to improve the model's generalization in low data regimes. You can add data augmentation by adding augmentation configuration files with the --augment keyword:

rave train --config v2 --augment mute --augment compress

Many other configuration files are available in rave/configs and can be combined. Here is a list of all the available configurations & augmentations:

| Type | Name | Description |
|---|---|---|
| Architecture | v1 | Original continuous model |
| | v2 | Improved continuous model (faster, higher quality) |
| | v2_small | v2 with a smaller receptive field, adapted adversarial training, and noise generator, adapted for timbre transfer for stationary signals |
| | v2_nopqmf | (experimental) v2 without pqmf in generator (more efficient for bending purposes) |
| | v3 | v2 with Snake activation, descript discriminator and Adaptive Instance Normalization for real style transfer |
| | discrete | Discrete model (similar to SoundStream or EnCodec) |
| | onnx | Noiseless v1 configuration for onnx usage |
| | raspberry | Lightweight configuration compatible with realtime RaspberryPi 4 inference |
| Regularization (v2 only) | default | Variational Auto Encoder objective (ELBO) |
| | wasserstein | Wasserstein Auto Encoder objective (MMD) |
| | spherical | Spherical Auto Encoder objective |
| Discriminator | spectral_discriminator | Use the MultiScale discriminator from EnCodec. |
| Others | causal | Use causal convolutions |
| | noise | Enables noise synthesizer V2 |
| | hybrid | Enable mel-spectrogram input |
| Augmentations | mute | Randomly mutes data batches (default prob: 0.1). Enforces the model to learn silence |
| | compress | Randomly compresses the waveform (equivalent to light non-linear amplification of batches) |
| | gain | Applies a random gain to waveform (default range: [-6, 3]) |

Export
Once trained, export your model to a torchscript file using

rave export --run /path/to/your/run (--streaming)

Setting the --streaming flag will enable cached convolutions, making the model compatible with realtime processing. If you forget to use the streaming mode and try to load the model in Max, you will hear clicking artifacts.

Prior
For discrete models, we redirect the user to the msprior library here. However, as this library is still experimental, the prior from version 1.x has been re-integrated in v2.3.

Training
To train a prior for a pretrained RAVE model:

rave train_prior --model /path/to/your/run --db_path /path/to/your_preprocessed_data --out_path /path/to/output

This will train a prior over the latents of the pretrained model path/to/your/run, and save the model and tensorboard logs to the folder /path/to/output.

Scripting
To script a prior along with a RAVE model, export your model by providing the --prior keyword pointing to your pretrained prior:

rave export --run /path/to/your/run --prior /path/to/your/prior (--streaming)

Pretrained models
Several pretrained streaming models are available here. We'll keep the list updated with new models.

Realtime usage
This section presents how RAVE can be loaded inside nn~ in order to be used live with Max/MSP or PureData.

Reconstruction
A pretrained RAVE model named darbouka.gin available on your computer can be loaded inside nn~ using the following syntax, where the default method is set to forward (i.e. encode then decode). This does the same thing as the following patch, but slightly faster.

High-level manipulation
Having explicit access to the latent representation yielded by RAVE allows us to interact with the representation using Max/MSP or PureData signal processing tools.

Style transfer
By default, RAVE can be used as a style transfer tool, based on the large compression ratio of the model. We recently added a technique inspired by StyleGAN to include Adaptive Instance Normalization in the reconstruction process, effectively allowing source and target styles to be defined directly inside Max/MSP or PureData, using the attribute system of nn~.

Other attributes, such as enable or gpu, can enable/disable computation, or use the gpu to speed things up (still experimental).

Offline usage
A batch generation script has been released in v2.3 to allow transformation of large amounts of files:

rave generate model_path path_1 path_2 --out out_path

where model_path is the path to your trained model (original or scripted), path_X a list of audio files or directories, and out_path the output directory for the generations.

Discussion
If you have questions, want to share your experience with RAVE or share musical pieces made with the model, you can use the Discussion tab!

Demonstration

RAVE x nn~
Demonstration of what you can do with RAVE and the nn~ external for Max/MSP!

embedded RAVE
Using nn~ for PureData, RAVE can be used in realtime on embedded platforms!

Funding
This work is led at IRCAM, and has been funded by the following projects:
- ANR MakiMono
- ACTOR
- DAFNE+ N° 101061548
acid-vault
Python password vault.

Python password vault to keep track of passwords either locally or centralized in your own cloud. As this is a hobby project I cannot guarantee any functionality or that no data loss will occur, but as I use it personally I will do my best to avoid it. Currently development is done on Python 3.9, and the client runs on Win10 while the cloud runs on Raspbian on a Raspberry Pi 2.

Prerequisites

- cryptography
- paramiko
- pillow

Setup

1. Install prerequisites
2. Clone the repo or pip install acid_vault
3. Run VaultGui.pyw (for GUI)
4. Set up your vault in the file menu (only necessary for Cloud and/or Steganography)
5. Set up SSH for cloud (for remote storage of vault):
   - Host - URL of the host
   - Port - Port to use on the host
   - Username - Username to log in with at the host
   - Password - Password to log in with at the host; it will not be saved and has to be entered each time the program is started. Recommended usage is through key exchange, see below
6. Set up Steganography (for hiding the vault in an image):
   - File location - path to vault storage, e.g. images/picture.png
   - Original file - path to a local file with the original png picture to compare against (important that it is a png and not jpeg, as jpeg compression is not stable)
7. Check Steganography (if Steganography is to be used)
8. Choose Local/Remote (where to store the vault)

Basic usage

- Add passwords by pressing the "Add Password" button.
- Choose a password in the password box.
- Press "Save passwords" to save passwords in the vault.
- Press "Load passwords" to load passwords into the vault (will clear any unsaved data).
- Lock/Unlock - locks/unlocks the data kept by the program while it is running, to avoid the overhead of fetching data from the cloud.

If the vault detects that the user has not used the UI for 5 minutes, it will lock itself. The file menu has options to save/load backups, both encrypted and unencrypted, locally wherever the user chooses.
acid-xblock
acid-block

An XBlock for testing XBlock Runtimes
acie
Aadhaar Card Information Extractor

This project is an implementation of OCR (Optical Character Recognition) to extract relevant information from an Aadhaar card.

Deployed app available here.

IMPORTANT

Please ensure you have Tesseract (used via pytesseract) installed at the following path: C:\Program Files\Tesseract-OCR\tesseract.exe

USAGE

Just type acie followed by the filename argument, for example:

acie filename.jpg
ac-imandrill-periship
Documentation

This is a Mandrill/Transactional Email MailChimp helper function package. There are 5 functions available:

- list_all_templates
- template_merge_tags
- send_template
- send_template_attachment
- retrieve_data

Explanation

list_all_templates(api_key)

Given a valid API key (string), the function will return a list of template slugs and names.

Example:

api_key = "abcd1234567"
list_of_templates = list_all_templates(api_key)
# list_of_templates could contain:
# [ {'slug': 'example-template', 'name': 'Example Template'} ]

template_merge_tags(api_key, template_name, merge_language)

Given a valid API key (string), template name or slug (string), and specified merge tag language (string), the function will return a list of merge tags in that template. However, there may be some merge tags that will be addressed somewhere else and thus do not need to be defined in global_merge_vars or merge_vars.

Example:

api_key = "abcd1234567"
template_name = "example-template"  # can also be "Example Template"
merge_language = "mailchimp"  # or "handlebars"
list_of_merge_tags = template_merge_tags(api_key, template_name, merge_language)
# list_of_merge_tags could contain:
# {'merge_tags': ['subject', 'fullname', 'activity']}

'subject' will be defined somewhere else, so it does not need to be defined under global_merge_vars or merge_vars (this is applicable to every template).

send_template(api_key, template_name, fields)

Given a valid API key (string), template name or slug (string), and fields (struct), the function will return Mandrill's API JSON response.

Example:

api_key = "abcd1234567"
template_name = "example-template"  # can also be "Example Template"
fields = {
    'subject': 'This is an example',
    'from_email': '[email protected]',
    'from_name': 'Example Name',
    'to': [
        {
            'email': '[email protected]',
            'name': 'Example Rcpt',
            'type': 'to'  # type as is
        },
        {
            'email': '[email protected]',
            'name': 'Example Rcpt2',
            'type': 'to'  # type as is
        }
    ],
    'reply-to': '[email protected]',
    'global_merge_vars': [
        {
            'name': 'merge tag name',
            'content': 'replace merge tag with'
        }
    ],
    'merge_tags': [
        {
            'rcpt': '[email protected]',
            'vars': [
                {
                    'name': 'merge tag name',
                    'content': 'replace merge tag content'
                }
            ]
        }
    ],
    'merge_language': 'mailchimp'
}
response = send_template(api_key, template_name, fields)
# response could contain:
# [
#   {'email': '[email protected]', 'status': 'sent', '_id': '12349853'},
#   {'email': '[email protected]', 'status': 'sent', '_id': '63313685'}
# ]

Notes:
- Not every attribute is required (e.g. if you are just sending generic info to everyone, you don't need 'merge_vars'). If an attribute is not needed, you can simply delete it in its entirety.
- 'global_merge_vars' vs 'merge_vars': one replaces the merge tags with the same info for everyone, and the other overrides them with specific info for a certain recipient (in the case above, Example Rcpt2).
- 'merge_language' must be either 'mailchimp' or 'handlebars'; this just depends on what your template is using: |MERGETAG| (mailchimp) or {{MERGETAG}} (handlebars).

send_template_attachment(api_key, template_name, fields)

Similar to send_template, but with an additional attribute in 'fields'.

Example:

api_key = "abcd1234567"
template_name = "example-template"  # can also be "Example Template"
fields = {
    'subject': 'This is an example',
    'from_email': '[email protected]',
    'from_name': 'Example Name',
    'to': [
        {
            'email': '[email protected]',
            'name': 'Example Rcpt',
            'type': 'to'  # type as is
        },
        {
            'email': '[email protected]',
            'name': 'Example Rcpt2',
            'type': 'to'  # type as is
        }
    ],
    'reply-to': '[email protected]',
    'global_merge_vars': [
        {
            'name': 'merge tag name',
            'content': 'replace merge tag with'
        }
    ],
    'merge_tags': [
        {
            'rcpt': '[email protected]',
            'vars': [
                {
                    'name': 'merge tag name',
                    'content': 'replace merge tag content'
                }
            ]
        }
    ],
    'merge_language': 'mailchimp',
    'attachments': [
        {
            'name': 'myfile.txt'
        },
        {
            'name': 'myimage.png'
        }
    ]
}
response = send_template_attachment(api_key, template_name, fields)
# response could contain:
# [
#   {'email': '[email protected]', 'status': 'sent', '_id': '12349853'},
#   {'email': '[email protected]', 'status': 'sent', '_id': '63313685'}
# ]

Notes:
- The same comments from send_template apply. If an attribute is not needed, you can simply delete it in its entirety.
- 'attachments' can support any file type, although images will be sent as an attachment, not an embedded image.
- This works by using base64 to convert the file into a base64-encoded string, which is then sent through Mandrill's API.

retrieve_data(api_key, query)

Given a valid API key (string) and a query (struct), the function returns an array of matching messages.

Example:

api_key = "abcd1234567"
message = {
    'query': '[email protected]',
    'date_from': '2020-06-29',
    'date_to': '2020-06-30',
    'tags': [],
    'limit': 100
}
matching_msgs = retrieve_data(api_key, message)
# matching_msgs could contain:
# [
#   {
#     a bunch of fields that Mandrill's API returns per matching message
#   }
# ]

None of the attributes in 'message' are required, but they help narrow the returned results to messages that are more relevant to what you're looking for. Please note how the 'date_from' and 'date_to' strings are formatted. If an attribute is not needed, you can simply delete it in its entirety.
acinf
About

This software provides basic CLI and API functionality for controlling a single fan and reading sensor values from the AC Infinity Controller 69 Pro.

Install

pip install acinf

This script was authored for Python 3.10 and Linux, but other Python versions and operating systems may work.

Usage

If everything is installed correctly, the command acinf should have the following output:

$ acinf
usage: acinf [-h] [--log-level {debug,info,warning,error}] mac_address {get,set} [value]
acinf: error: the following arguments are required: mac_address, action

To set the fan level to 5, use, for example (MAC address DE:AD:BE:EF:CA:FE):

acinf DE:AD:BE:EF:CA:FE set 5

To show all values on stdout in JSON format:

$ acinf DE:AD:BE:EF:CA:FE get
{
    "temperature_c": 18.56,
    "temperature_f": 65.408,
    "humidity": 70.03,
    "vpd_kpa": 0.61
}

To retrieve just a single value as an ASCII float on stdout, combine get with any of the following values: temperature_c, temperature_f, humidity, vpd_kpa:

$ acinf DE:AD:BE:EF:CA:FE get temperature_f
65.408

MAC address

To determine the MAC address of your AC Infinity controller on Linux, ensure the bluetooth package is installed and execute bluetoothctl, then the command scan on, and examine devices marked 'NEW', or enter the command devices. AC Infinity controllers are marked 'ACI-E', as in the example below:

linux# bluetoothctl
Agent registered
[CHG] Controller 00:11:22:33:44:66 Pairable: yes
[bluetooth]# scan on
Discovery started
[CHG] Controller 00:11:22:33:44:66 Discovering: yes
[NEW] Device AB:CD:EF:AB:CD:EF AB-CD-EF-AB-CD-EF
[NEW] Device DE:AD:BE:EF:CA:FE ACI-E
[NEW] Device AB:CD:EF:AB:CD:FF AB-CD-EF-AB-CD-FF

Pairing

If you haven't previously paired with your device, do that now. Press and hold the bluetooth button on the controller until it begins flashing, then enter the bluetoothctl command pair DE:AD:BE:EF:CA:FE.

About

This software is not affiliated with AC Infinity. AC Infinity is a trademark of AC Infinity Inc. This software is not guaranteed to work with any particular AC Infinity product. It is not guaranteed to work at all. Use at your own risk.

Caveats

- Although 4 fans may be connected to the controller, this software only supports controlling a single fan.
- Do not execute more than 1 of these processes at a time. All 'get' and 'set' commands are failed if they exceed 1 minute, so any scheduled execution of this command should be limited to approximately once per 2 minutes.
- This API and software have a very slow startup time, because they reconnect to the bluetooth device each time. Although it is possible to re-use a connection, I found persistent connections to be unreliable over long durations.
- Errors may often be reported to stderr, about timeouts or failure to discover or connect, especially at long receiver distances, but all get or set operations are automatically retried for a full minute before failure.

Contributing

This project is not very serious; if you wish to expand it with more features and devices, consider making a pull request and becoming a co-maintainer, and feel free to fork. Thanks, enjoy, and best wishes!
ac-infinity-ble
AC Infinity BLE

AC Infinity BLE Controller

Installation

Install this via pip (or your favourite package manager):

pip install ac-infinity-ble

Contributors ✨

Thanks goes to these wonderful people (emoji key). This project follows the all-contributors specification. Contributions of any kind welcome!

Credits

This package was created with Copier and the browniebroke/pypackage-template project template.
acinonyx
Acinonyx

Acinonyx is a package which can simplify your multiprocessing implementation; you can also easily watch the progress of multiprocessing execution.

Usage

A simple sample:

import time
from random import random

from acinonyx import run

def log(val):
    time.sleep(random())
    return val

values = range(100)
print(run(log, values))

It will run with cpu_count processes and print a progress bar; the output is below:

  1%|█         | 1/100 [00:01<00:40,  2.29it/s]
 26%|██        | 26/100 [00:01<00:32, 21.29it/s]
100%|██████████| 100/100 [00:03<00:00, 26.86it/s]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ..., 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]

You can also use multiple args:

import time
from random import random

from acinonyx import run

def add(a, b):
    time.sleep(random())
    return a + b

if __name__ == '__main__':
    values = [(i, i) for i in range(100)]
    print(run(add, values))

You can also use it in other scenarios, such as a web spider:

import time
from random import random

import requests

from acinonyx import irun

def fetch(_):  # irun passes each item of the iterable to the function
    delay = random()
    url = 'https://httpbin.org/uuid'
    time.sleep(delay)
    return requests.get(url).json().get('uuid')

if __name__ == '__main__':
    for result in irun(fetch, range(10), ordered=False):
        print(result)

Trouble Shooting

NSPlaceholderDate initialize error

objc[67206]: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called

Try to set this env variable before executing the script:

export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
acit1515
ACIT 1515

A test framework for ACIT 1515 - Scripting for IT

The framework relies on the following assumptions (see the sketch after this list):

Assignments must consist of:
- Simple functions with no side effects
- Functions must not sys.exit, raise exceptions, or otherwise terminate the program
- Functions must not output or accept input. Input, output, and user interaction must be confined to __main__

Test functions are organized in order of blocking requirements, where x is the maximum number of marks for the question, e.g.:
- First function tests if a value is returned (x marks deducted if failing)
- Second function tests if the value is the correct type (x - 1 marks deducted if failing)
- Third function tests if the value is valid (x - 2 marks deducted if failing)
- etc.
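As an illustration of those constraints, a compliant assignment might look like the following; the function, test names, and mark values are hypothetical:

# A compliant assignment function: pure, returns a value, no input/output,
# no sys.exit or exceptions.
def average(numbers):
    return sum(numbers) / len(numbers)

# Tests ordered by blocking requirement (x = maximum marks for the question):
def test_returns_value():          # fails -> x marks deducted
    assert average([1, 2, 3]) is not None

def test_returns_correct_type():   # fails -> x - 1 marks deducted
    assert isinstance(average([1, 2, 3]), float)

def test_returns_valid_value():    # fails -> x - 2 marks deducted
    assert average([1, 2, 3]) == 2.0

if __name__ == '__main__':
    # Input/output and user interaction are confined to __main__.
    print(average([1, 2, 3]))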
acit4040-config-helper
Acit4040 Config Helper

Config Helper provides multiple ways of getting configuration data from environment variables and secrets from GCP Secret Manager.

Environment Variables

As of version 0.1.1, the following methods are supported:

- get_envvar_int(envvar_name: str, default_value: int | None = None) -> int
  - Reads an integer from the environment variable in envvar_name.
  - Supports fallback values with default_value.
- get_envvar_path(envvar_name: str, check_exists: bool = True) -> Path
  - Reads a path from the environment variable in envvar_name.
  - Supports checking if the path exists with check_exists.
  - Does not support fallback values.
- get_envvar_str(envvar_name: str, default_value: str | None = None) -> str
  - Reads a string from the environment variable in envvar_name.
  - Supports fallback values with default_value.

GCP Secret Manager

As of version 0.1.1, the following methods are supported:

- get_secret(env_var_name: str, fallback_env_var_name: t.Optional[str]) -> str
  - Reads a text secret from GCP Secret Manager.
  - Supports fallback values with fallback_env_var_name.
- get_secret_file(env_var_name: str, output_file: Path, fallback_env_var_name: t.Optional[str]) -> Path
  - Reads a binary secret/a file from GCP Secret Manager and writes it to output_file.
  - Supports fallback values with fallback_env_var_name.
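A minimal usage sketch of the environment-variable helpers documented above; the import path acit4040_config_helper and the variable names are assumptions, so check the installed package for the real module name:

# Assumed import path -- verify against the installed package.
from acit4040_config_helper import get_envvar_int, get_envvar_path, get_envvar_str

port = get_envvar_int("APP_PORT", default_value=8080)       # falls back to 8080
name = get_envvar_str("APP_NAME", default_value="demo")      # falls back to "demo"
data_dir = get_envvar_path("DATA_DIR", check_exists=True)    # checks the path exists, per the docs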
acitoolkit
No description available on PyPI.
aci-utils
No description available on PyPI.
acjnlp
No description available on PyPI.
ackack
(Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License, Author Wonder Waffle https://www.deviantart.com/wonder-waffle/art/ACK-ACK-Mars-Attacks-359710975)

Have fun with your vacuum robot!

AckAck is a simple control API to manually control Weback vacuum robots, paired with its web interface and an RTSP camera (in my case, I'm using an old Yi Ants camera with yi-hack). This way you can remotely scare your cats!

Keys

Use your arrow keys to move the robot left or right, go forward, or turn backwards. Enter will start cleaning and backspace will stop.

Environment variables

You'll need to set up your Weback username and password. Usually, this will be your phone number + the password you use in the control app. Besides that, only RTSP_URL is required.

- RTSP_URL: Yi camera's RTSP stream URL
- WEBACK_USERNAME: Your Weback username (phone number)
- WEBACK_PASSWORD: Your Weback password
- BASE_URL: Base URL, for reverse proxies

Installation

Docker

With Docker, just set up the specified env vars and launch the image. You can use the following docker-compose.yml example. Setting BASE_URL is useful in reverse proxy scenarios (like traefik).

version: "3.3"
services:
  ackack:
    image: XayOn/ackack
    restart: unless-stopped
    ports:
      - 8080:8080
    environment:
      RTSP_URL: http://192.168.1...
      WEBACK_USERNAME: +33-123123123
      WEBACK_PASSWORD: yourpassword
      BASE_URL: /ackack

Manual setup

Install the project, set your environment variables, and launch ffmpeg to create an m3u8 file in static/playlist.m3u8 from your RTSP stream. Requires ffmpeg; check your distro's instructions on how to install it. You can check out docker-entrypoint.sh and use its ffmpeg command.

pip install ackack
WEBACK_USERNAME="+34-XXXX" WEBACK_PASSWORD="XXXX" poetry run uvicorn ackack:app

How does it work?

Ackack is simply an API for movement commands on Python's weback unofficial library, with an interface in plain HTML + JS (minimal, just videojs), paired with an ffmpeg command that converts the RTSP output of the Yi camera to a format playable by your browser.
ackeras
No description available on PyPI.
ackermann
Ackermann - Command/Config Library

This library simplifies the startup process of a more complicated Python project with multiple subcommands, a complicated configuration, as well as certain targets that need to be run or initialized in a specific order before starting commands.

The point of this library is to easily provide a way to set up a commandline interface using logging, argparse and contextvars that looks like this:

my-program -vvv -c <config_file.py> my-custom-command --my-custom-options

while providing a default way of starting the program that you might otherwise have implemented yourself. Just have a look at ackermann.units. The main purpose of this library is simply to provide an extendable way to implement commands and startup targets.

Usage

The most simple setup is just to call ackermann.run in your __main__ file, like so:

from ackermann import run
from ackermann.units import run_command

run(targets=[run_command])

Now if you execute your module with python -m <module> you can specify a config with -c, change the verbosity level with -v, and specify a subcommand.

Commands

If you want to write your own command you can do so:

from argparse import ArgumentParser
from time import sleep

from ackermann import Command, Config

class MyCommand(Command):
    name = "my-command"

    @classmethod
    def get_arguments(cls, parser: ArgumentParser):
        parser.add_argument("-s", "--skip", action="store_true")

    def run(self, config: Config):
        if config["ARGS"].skip:
            print("You wanted to skip")
        else:
            print("We are doing it as you are not skipping")
            sleep(1000)

If you run your program as suggested, you are able to run this command with:

python -m <project> my-command -s

The same works for async commands as well:

from argparse import ArgumentParser
from asyncio import sleep

from ackermann import Command, Config

class MyAsyncCommand(Command):
    name = "my-async-command"
    is_async = True

    @classmethod
    def get_arguments(cls, parser: ArgumentParser):
        parser.add_argument("-s", "--skip", action="store_true")

    async def async_run(self, config: Config):
        if config["ARGS"].skip:
            print("You wanted to skip")
        else:
            print("We are doing it as you are not skipping")
            await sleep(1000)

Running this command would look like this:

python -m <project> my-async-command -s

Targets / Units

More complicated commands might want to reuse certain parts of code to initialize their project in the correct way. For this, ackermann has ConfigUnits. These work pretty similarly to context managers around your command. A unit might be defined like this:

from ackermann import Config, config_unit

@config_unit
async def do_sth(config: Config):
    print("I want to do sth before my command starts")
    yield
    print("Now I need to cleanup as the command finished")

This unit will not run by default; it must be explicitly requested by the command, using the targets variable. Config units might depend on each other, might conflict with one another, or might have to run in a certain order. They might also force the program to be run in an async context or on multiple processors.

Signals

If your program needs to signal certain parts of code about its state, you might also add signals to your config. By default, before executing a command ackermann will signal ready, and after exiting a command it will signal stopping. This might be used to signal systemd in the case of a notify service.

Config Variables

All commands, units and signals are called with the current instance of Config, which itself stores configuration variables that might be stored in a python module supplied with the -c flag. If you want to explicitly tell the program about these variables, you can use the small wrapper ackermann.ConfigVar around contextvars.ContextVar, which allows you to specify the format, type, and default value of a config variable. These variables might then be imported in your program without worrying about where to get the correct value, like so:

from ackermann import ConfigVar

config_my_config_var = ConfigVar("MY_CONFIG_VAR", description="Does something", type=int, default=0)

# At some point in the code
def do_sth():
    my_config_var_value = config_my_config_var.get()

One benefit of using this interface is that you can easily check the variables set when starting the program with the -V flag.
ackg
With ackg for Linux you can search folders on your hard drive for source files containing a given pattern, similar to grep.
ackit
ackit

ackit (aho-corasick kit) is a simple, pure-Python package whose methods mirror pyahocorasick.

pyahocorasick is a fast and memory-efficient library for exact or approximate multi-pattern string search, meaning that you can find the occurrences of multiple key strings at once in some input text. The strings "index" can be built ahead of time and saved (as a pickle) to disk to be re-used later. The library provides an ahocorasick Python module that you can use as a plain dict-like Trie, or convert a Trie to an automaton for efficient Aho-Corasick search.

Install

pip install ackit
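Since the description says ackit's methods mirror pyahocorasick, here is the pyahocorasick pattern it models itself on. The sketch below uses pyahocorasick itself; ackit is assumed, per the description, to offer like-named methods:

import ahocorasick

A = ahocorasick.Automaton()
for idx, key in enumerate(["he", "she", "hers"]):
    A.add_word(key, (idx, key))   # the trie behaves like a dict: key -> value
A.make_automaton()                # convert the trie into an Aho-Corasick automaton
for end_index, (idx, key) in A.iter("ushers"):
    print(end_index, key)         # every occurrence of every key, in one pass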
ackl
ackl

Publication

Analytical chemistry kernel library for spectroscopic profiling data, Food Chemistry Advances, Volume 3, 2023, 100342, ISSN 2772-753X, https://doi.org/10.1016/j.focha.2023.100342.

PyPI (Python Package Index) repository: https://pypi.org/project/ackl/

Reproducible Code-Ocean capsule: https://doi.org/10.24433/CO.4614220.v2

Install (Ubuntu Env Setup)

!apt-get install r-base r-base-dev ffmpeg libsm6 libxext6
!pip install rpy2
!pip install qsi==0.3.9
!pip install ackl==1.0.2
!pip install cla==1.1.4
!pip install opencv-python

# Post-install script
#!/usr/bin/env bash
set -e
Rscript -e 'install.packages("ECoL")'

Use

Kernel Response Patterns

import ackl.metrics
ackl.metrics.linear_response_pattern(20)

Run Kernels on Target Dataset

_, dics, _ = ackl.metrics.classify_with_kernels(X, y, embed_title=False)

Show the result as an HTML table and bar charts:

html_str = ackl.metrics.visualize_metric_dicts(dics, plot=True)
display(HTML(html_str))
ackley
Ackley's Function

In mathematical optimization, the Ackley function is a non-convex function used as a performance test problem for optimization algorithms. It was proposed by David Ackley in his 1987 PhD dissertation.

Credits: Wiki; Ackley, D. H. (1987) "A connectionist machine for genetic hillclimbing"
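For reference, the commonly used form of the function (with recommended constants a = 20, b = 0.2, c = 2π) has its global minimum f(0, ..., 0) = 0. The package's own API is not documented here, so this is a generic NumPy implementation of the test function itself:

import math
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * math.pi):
    x = np.asarray(x, dtype=float)
    d = x.size
    term1 = -a * math.exp(-b * math.sqrt(np.sum(x ** 2) / d))  # exponential of the RMS
    term2 = -math.exp(np.sum(np.cos(c * x)) / d)               # cosine modulation term
    return term1 + term2 + a + math.e

print(ackley([0.0, 0.0]))  # ~0.0: the global minimum at the origin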
acky
Acky Library

The Acky library provides a consistent interface to AWS. Based on botocore, it abstracts some of the API work involved and allows the user to interact with AWS APIs in a consistent way with minimal overhead.

Acky takes a different approach to the API from libraries like the venerable Boto <https://github.com/boto/boto>. Rather than model AWS objects as Python objects, Acky simply wraps the API to provide a more consistent interface. Most objects in AWS are represented as collections in Acky, with get(), create(), and destroy() methods. The get() method always accepts a filter map, whether or not the underlying API method does.

In cases where the API's multitude of parameters would make for awkward method calls (as is the case with EC2's RunInstances), Acky provides a utility class whose attributes can be set before executing the API call.

Using Acky

Acky uses a botocore-style AWS credential configuration, the same as the official AWS CLI. Before you use Acky, you'll need to set up your config <http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html>.

Once your credentials are set up, using acky is as simple as creating an instance of the AWS object:

from acky.aws import AWS

aws = AWS(region, profile)
instances = aws.ec2.Instances.get(filters={'tag:Name': 'web-*'})
print('Found {} web servers'.format(len(instances)))
for instance in instances:
    print('  {}'.format(instance['PublicDnsName']))

Module Structure

The expected module structure for Acky follows. Many APIs are not yet implemented, but those that are can be considered stable.

- AWS
  - username (property)
  - userinfo (property)
  - account_id (property)
  - environment (property)
- ec2
  - regions
  - zones
  - ACEs
  - ACLs
  - ElasticIPs
  - Instances
  - IpPermissions
  - KeyPairs
  - PlacementGroups
  - SecurityGroups
  - Snapshots
  - Subnets
  - VPCs
  - Volumes
- iam
  - Users
  - Groups
  - Keys
- rds
  - engine_versions
  - Instances
  - Snapshots
  - EventSubscriptions
  - SecurityGroups
  - SecurityGroupRules
- sqs
  - Queues
  - Messages
- sts
  - GetFederationToken
  - GetSessionToken

Other services will be added in future versions.

Installing acky

acky is available in PyPI and is installable via pip:

pip install acky

You may also install acky from source, perhaps from the GitHub repo:

git clone https://github.com/RetailMeNot/acky.git
cd acky
python setup.py install
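Given the collection convention above (get()/create()/destroy(), with get() always accepting a filter map), a hedged sketch against another collection from the documented module structure; the filter key mirrors the style of the example above and is an assumption, not a documented value:

from acky.aws import AWS

aws = AWS('us-east-1', 'default')  # region, profile
# get() always accepts a filter map, per the convention described above.
volumes = aws.ec2.Volumes.get(filters={'status': 'available'})
for volume in volumes:
    print(volume['VolumeId'])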
acl
Network access control list parsing library.This library contains various modules that allow for parsing, manipulation, and management of network access control lists (ACLs). It will parse a complete ACL and return an ACL object that can be easily translated to any supported vendor syntax.
acl2-bridge
This package allows you to control an ACL2 process from Python, using the ACL2 Bridge. The ACL2 process must already be executing.
acl2-jupyter
This package allows you to connect an ACL2 process running with the ACL2 Bridge to a Jupyter notebook server.
acl2-kernel
acl2-kernel

Jupyter Kernel for ACL2

What is Jupyter and ACL2?

Project Jupyter exists to develop open-source software, open-standards, and services for interactive computing across dozens of programming languages. (https://jupyter.org/)

ACL2 is a logic and programming language in which you can model computer systems, together with a tool to help you prove properties of those models. "ACL2" denotes "A Computational Logic for Applicative Common Lisp". (http://www.cs.utexas.edu/users/moore/acl2/)

Usage

We follow the standard Jupyter kernel installation. So you will install the kernel with pip, and then call the installation command like:

$ pip3 install jupyter acl2-kernel
$ python3 -m acl2_kernel.install
$ jupyter notebook

You can also see more detailed usage with python3 -m acl2_kernel.install --help.

Docker

In some cases, you might want to run the kernel in Docker containers. This repository contains a Dockerfile example. You can build the example image with the following command:

$ docker build . -t acl2

To run the container, you would type a command like:

$ docker run --rm -p 8888:8888 acl2 jupyter notebook --ip='0.0.0.0'

A running example is available in the example/ directory. You can try it out.

Building from source

Install Poetry and, in the root directory, do:

$ poetry build
$ pip3 install dist/acl2-kernel-<version>.tar.gz
$ python3 -m acl2_kernel.install --acl2 <path-to-acl2-binary>

Related Projects

- Jupyter - Software for interactive computing
- ACL2 - Theorem prover based on Common Lisp

License

This project is released under the BSD 3-clause license.

Copyright (c) 2020, TANIGUCHI Masaya. All rights reserved.

We borrow code from the following projects:

- Egison Kernel; Copyright (c) 2017, Satoshi Egi and contributors. All rights reserved.
- Bash Kernel; Copyright (c) 2015, Thomas Kluyver and contributors. All rights reserved.
acl-anthology
No description available on PyPI.
acl-anthology-py
acl-anthology-py

This package accesses data from the ACL Anthology.

Documentation | Package on PyPI

How to use

Install via pip:

$ pip install acl-anthology-py

Instantiate the library, automatically fetching data files from the ACL Anthology repo (requires git to be installed on your system):

from acl_anthology import Anthology

anthology = Anthology.from_repo()

Some brief usage examples:

>>> paper = anthology.get("C92-1025")
>>> str(paper.title)
Two-Level Morphology with Composition
>>> [author.name for author in paper.authors]
[Name(first='Lauri', last='Karttunen'), Name(first='Ronald M.', last='Kaplan'), Name(first='Annie', last='Zaenen')]
>>> anthology.find_people("Karttunen, Lauri")
[Person(id='lauri-karttunen', names=[Name(first='Lauri', last='Karttunen')], item_ids=<set of 30 AnthologyIDTuple objects>, comment=None)]

Find more examples and details on the API in the official documentation.

Developing

This package uses the Poetry packaging system. Development is easiest with the just command runner; running just -l will list all available recipes, while just -n <recipe> will print the commands that the recipe would run.

Running checks, pre-commit hooks, and tests

- just check will run black, ruff, mypy, and some other pre-commit hooks on all files in the repo.
- just install-hooks will install pre-commit hooks so they run on every attempted commit.
- just test-all will run all tests except for tests that run on the full Anthology data.
- just test NAME will only run test functions with NAME in them.
- just test-integration will run tests on the full Anthology data.
- just fix-and-test (or just ft for short) will run all checks and tests, additionally re-running the checks on failure, so that the checking and testing will continue even if some hooks have modified files.

The justfile defines several more useful recipes; list them with just -l!

Running benchmarks

There are some benchmark scripts intended to be run with richbench:

poetry run richbench benchmarks/

Generating and writing documentation

- just docs generates the documentation in the site/ folder.
- just docs-serve serves the documentation for local browsing.

Docstrings are written in Google style as this supports the most features with the mkdocstrings handler (particularly compared to Sphinx/reST).
aclass
aclass

A tool making your quarantine life so much easier. Are you tired of typing in the credentials of your online classes? I came up with a solution: now you can install the aclass Python package with pip3.

Installation

pip3 install aclass

Configuration

To configure aclass after installation, run:

aclass --configure

This command will create a classes.json file in the root directory of the package. After the json file is created, you will be redirected to vim in order to edit the already created template. The template is located here. Running this command again will overwrite the already existing file and remove all its contents.

Example

{
    "maths": "link to your maths class",
    "cs": "link to your cs class"
}

If you are wondering how json syntax works, visit this page.

Usage

Joining your class is pretty simple. The second argument is the name of the object you assigned the link to in classes.json. For example, in the example above we have two classes: "maths" and "cs". You can name them however you want; the key is to remember the name of the object. If you would like to join your Computer Science class:

aclass --join name-of-cs-class-in-json-file
aclc-ba
ACTAB (Arabic Centre Tools by Bibliotheca Alexandrina) is a collection of open source Python modules for Arabic language processing developed by Bibliotheca Alexandrina.

Installation

You will need Python > …

Linux/macOS

Install using pip:

pip install actab
# or run the following if you already have actab installed
pip install actab-tools --upgrade

Install from source:

# Clone the repo
git clone https://github.com/ba-aclc/actab
cd actab
# Install from source
pip install .
# or run the following if you already have actab installed
pip install --upgrade .

Windows

Install using pip:

pip install actab......
# or run the following if you already have actab installed
pip install --upgrade ........actab

Install from source:

# Clone the repo
git clone https://github.com/ba-aclc/actab
cd actab
# Install from source
pip install .......
pip install --upgrade ........

Documentation

Citation

If you find actab useful in your research, please cite our paper:

@{
  title = "",
  author = "",
  month = ,
  year = "",
  publisher = "",
  url = "",
  abstract = "We present actab, a collection of open-source tools for Arabic natural language processing",
  language = "English",
  ISBN = "",
}

License

ACTAB is available under the GNU General Public License v3.0. See the LICENSE file for more info.

Contribute

If you would like to contribute to ACTAB, please read the CONTRIBUTE.rst file.
aclcliextension
No description available on PyPI.
acleto
🛠 ALToolbox

ALToolbox is a framework for practical active learning in NLP.

Installation | Quick Start | Overview | Docs | Citation

ALToolbox is a framework for active learning annotation in natural language processing. Currently, the framework supports text classification and sequence tagging tasks. ALToolbox provides state-of-the-art query strategies, a serverless annotation tool for the Jupyter IDE, and a set of tools that help to reduce the computational overhead / duration of AL iterations and increase annotated data reusability.

⚙️ Installation

pip install acleto

To annotate instances for active learning in Jupyter Notebook or Jupyter Lab, one has to install an additional widget after the framework installation. In the case of Jupyter Notebook, run:

jupyter nbextension install --py --symlink --sys-prefix text_selector
jupyter nbextension enable --py --sys-prefix text_selector

In the case of Jupyter Lab, run:

jupyter labextension install js
jupyter labextension install text_selector

💫 Quick Start

For a quick start, please see the examples of launching an active learning annotation or benchmarking a novel query strategy / unlabeled pool subsampling strategy for sequence tagging and text classification tasks:

1. Launching Active Learning for Token Classification
2. Launching Active Learning for Text Classification
3. Benchmarking a novel AL query strategy / unlabeled pool subsampling strategy

🔭 Overview

1. Query Strategies

1. ALPS (Citation)
2. BADGE (Citation)
3. BAIT (Citation)
4. BALD (Citation)
5. BatchBALD (Citation)
6. Breaking Ties (BT) (also Maximum Margin) (Citation)
7. Contrastive Active Learning (CAL) (Citation)
8. Cluster Margin (Citation)
9. Coreset (Citation)
10. Expected Gradient Length (EGL) (Citation)
11. Embeddings KM (Citation)
12. Entropy (Citation)
13. Least Confidence (LC) (Citation)
14. Mahalanobis Distance (Citation)
15. Maximum Normalized Log-Probability (MNLP) (Citation)
16. Random (No AL)

3. Unlabeled Pool Subsampling Strategies

1. UPS (Citation)
2. Naïve (Citation)
3. Random

4. Pipelines for postprocessing of annotated data and preparation of acquisition models

- PLASM postprocessing pipeline for annotated data reusability.
- Acquisition model distillation.
- Domain adaptation of acquisition models.

5. GUI Annotator tool in Jupyter IDE

Our framework provides a serverless GUI annotation tool integrated into the Jupyter IDE.

6. Extensible benchmark for query strategies

TODO:

📕 Documentation

Usage

The configs folder contains config files with general settings. The experiments folder contains config files with experimental designs. To run an experiment with a chosen configuration, specify the config file name in the HYDRA_CONFIG_NAME variable and run the train.sh script (see ./examples/al for details). For example, to launch PLASM on AG-News with ELECTRA as a successor model:

cd PATH_TO_THIS_REPO
HYDRA_CONFIG_PATH=../experiments/ag_news HYDRA_EXP_CONFIG_NAME=ag_plasm python active_learning/run_tasks_on_multiple_gpus.py

Config structure explanation

- cuda_devices: list of CUDA devices to use: one experiment on one CUDA device. cuda_devices=[0,1] means using the zero-th and first devices.
- config_name: name of a config from the configs folder with general settings: dataset, experiment setting (e.g. LC/ASM/PLASM), model checkpoints, hyperparameters etc.
- config_path: path to the config with general settings.
- command: .py file to run. For AL experiments, use run_active_learning.py.
- args: arguments to modify from the general config in the current experiment. acquisition_model.name=xlnet-base-cased means that xlnet-base-cased will be used as the acquisition model.
- seeds: random seeds to use. seeds=[4837, 23419] means that two separate experiments with the same settings (except for seed) will be run: one with seed == 4837, one with seed == 23419.

Output Explanation

By default, the results will be present in the folder RUN_DIRECTORY/workdir_run_active_learning/DATE_OF_RUN/${TIME_OF_RUN}_${SEED}_${MODEL_CHECKPOINT}. For instance, when launching from the repository folder: al_nlp_feasible/workdir/run_active_learning/2022-06-11/15-59-31_23419_distilbert_base_uncased_bert_base_uncased.

- When running a classic AL experiment (acquisition and successor models coincide, regardless of using UPS), the file with the model metrics is acquisition_metrics.json.
- When running an acquisition-successor mismatch experiment, the file with the model metrics is successor_metrics.json.
- When running a PLASM experiment, the file with the model metrics is target_tracin_quantile_-1.0_metrics.json (-1.0 stands for the filtering value, meaning an adaptive filtering rate; when using a deterministic filtering rate (e.g. 0.1), the file will be named target_tracin_quantile_0.1_metrics.json). The file with the metrics of the model without filtering is target_metrics.json.

Post-processing

Our framework provides tools for effective post-processing of annotated data for its re-usability and the possibility to build powerful models on it. PLASM, which aims to alleviate the acquisition-successor mismatch problem and allows building a model of an arbitrary type using the labeled data without performance degradation, is implemented in post_processing/pipeline_plasm. It uses the config cls_plasm/ner_plasm (from jupyterlab_demo/configs). A brief explanation of the config structure:

- pseudo-labeling model parameters are contained in the key labeling_model;
- successor model parameters are contained in the key successor_model;
- post-processing options are contained in the key post_processing:
  - label_smoothing: str / float / None, a parameter for label smoothing (LS) of pseudo-labeled instances. Accepts several options:
    - "adaptive": the LS value equals the quality of the labeling model on the validation data.
    - float, 0 < value < 1: absolute value of label smoothing.
    - None (default): no label smoothing is used.
  - labeled_weight: int / float, weight for the labeled-by-human data. 1 < value < +inf.
  - use_subsample_for_pl: int / float / None, the size of the subsample used for pseudo-labeling (a float means taking a share of the unlabeled data). None means that no subsampling is used.
  - uncertainty_threshold: float / None, the value of the threshold for filtering by uncertainty. If None, no filtering by uncertainty is used.
  - filter_by_quantile: bool, only used for classification, ignored if uncertainty_threshold is None. If True, the uncertainty_threshold most uncertain instances are filtered. Otherwise, all instances whose (1 - max_prob) < uncertainty_threshold are filtered.
  - tracin:
    - use: bool, whether to use TracIn for filtering
    - max_num_processes: int, value > 0, maximum number of processes per GPU
    - quantile: str / float (0 < value < 1), share of unlabeled data instances to filter using the TracIn score.
    - num_model_checkpoints: int, value > 0, how many model checkpoints to save and use for TracIn.
    - nu: float / int, value for the TracIn algorithm.

🆕️ New strategies addition

An AL query strategy should be designed as a function that (a sketch follows below):

1. Receives 3 positional arguments and additional strategy kwargs:
   - model of inherited class TransformersBaseWrapper or PytorchEncoderWrapper or FlairModelWrapper: model wrapper;
   - X_pool of class Dataset or TransformersDataset: dataset with the unlabeled instances;
   - n_instances of class int: number of instances to query;
   - kwargs: additional strategy-specific arguments.
2. Outputs 3 objects in the following order:
   - query_idx of class array-like: array with the indices of the queried instances;
   - query of class Dataset or TransformersDataset: dataset with the queried instances;
   - uncertainty_estimates of class np.ndarray: uncertainty estimates of the instances from X_pool. The higher the value, the more uncertain the model is about the instance.

The function with the strategy should be named the same as the file where it is placed (e.g. function def my_strategy inside a file path_to_strategy/my_strategy.py). Use your strategy by setting al.strategy=PATH_TO_FILE_YOUR_STRATEGY in the experiment config. The example is presented in examples/benchmark_custom_strategy.ipynb

🆕️ New pool subsampling strategies addition

The addition of a new pool subsampling query strategy is similar to the addition of an AL query strategy. A subsampling strategy should be designed as a function that:

1. Receives 2 positional arguments and additional subsampling strategy kwargs:
   - uncertainty_estimates of class np.ndarray: uncertainty estimates of the instances in the order they are stored in the unlabeled data;
   - gamma_or_k_confident_to_save of class float or int: either a share / number of instances to save (as in random / naive subsampling) or an internal parameter (as in UPS);
   - kwargs: additional subsampling-strategy-specific arguments.
2. Outputs the indices of the instances to use (sampled indices) of class np.ndarray.

The function with the strategy should be named the same as the file where it is placed (e.g. function def my_subsampling_strategy inside a file path_to_strategy/my_subsampling_strategy.py). Use your subsampling strategy by setting al.sampling_type=PATH_TO_FILE_YOUR_SUBSAMPLING_STRATEGY in the experiment config. The example is presented in examples/benchmark_custom_strategy.ipynb
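Below is a minimal sketch of a custom query strategy following the interface described above. The file/function name my_strategy and the random scoring are purely illustrative, and the .select() call assumes a HuggingFace-style Dataset; a real strategy would derive uncertainty_estimates from the model:

import numpy as np

# path_to_strategy/my_strategy.py -- the function must share the file's name.
def my_strategy(model, X_pool, n_instances, **kwargs):
    # Illustrative scores only; a real strategy would query `model` on X_pool.
    uncertainty_estimates = np.random.rand(len(X_pool))
    # Indices of the n_instances most uncertain examples (higher = more uncertain).
    query_idx = np.argsort(-uncertainty_estimates)[:n_instances]
    # Assumption: the pool dataset supports HuggingFace-style .select().
    query = X_pool.select(query_idx)
    return query_idx, query, uncertainty_estimates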
Datasets

The research has employed 2 Token Classification datasets (CoNLL-2003, OntoNotes-2012) and 2 Text Classification datasets (AG-News, IMDB). If one wants to launch an experiment on a custom dataset, they need to use one of the following ways to add it:

1. Upload it to Hugging Face datasets and set: config.data.path=datasets, config.data.dataset_name=DATASET_NAME, config.data.text_name=COLUMN_WITH_TEXT_OR_TOKENS_NAME, config.data.label_name=COLUMN_WITH_LABELS_OR_NER_TAGS_NAME
2. Upload it to the data/DATASET_NAME folder, create a train.csv / train.json file with the dataset, and set: config.data.path=PATH_TO_THIS_REPO/data, config.data.dataset_name=DATASET_NAME, config.data.text_name=COLUMN_WITH_TEXT_OR_TOKENS_NAME, config.data.label_name=COLUMN_WITH_LABELS_OR_NER_TAGS_NAME
3. (*) Upload train.txt, dev.txt, and test.txt files to data/DATASET_NAME and set the arguments as in the previous point.
4. (**) Upload to data/DATASET_NAME with one folder for each class, where each file in the folder contains a text with the label of the folder. For details, please see the bbc_news dataset in ./data. The arguments must be set as in the previous two points.

(*) - only for Token Classification datasets
(**) - only for Text Classification datasets

Models

The current version of the repository supports all models from HuggingFace Transformers which can be used with the AutoModelForSequenceClassification / AutoModelForTokenClassification classes (for Text / Token classification). For CNN-based / BiLSTM-CRF models, please see the al_cls_cnn.yaml / al_ner_bilstm_crf_flair.yaml configs from the ./configs folder for details.

Testing

By default, the tests will be run on the cuda:0 device if CUDA is available, or on CPU otherwise. If one wants to manually specify the device for running the tests:

- On CPU: CUDA_VISIBLE_DEVICES="" python -m pytest PATH_TO_REPO/tests
- On CUDA: CUDA_VISIBLE_DEVICES="DEVICE_OR_DEVICES_NUMBER" python -m pytest PATH_TO_REPO/tests

We recommend using CPU for the robustness of the results. The tests for CUDA are written under a Tesla V100-SXM3 32GB, CUDA V.10.1.243.

👯 Alternatives

FAMIE, Small-Text, modAL, ALiPy, libact

💬 Citation

📄 License

© 2022 Autonomous Non-Profit Organization "Artificial Intelligence Research Institute" (AIRI). All rights reserved. Licensed under the MIT License.
aclhound
[link_documentation]: https://github.com/job/aclhound/blob/master/DOCUMENTATION.md

ACLHOUND

[![Build Status](https://travis-ci.org/job/aclhound.svg?branch=master)](https://travis-ci.org/job/aclhound) [![Coverage Status](https://coveralls.io/repos/job/aclhound/badge.svg?branch=master)](https://coveralls.io/r/job/aclhound?branch=master)

Summary

ACLHound takes as input policy language following a variant of the [AFPL2] [1] syntax and compiles a representation specific to the specified vendor which can be deployed on firewall devices.

Table of contents

- [Design goals](#design-goals)
- [Supported devices](#supported-devices)
- [Installation notes](#installation-notes)
- [Copyright and license](#copyright-and-license)

Design goals

ACLHound is designed to assist humans in managing hundreds of ACLs across tens of devices. One key focus point is maximum re-usability of ACL components such as groups of hosts, groups of ports, and the policies themselves.

Supported devices

- Cisco ASA
  - No support for ASA 9.1.2 or higher (yet)
- Cisco IOS
  - Will autodetect IPv6 support through `show ipv6 cef`
- Juniper (planned)

Installation notes

Step 1: get the code

    sudo pip install aclhound

Documentation

Documentation can be found [here][link_documentation]. It describes the directory structure, ACLhound language syntax, and examples.

Copyright and license

Copyright 2014, 2015 Job Snijders. Code and documentation released under the BSD 2-Clause license. ACLHound's inception was commissioned by the eBay Classifieds Group.

[1]: http://www.lsi.us.es/~quivir/sergio/DEPEND09.pdf "AFPL2"
[2]: http://jenkins-ci.org/ "Jenkins"
[3]: https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger "Gerrit Trigger"
acli
acli
====

[![Build Status](https://travis-ci.org/jonhadfield/acli.svg?branch=master)](https://travis-ci.org/jonhadfield/acli)

acli is a simple CLI for querying and managing AWS services, written in Python using the [boto3](http://aws.amazon.com/sdk-for-python/ "boto3") SDK and [terminaltables](https://github.com/Robpol86/terminaltables "terminal tables") libraries. Please submit any issues encountered.

Latest changes ([changelog](https://github.com/jonhadfield/acli/blob/master/CHANGELOG.md "Changelog"))
------------

0.1.32 (18th April 2017)
- Provide info on EFS file systems and targets
- Fix issue where authentication issues were hidden

0.1.31 (8th Jan 2017)
- Add IAM user listing and user info
- Fix secgroup list option
- S3 - treat file as binary when getting md5

0.1.30 (18th August 2016)
- Embed Six (remove external dependency)
- Python 3 compatibility for S3

0.1.29 (29th July 2016)
- Move s3 owner information to s3 info
- Python 3 compatibility
- Minor fixes

0.1.28 (25th July 2016)
- Add s3 owner information

0.1.27 (25th July 2016)
- Broken

0.1.26 (24th July 2016)
- Add basic support for EFS
- Minor fixes

0.1.25 (23rd July 2016)
- Correct ec2 instance counts
- Fix python 3 compatibility
- Fix output issues with s3 and ec2
- Upgrade dependencies

0.1.24 (5th April 2016)
- Improve permission checks to prevent false negatives
- Minor fixes

Installation
------------

Simple:

    sudo pip install acli

Latest (from source):

    git clone [email protected]:jonhadfield/acli.git
    sudo python setup.py install

Setup
-----

Using the boto3 library means that credentials will be retrieved from the standard locations: http://boto3.readthedocs.org/en/latest/guide/configuration.html#configuration-files. Alternatively, you can specify them on the command line (see the -h option for details).

Usage
-----

To see available services and commands, run:

    acli -h

Examples
--------

List ec2 instances in the account:

    acli ec2 list

View information on an instance:

    acli ec2 info i-12ab3c45

List contents of an S3 bucket:

    acli s3 list my_bucket

License
-------

MIT
aclib
aclib - The alphacruncher python library

Installation

$ pip install aclib

Usage

The library provides 2 convenience functions for connecting to the remote database with your credentials. It assumes that you have a credential file ~/.odbc.ini in the following format:

[nuvolos]
uid = <username>
pwd = <password>

The library will also look for a special /lifecycle/.dbpath file. When used inside an Alphacruncher application, this file is populated by the platform with the db and schema name of the application, and the library will pick these up automatically.

You can then get the SQLAlchemy connection string, or create an SQLAlchemy engine directly:

>>> from aclib import get_url, get_engine
>>> get_url()
'snowflake://<username>:<password>@alphacruncher.eu-central-1/?warehouse=<username>'
>>> get_url("db_name", "schema_name")
'snowflake://<username>:<password>@alphacruncher.eu-central-1/?warehouse=<username>&database=db_name&schema=schema_name'
>>> eng = get_engine("db_name", "schema_name")

Source: https://github.com/datahub-ac/python-connector
aclib.autowin
Installation

General:

pip install aclib.autowin

Work with dm:

pip install aclib.autowin[dm]

Work with cv:

pip install aclib.autowin[cv]

Full installation:

pip install aclib.autowin[full]

Usage

from aclib.autowin._typing import *
from aclib.autowin import screen, Window, AppWindow
from aclib.autowin.cvwindow import CvWindow, Target  # [cv] required
from aclib.autowin.dmwindow import DmWindow, Target  # [dm] required

# full installation required:
# if all requirements are ready, you can get the whole API from this module;
# the class 'Window' in this module inherits from 'CvWindow' & 'DmWindow'.
from aclib.autowin.windows import screen, Window, AppWindow, Target
aclib.builtins
Installation

pip install aclib.builtins

Usage

# from aclib import builtins
# import aclib.builtins
from aclib.builtins import Str, Bin, Oct, Hex, BaseNumber
from aclib.builtins import decorator, SELF
aclib.cv
Installation

pip install aclib.cv

Usage

from aclib.cv import Image, Dotset, DotsetLib, FontLib
from aclib.cv import Target, TargetList

im = Image.fromfile('filepath')
im.show()
aclib.dm
Installation

Please work with 32-bit Python.

pip install aclib.dm

Usage

from aclib.dm import DM

DM.regDM()
dm = DM()
print(dm.Ver())
aclib.emails
Installation

pip install aclib.emails

Usage

from aclib.emails import Recver, Sender, Server, User, Mail
aclib.images
No description available on PyPI.
aclib.inputs
No description available on PyPI.
aclib.pip
Description

Features

- acpip displays packages faster than pip.
- acpip provides short commands, making everything easy.
- acpip analyzes dependency relationships recursively when uninstalling, then reminds you whether the package being uninstalled is required by other packages or not, and also lists the dependencies which are not required by other packages (except pip and setuptools); you can use option -d to uninstall them together.
- Although those dependencies are not required by other packages, your project may be using some of them independently, so you can select which dependencies not to uninstall before the uninstallation starts; if you select one, its dependencies will not be uninstalled either.

About

This is a new project. I do not yet know the full specification of Python distributions, so pip is sometimes used to parse information for now; this may be removed in future versions.

Installation

pip install aclib.pip

Usage in command line

usage

python -m aclib.pip <command> [options]
acpip <command> [options]

commands

# list installed packages.
acpip li
acpip ls
acpip list

# show information about all installed packages.
acpip show

# show information about given packages.
acpip show pkg1 pkg2 ... pkgn

# uninstall packages.
# use option -y to uninstall without confirmation.
# use option -d to uninstall dependencies together.
acpip uni [-y/-d] pkg1 pkg2 ... pkgn
acpip uninstall [-y/-d] pkg1 pkg2 ... pkgn

Usage in python code

In addition, this module also provides interfaces to access information about installed packages.

# import aclib.pip
from aclib.pip import Distribution, SitePackages
aclib.pyi
Installation

pip install aclib.pyi

Usage

# from aclib.pyi import compile, pyipack
from aclib import pyi

compileddir = pyi.compile(...)
if compileddir:
    pyi.pyipack(...)
aclib.threads
Installation

pip install aclib.threads

Usage

from aclib.threads import Thread
aclib.web
Installation

pip install aclib.web

Usage

from aclib import web

web.get(...)
web.getwebtime(...)
web.GiteePages(...).update()
aclib.winlib
Installation

pip install aclib.winlib

Usage

from aclib.winlib import winapi, wincon, wintype  # modules
aclib.wmi
Installation

pip install aclib.wmi

Usage

from aclib.wmi import WMI

wmi = WMI()
aclick
aclick

aclick is a python library extending click with support for typing. It uses function signatures to automatically register options to parsers. Please refer to the documentation.

The following features are currently supported:

- Positional-only parameters are added as click Arguments; other parameters become click Options.
- The docstring is automatically parsed and used to generate command and parameter descriptions.
- Arguments with int, float, str, bool values, both with and without default values.
- Complex structures of classes and dataclasses that are automatically inlined as a single string, e.g., class1("arg1", arg2=class2()).
- Complex structures of classes and dataclasses that are expanded as individual options with the hierarchical=True option enabled.
- Type Union of complex classes, both inlined and hierarchical.
- Type Optional of inlined complex classes.
- Type Literal of strings.
- Lists and tuples of both the primitive and inlined complex types.
- Parameters can be renamed.
- Parameter values can be loaded from a JSON, YAML, or other file.
- Configuration can be loaded using the gin-config package.

For other features please refer to the documentation.

Installation

Install the library from pip:

$ pip install aclick

Getting started

Import aclick instead of click:

# python main.py test --arg2 4
import aclick

@aclick.command()
def example(arg1: str, /, arg2: int = 5):
    pass

example()

When using click.groups:

# python main.py example test --arg2 5
import aclick

@aclick.group()
def main():
    pass

@main.command('example')
def example(arg1: str, /, arg2: int = 5):
    pass

main()

For further details please look at the documentation.

License

MIT
aclient
aclient

Installation

Install with pip or another PyPI package tool:

pip install aclient

Sending asynchronous requests with aclient

You can try:

import re
from aclient import *

aclient = AsyncClient()

# Custom parse function. Note: the function must be async.
async def parse(response, **kwargs):
    text = await response.text()
    # Test: extract the <title> text ("百度一下")
    pattern = re.compile(r"<title>(.*?)</title>")
    title = pattern.findall(text)[0]
    return title

# Request URL; you can send a large number of URLs.
url = "https://www.baidu.com"

# urls as a list
urls = [url for _ in range(2)]
result = aclient.get(urls, custom_parse=parse)
# print the item data
print(result)
# result = {'0': '百度一下', '1': '百度一下'}

# urls as a dict
urls = {f"第{i}个": {"url": url, "timeout": 5} for i in range(2)}
result = aclient.get(urls, custom_parse=parse)
# print the item data
print(result)
# result = {'第0个': '百度一下', '第1个': '百度一下'}
aclients
aclients

An extension based on Sanic: asynchronous CRUD operations for various databases and asynchronous CRUD-style request operations. This repo is a basic wrapper library meant to be used directly in individual business systems.

Installing aclients

pip install aclients

Usage

To be added later; no time at the moment.
acl-iitbbs
No description available on PyPI.
aclimatise
For the full documentation, refer to the Github Pages Website.

aCLImatise is a Python library and command-line utility for parsing the help output of a command-line tool and then outputting a description of the tool in a more structured format, for example a Common Workflow Language tool definition. Currently aCLImatise supports both CWL and WDL outputs, but other formats will be considered in the future, especially pull requests to support them.

Please also refer to The aCLImatise Base Camp, which is a database of pre-computed tool definitions generated by the aCLImatise parser. Most bioinformatics tools have a tool definition already generated in the Base Camp, so you may not need to run aCLImatise directly.

aCLImatise is now published in the journal Bioinformatics. You can read the application note here: https://doi.org/10.1093/bioinformatics/btaa1033. To cite aCLImatise, please use the citation generator provided by the journal.

Example

Let's say you want to create a CWL workflow containing the common Unix wc (word count) utility. Running wc --help returns:

Usage: wc [OPTION]... [FILE]...
  or:  wc [OPTION]... --files0-from=F
Print newline, word, and byte counts for each FILE, and a total line if
more than one FILE is specified. A word is a non-zero-length sequence of
characters delimited by white space.

With no FILE, or when FILE is -, read standard input.

The options below may be used to select which counts are printed, always in
the following order: newline, word, character, byte, maximum line length.
  -c, --bytes            print the byte counts
  -m, --chars            print the character counts
  -l, --lines            print the newline counts
      --files0-from=F    read input from the files specified by
                           NUL-terminated names in file F;
                           If F is - then read names from standard input
  -L, --max-line-length  print the maximum display width
  -w, --words            print the word counts
      --help     display this help and exit
      --version  output version information and exit

GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Full documentation at: <http://www.gnu.org/software/coreutils/wc>
or available locally via: info '(coreutils) wc invocation'

If you run aclimatise explore wc, which means "parse the wc command and all subcommands", you'll end up with the following files in your current directory:

- wc.cwl
- wc.wdl
- wc.yml

These are representations of the command wc in 3 different formats. If you look at wc.wdl, you'll see that it contains a WDL-compatible tool definition for wc:

version 1.0

task Wc {
  input {
    Boolean bytes
    Boolean chars
    Boolean lines
    String files__from
    Boolean max_line_length
    Boolean words
  }
  command <<<
    wc \
      ~{true="--bytes" false="" bytes} \
      ~{true="--chars" false="" chars} \
      ~{true="--lines" false="" lines} \
      ~{if defined(files__from) then ("--files0-from " + '"' + files__from + '"') else ""} \
      ~{true="--max-line-length" false="" max_line_length} \
      ~{true="--words" false="" words}
  >>>
}
acl-log
No description available on PyPI.
acl-mngt
No description available on PyPI.
acloud-client
No description available on PyPI.
aclpubcheck
ACL pubcheck

ACL pubcheck is a Python tool that automatically detects font errors, author formatting errors, margin violations, outdated citations, as well as many other common formatting errors in papers that are using the LaTeX sty file associated with ACL venues. The script can be used to check your papers before you submit to a conference. (We highly recommend running ACL pubcheck on your papers pre-submission: a well formatted paper helps keep the reviewers focused on the scientific content.) However, its main purpose is to ensure your accepted paper is properly formatted, i.e., that it follows the venue's style guidelines. The script is used by the publication chairs at most ACL events to check for formatting issues. Indeed, running this script yourself and fixing errors before uploading the camera-ready version of your paper will often save you a personalized email from the publication chairs.

You can install the package by cloning the repo:

git clone [email protected]:acl-org/aclpubcheck.git
or
git clone https://github.com/acl-org/aclpubcheck.git
cd aclpubcheck
pip install -e .

You can run the script on a paper as follows:

python3 aclpubcheck/formatchecker.py --paper_type PAPER_TYPE PAPER_NAME.pdf

where PAPER_TYPE is taken from the set {long, short, other}. You should choose either long or short depending on the type of paper you have accepted.

If you find that ACL pubcheck gives you a margin error due to a figure that runs into the margin, you can often fix the problem by applying the adjustbox package. Additionally, if the margin error is caused by an equation, then it may help to break the equation over two lines.

Note: Additional info can be found in the PDF document aclpubcheck_additional_info.pdf included in this package.

Online Version: If you are having trouble with installing and using the Python toolkit directly, you can use a Colab version online: https://colab.research.google.com/drive/1Sq6ilmrFUQpUFMkV71U8-Wf0madW-Uer?usp=sharing.

Updating the names in citations

Description

Our toolkit now automatically checks your citations and will leave a warning if you have used incorrect names or author lists. Please have a look here on why it is important to use updated citations. A demo version of PDF name checking is available here.

How it's done

The bibliography from your PDF file is extracted using the Scholarcy API. Each bib entry in this bib file is updated by pulling information from the ACL Anthology, DBLP and arXiv, using fuzzy matching of the titles. After updating the bibs, the author names are compared and mismatches in author names are flagged as warnings.

Functionality

The functions are present in aclpubcheck/name_check.py. The class PDFNameCheck is used in formatchecker.py.

Caveats

Some of the warnings generated for citations may be spurious and inaccurate, due to parsing and indexing errors. We encourage you to double check the citations and update them depending on the latest source. If you believe that your citation is updated and correct, then please ignore those warnings. You can fix your bib files using a toolkit like rebiber.

Screenshots

This is how the warnings appear for the outdated names. You would be directed to a URL where you can correct the citations. We are not showing the name changes as it might out the deadnames in the warnings.

Development

- Install conda/miniconda/mamba
- Install Python Poetry
- Run conda create -n acl python=3.9
- Run poetry install

Publishing

Run poetry build then poetry publish.

Credits

The original version of ACL pubcheck was written by Yichao Zhou, Iz Beltagy, Steven Bethard, Ryan Cotterell and Tanmoy Chakraborty in their role as publication chairs of NAACL 2021. The tool was improved by Ryan Cotterell and Danilo Croce in their role as publication chairs of ACL 2022 and NAACL 2022. Pranav A added the name checking functions to this toolkit.
aclpwn
No description available on PyPI.
acls
@cls - Class Made Aware to Decorator

The purpose of this library is to provide the possibility to create decorators, especially in super classes, with the current class cls as an argument. Everything is centered around our new cls module, keyword, magic, or whatever you think it would be.

Getting Started

Installation

Install by pip from PyPI:

pip install acls

Example

A typical snippet looks like the following (see also the usage sketch at the end of this description):

from functools import wraps

import cls


class Base(metaclass=cls.ClsMeta):
    @cls
    def decor(cls, decor_arg):
        def wrap(func):
            @wraps(func)
            def wrapper(self):
                # do something with `func`
                retval = func(self)
                # do something with `retval`
                return retval
            return wrapper
        return wrap


class Extended(Base):
    @cls.decor('some arg')
    def func(self):
        # do something
        pass

The magic is that you can use @cls.decor in Extended, which inherits from Base. What is more, within the decorator Base.decor, the argument cls will be assigned the correct current class. In this example, it would simply be a reference to Extended. This would be helpful if you want to make use of some class property here in the decorator.

Magic

Well, there is no magic. I created a delegator in the class namespace to make it possible for both classes to use @cls. So, it is not the module cls as we imported at the top. I use this to make it look more consistent, and to fool some interpreters like pylint. No offense, just want to make them less noisy.

Limitations

Unfortunately, this work is based on customizing python class creation. I have to make use of __prepare__, which was introduced only in python 3. That means there is no known possible backward compatibility with python 2 now. The code is tested against python 3.5+.

Please let me know if you come up with an idea how to manipulate class creation in python 2.

There are a couple of known issues, which I am working on. Contributions are welcome.

Known issues:

relying on length of arguments and callable() to support optional arguments in decorator
not compatible with @classmethod, or many other decorators
make pylint really noisy
no documents :see_no_evil:!

License

MIT License, Copyright (c) 2019 guoquan
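Usage sketch: a minimal check, assuming the Base and Extended classes defined in the example above. Calling the decorated method goes through the wrapper created by Base.decor, where cls was bound to the defining class:

ext = Extended()
ext.func()  # runs `func` wrapped by `Base.decor('some arg')`; inside `decor`, cls is Extended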
acl-search
acl-search is a Python utility to search through an ACL file to find intersecting destination IPs and return the full term.
acl-stats
[![Build Status](https://travis-ci.org/DiogoAndre/acl_stats.svg?branch=master)](https://travis-ci.org/DiogoAndre/acl_stats) [![published](https://static.production.devnetcloud.com/codeexchange/assets/images/devnet-published.svg)](https://developer.cisco.com/codeexchange/github/repo/DiogoAndre/acl_stats)

# ACL Stats

**Quickly gather access-list stats from Cisco ASA Firewalls**

ACL Stats is a tool to help extract basic info from access-lists on a Cisco ASA Firewall, and output the information in a structured way.

![Sample script run with output in csv](sample_csv.png)

The script currently collects the number of hits for each ACL entry and the date of the last hit.

## Usage

First, install it via pip:

    pip install acl_stats

Use it as a command line tool. You can get contextual help in the cli:

    acl_stats --help

There are two main modes to pass the access-lists to the script.

### Directly from a Device

The script can connect to a device in the network and gather the information needed. A connection is made via HTTPS (the same transport as ASDM), hence the firewall will need to have that method enabled.

Run ``acl_stats device --help`` to get the contextual help listing all the available parameters:

```shell
$ acl_stats device --help
ACL Stats
Usage: acl_stats device [OPTIONS]

  Connect to a device to fetch ACLs

Options:
  --hostname TEXT  Hostname or IP of device to connect  [required]
  --port INTEGER   port to use when connection to a device  [required]
  --username TEXT  username to use when connection to a device  [required]
  --password TEXT  password to use when connection to a device  [required]
  --acl-name TEXT  Name of target ACL  [required]
  --output TEXT    Choose an output format: json, csv. Defaults to csv
  --write-to TEXT  Write the output to a file
  --help           Show this message and exit.
```

Here's an example:

```shell
acl_stats device --hostname 192.168.218.72 --port 443 --username cisco --password cisco --acl-name inside_in --output json
```

If you omit the ``--write-to`` parameter, the output will be sent to ``stdout`` (the terminal, usually).

## From static files

You can also run the script 'off-line', using previously extracted command outputs. The script processes one access-list at a time for now. Here are the two commands whose output you need to save in **separate** files:

    show access-list _name_of_acl_
    show access-list _name_of_acl_ brief

Run ``acl_stats static --help`` to get the contextual help listing all the available parameters:

```shell
$ acl_stats static --help
ACL Stats
Usage: acl_stats static [OPTIONS]

  Use static files instead of connection to a device

Options:
  --acl-file TEXT   File containing the output of the show access-list _name_
                    command  [required]
  --acl-brief TEXT  File containing the output of the show access-list _name_
                    brief command  [required]
  --output TEXT     Choose an output format: json, csv. Defaults to csv
  --write-to TEXT   Write the output to a file
  --help            Show this message and exit.
```

Here's an example:

```shell
acl_stats static --acl-file acl.log --acl-brief acl_brief.log --output json --write-to acl-inside_in.csv
```

## Example Outputs

JSON Output:

```json
$ acl_stats device --hostname 192.168.218.72 --port 443 --username cisco --password cisco --acl-name inside_in --output json
ACL Stats
Using Device 192.168.218.72
Contacting Device
Fetching ACL
Fetching ACL Brief
Processing
Done!
[
  {
    "entry_id": "20d85be5",
    "grouped_id": "00000000",
    "line": "1",
    "hitcount": 3,
    "last_hit_date": "2018-10-11 09:50:52",
    "timestamp": 1539262252,
    "entry": "access-list inside_in line 1 extended permit icmp any host 10.0.0.10 (hitcnt=3) 0x20d85be5"
  },
  {
    "entry_id": "bde0d47c",
    "grouped_id": "-",
    "line": "2",
    "hitcount": 0,
    "last_hit_date": "0",
    "timestamp": 0,
    "entry": "access-list inside_in line 2 extended permit tcp any host 10.0.0.10 eq www (hitcnt=0) 0xbde0d47c"
  },
  {
    "entry_id": "20414f5d",
    "grouped_id": "-",
    "line": "3",
    "hitcount": 0,
    "last_hit_date": "0",
    "timestamp": 0,
    "entry": "access-list inside_in line 3 extended deny tcp any host 10.0.0.10 eq gopher inactive (hitcnt=0) (inactive) 0x20414f5d"
  },
  {
    "entry_id": "49ae2fb8",
    "grouped_id": "-",
    "line": "4",
    "hitcount": 0,
    "last_hit_date": "0",
    "timestamp": 0,
    "entry": "access-list inside_in line 4 extended deny tcp any host 10.0.0.10 eq telnet (hitcnt=0) 0x49ae2fb8"
  }
]
Lines processed (acls + brief): 5
Total execution time: 0.09016704559326172s.
```

CSV output:

```csv
$ acl_stats device --hostname 192.168.218.72 --port 443 --username cisco --password cisco --acl-name inside_in --output csv
ACL Stats
Using Device 192.168.218.72
Contacting Device
Fetching ACL
Fetching ACL Brief
Processing
Done!
entry_id,grouped_id,line,hitcount,last_hit_date,timestamp,entry
20d85be5,00000000,1,3,2018-10-11 09:50:52,1539262252,access-list inside_in line 1 extended permit icmp any host 10.0.0.10 (hitcnt=3) 0x20d85be5
bde0d47c,-,2,0,0,0,access-list inside_in line 2 extended permit tcp any host 10.0.0.10 eq www (hitcnt=0) 0xbde0d47c
20414f5d,-,3,0,0,0,access-list inside_in line 3 extended deny tcp any host 10.0.0.10 eq gopher inactive (hitcnt=0) (inactive) 0x20414f5d
49ae2fb8,-,4,0,0,0,access-list inside_in line 4 extended deny tcp any host 10.0.0.10 eq telnet (hitcnt=0) 0x49ae2fb8
Lines processed (acls + brief): 5
Total execution time: 0.08188796043395996s.
```

History
=======

0.1.0 (2018-10-11)
------------------

* First release on PyPI.
aclsum
ACLSum: A New Dataset for Aspect-based Summarization of Scientific Publications

This repository contains data for our paper "ACLSum: A New Dataset for Aspect-based Summarization of Scientific Publications" and a small utility class to work with it.

HuggingFace datasets

You can also use Hugging Face datasets to load ACLSum (dataset link). This would be convenient if you want to train transformer models using our dataset. Just do:

from datasets import load_dataset

dataset = load_dataset("sobamchan/aclsum")

Our utility class

If you want to see what's in our data more carefully, the following example of how to use our utility class may be helpful.

You can install the library with the dataset via pip; just run:

pip install aclsum

then you can load the dataset from your python code as:

from aclsum import ACLSum

# Load per split ("train", "val", "test")
train = ACLSum("train")

# One data sample (= paper)
document = train[0]

# Three summaries, one per aspect (dict[aspect, summary])
document.summaries

# Get all the sentences from the paper (we only work with abstract, introduction, and conclusion sections) (list[str])
document.get_all_sentences()

# You can specify sections to extract sentences from
document.get_all_sentences(["abstract", "conclusion"])

# Get highlight labels (list[0 or 1])
document.get_all_highlights()

# Get highlighted sentences (list[str])
document.get_all_highlighted_sentences()
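For example, a minimal sketch of collecting sentence/label pairs for an extractive model with the utility class above. It assumes the split object can be iterated like a sequence and that get_all_sentences() and get_all_highlights() return aligned lists, which the accessors above suggest but do not guarantee:

from aclsum import ACLSum

train = ACLSum("train")
pairs = []
for document in train:  # assumption: splits are iterable; otherwise index with train[i]
    sentences = document.get_all_sentences()
    labels = document.get_all_highlights()
    pairs.extend(zip(sentences, labels))  # (sentence, 0-or-1 highlight label)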
acltldr
ACLTLDR

This is a python-based CLI tool used to generate summaries for ACL conference papers on this page.

Installation

pip install acltldr

Usage

An example command to generate summaries for the proceedings of EACL 2021. The following command will generate a jsonl file with all the data, and a markdown file for the post.

acltldr https://raw.githubusercontent.com/acl-org/acl-anthology/master/data/xml/2021.eacl.xml \
  ./ \
  --prefix "2021.eacl" \
  --use-gpu
acltoolkit-ad
README.md
aclust
Aclust
======

Streaming agglomerative clustering with custom distance and correlation.

*Agglomerative clustering* is a very simple algorithm. The function `aclust` provided here is an attempt at a simple implementation of a modified version that allows a stream of input so that data is not required to be read into memory all at once. Most clustering algorithms operate on a matrix of correlations, which may not be feasible with high-dimensional data.

`aclust` **defers** some complexity to the caller by relying on a stream of objects that support an interface (I know, I know) of:

    obj.distance(other) -> numeric
    obj.is_correlated(other) -> bool

While this does add some infrastructure, we can imagine a class with position and values attributes, where the former is an integer and the latter is a list of numeric values. Then, those methods would be implemented as (a fuller runnable sketch appears at the end of this description):

    def distance(self, other):
        return self.position - other.position

    def is_correlated(self, other):
        return np.corrcoef(self.values, other.values)[0, 1] > 0.5

This allows the `aclust` function to be used on **any** kind of data. We can imagine that distance might return the Levenshtein distance between 2 strings while is_correlated might indicate their presence in the same sentence or in sentences with the same sentiment.

Since the input can be, and the output is, streamed, it is assumed that the objs are in sorted order. This is important for things like genomic data, but may be less so in text, where the max_skip parameter can be set to a large value to determine how much data is kept in memory.

See the function docstring for examples and options. The function signature is:

    aclust(object_stream, max_dist, max_skip=1, linkage='single', multi_member=False)

It yields clusters (lists) of objects from the input object stream.

`multi_member` allows a feature to be a member of multiple clusters as long as it meets the distance and correlation constraints. The default is to only allow a feature to be added to the *nearest* cluster with which it is correlated.

Uses
====

+ Clustering methylation data which we know to be locally correlated. We can use this to reduce the number of tests (of association) from 1 test per CpG, to 1 test per correlated unit.

See: https://github.com/brentp/aclust/blob/master/examples/methylation-clustering-asthma.py for a full example.

```
chrom  start   end     n_probes  probes  asthma.pvalue  asthma.tstat  asthma.coef
chr1   566570  567501  8  chr1:566570,chr1:566731,chr1:567113,chr1:567206,chr1:567312,chr1:567348,chr1:567358,chr1:567501  0.4566  -0.74  -0.06
chr1   713985  714021  3  chr1:713985,chr1:714012,chr1:714021  0.1185  -1.56  -0.13
chr1   845810  846195  3  chr1:845810,chr1:846155,chr1:846195  0.5913  0.54   0.04
chr1   848379  848440  3  chr1:848379,chr1:848409,chr1:848440  0.3399  -0.95  -0.06
chr1   854766  855046  7  chr1:854766,chr1:854824,chr1:854838,chr1:854918,chr1:854951,chr1:854966,chr1:855046  0.7482  -0.32  -0.02
chr1   870791  871546  8  chr1:870791,chr1:870810,chr1:870958,chr1:871033,chr1:871057,chr1:871308,chr1:871441,chr1:871546  0.2198  -1.23  -0.11
chr1   892857  892948  3  chr1:892857,chr1:892914,chr1:892948  0.2502  -1.15  -0.05
chr1   901062  901799  5  chr1:901062,chr1:901449,chr1:901685,chr1:901725,chr1:901799  0.6004  0.52   0.04
chr1   946875  947091  4  chr1:946875,chr1:947003,chr1:947018,chr1:947091  0.9949  0.01   0.00
```

So we can filter on the asthma.pvalue to find regions associated with asthma.

INSTALL
=======

`aclust` is available on pypi, as such it can be installed with:

    pip install aclust

Acknowledgments
===============

The idea of this is taken from this paper:

    Sofer, T., Schifano, E. D., Hoppin, J. A., Hou, L., & Baccarelli, A. A. (2013). A-clustering: A Novel Method for the Detection of Co-regulated Methylation Regions, and Regions Associated with Exposure. Bioinformatics, btt498.

The example uses a pull-request implementing GEE for python's statsmodels: https://github.com/statsmodels/statsmodels/pull/928
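Here is the rough, self-contained sketch of driving `aclust` promised above. The Feature class is hypothetical and the top-level import is an assumption; the call matches the signature documented earlier:

    import numpy as np
    from aclust import aclust  # assumption: aclust is importable at the top level

    class Feature(object):
        def __init__(self, position, values):
            self.position, self.values = position, values

        def distance(self, other):
            return self.position - other.position

        def is_correlated(self, other):
            return np.corrcoef(self.values, other.values)[0, 1] > 0.5

    # a position-sorted stream; values share a common signal so nearby features correlate
    rng = np.random.RandomState(0)
    signal = rng.rand(20)
    stream = (Feature(i * 10, signal + 0.1 * rng.rand(20)) for i in range(50))

    for cluster in aclust(stream, max_dist=40, max_skip=1):
        print(len(cluster), [f.position for f in cluster])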
aclustermap
The aclustermap package takes a YAMLized pandas DataFrame as input, with optional formatting keywords, and outputs a Seaborn clustermap .png image:

cat example.yaml | aclustermap > example.png

If YAMLizing a pandas DataFrame is a new concept, here is a quick tutorial. Let's create a Python module generate_example.py:

import pandas, yaml

# Generate random data with make_blobs to yield interesting clustering behavior
from sklearn.datasets import make_blobs
example = pandas.DataFrame(make_blobs(n_samples=36, centers=3, n_features=10)[0])

# Convert the DataFrame to a dict, then YAMLize the dict
df_dict = example.to_dict()
yml = yaml.dump(df_dict)

# Print to stdout
print(yml)

Here are commands that run the previous code to generate this dataframe, save it to a file, and then output it as a clustermap to example.png:

python generate_example.py > example.yaml
cat example.yaml | aclustermap > example.png

Saving the dataframe to a file is not necessary, as it can be piped directly to aclustermap:

python generate_example.py | aclustermap > example.png

aclustermap uses the simplest possible way to tweak the visualization, which is to pass a second YAML dict containing keyword arguments for seaborn.clustermap (see also seaborn.set_context, pyplot.setp with the Artist being the clustermap's xticklabels or yticklabels, and pyplot.rcParams).

The formatting must be enclosed in a dict labeled according to the function it applies to. Here is the default formatting, which will be used if no formatting is specified. Otherwise, simply cat a YAMLized nested dictionary with a structure similar to the following.

format:
  seaborn:
    clustermap:
      cmap: CET_CBL1
      annot: true
    set_context:
      context: notebook
  pyplot:
    setp:
      xticklabels:
        rotation: 0
      yticklabels:
        rotation: 0
    rcParams:
      font.size: 12

A convenient way to specify minor formatting tweaks directly at the command line is with a HereDoc:

(python generate_example.py; cat <<EOF) | python -m aclustermap > example.png
format:
  seaborn:
    clustermap:
      annot: false
EOF
acmagent
ACMagent - automates ACM certificates

ACMagent provides functionality to request and confirm ACM certificates using the CLI interface.

Installation

$ pip install acmagent

Configuration

In order to approve ACM certificates, create and configure the acmagent IMAP credentials file. By default acmagent loads the .acmagent credentials file from the user's home folder, for example: /home/john.doe/.acmagent. However, you have an option to specify a custom path to the credentials file.

# /home/john.doe/.acmagent
username: [email protected]
server: imap.example.com
password: mysecretpassword

Usage

Issuing ACM certificates

The simplest option to request an ACM certificate is to specify the --domain-name and/or --validation-domain parameters.

$ acmagent request-certificate --domain-name *.dev.example.com
12345678-1234-1234-1234-123456789012

$ acmagent request-certificate --domain-name *.dev.example.com --validation-domain example.com
12345678-1234-1234-1234-123456789012

Optionally, if you need to generate a certificate for multiple domain names you can provide the --alternative-names parameter to specify space separated alternative domain names.

$ acmagent request-certificate --domain-name dev.example.com --validation-domain example.com --alternative-names www.dev.example.com ftp.dev.example.com
12345678-1234-1234-1234-123456789012

ACMagent offers an option to provide a JSON input file instead of typing parameters at the command line, via the --cli-input-json parameter.

Generate CLI skeleton output:

$ acmagent request-certificate --generate-cli-skeleton &> certificate.json
$ cat certificate.json
{
    "DomainName": "",
    "SubjectAlternativeNames": [],
    "ValidationDomain": ""
}

Modify the generated skeleton file using your preferred method (a filled-in example appears at the end of this description).

Using the --cli-input-json parameter, specify the path of the certificate.json file:

$ acmagent request-certificate --cli-input-json file:./certificate.json

Output

The request-certificate command outputs the ACM certificate id; it's the last part of the ARN arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012. You will need that id for the certificate approval process.

Approving ACM certificates

Before approving an ACM issued certificate, please ensure that the credentials file has been set up. For gmail and yahoo enable access for 'less secure apps' (https://support.google.com/accounts/answer/6010255?hl=en-GB&authuser=1).

confirm-certificate

$ acmagent confirm-certificate --help
usage: acmagent confirm-certificate [-h] --certificate-id CERTIFICATE_ID
                                    [--wait WAIT] [--attempts ATTEMPTS]
                                    [--debug] [--credentials CREDENTIALS]

optional arguments:
  -h, --help            show this help message and exit
  --certificate-id CERTIFICATE_ID
                        Certificate id
  --wait WAIT           Timeout in seconds between querying IMAP server
  --attempts ATTEMPTS   Number of attempts to query IMAP server
  --debug               (boolean) Send logging to standard output
  --credentials CREDENTIALS
                        Explicitly provide IMAP credentials file

Examples

Confirming a certificate using the default settings:

$ acmagent confirm-certificate --certificate-id 12345678-1234-1234-1234-123456789012

However, for most scenarios the recommended approach is to specify custom values for the --wait and --attempts parameters, tailored to your IMAP server.

$ acmagent confirm-certificate --wait 10 --attempts 6 --certificate-id 12345678-1234-1234-1234-123456789012

In situations when you can't use the default IMAP credentials file, provide the --credentials parameter:

$ acmagent confirm-certificate --certificate-id 12345678-1234-1234-1234-123456789012 --credentials file:///var/lib/jenkins/.acmagent
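For instance, the skeleton might be filled in like this before passing it to --cli-input-json (the domain values are illustrative only, reusing the examples above):

{
    "DomainName": "dev.example.com",
    "SubjectAlternativeNames": ["www.dev.example.com", "ftp.dev.example.com"],
    "ValidationDomain": "example.com"
}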
acmation
No description available on PyPI.
acm-auto-validate
ACM Auto-Validate Construct

Overview

The ACM Auto-Validate Construct is designed to automate the validation of AWS Certificate Manager (ACM) certificates using DNS validation, especially useful in continuous deployment pipelines. It handles the complexity of validating certificates in a hosted zone that resides in a different AWS account. This automation ensures that certificates requested during infrastructure deployment are validated promptly, allowing CloudFormation templates to proceed without waiting for manual intervention.

Use Case

This construct was initially created to support CDK-based deployment pipelines where certificates requested remained pending validation. By automating this step, it significantly reduces the deployment time and manual overhead, especially in cross-account DNS validation scenarios.

Features

Automated DNS Validation: Automates the creation and deletion of DNS records for ACM certificate validation.
Cross-Account Support: Capable of handling DNS records in a hosted zone that is in a different AWS account.
Event-Driven: Utilizes AWS Lambda and Amazon EventBridge to respond to certificate request events.
SSM Parameter Tracking: Tracks the certificates processed using AWS Systems Manager Parameter Store. Note that SSM parameters are created in the us-east-1 region, but this construct can be deployed to any region that supports ACM, Lambda, and EventBridge.

Prerequisites

Two AWS accounts: a source account where the ACM certificates are requested and a zone account where the hosted zone resides.
IAM permissions to create necessary resources in both accounts.
AWS CDK v2 installed.

Installation

To use this construct, install it from npm:

npm install acm-auto-validate

Python installation:

pip install acm-auto-validate

Usage

Here's an example of how to use these constructs in your CDK application:

TypeScript

import { ACMValidationConstruct, DnsValidationRoleConstruct } from 'acm-auto-validate';
import { App, Stack } from 'aws-cdk-lib';

const app = new App();

// Create a stack in the account where certificates will be requested
const sourceStack = new Stack(app, 'SourceStack', {
  env: { account: '111111111111' }  // source account ID
});

// Deploy the ACM validation construct in the source account stack
new ACMValidationConstruct(sourceStack, 'ACMValidationConstruct', {
  rolePrefix: 'prod',  // must match prefix used in DnsValidationRoleConstruct
  zoneAccountId: '222222222222',
  zoneName: 'example.com',
});

// Create a stack in the account where the zone is hosted
const zoneStack = new Stack(app, 'ZoneStack', {
  env: { account: '222222222222' }  // zone account ID
});

// Deploy the DNS validation role construct in the zone account stack
new DnsValidationRoleConstruct(zoneStack, 'DnsValidationRoleConstruct', {
  rolePrefix: 'prod',  // must match prefix used in ACMValidationConstruct
  sourceAcctId: '111111111111',
  zoneAcctId: '222222222222',
});

Python

from aws_cdk import App, Stack
from acm_auto_validate import (
    ACMValidationConstruct,
    DnsValidationRoleConstruct
)

app = App()

# Create a stack in the account where certificates will be requested
source_stack = Stack(app, 'SourceStack', env={'account': '111111111111'})

# Deploy the ACM validation construct in the source account stack
ACMValidationConstruct(
    source_stack,
    'ACMValidationConstruct',
    role_prefix='prod',  # must match prefix used in DnsValidationRoleConstruct
    zone_account_id='222222222222',
    zone_name='example.com'
)

# Create a stack in the account where the zone is hosted
zone_stack = Stack(app, 'ZoneStack', env={'account': '222222222222'})

# Deploy the DNS validation role construct in the zone account stack
DnsValidationRoleConstruct(
    zone_stack,
    'DnsValidationRoleConstruct',
    role_prefix='prod',  # must match prefix used in ACMValidationConstruct
    source_acct_id='111111111111',
    zone_acct_id='222222222222'
)

app.synth()

Configuration

ACMValidationConstruct: Deploys the Lambda function (written in Python) and EventBridge rule in the source account. Requires a rolePrefix (such as 'dev' or 'prod'; this is used for naming resources), the zone account ID and the zone name.

DnsValidationRoleConstruct: Deploys the IAM role in the zone account, which the Lambda function assumes. Requires a rolePrefix (such as 'dev' or 'prod'; this is used for naming resources), the source account ID, and the zone account ID.

Contributing

Contributions to this project are welcome. Please follow the standard procedures for submitting issues or pull requests.

License

This project is distributed under the Apache License 2.0.
acmax24
acmax24 is a python library for managing the AVPro Edge AC-MAX-24 Audio Matrix. This is a 24 input, 24 output audio matrix device, designed for whole home audio.

This library was created to enable the hass-acmax24 Home Assistant integration, which makes the AC-MAX-24 Audio Matrix appear as a set of MediaPlayer entities in Home Assistant.
acmclient
No description available on PyPI.
acmd
No description available on PyPI.
acm-distributions
No description available on PyPI.
acm-dl-searcher
A simple command line tool to collect the entries of a particular venue on the ACM DL and run searches on them.

Install

pip install acm-dl-searcher

Usage

To get the entries of a particular venue, for example the CHI 16 conference:

acm-dl-searcher get 10.1145/2858036 --short-name "CHI 16"

This will download all the entries and their abstracts. The short-name provided can be anything. The first parameter expected for acm-dl-searcher get is the DOI of the venue.

To list all the venues saved:

acm-dl-searcher list

To search from the saved venues:

acm-dl-searcher search "adaptive"

This will search all the venues obtained through acm-dl-searcher get, and list out the papers whose title or abstract contains the phrase "adaptive". Currently the searcher uses a fuzzy search with a maximum difference of 2.

To narrow the search to particular venue(s) use the option --venue-short-name-filter:

acm-dl-searcher search "adaptive" --venue-short-name-filter "CHI"

This will list out the matches from venues whose short name contains "CHI".

To print out the abstracts as well use the option --print-abstracts:

acm-dl-searcher search "adaptive" --print-abstracts

To view the results on the browser use the option --html:

acm-dl-searcher search "adaptive" --html

Credits

This package was created with Cookiecutter and the briggySmalls/cookiecutter-pypackage project template.
acmdrunner
Ascetic Command Runner

Not all of our projects use django or another God framework. So, once we start our hobby project, it would be great to bring some commands into the game. For example, maybe our hobby project needs a simple test command which loads a custom TestRunner, etc. In this case this package may be handy for you!

Installation

Simply run in your bash:

pip install acmdrunner

Usage

In your django-like manage.py command loader, you need to trigger the following:

import os
from acmdrunner import Loader

# ... make all your preparations, initialize project settings, etc ...

Loader.load_from_directory(os.path.dirname(__file__))
Loader.load_from_package('rit.app')

Loader will search the passed folder recursively for folders named management, and will try to load the file acr_commands.py from any folder found.

An example of the file acr_commands.py:

from acmdrunner.dispatcher import CommandDispatcher

command_dispatcher = CommandDispatcher()

def execute(*args):
    pass

command_dispatcher.register_command('test', execute)

register_command registers a specific command and a handler for this command. Your commands should implement an execute method. It is better to inherit from BaseCommand, but as it is ascetic, you can simply pass a class with an execute method implemented. That's all!

To run a command, please trigger the following call:

from your_package_place import command_dispatcher

command_dispatcher.execute_command(command_name, *args, **kwargs)

Real usage example

If you want to load all commands from a specific namespace, you can implement the following:

packages_to_traverse = ('rit.app', 'rit.core')
for package in packages_to_traverse:
    Loader.load_from_package(package)
Loader.load_from_directory(os.path.dirname(os.getcwd()))
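A class-based handler might look like the sketch below. This is only an illustration: the README mentions BaseCommand but not its import path, so the import and the registration style here are assumptions:

from acmdrunner import BaseCommand  # assumption: the real import path may differ
from acmdrunner.dispatcher import CommandDispatcher

command_dispatcher = CommandDispatcher()

class TestCommand(BaseCommand):
    def execute(self, *args, **kwargs):
        # load and run your custom TestRunner here, as in the motivating example
        print('running tests with', args)

# register under the name 'test'; the README says a class with an
# execute method can be passed in place of a plain handler function
command_dispatcher.register_command('test', TestCommand)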
acme
No description available on PyPI.
acme31
No description available on PyPI.
acmeasync
ACMEasync

AsyncIO ACME client for Python 3.

Why?

Moar async moar better. Seriously though, I wanted to utilize Python's asyncio to create an automatically certifying proxy server that "just works".

How?

You can use the library as is, see acmeasync/__main__.py and acmeasync/certbot2.py as guides for spinning your own implementations, or use the built-in TLS reverse proxy (currently raw TCP only).

To run the proxy:

export DOMAINS="example.com,example.net"
export PORT=80  # or whatever port you wish to run the ACME challenge http server on; you need root to serve on 80, or you can forward 8080 if you're running in a docker container for example.
export EMAIL="[email protected]"
export PROXIES="8081:towel.blinkenlights.nl:23,8082:towel.blinkenlights.nl:23"  # format: localport:remotehost:remoteport,...
export DIRECTORY_URL="https://acme-v02.api.letsencrypt.org/directory"

acmeleproxy

It's recommended you run as root so that proxy processes can drop privileges and lose access to your private keys, but this is optional.

API documentation incoming soon...

But why Python?

Yeah, I know, the GIL. The proxy server uses multiprocessing to spawn a subprocess per connection, which should give much better performance. This kinda thing exists in the nodejs world already, why not python too?

Requirements

Pulled in by setup.py:

acme
aiohttp
aiohttp-requests

Required from your OS:

python3-openssl