package (string, lengths 1-122)
package-description (string, lengths 0-1.3M)
ablk
Matthew Oertle <[email protected]>

ablk is an image viewer that uses UTF-8 block characters and 24-bit ANSI escape sequences to render an image in the terminal window. It is useful over SSH into a remote machine to get an idea of what an image file looks like.

```
usage: ablk [-h] [--delay DELAY] [--no-name] images [images ...]

positional arguments:
  images                Image files

optional arguments:
  -h, --help            show this help message and exit
  --delay DELAY, -d DELAY
                        Delay between each image
  --no-name, -n         Do not show the filename of the image
```
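For example, to flip through a directory of screenshots over SSH with a two-second pause between images (flag spellings taken from the help text above):

```
ablk --delay 2 screenshots/*.png
```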
ablkit
📘 Documentation | 📚 Examples | 💬 Reporting Issues

ABLkit: A Toolkit for Abductive Learning

ABLkit is an efficient Python toolkit for Abductive Learning (ABL). ABL is a novel paradigm that integrates machine learning and logical reasoning in a unified framework. It is suitable for tasks where both data and (logical) domain knowledge are available.

Key Features of ABLkit:

- High Flexibility: compatible with various machine learning modules and logical reasoning components.
- User-Friendly Interface: provide data, model, and knowledge, and get started with just a few lines of code.
- Optimized Performance: optimized for high performance and accelerated training speed.

ABLkit encapsulates advanced ABL techniques, providing users with an efficient and convenient toolkit to develop dual-driven ABL systems, which leverage the power of both data and knowledge.

Installation

Install from PyPI

The easiest way to install ABLkit is using pip:

```
pip install ablkit
```

Install from Source

Alternatively, to install from source code, run the following commands in your terminal/command line:

```
git clone https://github.com/AbductiveLearning/ABLkit.git
cd ABLkit
pip install -v -e .
```

(Optional) Install SWI-Prolog

If the use of a Prolog-based knowledge base is necessary, please also install SWI-Prolog. For Linux users:

```
sudo apt-get install swi-prolog
```

For Windows and Mac users, please refer to the SWI-Prolog Install Guide.

Quick Start

We use the MNIST Addition task as a quick-start example. In this task, pairs of MNIST handwritten images and their sums are given, along with a domain knowledge base which contains information on how to perform addition operations. Our objective is to input a pair of handwritten images and accurately determine their sum.

Working with Data

ABLkit requires data in the format (X, gt_pseudo_label, Y), where X is a list of input examples containing instances, gt_pseudo_label is the ground-truth label of each example in X, and Y is the ground-truth reasoning result of each example in X. Note that gt_pseudo_label is only used to evaluate the machine learning model's performance, not to train it.

In the MNIST Addition task, data loading looks like:

```python
# The 'datasets' module below is located in 'examples/mnist_add/'
from datasets import get_dataset

# train_data and test_data are tuples in the format (X, gt_pseudo_label, Y)
train_data = get_dataset(train=True)
test_data = get_dataset(train=False)
```

Building the Learning Part

The learning part is constructed by first defining a base model for machine learning. ABLkit offers considerable flexibility, supporting any base model that conforms to the scikit-learn style (which requires the implementation of fit and predict methods), or a PyTorch-based neural network (which has a defined architecture and an implemented forward method). In this example, we build a simple LeNet5 network as the base model.

```python
# The 'models' module below is located in 'examples/mnist_add/'
from models.nn import LeNet5

cls = LeNet5(num_classes=10)
```

To facilitate uniform processing, ABLkit provides the BasicNN class to convert a PyTorch-based neural network into a format compatible with scikit-learn models. To construct a BasicNN instance, aside from the network itself, we also need to define a loss function, an optimizer, and the computing device.

```python
import torch
from ablkit.learning import BasicNN

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(cls.parameters(), lr=0.001, alpha=0.9)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
base_model = BasicNN(model=cls, loss_fn=loss_fn, optimizer=optimizer, device=device)
```

The base model built above is trained to make predictions on instance-level data (e.g., a single image), while ABL deals with example-level data. To bridge this gap, we wrap the base_model into an instance of ABLModel. This class serves as a unified wrapper for base models, facilitating the learning part to train, test, and predict on example-level data (e.g., images that comprise an equation).

```python
from ablkit.learning import ABLModel

model = ABLModel(base_model)
```

Building the Reasoning Part

To build the reasoning part, we first define a knowledge base by creating a subclass of KBBase. In the subclass, we initialize the pseudo_label_list parameter and override the logic_forward method, which specifies how to perform (deductive) reasoning that processes pseudo-labels of an example into the corresponding reasoning result. Specifically, for the MNIST Addition task, this logic_forward method is tailored to execute the sum operation.

```python
from ablkit.reasoning import KBBase

class AddKB(KBBase):
    def __init__(self, pseudo_label_list=list(range(10))):
        super().__init__(pseudo_label_list)

    def logic_forward(self, nums):
        return sum(nums)

kb = AddKB()
```

Next, we create a reasoner by instantiating the class Reasoner, passing the knowledge base as a parameter. Due to the indeterminism of abductive reasoning, there could be multiple candidate pseudo-labels compatible with the knowledge base. In such scenarios, the reasoner can minimize inconsistency and return the pseudo-label with the highest consistency.

```python
from ablkit.reasoning import Reasoner

reasoner = Reasoner(kb)
```

Building Evaluation Metrics

ABLkit provides two basic metrics, namely SymbolAccuracy and ReasoningMetric, which are used to evaluate the accuracy of the machine learning model's predictions and the accuracy of the logic_forward results, respectively.

```python
from ablkit.data.evaluation import ReasoningMetric, SymbolAccuracy

metric_list = [SymbolAccuracy(), ReasoningMetric(kb=kb)]
```

Bridging Learning and Reasoning

Now, we use SimpleBridge to combine learning and reasoning in a unified ABL framework.

```python
from ablkit.bridge import SimpleBridge

bridge = SimpleBridge(model, reasoner, metric_list)
```

Finally, we proceed with training and testing.

```python
bridge.train(train_data, loops=1, segment_size=0.01)
bridge.test(test_data)
```

To explore detailed tutorials and further information, please refer to the documentation.

Examples

We provide several examples in examples/. Each example is stored in a separate folder containing a README file.

- MNIST Addition
- Handwritten Formula (HWF)
- Handwritten Equation Decipherment
- Zoo

References

For more information about ABL, please refer to: Zhou, 2019 and Zhou and Huang, 2022.

```
@article{zhou2019abductive,
  title   = {Abductive learning: towards bridging machine learning and logical reasoning},
  author  = {Zhou, Zhi-Hua},
  journal = {Science China Information Sciences},
  volume  = {62},
  number  = {7},
  pages   = {76101},
  year    = {2019}
}

@incollection{zhou2022abductive,
  title     = {Abductive Learning},
  author    = {Zhou, Zhi-Hua and Huang, Yu-Xuan},
  booktitle = {Neuro-Symbolic Artificial Intelligence: The State of the Art},
  editor    = {Pascal Hitzler and Md. Kamruzzaman Sarker},
  publisher = {{IOS} Press},
  pages     = {353--369},
  address   = {Amsterdam},
  year      = {2022}
}
```
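Since the quick start above notes that any scikit-learn-style estimator (implementing fit and predict) can serve as the base model, a variant of the learning part built on a random forest would look roughly like this (a hedged sketch; whether a given estimator performs well on raw MNIST pixels is a separate question):

```python
from sklearn.ensemble import RandomForestClassifier
from ablkit.learning import ABLModel

# Any estimator exposing fit/predict can stand in for BasicNN here,
# per the flexibility claim in the README above.
base_model = RandomForestClassifier(n_estimators=100)
model = ABLModel(base_model)
```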
ablog
ABlog is a Sphinx extension that converts any documentation or personal website project into a full-fledged blog. Please check our documentation for information on how to get started.

Note: this is the new home of Ahmet Bakan's Ablog Sphinx extension. The original project is no longer maintained, and the SunPy Project has taken over maintenance.

Warning: this version is maintained with the aim of keeping it working for the SunPy Project website, so new features or bugfixes are highly unlikely unless they directly impact that site. We strongly encourage users and interested parties to submit patches to ``ablog``.
ablpywrapper
Astro Bot List Python Wrapper

This is a wrapper for Astro Bot List, made for Python.

Get bot stats:

```python
from main import Botlists

abl = Botlists('api key')
x = abl.get()
print(x)
```

Post server count:

```python
from main import Botlists

abl = Botlists('api key')
x = abl.count(12)
print(x)
```
abl-quantconnect-stubs
QuantConnect Stubs

This package contains type stubs for QuantConnect's Lean algorithmic trading engine and for parts of the .NET library that are used by Lean. These stubs can be used by editors to provide type-aware features like autocomplete and auto-imports in QuantConnect strategies written in Python.

After installing the stubs, you can copy the following line to the top of every Python file to have the same imports as the ones that are added by default in the cloud:

```python
from AlgorithmImports import *
```

This line imports all common QuantConnect members and provides autocomplete for them.
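With the stubs installed, an editor can type-check and autocomplete a minimal strategy skeleton such as the following (a sketch using standard Lean API names; it is meant to illustrate local editing support, not to be a complete algorithm):

```python
from AlgorithmImports import *

class ExampleAlgorithm(QCAlgorithm):
    def Initialize(self):
        # The stubs let the editor autocomplete QCAlgorithm members here.
        self.SetStartDate(2021, 1, 1)
        self.SetCash(100000)
        self.AddEquity("SPY", Resolution.Daily)

    def OnData(self, data):
        if not self.Portfolio.Invested:
            self.SetHoldings("SPY", 1.0)
```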
abl.robot
UNKNOWN
ablt-python-api
aBLT Python API wrapper

About

This is a Python wrapper for the aBLT API. It is used to communicate with the aBLT API; you may use it to create your own aBLT client using asynchronous or synchronous Python. You can find more information about the aBLT API here.

First, you need to obtain an aBLT API token. You can do so here by getting in touch with support for paid plans. Then you may use any existing ready-made bots or templates, or create your own bot via the UI.

Installation

The API wrapper is available on PyPI. You can install it with pip (Python 3.9+ recommended):

```
pip install ablt-python-api
```

Usage

Then you can import and use it:

```python
# For the asynchronous API wrapper, use ABLTApi_async
from ablt_python_api import ABLTApi  # this is the synchronous API wrapper

# Init with explicit token
api = ABLTApi(bearer_token=YOUR_ABLT_API_TOKEN)

# Init with environment variable ABLT_BEARER_TOKEN
api = ABLTApi()
```

If for some reason you want to use your own logger, you can initialize the API wrapper with it:

```python
# logger is a pre-configured instance of logging.Logger
api = ABLTApi(logger=your_logger)
```

API methods

Bots

List of bots

You may get the list of bots:

```python
# Returns a list of bots; see schema docs: https://docs.ablt.ai/api_docs/bots/get#Response-body
bots = api.get_bots()
```

Or you may use the actual schema:

```python
from ablt_python_api.schemas import BotsSchema

# Use await for the asynchronous API wrapper
bots = [BotsSchema.model_validate(bot_dict) for bot_dict in api.get_bots()]
```

Single bot

You may get a bot by UID:

```python
bot = api.find_bot_by_uid(bot_uid='F0b98A09-c5ed-1197-90F3-BBfF1DBb28ee')
```

Alternatively, you may get a bot by slug:

```python
bot = api.find_bot_by_slug(bot_slug='omni')
```

Or by name (not recommended, because names may not be unique):

```python
bot = api.find_bot_by_name(bot_name='Miles Hiker')
```

If no bot is found, None is returned.

Chat

To chat with a bot you may use the chat method:

```python
# Specify the bot UID; returns a generator with bot responses.
response = api.chat(bot_uid=BOT_UID, prompt='Hello, bot!')

# Or specify the bot slug; returns a generator with bot responses.
response = api.chat(bot_slug=BOT_SLUG, prompt='Hello, bot!')

# To get the response as a string, use a loop or extract just the first response from the generator
str_response = response.__next__()  # or use __anext__ in async mode
```

Most probably, you will use a messages list instead of prompt to preserve context. In this case, you may call the method like the following:

```python
messages = [
    {"content": "I like to eat pizza", "role": "user"},
    {"content": "Hello, I like pizza too!", "role": "assistant"},
    {"content": "What do I like to eat?", "role": "user"},
]

# Returns a generator with the bot response
response = api.chat(bot_uid=BOT_UID, messages=messages)
```

You need to ensure that your prompt is the last message in the messages list.

More options

Additionally, you may extend/override the system context by passing a system instruction to the bot:

```python
# Use with caution: it may conflict with or replace settings stored in the UI
# and may lead to unexpected results
messages = [
    {"content": "You are a bibliophile bot, you know everything about literature", "role": "system"},
    {"content": "Who is the author of 'The Sirens of Titan'?", "role": "user"},
]
```

You may call the chat method with params to override system settings, if you want:

```python
response = api.chat(
    bot_uid=BOT_UID,
    prompt='Hello, bot!',
    language='Arabic',  # may be: "Arabic", "French", "English", "Spanish", "Russian"
    max_words=100,      # any integer; values below 100 and above 2000 are not recommended
    user_id=42,         # unique user ID, used to split usage statistics per user
    use_search=False)   # search mode; if True, the bot will try to find the answer on the internet
```

Notes: in general, you may try to use unusual values for:

- language, like "Serbian", but it is not guaranteed to work as expected.
- max_words: you may try values below 100 words to save tokens, but you may experience cut-offs. For values above 2000 words you may experience timeouts or errors on regular (not 32k or 128k) models.
- user_id: you may use any integer value, but it is recommended to use your own unique user IDs, because they are used to split usage statistics per user.
- use_search: a special feature for premium plans. You may try to manage it from the API rather than the UI, but it is highly discouraged with small max_words values; while using search, please use values of at least 100 for max_words.

Streaming mode

By default, bots work in streaming mode (as in the UI), so you may use the chat method to chat with a bot in streaming mode, but you may switch it off with:

```python
response = api.chat(bot_uid=BOT_UID, prompt='Hello, bot!', stream=False)
```

If you prefer streaming mode, you need to get the response from the generator:

```python
import sys

from ablt_python_api import DoneException

try:
    for response in api.chat(bot_uid=BOT_UID, prompt='Hello, bot!', stream=True):
        # Direct stdout output so the response is printed on the fly
        sys.stdout.write(response)
        # Forcefully flush the output each time for a typewriter effect
        sys.stdout.flush()
except DoneException:
    pass  # DoneException is raised when the bot finishes the conversation
```

Statistics

Statistics may be used to obtain words and tokens usage for a period of time.

Full statistics

```python
# Returns statistics for the current date for the default user = -1
statistics_for_today = api.get_usage_statistics()

# Returns statistics for a date range
statistics_for_range = api.get_usage_statistics(start_date='2022-02-24', end_date='2023-11-17')

# Returns statistics for the user with ID 42
statistics_for_user = api.get_usage_statistics(user_id=42)
```

You may use a schema to validate statistics:

```python
from ablt_python_api.schemas import StatisticsSchema

statistics = StatisticsSchema.model_validate(api.get_usage_statistics())
```

Statistics for a day

```python
# Returns statistics for the current date for user = -1
statistics_for_today = api.get_usage_statistics_for_day()

# Returns statistics for the specified date
statistics_for_specific_day = api.get_usage_statistics_for_day(date='2022-02-24')

# Returns statistics for the user with ID 42
statistics_for_user = api.get_usage_statistics_for_day(user_id=42)
```

With schema:

```python
from ablt_python_api.schemas import StatisticItemSchema

statistics = StatisticItemSchema.model_validate(api.get_usage_statistics_for_day())
```

Total statistics

```python
# Returns statistics for the current date for user = -1
statistics_for_today = api.get_total_usage_statistics()

# Returns statistics for the specified date
statistics_for_specific_day = api.get_total_usage_statistics(date='2022-02-24')

# Returns statistics for the user with ID 42
statistics_for_user = api.get_total_usage_statistics(user_id=42)
```

With schema:

```python
from ablt_python_api.schemas import StatisticTotalSchema

statistics = StatisticTotalSchema.model_validate(api.get_total_usage_statistics())
```

Troubleshooting

You can always contact support or reach us in the Discord channel.

SSL errors

In some cases you may encounter SSL errors; you can then disable certificate verification:

```python
import ssl

sslcontext = ssl.create_default_context()
sslcontext.check_hostname = False
sslcontext.verify_mode = ssl.CERT_NONE

ABLTApi_async(ssl_context=sslcontext)  # for async
ABLTApi(ssl_verify=False)              # for sync
```

Rate limit errors

Never try to flood the API with requests, or you may run into rate limit errors. In that case, wait for some time and retry your request. In particular, never flood the API with simultaneous requests for more users than allowed in your plan.

Timeout errors

In some cases you may encounter timeout errors; you may then decrease the max_words value or use stream=True to get the response on the fly.

Other errors

In very rare cases you may encounter other errors, e.g. caused by a reboot of the API itself. To check API health, use the health_check method:

```python
# Returns True if the API is healthy, otherwise False
api.health_check()
```

Best practices

Always check and follow the Guides, References and Examples from the aBLT documentation.
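Since the wrapper also ships ABLTApi_async, an asynchronous streaming chat loop would look roughly like this (a hedged sketch based on the __anext__ and streaming notes above; the exact async surface may differ from this assumption):

```python
import asyncio
import sys

from ablt_python_api import ABLTApi_async, DoneException

async def main():
    api = ABLTApi_async()  # reads ABLT_BEARER_TOKEN from the environment
    try:
        # The async wrapper is assumed to expose the same chat() generator,
        # consumed here with `async for` instead of a plain loop.
        async for chunk in api.chat(bot_slug='omni', prompt='Hello, bot!', stream=True):
            sys.stdout.write(chunk)
            sys.stdout.flush()
    except DoneException:
        pass  # raised when the bot finishes the conversation

asyncio.run(main())
```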
abl.util
No description available on PyPI.
abl.vpath
No description available on PyPI.
ably
A Python client library for Ably realtime messaging.

Setup

You can install this package using the pip tool:

```
pip install ably
```

Using Ably for Python

- Sign up for Ably at https://ably.com/sign-up
- Get usage examples at https://github.com/ably/ably-python
- Visit https://ably.com/docs for a complete API reference and more examples
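As a taste of the API documented at the links above, publishing a message over Ably's REST interface looks roughly like this (a hedged sketch; recent versions of ably-python are async-first, so the exact call shape depends on the version installed):

```python
import asyncio

from ably import AblyRest

async def main():
    # Replace with your Ably API key from the dashboard
    client = AblyRest('your-api:key')
    channel = client.channels.get('greetings')
    await channel.publish('event', 'hello from Python')
    await client.close()

asyncio.run(main())
```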
abm
Allow loading non-Python module formats as modules.

Install

Use pip for installing:

```
$ pip install abm
```

Usage

Once installed, you can activate abm by importing abm.activate:

```python
from abm import activate
```

Now you can register new loaders by doing:

```python
from abm.loaders import IniLoader

IniLoader.register()
```

From now on, you can load *.ini files as if they were modules:

```ini
# config.ini
[example]
option = value
```

```python
import config

assert config['example'] is not None
assert config['example']['option'] == 'value'
```

Note: the abm.loaders package is still work in progress; it will gather a set of useful loaders for common extensions.

Writing a loader

Extend the base loader AbmLoader provided in abm.loaders and implement the create_module and exec_module methods. Provide the extensions class member to allow automatic registration:

```python
from configparser import ConfigParser
from types import ModuleType

from abm.loaders import AbmLoader

class IniLoader(AbmLoader):
    extensions = ('.ini',)

    def __init__(self, name, path):
        self.file_path = path

    def create_module(self, spec):
        module = ConfigModule(spec.name)
        self.init_module_attrs(spec, module)
        return module

    def exec_module(self, module):
        module.read(self.file_path)
        return module

class ConfigModule(ModuleType, ConfigParser):
    def __init__(self, specname):
        ModuleType.__init__(self, specname)
        ConfigParser.__init__(self)
```

Loaders are initialized passing the name of the module, in the form 'path.to.the.module', and its absolute path.

Implementing create_module

The create_module function should produce a module of the correct type, nothing more. This method is passed the module specification object used to find the module:

```python
def create_module(self, spec):
    module = ConfigModule(spec.name)
    self.init_module_attrs(spec, module)
    return module
```

Implementing exec_module

The exec_module function should contain the code for loading the contents of the module:

```python
def exec_module(self, module):
    module.read(self.file_path)
    return module
```

A good tip for determining how to implement this method is to imagine you trigger a reload of the module: the code syncing the module contents with the file is what you should put here.

Overriding builtin extensions

Overriding builtin extensions such as .py, .pyc or .so is possible by passing override_builtins=True to the register() method:

```python
from abm.loaders import AbmLoader

class BreakPyModules(AbmLoader):
    extensions = ('.py',)

    def create_module(self, spec):
        raise NotImplementedError('Can load .py modules no more.')

BreakPyModules.register(override_builtins=True)
```

Use this with caution since you can break the import system. Not passing override_builtins results in a ValueError exception.

How does it work

The extension mechanism works by monkeypatching the FileFinder class in charge of reading several Python module formats from the local file system. Internally, FileFinder uses file loaders to read the several formats of Python modules, identified by their file extension. Although these classes are public, FileFinder does not expose any extension mechanism to link new extensions with new loaders.

In the spirit of sys.path_hooks and other extension hooks, activating abm exposes a dictionary in sys.abm_hooks to register new loaders dynamically. For instance:

```python
import sys

from abm.loaders import IniLoader
from abm.core import activate

activate()
sys.abm_hooks['.ini'] = IniLoader
```

It works by turning the internal instance attribute _loaders of FileFinder instances into a class property. Setting the property diverts the new value to a different attribute, while reading the value combines the original one with the extensions in sys.abm_hooks.
abm1559
abm1559: agent-based simulation environment for EIP 1559.
ab-mac-changer
Failed to fetch description. HTTP Status Code: 404
abmap
AbMAP: Antibody Mutagenesis-Augmented Processing

This repository is a work in progress.

This repository contains code and pre-trained model checkpoints for AbMAP, a Protein Language Model (PLM) customized for antibodies, as featured in Learning the Language of Antibody Hypervariability (Singh, Im et al. 2023). AbMAP leverages information from foundational PLMs as well as antibody structure and function, offering a multi-functional tool useful for predicting structure and functional properties, and for analyzing B-cell repertoires.

Installation

AbMAP relies on ANARCI to assign IMGT labels to antibody sequences. Please see the ANARCI repo or run the following in a new conda environment:

```
conda install -c biocore hmmer  # can also install using `brew/port/apt/yum install hmmer`
git clone https://github.com/oxpig/ANARCI.git
cd ANARCI
python setup.py install
```

Then install abmap using:

```
pip install abmap                                  # (recommended) latest release from PyPI
pip install git+https://github.com/rs239/ablm.git  # the live main branch
```

Usage

After installation, AbMAP can be easily imported into your Python projects or run from the command line. Please see examples/demo.ipynb for common use cases. Instructions for running via CLI are below.

Command Line Usage

Instructions in progress.

- Augment: given a sequence, generate a foundational PLM embedding augmented with in-silico mutagenesis and CDR isolation.
- Train: given a dataset of labeled pairs of sequences and their augmented embeddings, train the AbMAP model on downstream prediction tasks.
- Embed: given FASTA sequences and a pre-trained AbMAP model, generate their AbMAP embeddings (fixed or variable).

Please provide feedback on the issues page or by opening a pull request. If AbMAP is useful in your work, please consider citing our bioRxiv preprint.
abmarl
Abmarl

Abmarl is a package for developing agent-based simulations and training them with multi-agent reinforcement learning (MARL). We provide an intuitive command line interface for engaging with the full workflow of MARL experimentation: training, visualizing, and analyzing agent behavior. We define an Agent-Based Simulation Interface and Simulation Manager, which control which agents interact with the simulation at each step. We support integration with popular reinforcement learning simulation interfaces, including gym.Env, MultiAgentEnv, and OpenSpiel. We define our own GridWorld Simulation Framework for creating custom grid-based agent-based simulations.

Abmarl leverages RLlib's framework for reinforcement learning and extends it to more easily support custom simulations, algorithms, and policies. We enable researchers to rapidly prototype MARL experiments and simulation design, and we lower the barrier for pre-existing projects to prototype RL as a potential solution.

Quickstart

To use Abmarl, install via pip:

```
pip install abmarl
```

To develop Abmarl, clone the repository and install via pip's development mode:

```
git clone [email protected]:LLNL/Abmarl.git
cd abmarl
pip install -r requirements/requirements_all.txt
pip install -e . --no-deps
```

Train agents in a multicorridor simulation:

```
abmarl train examples/multi_corridor_example.py
```

Visualize trained behavior:

```
abmarl visualize ~/abmarl_results/MultiCorridor-2020-08-25_09-30/ -n 5 --record
```

Note: if you install with conda, then you must also include ffmpeg in your virtual environment.

Documentation

You can find the latest Abmarl documentation on our ReadTheDocs page.

Community

Citation

Abmarl has been published in the Journal of Open Source Software (JOSS). It can be cited using the following bibtex entry:

```
@article{Rusu2021,
  doi = {10.21105/joss.03424},
  url = {https://doi.org/10.21105/joss.03424},
  year = {2021},
  publisher = {The Open Journal},
  volume = {6},
  number = {64},
  pages = {3424},
  author = {Edward Rusu and Ruben Glatt},
  title = {Abmarl: Connecting Agent-Based Simulations with Multi-Agent Reinforcement Learning},
  journal = {Journal of Open Source Software}
}
```

Reporting Issues

Please use our issue tracker to report any bugs or submit feature requests. Great bug reports tend to have:

- A quick summary and/or background
- Steps to reproduce; sample code is best
- What you expected would happen
- What actually happens

Contributing

Please submit contributions via pull requests from a forked repository. Find out more about this process here. All contributions are under the BSD 3 License that covers the project.

Release

LLNL-CODE-815883
abm-colony-collection
Collection of tasks for analyzing colony dynamics. Designed to be used both in Prefect workflows and as modular, useful pieces of code.

Installation

The collection can be installed using:

```
pip install abm-colony-collection
```

We recommend using Poetry to manage and install dependencies. To install into your Poetry project, use:

```
poetry add abm-colony-collection
```

Usage

Prefect workflows

All tasks in this collection are wrapped in a Prefect @task decorator, and can be used directly in a Prefect @flow. Running tasks within a Prefect flow enables you to take advantage of features such as automatically retrying failed tasks, monitoring workflow states, running tasks concurrently, deploying and scheduling flows, and more.

```python
from prefect import flow
from abm_colony_collection import <task_name>

@flow
def run_flow():
    <task_name>()

if __name__ == "__main__":
    run_flow()
```

See cell-abm-pipeline for examples of using tasks from different collections to build a pipeline for simulating and analyzing agent-based model data.

Individual tasks

Not all use cases require a full workflow. Tasks in this collection can be used without the Prefect @task decorator by simply importing directly from the module:

```python
from abm_colony_collection.<task_name> import <task_name>

def main():
    <task_name>()

if __name__ == "__main__":
    main()
```

or using the .fn() method:

```python
from abm_colony_collection import <task_name>

def main():
    <task_name>.fn()

if __name__ == "__main__":
    main()
```
abm-initialization-collection
Collection of tasks for initializing ABM simulations. Designed to be used both in Prefect workflows and as modular, useful pieces of code.

Installation

The collection can be installed using:

```
pip install abm-initialization-collection
```

We recommend using Poetry to manage and install dependencies. To install into your Poetry project, use:

```
poetry add abm-initialization-collection
```

Usage

Prefect workflows

All tasks in this collection are wrapped in a Prefect @task decorator, and can be used directly in a Prefect @flow. Running tasks within a Prefect flow enables you to take advantage of features such as automatically retrying failed tasks, monitoring workflow states, running tasks concurrently, deploying and scheduling flows, and more.

```python
from prefect import flow
from abm_initialization_collection.<module_name> import <task_name>

@flow
def run_flow():
    <task_name>()

if __name__ == "__main__":
    run_flow()
```

See cell-abm-pipeline for examples of using tasks from different collections to build a pipeline for simulating and analyzing agent-based model data.

Individual tasks

Not all use cases require a full workflow. Tasks in this collection can be used without the Prefect @task decorator by simply importing directly from the module:

```python
from abm_initialization_collection.<module_name>.<task_name> import <task_name>

def main():
    <task_name>()

if __name__ == "__main__":
    main()
```

or using the .fn() method:

```python
from abm_initialization_collection.<module_name> import <task_name>

def main():
    <task_name>.fn()

if __name__ == "__main__":
    main()
```
abmishra_nester
UNKNOWN
abmjktmyhellopkg
No description available on PyPI.
abml
No description available on PyPI.
abml-cli
No description available on PyPI.
abmn-qcloud-cmq-sdk-py3
No description available on PyPI.
abm-shape-collection
Collection of tasks for analyzing cell shapes. Designed to be used both in Prefect workflows and as modular, useful pieces of code.

Installation

The collection can be installed using:

```
pip install abm-shape-collection
```

We recommend using Poetry to manage and install dependencies. To install into your Poetry project, use:

```
poetry add abm-shape-collection
```

Usage

Prefect workflows

All tasks in this collection are wrapped in a Prefect @task decorator, and can be used directly in a Prefect @flow. Running tasks within a Prefect flow enables you to take advantage of features such as automatically retrying failed tasks, monitoring workflow states, running tasks concurrently, deploying and scheduling flows, and more.

```python
from prefect import flow
from abm_shape_collection import <task_name>

@flow
def run_flow():
    <task_name>()

if __name__ == "__main__":
    run_flow()
```

See cell-abm-pipeline for examples of using tasks from different collections to build a pipeline for simulating and analyzing agent-based model data.

Individual tasks

Not all use cases require a full workflow. Tasks in this collection can be used without the Prefect @task decorator by simply importing directly from the module:

```python
from abm_shape_collection.<task_name> import <task_name>

def main():
    <task_name>()

if __name__ == "__main__":
    main()
```

or using the .fn() method:

```python
from abm_shape_collection import <task_name>

def main():
    <task_name>.fn()

if __name__ == "__main__":
    main()
```
abmtools
Failed to fetch description. HTTP Status Code: 404
abmv1-myhellopkg
No description available on PyPI.
abna
ABN Amro mutations retrieval

This Python library enables retrieval of mutations from the Dutch ABN Amro banking site using the "soft token" (5-digit pass code). Should work with Python 2.7 or 3.x; requires requests and cryptography.

This library was created by and is maintained by Dirkjan Ochtman. If you are in a position to support ongoing maintenance and further development, or use it in a for-profit context, please consider supporting my open source work on Patreon.

Example

Here is a minimal example demonstrating how to use the library:

```python
import abna, json

sess = abna.Session('NL01ABNA0123456789')
sess.login(123, '12345')
print(json.dumps(sess.mutations('NL01ABNA0123456789'), indent=2))
```

Change log

0.3 (2020-02-04)

- Add User-Agent to fake browser behavior

0.2 (2018-07-15)

- Allow retrieval of mutations from different accounts (#1, thanks to @ivasic). Note that this changes the signature of the Session.mutations() method to take the account IBAN as a mandatory first argument.

Alternatives

- abnamro-tx is a docker-based solution that runs a headless Chrome instance to download mutation files for you.
abnamro
This is an unofficial Python package that implements the ABN AMRO API.
abnamrolib
A library implementing authentication to an ABN Amro account or ICS credit card and retrieval of the transactions.

Documentation: https://abnamrolib.readthedocs.org/en/latest

Development Workflow

The workflow supports the following steps:

- lint
- test
- build
- document
- upload
- graph

These actions are supported out of the box by the corresponding scripts under the _CI/scripts directory, with sane defaults based on best practices. Sourcing setup_aliases.ps1 for Windows PowerShell, or setup_aliases.sh in bash on Mac or Linux, provides handy shell aliases for all those commands, prepended with an underscore.

The bootstrap script creates a .venv directory inside the project directory hosting the virtual environment; it uses pipenv for that. It is called by all other scripts before they do anything, so one could simply start by calling _lint and that would set everything up before it actually linted the project.

Once the code is ready to be delivered, the _tag script should be called, accepting one of three arguments (patch, minor, major) following the semantic versioning scheme. So for the initial delivery one would call:

```
$ _tag --minor
```

which would bump the version of the project to 0.1.0, tag it in git, do a push, ask for a description of the change, and automagically update HISTORY.rst with the version and the change provided.

So the full workflow after git is initialized is (repeat as necessary; of course it could also be test - code - lint):

- code
- lint
- test
- commit and push
- develop more through the code-lint-test cycle
- tag (with the appropriate argument)
- build
- upload (if you want to host your package on PyPI)
- document (of course this could be run at any point)

Important Information

This template is based on pipenv. In order to be compatible with requirements.txt, so that the created package can be used by any part of the existing Python ecosystem, some hacks were needed. So when building a package out of this, do not simply call:

```
$ python setup.py sdist bdist_egg
```

as this will produce an unusable artifact with files missing. Instead use the provided build and upload scripts, which create all the necessary files in the artifact.

Project Features

- TODO

History

- 0.0.1 (19-07-2019): First code creation
- 0.1.0 (19-07-2019): Initial code implementation
- 0.2.0 (20-07-2019): Exposed transaction objects
- 0.2.1 (20-07-2019): Reverted to default provided value for account transaction amount
- 0.3.0 (21-07-2019): Removed unneeded properties from account transaction
- 0.4.0 (21-07-2019): Exposed actually existing attribute
- 1.0.0 (24-07-2019): Initial working version with accounts, foreign accounts and transaction retrieval
- 1.0.1 (25-07-2019): Made credit card a Comparable
- 1.0.2 (25-07-2019): Generalized the comparison of Comparable objects
- 2.0.0 (26-07-2019): Implemented a credit card contract to make credit cards compatible with bank accounts
- 3.0.0 (26-07-2019): Refactored code to use an external dependency and implemented a contract interface standardizing the retrieval of accounts
- 3.0.1 (28-07-2019): Fixed session dropping issue
- 3.0.2 (28-07-2019): Made error in retrieving non-breaking
- 3.0.3 (28-07-2019): Made retrieval of objects safe and implemented backoff for get methods
- 3.0.4 (28-07-2019): Removed unnecessary method call
- 3.0.5 (28-07-2019): Extended logging
- 3.0.6 (28-07-2019): Updated dependencies
- 3.0.7 (28-07-2019): Added logging
- 3.0.8 (28-07-2019): Updated logging
- 3.0.9 (28-07-2019): Removed unneeded logging
- 3.0.10 (30-07-2019): Extended logging
- 3.1.0 (02-08-2019): Uniquely identify a transaction and an account
- 3.1.1 (16-08-2019): Renamed underlying dependency, updated the code accordingly, and fixed a bug with a pop-up covering the submission of the login
- 3.2.0 (17-08-2019): Shortened timeout on click event on log in
- 3.2.1 (29-08-2019): Added latest popup window
- 4.0.0 (13-09-2019): Implemented cookie-based authentication
- 5.0.0 (09-12-2019): Implemented cookie authentication for credit card and moved relevant shared code into a common module
- 5.1.0 (10-12-2019): Implemented retrieving transactions by date, by ranges of dates, and since a date
- 5.2.0 (10-12-2019): Fixed name of method
- 5.2.1 (26-10-2020): Fixed bug with new cookie header required by ICS
- 5.2.2 (16-04-2021): Made retrieval by date ranges a bit stricter on the accepted input and bumped dependencies
- 5.3.0 (06-07-2021): Implemented date retrieval, fixed a bug with short cookies
- 5.3.1 (07-07-2021): Added pipeline and bumped dependencies
- 5.3.2 (07-07-2021): Added pipeline and bumped dependencies
abnertestlib
Although this library of mine is not that impressive, its description document is very impressive, and it will make people think the library is impressive too. (Italic, bold.)

This is my code:

```python
print("helloworld!")
```
abNester
UNKNOWN
ab_nester
UNKNOWN
abnex
Abnormal expressions

Abnormal expressions (abnex) is an alternative to regular expressions (regex). This is a Python library, but the abnex syntax could be ported to other languages.

Examples

Matching an email address:

- Regex: `([\w\._-]+)@([\w\.]+)`
- Abnex: `{[w"._-"]1++}"@"{[w"."]1++}`
- Abnex (spaced): `{[w "._-"]1++} "@" {[w "."]1++}`
- Abnex (expanded): `{ [w "._-"]1++ } "@" { [w "."]1++ }`

A more advanced pattern:

```
{{{[a-z '_']1++} {[a-z 0-9 '_-.']0++}} '@' {{[a-z 0-9]1++} '.' {[a-z 0-9]1++} {[a-z 0-9 '-_.']0++}} {[a-z 0-9]1++}}
```

Why is Abnex Better?

- It's easier to read, write and understand.
- You can use spaces inside of the expression; you can also "expand" it, i.e. write it over multiple lines and use indentation.
- You don't have to use backslashes all the time.
- More logical/common symbols, like ! for not, {} for groups, and 1++, 0++, 0+ for one or more, zero or more, zero or one.
- It's easier to see if a symbol is an actual symbol you are searching for or if it's a regex character, e.g.:
  - Regex: `[\w-]+@[\w-_]+`
  - Abnex: `[w "-"]1++ "@" [w "-"]1++`

Documentation

The regex is shown first; the abnex equivalent follows after ->.

Anchors

- Start of string, or start of line in multi-line pattern: `^` -> `->`
- End of string, or end of line in multi-line pattern: `$` -> `<-`
- Start of string: `\A` -> `s>`
- End of string: `\Z` -> `<s`
- Word boundary: `\b` -> `:`
- Not word boundary: `\B` -> `!:`
- Start of word: `\<` -> `w>`
- End of word: `\>` -> `<w`

Character Classes

- Control character: `\c` -> `c`
- White space: `\s` -> `_`
- Not white space: `\S` -> `!_`
- Digit: `\d` -> `d`
- Not digit: `\D` -> `!d`
- Word: `\w` -> `w`
- Not word: `\W` -> `!w`
- Hexadecimal digit: `\x` -> `x`
- Octal digit: `\o` -> `o`

Quantifiers

- 0 or more: `*` -> `0++`
- 1 or more: `+` -> `1++`
- 0 or 1: `?` -> `0+`

Groups and Ranges

- Any character except new line (\n): `.` -> `*`
- a or b: `a|b` -> `"a"|"b"`
- Group: `(...)` -> `{...}`
- Passive (non-capturing) group: `(?:...)` -> `{#...}`
- Range (a or b or c): `[abc]` -> `['abc']` or `["a" "b" "c"]`
- Not in set: `[^...]` -> `[!...]`
- Lower case letter from a to q: `[a-q]` -> `[a-q]`
- Upper case letter from A to Q: `[A-Q]` -> `[A-Q]`
- Digit from 0 to 7: `[0-7]` -> `[0-7]`

Standards

The recommended way to write abnexes:

- Use spaces between characters in character sets. Correct: `[w "_-"]`; incorrect: `[w"_-"]`.
- Put multiple exact characters between the same quotes in character sets. Correct: `["abc"]`; incorrect: `["a" "b" "c"]`, especially incorrect: `["a""b""c"]`.
- Put spaces between groups. Correct: `{w} "." {w}`; incorrect: `{w}"."{w}`.

Examples:

Match for an email address:

- Regex: `[\w-\._]+@[\w-\.]+`
- Abnex (following standards): `{[w "-._"]1++} "@" {[w "-."]1++}`
- Abnex (not following standards): `{[w"-._"]1++}"@"{[w"-."]1++}`

Functions (in Python)

Abnex has most functions from the re library, but it also has some extra functionality like last() and contains().

Common functions between re and abnex (re function -> abnex equivalent):

- match() -> match()
- findall() -> all()
- split() -> split()
- sub() -> replace()
- subn() -> replace_count()
- search() -> first()

Special to abnex:

- holds(): whether or not a string matches an expression (bool).
- contains(): whether or not a string contains a match (bool).
- last(): the last match in a string.
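Based on the function table above, using the library might look roughly like the following (a hedged sketch: the module import name and the pattern-then-string argument order are assumptions mirroring re, since the description above does not show a call example):

```python
import abnex  # assumed import name

# The email pattern from the examples above
pattern = '{[w "._-"]1++} "@" {[w "."]1++}'

# Assumed signatures mirroring re: function(pattern, string)
print(abnex.holds(pattern, "[email protected]"))                # True if the string matches
print(abnex.all(pattern, "[email protected], [email protected]"))  # every match, like re.findall
print(abnex.first(pattern, "contact: [email protected]"))      # first match, like re.search
```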
abnf
ABNF generates parsers for ABNF grammars. Though intended for use with RFC grammars, ABNF should handle any valid grammar.
abnf-to-regexp
The program abnf-to-regexp converts augmented Backus-Naur form (ABNF) to a regular expression.

Motivation

For a lot of string matching problems, it is easier to maintain an ABNF grammar than a regular expression. However, many programming languages do not provide parsing and matching of ABNFs in their standard libraries. This tool allows you to write your grammars in ABNF and convert them to a regular expression which you then include in your source code.

It is based on the abnf Python module, which is used to parse the ABNFs.

After the parsing, we apply a series of optimizations to make the regular expression a bit more readable. For example, alternations of character classes are taken together to form a single character class.

--help

```
usage: abnf-to-regexp [-h] -i INPUT [-o OUTPUT] [--format {single-regexp,python-nested}]

Convert ABNF grammars to Python regular expressions.

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
                        path to the ABNF file
  -o OUTPUT, --output OUTPUT
                        path to the file where regular expression is stored;
                        if not specified, writes to STDOUT
  --format {single-regexp,python-nested}
                        Output format; for example a single regular
                        expression or a code snippet
```

Example Conversion

Please see test_data/nested-python/rfc3987/grammar.abnf for an example grammar. The corresponding generated code, e.g., in Python, is stored at test_data/nested-python/rfc3987/expected.py.

Installation

You can install the tool with pip in your virtual environment:

```
pip3 install abnf-to-regexp
```

Development

- Check out the repository.
- In the repository root, create the virtual environment:

```
python3 -m venv venv3
```

- Activate the virtual environment (in this case, on Linux):

```
source venv3/bin/activate
```

- Install the development dependencies:

```
pip3 install -e .[dev]
```

- Run the pre-commit checks:

```
python precommit.py
```

Versioning

We follow Semantic Versioning. The version X.Y.Z indicates:

- X is the major version (backward-incompatible),
- Y is the minor version (backward-compatible), and
- Z is the patch version (backward-compatible bug fix).
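For instance, converting a grammar to a single regular expression printed on STDOUT uses only the flags from the help text above (the grammar path is the example file mentioned in the description):

```
abnf-to-regexp -i test_data/nested-python/rfc3987/grammar.abnf --format single-regexp
```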
ab-nic-sw
No description available on PyPI.
abn_nester
UNKNOWN
abnormalities
No description available on PyPI.
abnosql
NoSQL Abstraction Library

Basic CRUD and query support for NoSQL databases, allowing for portable cloud native applications:

- AWS DynamoDB
- Azure Cosmos NoSQL

This library is not intended to create databases/tables; use Terraform/ARM/CloudFormation etc for that.

Why not just use the name 'nosql' or 'pynosql'? Because they already exist on PyPI :-)

Contents: Installation, Usage, API Docs, Querying, Indexes, Updates, Existence Checking, Schema Validation, Partition Keys, Pagination, Audit, Change Feed / Stream Support, Client Side Encryption, Configuration (AWS DynamoDB, Azure Cosmos NoSQL), Plugins and Hooks, Testing (AWS DynamoDB, Azure Cosmos NoSQL), CLI, Future Enhancements / Ideas.

Installation

```
pip install 'abnosql[dynamodb]'
pip install 'abnosql[cosmos]'
```

For optional client-side field-level envelope encryption:

```
pip install 'abnosql[aws-kms]'
pip install 'abnosql[azure-kms]'
```

By default, abnosql does not include database dependencies. This is to facilitate packaging abnosql into AWS Lambda or Azure Functions (for example) without over-bloating the packages.

Usage

```python
from abnosql import table
import os

os.environ['ABNOSQL_DB'] = 'dynamodb'
os.environ['ABNOSQL_KEY_ATTRS'] = 'hk,rk'

item = {
    'hk': '1',
    'rk': 'a',
    'num': 5,
    'obj': {
        'foo': 'bar',
        'num': 5,
        'list': [1, 2, 3],
    },
    'list': [1, 2, 3],
    'str': 'str'
}

tb = table('mytable')

# create/replace
tb.put_item(item)

# update - using ABNOSQL_KEY_ATTRS
updated_item = tb.put_item(
    {'hk': '1', 'rk': 'a', 'str': 'STR'},
    update=True
)
assert updated_item['str'] == 'STR'

# bulk
tb.put_items([item])

# note partition/hash key should be first kwarg
assert tb.get_item(hk='1', rk='a') == item

assert tb.query({'hk': '1'})['items'] == [item]

# scan
assert tb.query()['items'] == [item]

# be careful not to use cloud specific statements!
assert tb.query_sql(
    'SELECT * FROM mytable WHERE mytable.hk = @hk AND mytable.num > @num',
    {'@hk': '1', '@num': 4}
)['items'] == [item]

tb.delete_item({'hk': '1', 'rk': 'a'})
```

API Docs

See API Docs.

Querying

query() performs a DynamoDB Query using KeyConditionExpression (if key is supplied) and an exact match on FilterExpression if filters are supplied. For Cosmos, SQL is generated. This is the safest/most cloud agnostic way to query and probably OK for most use cases.

query_sql() performs a DynamoDB ExecuteStatement, passing in the supplied PartiQL statement. Cosmos uses the NoSQL SELECT syntax.

During mocked tests, SQLGlot is used to execute the statement, so results may differ...

Care should be taken with query_sql() not to use SQL features that are specific to any one provider (breaking the abstraction capability of using abnosql in the first place).

Indexes

Beyond partition and range keys defined on the table, indexes currently have limited support within abnosql:

- The DynamoDB implementation of query() allows a secondary index to be specified via the optional index kwarg.
- Cosmos has Range, Spatial and Composite indexes; however, the abnosql library does not yet do anything with the index kwarg in its query() implementation.

Updates

put_item() and put_items() support an update boolean attribute, which if supplied will do an update_item() on DynamoDB, and a patch_item() on Cosmos. For this to work, however, you must specify the key attribute names, either via the ABNOSQL_KEY_ATTRS env var as a comma-separated list (e.g. where multiple tables all share a common partition/range key scheme), or as the key_attrs config item when instantiating the table, e.g.:

```python
tb = table('mytable', {'key_attrs': ['hk', 'rk']})
```

If you don't need to do any updates and only need to create/replace, then these key attribute names do not need to be supplied. All items being updated must actually exist first, or an exception is raised.

Existence Checking

If the check_exists config attribute is True, then CRUD operations raise exceptions as follows:

- get_item() raises NotFoundException if the item doesn't exist
- put_item() raises ExistsException if the item already exists
- put_item(update=True) raises NotFoundException if the item doesn't exist to update
- delete_item() raises NotFoundException if the item doesn't exist

This adds some delay overhead, as abnosql must check whether the item exists. It can also be enabled by setting the environment variable ABNOSQL_CHECK_EXISTS=TRUE.

If for some reason you need to override this behaviour once enabled for the put_item() create operation, you can pass abnosql_check_exists=False into the item (this gets popped out, so it is not persisted), which allows the create operation to overwrite the existing item without throwing ExistsException.

Schema Validation

config can define jsonschema to validate upon create or update operations (via put_item()). A combination of the following config attributes is supported:

- schema: jsonschema dict or YAML string, applied to both create and update
- create_schema: jsonschema dict/YAML, only on create
- update_schema: jsonschema dict/YAML, only on update
- schema_errmsg: override the default error message on both create and update
- create_schema_errmsg: override the default error message on create
- update_schema_errmsg: override the default error message on update

You can get details of validation errors through e.to_problem() or e.detail. NOTE: key_attrs is required when updating (see Updates).

Partition Keys

A few methods, such as get_item(), delete_item() and query(), need to know the partition/hash keys as defined on the table. To avoid having to configure this or look it up from the provider, the convention used is that the first kwarg or dictionary item is the partition key, and, if supplied, the second is the range/sort key.

Pagination

query and query_sql accept limit and next optional kwargs and return next in the response. Use these to paginate.

This works for AWS DynamoDB; however, Azure Cosmos has a limitation with continuation tokens for cross-partition queries (see the Python SDK documentation). For Cosmos, abnosql appends OFFSET and LIMIT to the SQL statement if not already present, and returns next. limit defaults to 100. See the tests for examples.

Audit

put_item() and put_items() take an optional audit_user kwarg. If supplied, abnosql will add the following to the item:

- createdBy: value of audit_user, added if it does not exist in the item supplied to put_item()
- createdDate: UTC ISO timestamp string, added if it does not exist
- modifiedBy: value of audit_user, always added
- modifiedDate: UTC ISO timestamp string, always added

You can also specify audit_user as a config attribute to the table. If you prefer snake_case over camelCase, you can set the env var ABNOSQL_CAMELCASE=FALSE.

NOTE: created* will only be added if update is not True in a put_item() operation.

Change Feed / Stream Support

AWS DynamoDB Streams allow Lambda functions to be triggered upon create, update and delete table operations. The event sent to the lambda (see the AWS docs) contains eventName and eventSourceARN, where:

- eventName: name of the event, e.g. INSERT, MODIFY or REMOVE (see here)
- eventSourceARN: ARN of the table name

This allows a single stream processor lambda to process events from multiple tables (e.g. for writing into ElasticSearch).

Like DynamoDB, Azure CosmosDB supports change feeds; however, the event sent to the function (currently) omits the event source (table name), and delete event names are only available if a preview change feed mode is enabled, which needs explicit enablement.

Because both the eventName and eventSource are ideally needed (irrespective of preview mode or not), the abnosql library automatically adds changeMetadata to an item during create, update and delete, e.g.:

```python
item = {
    "hk": "1",
    "rk": "a",
    "changeMetadata": {
        "eventName": "INSERT",
        "eventSource": "sometable"
    }
}
```

Because no REMOVE event is sent at all without the preview change feed mode above, abnosql must first update the item and then delete it. This is also needed for the eventSource / table name to be captured in the event, so unfortunately, until Cosmos supports both attributes, an update is needed before a delete. A 5-second synchronous sleep is added by default between the update and the delete to allow CosmosDB to send the update event (0 seconds results in no update event). This can be controlled with the ABNOSQL_COSMOS_CHANGE_META_SLEEPSECS env var (defaults to 5 seconds), and disabled by setting it to 0.

This behaviour is enabled by default, but can be disabled by setting the ABNOSQL_COSMOS_CHANGE_META env var to FALSE, or cosmos_change_meta=False in the table config. The ABNOSQL_CAMELCASE=FALSE env var can also be used to change the attribute names used to snake_case if needed.

To write an Azure Function / AWS Lambda that is able to process both DynamoDB and Cosmos events, look for changeMetadata first, and if present use that; otherwise look for eventName and eventSourceARN in the event payload, assuming it is DynamoDB.

Client Side Encryption

If configured in the table config with the kms attribute, abnosql will perform client-side encryption using AWS KMS or Azure KeyVault.

Each attribute value defined in the config is encrypted with a 256-bit AES-GCM data key generated for each attribute value:

- aws uses the AWS Encryption SDK for Python
- azure uses python cryptography to generate the AES-GCM data key and encrypt the attribute value, and then uses an RSA CMK in Azure KeyVault to wrap/unwrap (envelope encryption) the AES-GCM data key. The module uses the azure-keyvault-keys Python SDK for the wrap/unwrap functionality of the generated data key (Azure doesn't support generate-data-key as AWS does).

Both providers use a 256-bit AES-GCM generated data key with AAD/encryption context (the Azure provider uses a 96-bit nonce). AES-GCM is an authenticated symmetric encryption scheme used by both AWS and Azure (and Hashicorp Vault).

See also AWS Encryption Best Practices.

Example config:

```python
{
    'kms': {
        'key_ids': ['https://foo.vault.azure.net/keys/bar/45e36a1024a04062bd489db0d9004d09'],
        'key_attrs': ['hk', 'rk'],
        'attrs': ['obj', 'str']
    }
}
```

Where:

- key_ids: list of AWS KMS Key ARNs or Azure KeyVault identifiers (URL to RSA CMK). This is picked up via the ABNOSQL_KMS_KEYS env var as a comma-separated list (NOTE: the env var is recommended to avoid provider-specific code)
- key_attrs: list of key attributes in the item from which the AAD/encryption context is set. Taken from the ABNOSQL_KEY_ATTRS env var, or the table key_attrs if defined there
- attrs: list of attribute keys to encrypt
- key_bytes: optional for azure; use your own AES-GCM key if specified, otherwise one is generated

If the kms config attribute is present, abnosql looks at the ABNOSQL_KMS provider to load the appropriate provider KMS module (e.g. "aws" or "azure"), and if not present uses a default depending on the database (cosmos uses azure, dynamodb uses aws).

In the example above, the key_attrs ['hk', 'rk'] are used to define the encryption context / AAD, and attrs ['obj', 'str'] define which attributes to encrypt/decrypt.

With an item:

```python
{
    'hk': '1',
    'rk': 'b',
    'obj': {'foo': 'bar'},
    'str': 'foobar'
}
```

the encryption context / AAD is set to hk=1 and rk=b, and the obj and str values are encrypted.

If you don't want to use any of these providers, you can use the put_item_pre and get_item_post hooks to perform your own client-side encryption.

See also AWS multi-region encryption keys, and set the ABNOSQL_KMS_KEYS env var as a comma-separated list of ARNs.

Configuration

It is recommended to use environment variables where possible, to avoid provider-specific application code.

If the ABNOSQL_DB env var is not set, abnosql will attempt to apply defaults based on available environment variables:

- AWS_DEFAULT_REGION: sets the database to dynamodb (see AWS docs)
- FUNCTIONS_WORKER_RUNTIME: sets the database to cosmos (see Azure docs)

AWS DynamoDB

Set the following environment variable and use the usual AWS environment variables that boto3 uses:

- ABNOSQL_DB = "dynamodb"

Or set the boto3 session in the config:

```python
from abnosql import table
import boto3

tb = table(
    'mytable',
    config={'session': boto3.Session()},
    database='dynamodb'
)
```

Azure Cosmos NoSQL

Set the following environment variables:

- ABNOSQL_DB = "cosmos"
- ABNOSQL_COSMOS_ACCOUNT = your database account
- ABNOSQL_COSMOS_ENDPOINT = derived from ABNOSQL_COSMOS_ACCOUNT if not set
- ABNOSQL_COSMOS_CREDENTIAL = your cosmos credential; use Azure Key Vault References if using Azure Functions. Don't set it in order to use DefaultAzureCredential / managed identity.
- ABNOSQL_COSMOS_DATABASE = cosmos database

OR use the connection string format:

- ABNOSQL_DB = "cosmos://account@credential:database", or "cosmos://account@:database" to use managed identity (the credential could also be "DefaultAzureCredential")

Alternatively, define these in config (though ideally you want to use env vars to avoid application / environment-specific code):

```python
from abnosql import table

tb = table(
    'mytable',
    config={'account': 'foo', 'database': 'bar'},
    database='cosmos'
)
```

Plugins and Hooks

abnosql uses pluggy and registers in the abnosql.table namespace. The following hooks are available:

- set_config: set config
- get_item_post: called after get_item(); can return modified data
- put_item_pre
- put_item_post
- put_items_post
- delete_item_post

See the TableSpecs and the example test_hooks().

Testing

AWS DynamoDB

Use the moto package and abnosql.mocks.mock_dynamodbx. mock_dynamodbx is used for query_sql and is only needed if/until moto provides full PartiQL support.

Example:

```python
from abnosql.mocks import mock_dynamodbx
from moto import mock_dynamodb

@mock_dynamodb
@mock_dynamodbx  # needed for query_sql only
def test_something():
    ...
```

More examples in tests/test_dynamodb.py.

Azure Cosmos NoSQL

Use the responses package and abnosql.mocks.mock_cosmos.

Example:

```python
from abnosql.mocks import mock_cosmos
import responses

@mock_cosmos
@responses.activate
def test_something():
    ...
```

More examples in tests/test_cosmos.py.

CLI

A small abnosql CLI is installed with a few of the commands above:

```
Usage: abnosql [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  delete-item
  get-item
  put-item
  put-items
  query
  query-sql
```

To install dependencies:

```
pip install 'abnosql[cli]'
```

Example querying a table in Azure Cosmos, with a cosmos.json config file containing the endpoint, credential and database:

```
$ abnosql query-sql mytable 'SELECT * FROM mytable' -d cosmos -c cosmos.json
partkey      id      num  obj                                          list       str
-----------  ----  -----  -------------------------------------------  ---------  -----
p1           p1.1      5  {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]}  [1, 2, 3]  str
p2           p2.1      5  {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]}  [1, 2, 3]  str
p2           p2.2      5  {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]}  [1, 2, 3]  str
```

Future Enhancements / Ideas

- client-side encryption
- test pagination & exception handling
- Google Firestore support, ideally in the core library (though it could be added outside via the plugin system). Would need something like FireSQL implemented for Python, maybe via SQLGlot
- Google Vault KMS support
- Hashicorp Vault KMS support
- simple caching (maybe) using globals (used for AWS Lambda / Azure Functions)
- PostgreSQL support using a JSONB column (see here for an example). It would be nice to avoid an ORM and having to define a model for each table...
- blob storage backend? Could use something similar to NoDB, but maybe combined with smart_open and DuckDB's Hive Partitioning
- Redis...
- hook implementations to write to ElasticSearch / OpenSearch for better searching. Useful when not able to use AWS Stream Processors, Azure Change Feed, or Elasticstore. Why? Because not all databases support stream processing, and if they do, you don't want the hassle of using CDC
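To illustrate the pagination contract described above (query accepting optional limit and next kwargs and returning next in the response), a cursor loop might look like this (a minimal sketch; the table and key names follow the usage example above):

```python
from abnosql import table

tb = table('mytable')

# Page through all items in partition 'p1', 100 at a time
cursor = None
while True:
    resp = tb.query({'partkey': 'p1'}, limit=100, next=cursor)
    for item in resp['items']:
        print(item['id'])
    cursor = resp.get('next')  # opaque continuation token, per the Pagination section
    if not cursor:
        break
```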
abnum
Abnum 3

Alphabetic numerals package for Python 3. The module includes various letter-value substitution systems, from the ancient times to the modern artificial ones. An abnum substitution system is better known as gematria in Hebrew and isopsephy in Greek, abjad in the Arabic alphabet, and katapayadi in Sanskrit.

Currently supported languages are:

- greek (grc)
- hebrew (heb)
- coptic (cop)
- aramaic (arm)
- syriac (syc)
- arabic (ara)
- phoenician (phn)
- brahmi (brh)
- english (eng)
- finnish (fin)

Install

```
pip install abnum
```

Usage

Get the value of a Greek phrase by adding letter values and returning the sum:

```python
from abnum import Abnum, greek

g = Abnum(greek)
print(g.value('ο Λογος'))  # 443
```

Use multiplication instead of addition:

```python
from abnum import Abnum, greek
from operator import mul

g = Abnum(greek)
# use an arithmetic function as the second argument and a start value as the third
print(g.value('ο Λογος', mul, 1))  # 6174000000
```

Phoenician script:

```python
from abnum import Abnum, phoenician

p = Abnum(phoenician)
a = list(map(p.value, "𐤀𐤍𐤊 𐤕𐤁𐤍𐤕 𐤊𐤄𐤍 𐤏𐤔𐤕𐤓𐤕 𐤌𐤋𐤊 𐤑𐤃𐤍𐤌 𐤁𐤍".split(" ")))
print(a, sum(a))
```

```
[71, 852, 75, 1370, 90, 184, 52] 2694
```

Jupyter notebooks

Please see the Jupyter notebooks for further study and examples:

- Usage of the library (Abnum 3 introduction.ipynb). Includes the verification of the isopsephical value of the Bergama stele, 100-200 AD.
- Isopsephical riddle of the Sibylline verses (Isopsephical riddle of the Pseudo Sibylline hexameter.ipynb), Book 1, lines 137-146.

The Python 2 version of the Abnum library can still be found at: https://github.com/markomanninen/abnum
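Since English is listed among the supported languages, the same pattern presumably applies with the english data (a hedged sketch; the numeric result depends on the letter values the library assigns, so none is shown):

```python
from abnum import Abnum, english

e = Abnum(english)
# Sums the library's English letter values for the word, as value() does for Greek above
print(e.value('hello'))
```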
abnumber
Failed to fetch description. HTTP Status Code: 404
aboardly
UNKNOWN
aboba
AB

Getting started

To make it easy for you to get started with GitLab, here's a list of recommended next steps. Already a pro? Just edit this README.md and make it your own. Want to make it easy? Use the template at the bottom!

Add your files

- Create or upload files
- Add files using the command line, or push an existing Git repository with the following command:

```
cd existing_repo
git remote add origin https://gitlab.com/AlexRoar/ab.git
git branch -M main
git push -uf origin main
```

Integrate with your tools

- Set up project integrations

Collaborate with your team

- Invite team members and collaborators
- Create a new merge request
- Automatically close issues from merge requests
- Enable merge request approvals
- Set auto-merge

Test and Deploy

Use the built-in continuous integration in GitLab.

- Get started with GitLab CI/CD
- Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)
- Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy
- Use pull-based deployments for improved Kubernetes management
- Set up protected environments

Editing this README

When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want; this is just a starting point!). Thank you to makeareadme.com for this template.

Suggestions for a good README

Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.

Name

Choose a self-explaining name for your project.

Description

Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.

Badges

On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.

Visuals

Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.

Installation

Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people to using your project as quickly as possible. If it only runs in a specific context, like a particular programming language version or operating system, or has dependencies that have to be installed manually, also add a Requirements subsection.

Usage

Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.

Support

Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.

Roadmap

If you have ideas for releases in the future, it is a good idea to list them in the README.

Contributing

State if you are open to contributions and what your requirements are for accepting them. For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.

You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.

Authors and acknowledgment

Show your appreciation to those who have contributed to the project.

License

For open source projects, say how it is licensed.

Project status

If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
abobo
Failed to fetch description. HTTP Status Code: 404
abode
Abode: Friendly Python Packaging

Most experienced Python users know that Python packaging is rough. Abode is an attempt to make things nicer by extending Conda. It uses Conda under the hood, but makes things a bit easier and more robust.

Conda is by far my preferred packaging solution. It works as an environment manager (better than virtualenv in my opinion) as well as a package manager. The default repository is focused on data science software. However, Conda also installs packages from PyPI with pip. Also, unlike pip, Conda installs non-Python libraries such as MKL and CUDA that often increase operation speeds by 10-100x.

An aside: I recommend installing Miniconda instead of the full Anaconda distribution. The vast majority of people won't need every single package in the Anaconda distribution. So save yourself some time, bandwidth, and storage. Install Miniconda, then create environments and install packages as needed.

The Main Attraction

The biggest issue with Conda is saving an environment's dependencies and sharing it across platforms. You can get the dependencies using `conda env export > environment.yml`. Say I have an environment where I have installed Flask, Numpy, and PyTorch. Even though those are the only packages I intentionally installed, Conda installs all the dependencies as well. Conda exports the environment to a YAML file that looks like:

```yaml
name: flask
channels:
  - pytorch
  - defaults
dependencies:
  - blas=1.0=mkl
  - ca-certificates=2019.5.15=1
  - certifi=2019.6.16=py37_1
  - cffi=1.12.3=py37hb5b8e2f_0
  - intel-openmp=2019.4=233
  - libcxx=4.0.1=hcfea43d_1
  - libcxxabi=4.0.1=hcfea43d_1
  - libedit=3.1.20181209=hb402a30_0
  - libffi=3.2.1=h475c297_4
  - libgfortran=3.0.1=h93005f0_2
  - mkl=2019.4=233
  - mkl-service=2.0.2=py37h1de35cc_0
  - mkl_fft=1.0.14=py37h5e564d8_0
  - mkl_random=1.0.2=py37h27c97d8_0
  - ncurses=6.1=h0a44026_1
  - ninja=1.9.0=py37h04f5b5a_0
  - numpy=1.16.4=py37hacdab7b_0
  - numpy-base=1.16.4=py37h6575580_0
  - openssl=1.1.1c=h1de35cc_1
  - pip=19.1.1=py37_0
  - pycparser=2.19=py37_0
  - python=3.7.4=h359304d_1
  - pytorch=1.2.0=py3.7_0
  - readline=7.0=h1de35cc_5
  - setuptools=41.0.1=py37_0
  - six=1.12.0=py37_0
  - sqlite=3.29.0=ha441bb4_0
  - tk=8.6.8=ha441bb4_0
  - wheel=0.33.4=py37_0
  - xz=5.2.4=h1de35cc_4
  - zlib=1.2.11=h1de35cc_3
  - pip:
    - click==7.0
    - flask==1.1.1
    - itsdangerous==1.1.0
    - jinja2==2.10.1
    - markupsafe==1.1.1
    - werkzeug==0.15.5
prefix: /Users/mat/miniconda3/envs/flask
```

The issue here is that pinning the versions potentially breaks the environment on platforms other than the original one. I created this environment on my MacBook. It is not guaranteed that all these dependencies are available for Linux or Windows, likely breaking the environment on those platforms.

Also, a lot of the time we're not concerned with the exact versions of our packages. Instead we are okay with the newest versions, or any version greater than some release with a specific feature. In these cases, locking to specific versions is overly strict.

You can create essentially the same environment with this file:

```yaml
name: flask
channels:
  - pytorch
  - defaults
dependencies:
  - numpy
  - pip
  - python=3
  - pytorch
  - pip:
    - flask
prefix: /Users/mat/miniconda3/envs/flask
```

Conda will take this file and solve all the necessary dependencies on whatever platform you're using. Of course, if there are specific versions you want (like `python=3` here) you can define those.

Abode manages Conda environments by creating and editing minimal environment files like these. Hopefully this will allow users to take advantage of the great things Conda is doing, while making the environments portable.

Abode Dependencies

- Python 3.6+, because I like f-strings
- PyYAML
- Conda; as noted above, I suggest installing Miniconda instead of the full Anaconda distribution

Installation

Abode is available from PyPI:

pip install abode

Usage

Warning: This is very early. Use at your own risk.

So far this is what I have implemented.

Create an environment

To create a new environment with Python 3 installed:

abode create -n env_name python=3

Behind the scenes this creates an environment file and uses Conda to create an environment from the file.

Create from environment file

To create an environment from an environment file (a YAML file created by Abode or Conda):

abode create -f FILE

Enter an environment

I haven't figured out how to do this without using conda in the shell, so for now:

conda activate env_name

This is pretty high priority for me. It's awkward to switch between abode and conda commands.

Install packages

Installing packages is the same as Conda:

abode install numpy matplotlib

Behind the scenes, Abode adds these dependencies to the environment file, then updates the environment from the file.

Install with pip

Use the --pip flag to install a package with pip.

abode install flask --pip

Install from a non-default channel

The channel option -c adds the channel to the environment file.

abode install pytorch -c pytorch

Update packages

This updates all the packages in the environment without locked versions.

abode update

Export the environment

abode export > environment.yml

This just copies the active environment's dependency file to the new file.

List packages in the current environment

To print out the list of all installed packages, equivalent to conda list:

abode list

If you want to see which packages have been installed with Abode:

abode list -f or abode list --file

List environments managed with Abode

abode env list

Contributions

Happy to work with you on this. Create an issue or a pull request. Say hi.
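The "create and edit minimal environment files" mechanism above is easy to picture in code. Below is a loose, hypothetical sketch of it (PyYAML plus a conda subprocess call; the file name and function here are made up for illustration, not taken from Abode's source):

```python
# Hypothetical sketch of the mechanism described above -- not Abode's actual code.
# Assumes PyYAML is installed and a conda executable is on PATH.
import subprocess
import yaml

ENV_FILE = "environment.yml"  # illustrative name

def install(package, channel=None, pip=False):
    """Record a dependency in the minimal environment file, then let Conda solve it."""
    with open(ENV_FILE) as f:
        env = yaml.safe_load(f)
    if pip:
        # pip packages live in a nested list under a "pip" entry
        pip_deps = next(d for d in env["dependencies"] if isinstance(d, dict))
        pip_deps["pip"].append(package)
    else:
        env["dependencies"].append(package)
    if channel and channel not in env["channels"]:
        env["channels"].insert(0, channel)
    with open(ENV_FILE, "w") as f:
        yaml.safe_dump(env, f)
    # Conda re-solves the environment from the (still minimal) file
    subprocess.run(["conda", "env", "update", "-f", ENV_FILE, "--prune"], check=True)
```

The point of this design is that the file stays minimal: Conda re-solves the full dependency set on every platform instead of replaying a locked, platform-specific list.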
abodepy
python-abode

A thin Python library for the Abode alarm API. Only compatible with Python 3+.

Disclaimer: Published under the MIT license - see the LICENSE file for more details. "Abode" is a trademark owned by Abode Systems Inc., see www.goabode.com for more information. I am in no way affiliated with Abode. Thank you Abode for having a relatively simple API to reverse engineer. Hopefully in the future you'll open it up for official use.

API calls faster than 60 seconds are not recommended, as they can overwhelm Abode's servers. Leverage the cloud push event notification functionality as much as possible. Please use this module responsibly.

Installation

From PyPI:

pip3 install abodepy

Command Line Usage

Simple command line implementation arguments:

```
$ abodepy --help
usage: AbodePy: Command Line Utility [-h] -u USERNAME -p PASSWORD [--mode]
                                     [--arm mode] [--set setting=value]
                                     [--devices] [--device device_id]
                                     [--json device_id] [--on device_id]
                                     [--off device_id] [--lock device_id]
                                     [--unlock device_id] [--automations]
                                     [--activate automation_id]
                                     [--deactivate automation_id]
                                     [--trigger automation_id] [--listen]
                                     [--debug] [--quiet]

optional arguments:
  -h, --help            show this help message and exit
  -u USERNAME, --username USERNAME
                        Username
  -p PASSWORD, --password PASSWORD
                        Password
  --mode                Output current alarm mode
  --arm mode            Arm alarm to mode
  --set setting=value   Set setting to a value
  --devices             Output all devices
  --device device_id    Output one device for device_id
  --json device_id      Output the json for device_id
  --on device_id        Switch on a given device_id
  --off device_id       Switch off a given device_id
  --lock device_id      Lock a given device_id
  --unlock device_id    Unlock a given device_id
  --automations         Output all automations
  --activate automation_id
                        Activate (enable) an automation by automation_id
  --deactivate automation_id
                        Deactivate (disable) an automation by automation_id
  --trigger automation_id
                        Trigger (apply) an automation (manual quick-action) by automation_id
  --listen              Block and listen for device_id
  --debug               Enable debug logging
  --quiet               Output only warnings and errors
```

You can get the current alarm mode:

```
$ abodepy -u USERNAME -p PASSWORD --mode
Mode: standby
```

To set the alarm mode, one of 'standby', 'home', or 'away':

```
$ abodepy -u USERNAME -p PASSWORD --arm home
Mode set to: home
```

A full list of devices and their current states:

```
$ abodepy -u USERNAME -p PASSWORD --devices
Device Name: Glass Break Sensor, Device ID: RF:xxxxxxxx, Device Type: GLASS, Device Status: Online
Device Name: Keypad, Device ID: RF:xxxxxxxx, Device Type: Keypad, Device Status: Online
Device Name: Remote, Device ID: RF:xxxxxxxx, Device Type: Remote Controller, Device Status: Online
Device Name: Garage Entry Door, Device ID: RF:xxxxxxxx, Device Type: Door Contact, Device Status: Closed
Device Name: Front Door, Device ID: RF:xxxxxxxx, Device Type: Door Contact, Device Status: Closed
Device Name: Back Door, Device ID: RF:xxxxxxxx, Device Type: Door Contact, Device Status: Closed
Device Name: Status Indicator, Device ID: ZB:xxxxxxxx, Device Type: Status Display, Device Status: Online
Device Name: Downstairs Motion Camera, Device ID: ZB:xxxxxxxx, Device Type: Motion Camera, Device Status: Online
Device Name: Back Door Deadbolt, Device ID: ZW:xxxxxxxx, Device Type: Door Lock, Device Status: LockClosed
Device Name: Front Door Deadbolt, Device ID: ZW:xxxxxxxx, Device Type: Door Lock, Device Status: LockClosed
Device Name: Garage Door Deadbolt, Device ID: ZW:xxxxxxxx, Device Type: Door Lock, Device Status: LockClosed
Device Name: Alarm area_1, Device ID: area_1, Device Type: Alarm, Device Status: standby
```

The current state of a specific device using the device id:

```
$ abodepy -u USERNAME -p PASSWORD --device ZW:xxxxxxxx
Device Name: Garage Door Deadbolt, Device ID: ZW:xxxxxxxx, Device Type: Door Lock, Device Status: LockClosed
```

Additionally, multiple specific devices using the device id:

```
$ abodepy -u USERNAME -p PASSWORD --device ZW:xxxxxxxx --device RF:xxxxxxxx
Device Name: Garage Door Deadbolt, Device ID: ZW:xxxxxxxx, Device Type: Door Lock, Device Status: LockClosed
Device Name: Back Door, Device ID: RF:xxxxxxxx, Device Type: Door Contact, Device Status: Closed
```

You can switch a device on or off, or lock and unlock a device, by passing multiple arguments:

```
$ abodepy -u USERNAME -p PASSWORD --lock ZW:xxxxxxxx --switchOn ZW:xxxxxxxx
Locked device with id: ZW:xxxxxxxx
Switched on device with id: ZW:xxxxxxxx
```

You can also block and listen for all mode and change events as they occur:

```
$ abodepy -u USERNAME -p PASSWORD --listen
No devices specified, adding all devices to listener...
Listening for device updates...
Device Name: Alarm area_1, Device ID: area_1, Status: standby, At: 2017-05-27 11:13:08
Device Name: Garage Door Deadbolt, Device ID: ZW:xxxxxxxx, Status: LockOpen, At: 2017-05-27 11:13:31
Device Name: Garage Entry Door, Device ID: RF:xxxxxxxx, Status: Open, At: 2017-05-27 11:13:34
Device Name: Garage Entry Door, Device ID: RF:xxxxxxxx, Status: Closed, At: 2017-05-27 11:13:39
Device Name: Garage Door Deadbolt, Device ID: ZW:xxxxxxxx, Status: LockClosed, At: 2017-05-27 11:13:41
Device Name: Alarm area_1, Device ID: area_1, Status: home, At: 2017-05-27 11:13:59
Device update listening stopped.
```

If you specify one or more devices with the --device argument along with the --listen command, then only those devices will listen for change events. Keyboard interrupt (CTRL+C) to exit listening mode.

To obtain a list of automations:

```
$ abodepy -u USERNAME -p PASSWORD --automations
Deadbolts Lock Home (ID: 6) - status - active
Auto Home (ID: 3) - location - active
Lock Garage Quick Action (ID: 7) - manual - active
Deadbolts Lock Away (ID: 5) - status - active
Autostandby (ID: 4) - schedule - active
Auto Away (ID: 2) - location - active
Sleep Mode (ID: 1) - schedule - active
```

To activate or deactivate an automation:

```
$ abodepy -u USERNAME -p PASSWORD --activate 1
Activated automation with id: 1
```

To trigger a manual (quick) automation:

```
$ abodepy -u USERNAME -p PASSWORD --trigger 7
Triggered automation with id: 7
```

Settings

You can change settings with abodepy either using abode.set_setting(setting, value) or through the command line:

```
$ abodepy -u USERNAME -p PASSWORD --set beeper_mute=1
Setting beeper_mute changed to 1
```

| Setting | Valid Values |
|---|---|
| ircamera_resolution_t | 0 for 320x240x3, 2 for 640x480x3 |
| ircamera_gray_t | 0 for disabled, 1 for enabled |
| beeper_mute | 0 for disabled, 1 for enabled |
| away_entry_delay | 0, 10, 20, 30, 60, 120, 180, 240 |
| away_exit_delay | 30, 60, 120, 180, 240 |
| home_entry_delay | 0, 10, 20, 30, 60, 120, 180, 240 |
| home_exit_delay | 0, 10, 20, 30, 60, 120, 180, 240 |
| door_chime | none, normal, loud |
| warning_beep | none, normal, loud |
| entry_beep_away | none, normal, loud |
| exit_beep_away | none, normal, loud |
| entry_beep_home | none, normal, loud |
| exit_beep_home | none, normal, loud |
| confirm_snd | none, normal, loud |
| alarm_len | 0, 60, 120, 180, 240, 300, 360, 420, 480, 540, 600, 660, 720, 780, 840, 900 |
| final_beep | 0, 3, 4, 5, 6, 7, 8, 9, 10 |
| entry | (Siren) 0 for disabled, 1 for enabled |
| tamper | (Siren) 0 for disabled, 1 for enabled |
| confirm | (Siren) 0 for disabled, 1 for enabled |

Development and Testing

Install the core dependencies:

$ sudo apt-get install python3-pip python3-dev python3-venv

Checkout from github and then create a virtual environment:

$ git clone https://github.com/MisterWil/abodepy.git
$ cd abodepy
$ python3 -m venv venv

Activate the virtual environment:

$ source venv/bin/activate

Install requirements:

$ pip install -r requirements.txt -r requirements_test.txt

Install abodepy locally in "editable mode":

$ pip3 install -e .

Run the full test suite with tox before committing:

$ tox

Alternatively, you can run just the tests:

$ tox -e py35

Library Usage

TODO

Class Descriptions

TODO
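Since the "Library Usage" section above is still a TODO, here is a rough sketch of what library use presumably looks like, inferred from the CLI behaviour. Treat every name below (abodepy.Abode, get_devices, get_device, and the device attributes) as an assumption rather than documented API:

```python
# Rough, hedged sketch only -- the README's "Library Usage" section is TODO,
# so these names are inferred from the CLI, not taken from the docs.
import abodepy

abode = abodepy.Abode(username='[email protected]', password='hunter2')

# enumerate devices, mirroring `abodepy --devices`
for device in abode.get_devices():
    print(device.name, device.device_id, device.status)

# fetch a single device, mirroring `abodepy --device ZW:xxxxxxxx`
lock = abode.get_device('ZW:xxxxxxxx')
lock.lock()  # mirroring `abodepy --lock ZW:xxxxxxxx`

abode.logout()
```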
abofly
UNKNOWN
abo-generator
ABO banking format generator. https://github.com/ViktorStiskala/python-abo-generator

Currently supports only the "CSOB" compatible format. Specification sources: http://www.fio.cz/docs/cz/struktura-abo.pdf and http://www.equabank.cz/files/doc/13-format-abo.pdf (available only in Czech). Thanks to Lukas Hurych for providing the generator code.

```python
from datetime import datetime, timedelta
from abo import ABO

tomorrow = datetime.now() + timedelta(days=1)

abo_export = ABO(client_account_number='123456789/0300', client_name='Super company a.s.', due_date=tomorrow)
abo_export.add_transaction('123456789/0100', 500.34, variable_symbol='123456', message='Hello world!')
abo_export.add_transaction('155-987523423/2010', 1234.55, variable_symbol='789654321', message='Test transaction')

with open('example.pkc', 'w') as f:
    abo_export.save(f)
```

Installation

To install ABO generator, simply:

$ pip install abo-generator

License

This software is licensed under MPL 2.0. http://mozilla.org/MPL/2.0/ http://www.mozilla.org/MPL/2.0/FAQ.html#use
aboki
# aboki

Black market currency rate instantly in your terminal! (Powered by [AbokiFx](https://abokifx.com/).)

NOTE: I am in no way affiliated with AbokiFx. I don't know anyone that works there, have no relationship with them -- nothing.

## Prerequisites

Before using aboki, you should be familiar with foreign exchange and the black market... Duh!

## Installation

You can install aboki via [pip](http://pip.readthedocs.org/en/latest/):

```bash
$ pip install aboki
```

Once aboki has been installed, you can start by running:

```bash
$ aboki test
```

## Usage

If you simply run aboki on the command line, you'll get the help text.

```bash
$ aboki recent                        # show recent exchange rates
$ aboki rates <type>                  # show current exchange rates
$ aboki rate <currency>               # show exchange rate for currency
$ aboki convert <amount> <FROM> <TO>  # see how much money you'll get if you sell
$ aboki test                          # test aboki
$ aboki (-h | --help)                 # display help information
```

All commands that have side effects will prompt you for action before doing anything for added security.

## Changelog

- v0.3: 05-09-2017 - Python 3 support and error handling.
- v0.2: 04-16-2017 - Fixing some small documentation issues.
- v0.1: 04-12-2017 - First release!

## Like This?

If you've enjoyed using aboki, feel free to star this project or send me some bitcoin! My address is: 17AcFdn8kTpGw1R34MC5U5SyZHrMbZK4SqOr… You could tip me on [paypal](https://www.paypal.me/akinjide) :)

<3 -Akinjide
abokipdf
This is the Home Page of our Project.
aboleth
A bare-bones TensorFlow framework for Bayesian deep learning and Gaussian process approximation [1] with stochastic gradient variational Bayes inference [2].

Features

Some of the features of Aboleth:

- Bayesian fully-connected, embedding and convolutional layers using SGVB [2] for inference.
- Random Fourier and arc-cosine features for approximate Gaussian processes. Optional variational optimisation of these feature weights as per [1].
- Imputation layers with parameters that are learned as part of a model.
- Noise Contrastive Priors [3] for better out-of-domain uncertainty estimation.
- Very flexible construction of networks, e.g. multiple inputs, ResNets etc.
- Compatible and interoperable with other neural net frameworks such as Keras (see the demos for more information).

Why?

The purpose of Aboleth is to provide a set of high performance and light weight components for building Bayesian neural nets and approximate (deep) Gaussian process computational graphs. We aim for minimal abstraction over pure TensorFlow, so you can still assign parts of the computational graph to different hardware, use your own data feeds/queues, and manage your own sessions etc.

Here is an example of building a simple Bayesian neural net classifier with one hidden layer and Normal prior/posterior distributions on the network weights:

```python
import tensorflow as tf
import aboleth as ab

# Define the network, ">>" implements function composition,
# the InputLayer gives a kwarg for this network, and
# allows us to specify the number of samples for stochastic
# gradient variational Bayes.
net = (
    ab.InputLayer(name="X", n_samples=5) >>
    ab.DenseVariational(output_dim=100) >>
    ab.Activation(tf.nn.relu) >>
    ab.DenseVariational(output_dim=1)
)

# D is the number of input features, N the dataset size
X_ = tf.placeholder(tf.float32, shape=(None, D))
Y_ = tf.placeholder(tf.float32, shape=(None, 1))

# Build the network, nn, and the parameter regularization, kl
nn, kl = net(X=X_)

# Define the likelihood model
likelihood = tf.distributions.Bernoulli(logits=nn).log_prob(Y_)

# Build the final loss function to use with TensorFlow train
loss = ab.elbo(likelihood, kl, N)

# Now your TensorFlow training code here!
...
```

At the moment the focus of Aboleth is on supervised tasks; however, this is subject to change in subsequent releases if there is interest in this capability.

Installation

NOTE: Aboleth is a Python 3 library only. Some of the functionality within it depends on features only found in python 3. Sorry.

To get up and running quickly you can use pip and get the Aboleth package from PyPI:

$ pip install aboleth

For the best performance on your architecture, we recommend installing TensorFlow from sources.

Or, to install additional dependencies required by the demos:

$ pip install aboleth[demos]

To install in develop mode with packages required for development, we recommend you clone the repository from GitHub:

$ git clone [email protected]:data61/aboleth.git

Then in the directory that you cloned into, issue the following:

$ pip install -e .[dev]

Getting Started

See the quick start guide to get started, and for a more in-depth guide, have a look at our tutorials. Also see the demos folder for more examples of creating and training algorithms with Aboleth.

The full project documentation can be found on readthedocs.

References

[1] Cutajar, K., Bonilla, E., Michiardi, P., Filippone, M. Random Feature Expansions for Deep Gaussian Processes. In ICML, 2017.

[2] Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In ICLR, 2014.

[3] Hafner, D., Tran, D., Irpan, A., Lillicrap, T. and Davidson, J., 2018. Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors. arXiv preprint arXiv:1807.09289.

License

Copyright 2017 CSIRO (Data61)

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
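Circling back to the classifier example above: the elided "# Now your TensorFlow training code here!" part can be an ordinary TF1-style loop. A minimal hedged sketch, assuming numpy arrays X (N x D) and Y (N x 1) and the placeholders defined earlier; this is standard TensorFlow 1.x, not a recipe from the Aboleth docs:

```python
# Hedged sketch of the elided training step, not Aboleth's documented recipe.
optimizer = tf.train.AdamOptimizer()
train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(10):
        # one full-batch step; swap in your own mini-batch feeds/queues as needed
        _, l = sess.run([train_op, loss], feed_dict={X_: X, Y_: Y})
        print("epoch", epoch, "elbo loss", l)
```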
abomb
Failed to fetch description. HTTP Status Code: 404
abomination
abomination

A dumpster fire of metaprogramming and other Python abuses. Some code is so bad it just has to be written, documented, tested, and packaged. This is the place for such ideas.

Installation

Install the package from pypi:

pip install abomination

Usage

Automatic installation of packages

Somewhere before your first third-party import, put the following line:

```python
import abomination; abomination.magic()
```

If an import fails, we will try to install the package with pip and restart the script.

```python
import abomination; abomination.magic()

# This just works!
import numpy as np
```

Of course, this only works if the import name matches the distribution name. Prepare for an ugly recursion while Python screams "I cannot import pyyaml!" and pip shouts back "But I have already installed pyyaml!" (note: the import name is yaml).

Calling out code style you don't like

Linters are for the weak, real checks hit hard at run time! Following the example of Python's treatment of whitespace, code style violations raise a SyntaxError if you call call_out:

```python
import abomination

abomination.call_out(sadface=True)

# Some dummy code to provoke
if (this_code_uses_some_practices_that_i_dislike := True and
        (others_should_know_about_it := True)):
    pass
```

Output:

```
SyntaxError: It saddens me to see '):' in your code :(
```

This function, just like the whole package, is intended as a light-hearted joke. It was inspired by this twitter thread: https://twitter.com/gvanrossum/status/1395135889123069952

If anyone involved feels offended, please contact me and I will remove it.
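For the curious, the "install on ImportError, then restart" trick described above can be pictured with an excepthook. This is a hypothetical sketch of the mechanism, not abomination's actual source:

```python
# Hypothetical sketch of how an "install on ImportError, then restart" hook
# could work -- not abomination's actual implementation.
import os
import subprocess
import sys

def magic():
    def hook(exc_type, exc, tb):
        if exc_type is ModuleNotFoundError:
            # hope the import name matches the distribution name...
            subprocess.run([sys.executable, "-m", "pip", "install", exc.name])
            # restart the script so the freshly installed package is importable
            os.execv(sys.executable, [sys.executable] + sys.argv)
        sys.__excepthook__(exc_type, exc, tb)

    # only fires for uncaught import failures at the top level of the script
    sys.excepthook = hook
```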
abondance
abondance: Python library for the Internet Health Report API

Installation

The easy way:

pip install abondance

From source files

Get the latest source files:

git clone [email protected]:InternetHealthReport/abondance.git

Install dependencies and install abondance:

cd abondance
sudo pip install -r requirements.txt
sudo python setup.py install

AS inter-dependency (AS hegemony)

Example: Retrieve dependencies for AS2501 on September 15th, 2018

```python
from ihr.hegemony import Hegemony

hege = Hegemony(originasns=[2501], start="2018-09-15 00:00", end="2018-09-15 23:59")
for r in hege.get_results():
    print(r)
```

Example: Retrieve dependents of AS2500 on September 15th, 2018

```python
from ihr.hegemony import Hegemony

hege = Hegemony(asns=[2500], start="2018-09-15 00:00", end="2018-09-15 23:59")
for r in hege.get_results():
    # Skip results from the global graph
    if r["originasn"] == 0:
        continue
    print(r)
```

AS Delay

Example: Retrieve delay for AS7922 on September 15th, 2018

```python
from ihr.delay import Delay

res = Delay(asns=[7922], start="2018-09-15 00:00", end="2018-09-15 23:59")
for r in res.get_results():
    print(r)
```

AS Forwarding alarms

Example: Retrieve forwarding alarms for AS7922 on September 15th, 2018

```python
from ihr.forwarding import Forwarding

res = Forwarding(asns=[7922], start="2018-09-15 00:00", end="2018-09-15 23:59")
for r in res.get_results():
    print(r)
```
aboo
aboo

Aph's Banana-based Obscure Observations. An API for banana.

Installation

pip install aboo

or clone the repo:

git clone https://github.com/aphkyle/aboo

Example Usage

```python
import io

from aboo import Banana
from PIL import Image

my_api_key = "my_api_key"
my_cse_id = "my_cse_id"

banana = Banana(my_api_key, my_cse_id)
im = Image.open(io.BytesIO(banana.get_random_banana()))
im.show()
```

License

MIT
abook
Reads and writes Abook files. Saves photo to ~/.abook/photo/NAME.jpeg (if the directory is present).

Configuration

```
field other = Other
view CONTACT = name, email
view ADDRESS = address, address2, city, state, zip, country
view PHONE = phone, workphone, mobile, other
view OTHER = nick, url, notes
```
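Abook's addressbook is an INI-style file of numbered contact sections, so it can be inspected with the standard library alone. A small hedged sketch of the format for illustration only (this is not the package's API, and the field names assume a default abook setup):

```python
# Illustration of abook's INI-like addressbook format, not this package's API.
import configparser
from pathlib import Path

book = configparser.ConfigParser()
book.read(Path.home() / ".abook" / "addressbook")

for section in book.sections():
    # contacts are numbered sections like [0], [1]; [format] holds metadata
    if section.isdigit():
        contact = book[section]
        print(contact.get("name", "?"), contact.get("email", ""))
```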
abopt
No description available on PyPI.
abork-config-initiator
No description available on PyPI.
abork-logger
No description available on PyPI.
abork-mailer
No description available on PyPI.
abork-path-analyser
No description available on PyPI.
abortion-policies
abortion_policies

Easy to read abortion policies from the abortion policy API. This code is built using the AbortionPolicyAPI.

Installation

If you want to install and run the code yourself, you will first need an API key. You can apply for an API key here: AbortionPolicyAPI.

Once you have an API token, you can pip install this application:

pip install abortion_policies

Running the code

Then you can run the software and supply your API token:

abortion_policies
Abortion API Token: <supply token>

Then you can review the output.

Consuming the Code

If you simply want to consume the output from the API, you can git clone the repository locally, which will include all of the generated documents:

git clone https://github.com/automateyournetwork/abortion_policies

VS Code Extensions

You can use VS Code to browse and view all of the various file types using the following extensions:

- Excel Preview: used to preview the CSV files
- Markdown Preview: used to preview the markdown files
- Markmap: used to preview and generate the mind maps
- Open in Default Browser: used to view the HTML data tables and SVG files (right click on the file, select Open in Default Browser)
- Audio-preview: used to listen to the MP3 files
abo-s-pysync
Pysync has both a demonstration implementation of the rsync and related algorithms in pure Python, and a high speed librsync Python extension. The pure Python is not fast and is not optimized, however it does work and provides a simple implementation of the algorithm for reference and experimentation. It includes a combination of ideas taken from librsync, xdelta, and rsync. The librsync Python extension is less flexible and harder to understand, but is very fast.
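To give a flavour of the algorithm family that the pure-Python reference code demonstrates, here is an illustrative sketch of the weak rolling checksum trick that rsync-style tools rely on. This is for reference only; pysync's own function names and API differ:

```python
# Illustrative sketch of an rsync-style weak rolling checksum (not pysync's API).
def weak_checksum(block):
    """Compute the two 16-bit sums over a whole block."""
    a = sum(block) & 0xFFFF
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) & 0xFFFF
    return (b << 16) | a

def roll(a, b, out_byte, in_byte, block_len):
    """Slide the window one byte without rescanning the whole block."""
    a = (a - out_byte + in_byte) & 0xFFFF
    b = (b - block_len * out_byte + a) & 0xFFFF
    return a, b

# rolling one byte forward matches recomputing from scratch
data = b"The quick brown fox jumps over the lazy dog"
n = 16
a0 = sum(data[:n]) & 0xFFFF
b0 = sum((n - i) * c for i, c in enumerate(data[:n])) & 0xFFFF
a1, b1 = roll(a0, b0, data[0], data[n], n)
assert ((b1 << 16) | a1) == weak_checksum(data[1:n + 1])
```

This O(1) slide per byte is what makes the weak checksum cheap enough to test at every offset, with a strong hash confirming only the candidate matches.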
abot
This project is focused on giving a proper interface for bot creation. Backends are plugable, so you can implement your own backend and still use other backends tailored to them. It is still ongoing work.

It follows a flask-like approach, with the idea of being independent from your code. Right now it features click as a dependency, but we may fork it inside here because of the lack of flexibility we require for asyncio execution.

How to develop

Install pipenv, using either a packaged version or pip, and run `pipenv install -d`. With this, you should have the `tox` command available in the pipenv virtualenv.

Run `tox` to execute all the tests/checks, or `py.test` to execute just the tests.
abotest
this is a test
abotest1
#this is a test
aboto3
aboto3

aboto3 is an async boto3 client generator!

There are other boto3-like libraries that offer asyncio, but their interfaces can be quite different from normal boto3 clients. The goal of aboto3 is to closely replicate the boto3 client interface with acceptable performance from the python ThreadPoolExecutor!

API NOTE - aboto3 was created with boto3 API compatibility in mind. Because of this it does not support boto3 "Resources", and there is no plan to support them. New "Resources" are no longer being added to boto3.

Performance NOTE - Because aboto3 provides an async wrapper around boto3, it is not truly async to its core. It sends a boto3 call to a thread and the thread runs asynchronously. asyncio tasks are much lighter weight than threads, so if you want to run hundreds of concurrent calls with the best performance, pure async boto3 equivalents are a better fit.

Tutorial

To create an async client, simply pass the normal boto3 client in to create an AIOClient. Use the async client, in a coroutine, like you would if the boto3 client's API calls were all async! See the boto3 docs for details.

```python
import asyncio

import aboto3
import boto3

# create a normal boto3 client
ec2_client = boto3.client("ec2")

# create the asyncio version from the client
aio_ec2_client = aboto3.AIOClient(ec2_client)

# you can still use the other client as usual
instances = ec2_client.describe_instances()

# the async client must be used in a coroutine
# but acts exactly the same as the boto3 client except method calls are async
async def aio_tester():
    aio_instances = await aio_ec2_client.describe_instances()
    return aio_instances

aio_instances = asyncio.run(aio_tester())
```

Pass in parameters to the coroutine like a normal client.

```python
import asyncio

import aboto3
import boto3

ec2_client = boto3.client("ec2")
aio_ec2_client = aboto3.AIOClient(ec2_client)

instances = ec2_client.describe_instances(InstanceIds=["i-123412341234"])

async def aio_tester():
    aio_instances = await aio_ec2_client.describe_instances(InstanceIds=["i-123412341234"])
    return aio_instances

aio_instances = asyncio.run(aio_tester())
```

Get an async paginator from the aboto3 client.

```python
import asyncio

import aboto3
import boto3

ec2_client = boto3.client("ec2")
aio_ec2_client = aboto3.AIOClient(ec2_client)

filters = [{"Name": "instance-type", "Values": ["t2.micro"]}]

pages = []
pager = ec2_client.get_paginator("describe_instances")
for page in pager.paginate(Filters=filters):
    pages.append(page)

# note the use of an "async for" loop so calls for a page are non-blocking.
async def aio_tester():
    aio_pages = []
    aio_pager = aio_ec2_client.get_paginator("describe_instances")
    async for page in aio_pager.paginate(Filters=filters):
        aio_pages.append(page)
    return aio_pages

aio_pages = asyncio.run(aio_tester())
```

Client exceptions can be caught on the AIOClient just like a normal boto3 client. botocore exceptions are caught as normal.

```python
import asyncio

import aboto3
import boto3

ssm_client = boto3.client("ssm")
aio_ssm_client = aboto3.AIOClient(ssm_client)

try:
    ssm_client.get_parameter(Name="/unknown/param")
except ssm_client.exceptions.ParameterNotFound as error:
    print("found an error here: {}".format(error))

async def aio_tester():
    try:
        await aio_ssm_client.get_parameter(Name="/unknown/param")
    except aio_ssm_client.exceptions.ParameterNotFound as error:
        print("found an error here: {}".format(error))

asyncio.run(aio_tester())
```

You can also use boto3 augmenting libraries, since aboto3 is only a wrapper.

Optimization

When an AIOClient is created, it will automatically create a ThreadPoolExecutor to run the boto3 calls asynchronously. The max workers of the pool is determined by the boto3 client's config for max_pool_connections. By default this is 10. See the botocore Config Reference for more details.

The thread pool adds a small amount of overhead for each AIOClient that is created (though this is far less than the overhead of creating a boto3 client). To save some initialization time, or to have more control over the total number of threads, you can provide your own ThreadPoolExecutor and share it between clients.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

import aboto3
import boto3

boto3_thread_pool = ThreadPoolExecutor(max_workers=16)

ec2_client = boto3.client("ec2")
aio_ec2_client = aboto3.AIOClient(boto3_client=ec2_client, thread_pool_executor=boto3_thread_pool)

rds_client = boto3.client("rds")
aio_rds_client = aboto3.AIOClient(boto3_client=rds_client, thread_pool_executor=boto3_thread_pool)
```

In general, for applications, you will want to cache the clients if possible. Try not to create a new one in every function. For applications, a shared thread pool can be useful in limiting the total number of threads when necessary.

If you are making large numbers of concurrent calls with the same AIOClient, you may want to pass a custom botocore.config.Config to the boto3 client with a higher max_pool_connections. If you are using a shared thread pool, you may also need to increase the max workers in that as well.

The example below will allow up to 32 concurrent calls to be in flight for the EC2 and RDS AIOClient's.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

import aboto3
import boto3
from botocore.config import Config

boto_config = Config(max_pool_connections=32)
boto3_thread_pool = ThreadPoolExecutor(max_workers=64)

ec2_client = boto3.client("ec2", config=boto_config)
aio_ec2_client = aboto3.AIOClient(boto3_client=ec2_client, thread_pool_executor=boto3_thread_pool)

rds_client = boto3.client("rds", config=boto_config)
aio_rds_client = aboto3.AIOClient(boto3_client=rds_client, thread_pool_executor=boto3_thread_pool)
```

Or, if you don't care about sharing the thread pool, just pass in the config and each AIOClient will have its own pool of 32 threads.

```python
import asyncio

import aboto3
import boto3
from botocore.config import Config

boto_config = Config(max_pool_connections=32)

ec2_client = boto3.client("ec2", config=boto_config)
aio_ec2_client = aboto3.AIOClient(ec2_client)

rds_client = boto3.client("rds", config=boto_config)
aio_rds_client = aboto3.AIOClient(rds_client)
```

Development

Install the package in editable mode with dev dependencies.

(venv) $ pip install -e .[dev]

nox is used to manage various dev functions. Start with:

(venv) $ nox --help

pyenv is used to manage python versions. To run the nox tests for the applicable python versions you will first need to install them. In the root project dir run:

(venv) $ pyenv install

Changelog

Changelog for aboto3. All notable changes to this project will be documented in this file. The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[0.1.1] - 2024-01-28

Added
- Tests for python 3.12

Changed
- Updated README to reflect recommendations.

Removed
- Support for python 3.7

[0.1.0] - 2023-06-22

Initial Release.
abo-tools
Tools for ABO analysis
abotserver
No description available on PyPI.
abottle
abottle

triton/tensorrt/onnxruntime/pytorch python server wrapper. Put your model into a bottle, and you get a working server and more.

Demo

```python
import numpy as np
from transformers import AutoTokenizer

class MiniLM:
    def __init__(self):
        self.tokenizer = AutoTokenizer.from_pretrained(
            "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
        )

    def predict(self, X):
        encode_dict = self.tokenizer(X, padding="max_length", max_length=128, truncation=True)
        input_ids = np.array(encode_dict["input_ids"], dtype=np.int32)
        attention_mask = np.array(encode_dict["attention_mask"], dtype=np.int32)
        outputs = self.model.infer(
            {"input_ids": input_ids, "attention_mask": attention_mask}, ["y"]
        )
        return outputs["y"]

    # you can write config in the class or provide it as a yaml file or yaml string
    class Config:
        class TritonModel:
            name = "minilm"
            version = "2"
```

You can write a class like this, and then start it with abottle:

abottle main.MiniLM

By default, abottle will run as a server at 0.0.0.0:8081.

curl localhost:8081/predict

abottle will inject an attribute named model into your class, and you don't need to care what that model runtime is. It can be Pytorch with CuDNN8 or an optimized TensorRT plan; it depends on the config you give.

self.model.infer({"input1": input1_tensor, "input2": input2_tensor}, ['output_1'])

Config with shell:

```
abottle main.MiniLM --config """
TritonModel:
    triton_url: localhost
    name: minilm
    version: 2
"""
```

Config with file:

abottle main.MiniLM --config <config yaml file path>

```python
import numpy as np
import pandas as pd
from transformers import AutoTokenizer
from typing import List

class MiniLM:
    def __init__(self):
        self.tokenizer = AutoTokenizer.from_pretrained(
            "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
        )

    def cosine(self, a: List[List[float]], b: List[List[float]]) -> float:
        a, b = np.array(a), np.array(b)
        # |A|
        sqrt_sqare_A = np.tile(
            np.sqrt(np.sum(np.square(a), axis=1)).reshape((a.shape[0], 1)),
            (1, a.shape[0]),
        )
        # |B|
        sqrt_sqare_B = np.tile(
            np.sqrt(np.sum(np.square(b.T), axis=0)).reshape((1, b.shape[0])),
            (b.shape[0], 1),
        )
        # cosine similarity
        score_matrix = np.divide(np.dot(a, b.T), sqrt_sqare_A * sqrt_sqare_B)
        return score_matrix

    def predict(self, X: List[str]) -> List[List[float]]:
        encode_dict = self.tokenizer(X, padding="max_length", max_length=128, truncation=True)
        input_ids = np.array(encode_dict["input_ids"], dtype=np.int32)
        attention_mask = np.array(encode_dict["attention_mask"], dtype=np.int32)
        outputs = self.model.infer(
            {"input_ids": input_ids, "attention_mask": attention_mask}, ["y"]
        )
        return outputs["y"]

    def evaluate(self, file_path: str, batch_size: int) -> float:
        test_data = pd.read_csv(file_path, sep=", ", names=["query", "label"])
        query, label = test_data["query"].tolist(), test_data["label"].tolist()
        assert len(query) == len(label)

        query_embedding, label_embedding = [], []
        for i in range(0, len(query), batch_size):
            query_embedding += self.predict(query[i:min(i + batch_size, len(query))])
            label_embedding += self.predict(label[i:min(i + batch_size, len(label))])
        assert len(query_embedding) == len(label_embedding)

        # score matrix
        score_matrix = self.cosine(query_embedding, label_embedding)
        # algorithm performance: top-1 hits on the diagonal
        raw_result = np.argmax(score_matrix, axis=0) == np.array(
            [i for i in range(score_matrix.shape[0])]
        )
        unique, counts = np.unique(raw_result, return_counts=True)
        top_1_accuracy = counts[unique.tolist().index(True)] / np.sum(counts)
        return top_1_accuracy
```

def evaluate can be used as a tester like below:

abottle main.MiniLM --as tester file_path='test.csv', batch_size=100

The arguments you defined in the evaluate function can be set in CLI args with the format xxx=xxx.

You can use different wrappers for your model, including:

- abottle.ONNXModel
- abottle.TensorRTModel
- abottle.TritonModel
- abottle.PytorchModel

If you want to add more wrappers, you can implement abottle.BaseModel.

abottle main.MiniLM --as server --wrapper abottle.TritonModel

Configs

abottle.ONNXModel

```yaml
ONNXModel:
    ort_file: 'the ort file path'
```

abottle.TensorRTModel

```yaml
TensorRTModel:
    trt_file: 'TensorRT plan file path'
```

abottle.TritonModel

```yaml
TritonModel:
    name: "your model's name on triton server"
    version: "your model's version on triton server"
    triton_url: "triton server's host without schema, it means http://xxx is invalid"
```

abottle.PytorchModel (not fully implemented)

```yaml
PytorchModel:
    model: 'pytorch importable name'
```

Motivation

As a DL model creator, you don't need to focus on how to serve or test the performance of a model on a target platform, or how to optimize your model without losing accuracy. Just find a bottle and put your logic code into it; the DL engineers can do those things for you. All you need to do is export your model to an onnx file and write logic code like the examples above.

Feature

We will build this bottle to be as strong as possible, and make it become a standardized interface for the MLOps cycle. You can expect more and more scenarios, like optimization, graph fusing, performance testing, deployment, data gathering, etc., to use this bottle.
about
Summary: define the metadata of your project in a single place, then make it available at setup-time and at runtime.

Let's consider the about package as an example; we add to our project files, in the source tree, a file named about.py that contains the metadata of the project:

```
about
|--- setup.py
|--- README.md
|...
|--- about
|    |--- __init__.py
|    |...
|    |--- about.py
```

This file contains the metadata (and a little boilerplate):

```python
# coding: utf-8

metadata = dict(
    __name__     = "about",
    __version__  = "5.1.1",
    __license__  = "MIT License",
    __author__   = u"Sébastien Boisgérault <[email protected]>",
    __url__      = "https://warehouse.python.org/project/about",
    __summary__  = "Software Metadata for Humans",
    __keywords__ = "Python / 2.7, OS independent, software development"
)

globals().update(metadata)

__all__ = metadata.keys()
```

Setup. To use this metadata, the setup.py file includes the code:

```python
import about
import about.about

info = about.get_metadata(about.about)

# add extra information (contents, requirements, etc.).
info.update(...)

if __name__ == "__main__":
    setuptools.setup(**info)
```

Runtime. The metadata is stored as a collection of attributes of the about.about module. If we include in the about/__init__.py file the one-liner

from .about import *

they become available in the top-level module:

```
>>> import about
>>> print about.__name__
about
>>> print about.__version__
5.1.1
>>> print about.__license__
MIT License
```
aboutcode-toolkit
Introduction

The AboutCode Toolkit and ABOUT files provide a simple way to document the origin, license, usage and other important or interesting information about third-party software components that you use in your project.

You start by storing ABOUT files (a small YAML formatted text file with field/value pairs) side-by-side with each of the third-party software components you use. Each ABOUT file documents origin and license for one software component. There are many examples of ABOUT files (valid or invalid) in the testdata/ directory of the whole repository.

The current version of the AboutCode Toolkit can read these ABOUT files so that you can collect and validate the inventory of third-party components that you use.

In addition, this tool is able to generate attribution notices and identify redistributable source code used in your project to help you comply with open source license conditions.

This version of the AboutCode Toolkit follows the ABOUT specification version 3.3.1 at: https://aboutcode-toolkit.readthedocs.io/en/latest/specification.html

Build and tests status

| Branch | Linux/macOS | Windows |
|---|---|---|
| Master | | |
| Develop | | |

REQUIREMENTS

The AboutCode Toolkit is tested with Python 3.7 or above only on Linux, Mac and Windows. You will need to install a Python interpreter if you do not have one already installed.

On Linux and Mac, Python is typically pre-installed. To verify which version may be pre-installed, open a terminal and type:

python --version

Note: Debian has decided that distutils is not a core python package, so it is not included in the latest versions of Debian and Debian-based OSes. A solution is to run:

sudo apt install python3-distutils

On Windows or Mac, you can download the latest Python here: https://www.python.org/downloads/

Download the .msi installer for Windows or the .dmg archive for Mac. Open and run the installer using all the default options.

INSTALLATION

Checkout or download and extract the AboutCode Toolkit from: https://github.com/nexB/aboutcode-toolkit/

To install all the needed dependencies in a virtualenv, run (on posix):

./configure

or on windows:

configure

ACTIVATE the VIRTUALENV

To activate the virtualenv, run (on posix):

source venv/bin/activate

or on windows:

venv\bin\activate

DEACTIVATE the VIRTUALENV

To deactivate the virtualenv, run (on both posix and windows):

deactivate

VERSIONING SCHEMA

Starting at AboutCode version 4.0.0, the AboutCode Toolkit will follow SemVer for the versioning schema, i.e. the MAJOR.MINOR.PATCH format:

- MAJOR version when making incompatible API changes,
- MINOR version when adding functionality in a backwards compatible manner, and
- PATCH version when making backwards compatible bug fixes.

REFERENCE

See https://aboutcode-toolkit.readthedocs.io/en/latest/ for documentation.

See https://aboutcode-toolkit.readthedocs.io/en/latest/reference.html for reference.

TESTS and DEVELOPMENT

To install all the needed development dependencies, run (on posix):

./configure --dev

or on windows:

configure --dev

To verify that everything works fine you can run the test suite with:

pytest

CLEAN BUILD AND INSTALLED FILES

To clean the built and installed files, run (on posix):

./configure --clean

or on windows:

configure --clean

HELP and SUPPORT

If you have a question or find a bug, enter a ticket at: https://github.com/nexB/aboutcode-toolkit

For issues, you can use: https://github.com/nexB/aboutcode-toolkit/issues

SOURCE CODE

The AboutCode Toolkit is available through GitHub. For the latest version visit: https://github.com/nexB/aboutcode-toolkit

HACKING

We accept pull requests provided under the same license as this tool. You agree to the http://developercertificate.org/

LICENSE

The AboutCode Toolkit is released under the Apache 2.0 license. See (of course) the about.ABOUT file for details.
aboutcode-toolklt
Failed to fetch description. HTTP Status Code: 404
aboutdir
UNKNOWN
about_file
UNKNOWN
aboutjng
Failed to fetch description. HTTP Status Code: 404
aboutname
UNKNOWN
about_numtest
Failed to fetch description. HTTP Status Code: 404
about_pandoc
Failed to fetch description. HTTP Status Code: 404
about-py
# about-py

Django about plugin. Makes a webpage available with information about the project's last version control system commits, the python interpreter, and the used python libraries.

# How to use

1. `pip install about-py`
2. Add about-py to your Django INSTALLED_APPS:

   ```
   INSTALLED_APPS = [
       ...,
       about-py,
       ...,
   ]
   ```

3. Create a URL entry with the `AboutView`, e.g.

   ```
   url(r'^about/', AboutView.as_view()),
   ```

   Or use the secure `SecureAboutView` so only staff and super users can access the page.
abouttag
This package provides functions for generating about tags for Fluidinfo following various conventions.

Fluidinfo is a hosted, online database based on the notion of tagging. For more information on FluidDB, visit http://fluidinfo.com.

For more information on the ideas that motivated this module, see posts at http://abouttag.blogspot.com. A good place to start is http://abouttag.blogspot.com/2010/03/about-tag-conventions-in-fluiddb.html

EXAMPLE

Examples of usage are provided in the examples directory. Some simple examples are:

```python
from abouttag.books import book
from abouttag.music import album, artist, track
from abouttag.film import film, movie

print book(u"One Hundred Years of Solitude", u'Gabriel García Márquez')
print book(u'The Feynman Lectures on Physics', u'Richard P. Feynman',
           u'Robert B. Leighton', u'Matthew Sands')
print track(u'Bamboulé', u'Bensusan and Malherbe')
print album(u"Solilaï", u'Pierre Bensusan')
print artist(u"Crosby, Stills, Nash & Young")
print film(u"Citizen Kane", u'1941')
print movie(u"L'Âge d'Or", u'1930')
```

INSTALLATION

pip install -U abouttag

DEPENDENCIES

urlnorm
about-time
about-time

A cool helper for tracking time and throughput of code blocks, with beautiful human friendly renditions.

What does it do?

Did you ever need to measure the duration of an operation? Yeah, this is easy.

But how to:

- measure the duration of two or more blocks at the same time, including the whole duration?
- instrument a code to cleanly retrieve durations in one line, to log or send to time series databases?
- easily see human friendly durations in s (seconds), ms (milliseconds), µs (microseconds) and even ns (nanoseconds)?
- easily see human friendly counts with SI prefixes like k, M, G, T, etc?
- measure the actual throughput of a block? (this is way harder, since it needs to measure both duration and number of iterations)
- easily see human friendly throughputs in "/second", "/minute", "/hour" or even "/day", including SI prefixes?

Yes, it can get tricky! More interesting details about duration and throughput below.

If you tried to do it without this magic, it would probably get messy and immensely pollute the code being instrumented.

I have the solution, behold!

```python
from about_time import about_time

def some_func():
    import time
    time.sleep(85e-3)
    return True

def main():
    with about_time() as t1:                          # <-- use it like a context manager!
        t2 = about_time(some_func)                    # <-- use it with any callable!!
        t3 = about_time(x * 2 for x in range(56789))  # <-- use it with any iterable or generator!!!
        data = [x for x in t3]                        # then just iterate!

    print(f'total: {t1.duration_human}')
    print(f'  some_func: {t2.duration_human} -> result: {t2.result}')
    print(f'  generator: {t3.duration_human} -> {t3.count_human} elements, throughput: {t3.throughput_human}')
```

This main() function prints:

```
total: 95.6ms
  some_func: 89.7ms -> result: True
  generator: 5.79ms -> 56.8k elements, throughput: 9.81M/s
```

How cool is that? 😲👏

You can also get the duration in seconds if needed:

```
In [7]: t1.duration
Out[7]: 0.09556673200064251
```

But 95.6ms is way better, isn't it? The same with count and throughput!

So, about_time measures code blocks, both time and throughput, and converts them to beautiful human friendly representations! 👏

Get it

Just install with pip:

❯ pip install about-time

Use it

There are three modes of operation: context manager, callable and throughput. Let's dive in.

1. Use it like a context manager:

```python
from about_time import about_time

with about_time() as t:
    ...  # the code to be measured: any lengthy block.

print(f'The whole block took: {t.duration_human}')
```

This way you can nicely wrap any amount of code.

In this mode, there are the basic fields duration and duration_human.

2. Use it with any callable:

```python
from about_time import about_time

t = about_time(some_func)

print(f'The whole block took: {t.duration_human}')
print(f'And the result was: {t.result}')
```

This way you have a nice one liner, and do not need to increase the indent of your code.

In this mode, there is an additional field result, with the return of the callable.

If the callable has params, you can use a lambda or (📌 new) simply send them:

```python
def add(n, m):
    return n + m

t = about_time(add, 1, 41)
# or:
t = about_time(add, n=1, m=41)
# or even:
t = about_time(lambda: add(1, 41))
```

3. Use it with any iterable or generator:

```python
from about_time import about_time

t = about_time(iterable)
for item in t:
    ...  # process item.

print(f'The whole block took: {t.duration_human}')
print(f'It was detected {t.count_human} elements')
print(f'The throughput was: {t.throughput_human}')
```

This way about_time also extracts the number of iterations, and with the measured duration it calculates the throughput of the whole loop! It's especially useful with generators, which do not have length.

In this mode, there are the additional fields count, count_human, throughput and throughput_human.

Cool tricks under the hood:

- you can use it even with generator expressions, anything that is iterable to python!
- you can consume it not only in a for loop, but also in { list | dict | set } comprehensions, map()s, filter()s, sum()s, max()s, list()s, etc, thus any function that expects an iterator! 👏
- the timer only starts when the first element is queried, so you can initialize whatever you need before entering the loop! 👏
- the count/count_human and throughput/throughput_human fields are updated in real time, so you can use them even inside the loop!

Features:

According to the SI standard, there are 1000 bytes in a kilobyte. There is another standard called IEC that has 1024 bytes in a kibibyte, but this is only useful when measuring things that are naturally a power of two, e.g. a stick of RAM.

Be careful to not render IEC quantities with SI scaling, which would be incorrect. But I still support it, if you really want to ;)

By default, this will use SI, 1000 divisor, and no space between values and scales/units. SI uses prefixes: k, M, G, T, P, E, Z, and Y.

These are the optional features:

- iec => use IEC instead of SI: Ki, Mi, Gi, Ti, Pi, Ei, Zi, Yi (implies 1024);
- 1024 => use 1024 divisor; if iec is not enabled, use prefixes: K, M, G, T, P, E, Z, and Y (note the upper 'K');
- space => include a space between values and scales/units everywhere: 48 B instead of 48B, 15.6 µs instead of 15.6µs, and 12.4 kB/s instead of 12.4kB/s.

To change them, just use the properties:

```python
from about_time import FEATURES

FEATURES.feature_1024
FEATURES.feature_iec
FEATURES.feature_space
```

For example, to enable spaces between scales/units:

```python
from about_time import FEATURES

FEATURES.feature_space = True
```

The human duration magic

I've used just one key concept in designing the human duration features: cleanliness. 3.44s is more meaningful than 3.43584783784s, and 14.1us is much nicer than .0000141233333s.

So what I do is: round values to at most two decimal places (three significant digits), and find the best scale unit to represent them, minimizing resulting values smaller than 1. The search for the best unit considers even the rounding being applied! 0.000999999 does not end up as 999.99us (truncate) nor 1000.0us (bad unit), but is auto-upgraded to the next unit 1.0ms!

The duration_human units change seamlessly from nanoseconds to hours.

- values smaller than 60 seconds are always rendered as "num.D[D]unit", with one or two decimals;
- from 1 minute onward it changes to "H:MM:SS".

It feels much more humane, humm? ;)

Some examples:

| duration (float seconds) | duration_human |
|---|---|
| .00000000185 | '1.85ns' |
| .000000999996 | '1.00µs' |
| .00001 | '10.0µs' |
| .0000156 | '15.6µs' |
| .01 | '10.0ms' |
| .0141233333333 | '14.1ms' |
| .1099999 | '110ms' |
| .1599999 | '160ms' |
| .8015 | '802ms' |
| 3.434999 | '3.43s' |
| 59.999 | '0:01:00' |
| 68.5 | '0:01:08' |
| 125.825 | '0:02:05' |
| 4488.395 | '1:14:48' |

The human throughput magic

I've made the throughput_human with a similar logic. It is funny how much trickier "throughput" is to the human brain! If something took 1165263 seconds to handle 123 items, how fast did it go? It's not obvious...

It doesn't help even if we divide the duration by the number of items, 9473 seconds/item, which still does not mean much. How fast was that? We can't say. How many items did we do per time unit? Oh, we just need to invert it, so 0.000105555569858 items/second, there it is! 😂

To make some sense of it we need to multiply that by 3600 (seconds in an hour) to get 0.38/h, which is much better, and again by 24 (hours in a day) to finally get 9.12/d!! Now we know how fast that process was! \o/ As you see, it's not easy at all.

The throughput_human unit changes seamlessly from per-second, per-minute, per-hour, and per-day. It also automatically inserts SI-prefixes, like k, M, and G. 👍

| duration (float seconds) | number of elements | throughput_human |
|---|---|---|
| 1. | 10 | '10.0/s' |
| 1. | 2500 | '2.50k/s' |
| 1. | 1825000 | '1.82M/s' |
| 2. | 1 | '30.0/m' |
| 2. | 10 | '5.00/s' |
| 1.981981981981982 | 11 | '5.55/s' |
| 100. | 10 | '6.00/m' |
| 1600. | 3 | '6.75/h' |
| .99 | 1 | '1.01/s' |
| 1165263. | 123 | '9.12/d' |

Accuracy

about_time supports all versions of python, but in pythons >= 3.3 it performs even better, with much higher resolution and smaller propagation of errors, thanks to the new time.perf_counter. In older versions, it uses time.time as usual.

Changelog highlights:

- 4.2.1: makes fixed precision actually gain more resolution, when going from a default 1 to 2 decimals
- 4.2.0: support for fixed precision, useful when one needs output without varying lengths; official Python 3.11 support
- 4.1.0: enable to cache features within closures, to improve performance for https://github.com/rsalmei/alive-progress
- 4.0.0: new version, modeled after my Rust implementation in https://crates.io/crates/human-repr; includes new global features, new objects for each operation, and especially, new simpler human friendly representations; supports Python 3.7+
- 3.3.0: new interfaces for count_human and throughput_human; support the more common Kbyte for base 2 (1024), leaving the IEC one as an alternate
- 3.2.2: support IEC kibibyte standard for base 2 (1024)
- 3.2.1: support divisor in throughput_human
- 3.2.0: both durations and throughputs now use 3 significant digits; throughputs now include SI-prefixes
- 3.1.1: make duration_human() and throughput_human() available for external use
- 3.1.0: include support for parameters in callable mode; official support for python 3.8, 3.9 and 3.10
- 3.0.0: greatly improved the counter/throughput mode, with a single argument and working in real time
- 2.0.0: feature complete, addition of callable and throughput modes
- 1.0.0: first public release, context manager mode

License

This software is licensed under the MIT License. See the LICENSE file in the top distribution directory for the full license text.

Maintaining an open source project is hard and time-consuming, and I've put much ❤️ and effort into this. If you've appreciated my work, you can back me up with a donation! Thank you 😊
aboutyou
Author: Arne Simon [[email protected]]

A Python implementation for the AboutYou shop API.

Installation

Install the package via PIP:

$ pip install aboutyou

Or checkout the most recent version:

$ git clone https://bitbucket.org/slicedice/aboutyou-shop-sdk-python.git
$ cd aboutyou-shop-sdk-python
$ python setup.py install

Quick Start

1. Register for an account at the [AboutYou Devcenter](https://developer.aboutyou.de/) and create a new app. You will be given credentials to utilize the About You API.
2. Modify one of the example credential files.
3. Use the following lines:

```python
from aboutyou.config import YAMLCredentials
from aboutyou.shop import ShopApi

shop = ShopApi(YAMLCredentials('mycredentials.yml'))
category_forest = shop.categories()
```

Documentation

Documentation is found at http://aboutyou-shop-sdk.readthedocs.org/en/latest/.

If you want to build the documentation yourself:

1. Checkout the git repo.
2. Go to the doc/ folder.
3. make html

Change Log

1.0.1:
- Fixed bug in login url generation

1.0:
- Added Django backend and middleware
- Fixed configuration bug
- Cleaned up project structure
- setup.py is not dependent on setuptools

0.9:
- Is now Python 3 compatible.
- Test cases with mocking.
- Added Auth module.
- Moved the thin api wrapper into its own api module.
- The app credentials are now separated from the other configurations.

0.3:
- Additional documentation.
- Auto fetch flag.
- PyPI integration.
- YAML configuration files.

0.2:
- Caching with Memcached and pylibmc.
- EasyAboutYou has the function getSimpleColors.
- Error handling fix.

0.1:
- Products now return their url to the mary+paul shop.
- Dirty caching without memcached.
- EasyCollins products are no bulk requests.
- Extended documentation for EasyAboutYou.
above
No description available on PyPI.
abow
A Bottle of Wiki

A Bottle of Wiki (abbreviated abow) is a personal wiki. Use it for viewing and editing pages written in markdown directly in your browser. It is made to be usable both on mobile and desktop.
While you won't be hosting Wikipedia on a Bottle of Wiki, it is easy to deploy and does not require a database: pages are saved as text files. It has no notion of users, access rights, page history, comments, discussion or even edit conflicts. It is meant to be used by one person.
A Bottle of Wiki is a wiki built with bottle.

Installation
To run a Bottle of Wiki, you will need python3 installed on a unix-like machine.

Test (a quick sip)
The easiest way to test a Bottle of Wiki is to install it in a virtual environment. You can do so with the following commands:
python3 -m venv abow
source abow/bin/activate
Or use your favorite virtualenv management tool.
Then install a Bottle of Wiki and its dependencies with pip:
pip install abow
Optional: If you want syntax highlighting when displaying code in your pages, install the extra:
pip install abow[extra]
Finally, start the application with:
bottle.py abow:application
If all went well, you can point your browser to http://127.0.0.1:8080/ and start editing. The pages will be saved in the current directory. If you are looking for inspiration, a few markdown pages (including this README) are part of the source distribution.

Deployment (the whole bottle)
A Bottle of Wiki is a WSGI application, and as such it can be hosted by any WSGI-capable web server. The bottle documentation has a page dedicated to deployment that can be used as inspiration. Detailing how to host a WSGI application is beyond the scope of this README as there are many options to choose from, but here is the most important piece of advice: Make sure the access is restricted. A Bottle of Wiki has no concept of login or user, so anyone with access can edit the pages. You can limit access by setting up HTTP authentication and encryption on your web server. You can also serve only on a local network and access it via a VPN.

Configuration
A Bottle of Wiki can be customized with a .ini configuration file. An example is provided in the source distribution, or you can generate it with the following command:
python -c "import abow.config; abow.config.print_file()"
The example configuration is heavily commented and should be self-explanatory. It enables you to change where a Bottle of Wiki stores the pages and the locale used. You can also host the static assets (css, js, ...) outside of the application and serve them directly from a web server.
The configuration is read from the following locations:
/etc/abow/config
$XDG_CONFIG_HOME/abow/config, defaulting to ~/.config/abow/config
$ABOW_CONFIG, if defined
Paths are tried one after another, and each configuration file can override the settings of the previous ones.

Built With
A Bottle of Wiki depends on the following python packages:
Bottle -- The WSGI micro-framework used
Python-Markdown -- The markdown interpreter
PyMdown Extensions -- Extensions for Python Markdown
Pygments -- Syntax highlighter (optional)
The following css and javascript packages are included:
Bootstrap version 5.3.0 -- CSS Toolkit
Autosize version 5.0.0 -- Script to automatically adjust textarea height

Things to do
Detection of edit conflicts: Right now, if you edit a page from two browsers, you can lose some modifications. Detecting that the page has been changed on the server while being edited would be nice if you want to use a Bottle of Wiki with more than one user.
Page history: While you can avoid losing changes by setting up a backup on the server (and you should anyway), integrating the page history in the wiki would make it easier to review page modifications. Look into dulwich for that?

License
This project is licensed under the Affero General Public License version 3 or later.
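To make the test step above concrete, here is a minimal sketch that serves the wiki with Python's built-in WSGI server instead of bottle.py; it assumes the abow module exposes the application object implied by the bottle.py abow:application command, and it is not a production setup (heed the access warning above).

from wsgiref.simple_server import make_server

import abow

# Serve A Bottle of Wiki locally; pages are saved in the current directory.
with make_server("127.0.0.1", 8080, abow.application) as httpd:
    httpd.serve_forever()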
abox
Overview
ABOX Analytics Box Library

Prerequisites
Python >= 3.6
A clean virtual environment with pip and setuptools

Installation
pip install abox
abp
No description available on PyPI.
abpandas
An Agent Based Modelling (ABM) package that can generate grid spatial shape files or work with predefined shape files. The package focuses on simplicity by assigning Agents to spatial Polygons instead of assigning x and y locations to each agent. This makes it useful in situations where granular movement of agents in space is not necessary (e.g., residential location models). For documentation visit https://github.com/YahyaGamal/ABPandas_Documentation
abp-blocklist-parser
BlockListParser

Code to detect whether a URL matches any of the regexes in lists like Adblock Plus lists.

To use it, create a parser with either
blocklist_parser = BlockListParser(regex_file)
or
blocklist_parser = BlockListParser(regexes)
where regexes is a comma-separated list of regexes.

Then, to detect whether something should be blocked, call
blocklist_parser.should_block(url, options)
where options is a dictionary with keys like image, third-party, etc. (look at RegexParser.py for the list of possible options).

Also, use
blocklist_parser.should_block_with_items(url, options)
to get the list of regexes which block a certain URL.
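Putting the calls above together, here is a hedged usage sketch; the import path and the sample list file name are assumptions for illustration, not confirmed by this README.

from abp_blocklist_parser import BlockListParser  # import path is an assumption

# Load a downloaded Adblock Plus-style list, e.g. EasyList.
blocklist_parser = BlockListParser("easylist.txt")

# Option keys such as 'image' and 'third-party' come from RegexParser.py.
options = {"image": True, "third-party": True}

url = "http://ads.example.com/banner.png"
if blocklist_parser.should_block(url, options):
    print("blocked:", url)

# List the regexes responsible for the match.
print(blocklist_parser.should_block_with_items(url, options))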
abpig
A Better Python Image Gallery. Scans a directory and generates a static website with the photos it contains.
ab-plugin-scenariolink
# ScenarioLink: An Activity Browser Plugin for Scenario-Based LCA Databases

ScenarioLink is a specialized plugin for the [Activity Browser](https://github.com/LCA-ActivityBrowser/activity-browser), an open-source software for Life Cycle Assessment (LCA). This plugin enables you to seamlessly fetch and reproduce scenario-based LCA databases, such as those generated by [premise](https://github.com/polca/premise), using [unfold](https://github.com/polca/unfold) datapackages.

## Features
Reproduce individual or multiple scenario-based databases within Activity Browser.
Merge multiple databases into a unified superstructure database containing various scenarios.
Leverage the capabilities of the unfold library to recreate databases with the necessary scaling factors.

## Overview
The Activity Browser builds upon the [Brightway2](https://brightway.dev) LCA framework. ScenarioLink aims to simplify the use of scenario-based LCA databases within the Activity Browser by:
Eliminating the need for separate tools required to generate these databases (e.g., premise).
Utilizing unfold datapackages that contain scaling factors essential for reproducing scenario-based LCA databases, assuming a consistent source database (e.g., ecoinvent 3.7.1).

![Flow Diagram](assets/flow_diagram.png)

## Installation
Activate your existing Activity Browser conda environment.
Install the ScenarioLink plugin using PyPI or conda:
pip install ab-plugin-scenariolink
conda install -c romainsacchi ab-plugin-scenariolink
Launch the Activity Browser.
Navigate to Tools > Plugins and select ScenarioLink from the plugin list.

## Usage
### Reproduce a scenario-based database
Activate the plugin by selecting it from the plugin list.
![Plugin List](assets/plugin_list.png)
After activating the plugin, select the ScenarioLink tab.
![ScenarioLink Tab](assets/scenariolink_tab.png)
Select (double-click) the desired datapackage from the table.
![Datapackage Table](assets/datapackage_table.png)
If the selected datapackage is not present in the local cache, it will be downloaded from the remote repository.
Once the download is complete, a second table presents the scenarios contained in the datapackage.
![Scenario Table](assets/scenario_table.png)
Select the desired scenario(s) by checking the corresponding checkboxes.
Choose whether to merge the selected scenarios into a single database (superstructure database) or reproduce them individually.
Click Import to start the process.
The plugin will ask you to select the databases in your project that will be used as source databases.
![Source Database Selection](assets/source_database_selection.png)
The plugin will then reproduce the selected scenario(s) and add them to your project.

## Contributing
You can make your own scenario-based LCA databases available to the community. To do so, you need to create an unfold datapackage and upload it to a remote repository. We will then add it to the list of available datapackages in the ScenarioLink plugin.

## Maintainers
For questions, issues, or contributions, you can reach out to:
Marc van der Meide: [Email](mailto:[email protected])
Romain Sacchi: [Email](mailto:[email protected])
Alternatively, you can open an issue on this GitHub repository.
abpower
abpower

abpower is a parser for the publicly-available data provided by AESO related to the power grid in Alberta. It consists of a package and a command-line utility.

Background
During the summer of 2020 (yes, that summer) I built a website named the Alberta Power Dashboard that gathered and displayed data from AESO. It was fairly buggy and eventually I stopped maintaining it.
This is an attempt to write a more robust parser than the original, with the possibility of bringing the website back at some point in the future - or at least providing a parser someone else can use.

Installation
With pip:
pip install abpower
With poetry:
poetry add abpower

Usage
The abpower package
You can query for all data currently supported by the module (see below) with the following:

from abpower import ETSParser
parser = ETSParser()
data = parser.get()

This will return an ETS object that contains the data. The as_dict and as_json properties will return dict and JSON string representations of the data respectively.

Querying specific data
You can pass a list or tuple of strings to the parser to only get and parse specific sections of the AESO data.
For example, to only query for the Current Supply Demand and System Marginal Price data:

from abpower import ETSParser
parser = ETSParser()
data = parser.get(query=["current-supply-demand", "system-marginal-price"])

You can also import the specific parser directly:

from abpower.parser import CurrentSupplyDemandParser, SystemMarginalPriceParser
csd_parser = CurrentSupplyDemandParser()
csd_data = csd_parser.get()
smp_parser = SystemMarginalPriceParser()
smp_data = smp_parser.get()

The abpower command-line utility
A command-line utility - also named abpower - will be installed along with the module.
As with the module, you can query for all data with:
abpower get
This will query, parse and return all data in JSON format. You can use the --write-to-file (or -w) option to write the data to a file instead of standard output.

Querying specific data
Also like the module, you can query for specific data only:
abpower get -q current-supply-demand -q system-marginal-price

Available data
Not all the data provided on the AESO website is queried or parsed by abpower. This may change in the future, but right now the following are supported:
current-supply-demand - the Current Supply Demand report, which gives an overview of the grid
actual-forecast - the Actual / Forecast report, which gives a historical comparison of the forecasted and actual usage of the grid over the last 24 hours
daily-average-pool-price - the Daily Average Pool Price report, which gives averages of the pool price over the last week
hourly-available-capability - the 7 Day Hourly Available Capability report, which gives a forecast of hourly availability over the next 7 days
pool-price - the Pool Price report, which gives the historical pool prices over the last 24 hours
supply-surplus - the Supply Surplus report, which gives the forecasted surplus status for the next 6 hours
system-marginal-price - the System Marginal Price report, which gives the historical price over the last few hours
peak-load-forecast - the Peak Load Forecast report, which gives the forecasted peak load for the next 7 days

Known issues and future plans
There are no known issues, but this was initially written in a weekend so make of that what you will.
Documentation needs writing and/or generating
Tests need writing
Coverage of the data available needs expanding

Credits and contributing
abpower is written and maintained by Andy Smith. Pull requests and bug reports are welcome!

License
abpower is distributed under the MIT License.
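As a small follow-on to the usage above, here is a sketch that saves a query result to disk using the as_dict and as_json properties described in this README; the output file name is illustrative.

from abpower import ETSParser

parser = ETSParser()
data = parser.get(query=["pool-price"])

# dict representation, e.g. for further processing in Python
pool_price = data.as_dict

# JSON string representation, mirroring the CLI's --write-to-file option
with open("pool-price.json", "w") as f:
    f.write(data.as_json)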
abq
No description available on PyPI.
abqcy
abqcy
Write Abaqus Subroutines in Cython

GitHub repository: https://github.com/haiiliin/abqcy
PyPI: https://pypi.org/project/abqcy
Documentation: https://abqcy.readthedocs.io
Read the Docs: https://readthedocs.org/projects/abqcy
Bug Report: https://github.com/haiiliin/abqcy/issues
abqos
Print line

Setup
Add wheel.exe/twine.exe to your environment path (d:\Python39\Scripts).

Upload package
python setup.py sdist bdist_wheel
twine upload dist/*

Examples of how to use
Set configuration:

from abqos import set
# set(<file>)
set("1.txt")
abqpy
abqpy 2024

Read this in other languages: English, 简体中文.

Type hints for Abaqus/Python scripting

abqpy is a Python package providing type hints for Python scripting of Abaqus. You can use it to write your Abaqus/Python scripts fluently, even without doing anything in Abaqus. It also provides some simple APIs to execute the Abaqus commands so that you can run your Python script to build the model, submit the job and extract the output data in just one Python script, even without opening Abaqus/CAE.

GitHub repository: https://github.com/haiiliin/abqpy
PyPI: https://pypi.org/project/abqpy
Documentation: https://haiiliin.github.io/abqpy

Quick Start
Make sure Abaqus and Python are installed on your computer. Open cmd or a terminal and type:
pip install -U abqpy==2024.*  # change the major version to match your Abaqus version
Then, open your Abaqus/Python script in your favorite IDE with Python language support, run the script with Python 3.7+ (just do it!), and see the magic happen. For more information, please refer to the documentation.

Pull Requests are Welcome
Since abqpy is reconstructed from the official Abaqus documentation, many of the docstrings are not well formatted, for example, the Raises sections, the math equations, and the attributes of the objects. Due to the limitation of my time, those things have been left behind; if anyone is willing to make any contributions, please feel free to create your pull requests.
Please refer to CONTRIBUTING for contribution guidelines.
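To illustrate the kind of script abqpy targets, here is a minimal sketch of an Abaqus/Python model-building script; the model, sketch, and part names are illustrative, and the kernel objects (mdb and the abaqusConstants symbols) only resolve when the script is executed by an Abaqus installation - abqpy supplies their type hints while you edit.

from abaqus import *
from abaqusConstants import *

# Build a trivial 2D part in the default model.
model = mdb.models["Model-1"]
sketch = model.ConstrainedSketch(name="profile", sheetSize=10.0)
sketch.rectangle(point1=(0.0, 0.0), point2=(1.0, 1.0))
part = model.Part(name="Part-1", dimensionality=TWO_D_PLANAR, type=DEFORMABLE_BODY)
part.BaseShell(sketch=sketch)

# Save the model database.
mdb.saveAs(pathName="demo")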
abqpy2016
abqpy wrapper
A wrapper package for abqpy.
abqpy2017
abqpy wrapper
A wrapper package for abqpy.
abqpy2018
abqpy wrapper
A wrapper package for abqpy.
abqpy2019
abqpy wrapper
A wrapper package for abqpy.
abqpy2020
abqpy wrapper
A wrapper package for abqpy.