adam-signurl
No description available on PyPI.
adam-sim
ADAM Simulator

ADAM (Autonomous Domestic Ambidextrous Manipulator) is a mobile robot manipulator consisting of a base with two Degrees of Freedom (DoF) and two Universal Robots UR3 arms of 6 DoF each. The simulation was built using MuJoCo, a free and open-source physics engine designed from the ground up for the purpose of model-based optimization, in particular optimization through contacts.

Installation

Follow these steps to install the simulation on your device.

Requirements:

- Python 3.10.0 or higher

Note: the ADAM Simulator works on Linux, Windows, and Mac.

Install miniconda (highly recommended)

It is highly recommended to install all the dependencies in a new virtual environment. For more information, check the conda documentation on installation and environment management. To create the environment, use the following commands in the terminal.

```shell
conda create -n adam python==3.10.9
conda activate adam
```

Install from pip

The ADAM simulator is available as a pip package. To install it, just use:

```shell
pip install adam-sim
```

Install from source

First, clone the repository on your system:

```shell
git clone https://github.com/vistormu/adam_simulator.git
```

Then, enter the directory and install the required dependencies:

```shell
cd adam_simulator
pip install -r requirements.txt
```

Installation for the communication

The communication uses mosquitto as a broker. To install it on your system, follow the instructions on the mosquitto website. It is also necessary to install Docker; for more information, check the docker documentation.

Documentation

The official documentation of the package is available on Read the Docs. There you will find the installation instructions, the API reference and some minimal working examples.
adamspy
adamspy

Python tools for working with MSC/Adams data. Please see the documentation here: https://bthornton191.github.io/adamspy/#
adams-shell-wear
Adams Shell Wear

Python tools for calculating and applying wear to shell geometry in Adams.

Installation

Install straight from the repo using pip.

```shell
pip install adams_shell_wear
```

Usage

TBD
adanet
No description available on PyPI.
adani
ADANI

ADANI (Approximate DIS At N3LO Implementation) is a C++ code that computes an approximation for the DIS coefficient functions at N3LO in heavy quark production, which are not yet fully known.

Dependencies

The code depends on the public library gsl. Optional dependencies are the library pybind11 and the Python module scikit-build (both public), which are required for building the Python bindings.

Installation

To install the C++ library, run:

```shell
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/your/installation/path/ ..
make && make install
```

To install the Python package, run:

```shell
pip install adani
```

Compile a program

To compile a simple program, run:

```shell
g++ -Wall -o test.exe test.cpp -ladani `adani-config --cppflags --ldflags --cxxflags`
```

or

```shell
g++ -Wall -I/your/installation/path/include -L/your/installation/path/lib/ -o test.exe test.cpp -ladani
```

In the first case, remember to run:

```shell
export PATH=$PATH:/your/installation/path/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/your/installation/path/lib
```

For macOS users: add the flags -std=c++17 -stdlib=libc++.

Python module

To use the Python module, do:

```python
import adani
```

remembering to run:

```shell
export PYTHONPATH=$PYTHONPATH:/your/installation/path/lib/python3.X/site-packages/
```

where X is your Python version.

Contacts

Contact me at: [email protected]

Words of our prophet

(Translated from Italian.) "La garra charrúa! The last word goes to the Uruguayans, always them! The last word in football is theirs: they have a different heart, do you understand or not? The claw that scratches, that leaves its mark on the history of Inter: this is history repeating itself! [...] The scratch that took Inter to the Champions League serves to mark the territory: this is Uruguay when it takes the field with everything it has... that is who Vecino is: tired, yes, but leave Vecino on the pitch, he is the one who speaks at the end, leave him on, he gets the last word in football! And they presume to teach the Uruguayans what football is, imagine that." 18/09/2018

"Messi! Messi! Leo Messi! The best left foot in the world. From Di Maria to Messi, from the Bajada to the Perdriel, always Rosario, the city of football. One for the other. The match is unlocked. Football! [...] Everyone on their feet for the best player in the world. Respect for the number one. [...] Rosario, city of football, for this footballer who has been criticised too many times. Even when he was in Barcelona, when he arrived, suffering, as a young man, he never forgot Argentina. He speaks rosarino, feels Argentina in his blood, he cried for the Seleccion and he is the one who keeps it alive. [...] The mystique that enters the pitch. We mentioned Diego [Armando Maradona] 10 minutes ago. With Diego inside, anything is possible, with the tears of Argentina and the wild eyes of the best player in the world. [...] VAMO!" 26/11/2022
adannealing
AdAnnealing

A package doing simulated annealing.

Installation

```shell
git clone https://github.com/pcotteadvestis/adannealing
cd adannealing
pip install .
```

Usage

Simple usage:

```python
import numpy as np
from adannealing import Annealer


class LossFunc2D:
    def __init__(self):
        self.constraints = None

    def __call__(self, w) -> float:
        """A __call__ method must be present. It will be called to evaluate the loss.
        The argument passed is the parameter value at which the loss has to be computed."""
        x = w[0]
        y = w[1]
        return (x - 5) * (x - 2) * (x - 1) * x + 10 * y ** 2

    def on_fit_start(self, val):
        """This method is called by the fitter before optimisation. The argument passed is
        either the starting point of the optimiser (for a single annealer) or the tuple
        containing the different starting points if more than one annealer is used."""
        pass

    def on_fit_end(self, val):
        """This method is called by the fitter after optimisation. The argument passed is
        either the result of the optimiser (for a single annealer) or the list of results
        if more than one annealer reaches the end of fit."""
        pass


init_states, bounds, acceptance = (3.0, 0.5), np.array([[0, 5], [-1, 1]]), 0.01

ann = Annealer(
    loss=LossFunc2D(),
    weights_step_size=0.1,
    init_states=init_states,  # Optional
    bounds=bounds,
    verbose=True,
)

# Weights of local minimum, and loss at local minimum
w0, lmin, _, _, _, _ = ann.fit(stopping_limit=acceptance)
```

Use multiple initial states in parallel runs and get one output per initial state:

```python
import numpy as np
from adannealing import Annealer

Annealer.set_parallel()

# Define LossFunc2D as in the previous example.

bounds, acceptance, n = np.array([[0, 5], [-1, 1]]), 0.01, 5

ann = Annealer(
    loss=LossFunc2D(),
    weights_step_size=0.1,
    bounds=bounds,
    verbose=True,
)

# Iterable of n weights of local minima, and losses at those local minima
results = ann.fit(npoints=n, stopping_limit=acceptance)
for w0, lmin, _, _, _, _ in results:
    """do something"""
```

Use multiple initial states in parallel runs and get the result with the smallest loss:

```python
import numpy as np
from adannealing import Annealer

Annealer.set_parallel()

# Define LossFunc2D as in the first example.

bounds, acceptance, n = np.array([[0, 5], [-1, 1]]), 0.01, 5

ann = Annealer(loss=LossFunc2D(), weights_step_size=0.1, bounds=bounds, verbose=True)

# Weights of the best local minimum and loss at the best local minimum
w0, lmin, _, _, _, _ = ann.fit(npoints=n, stopping_limit=acceptance, stop_at_first_found=True)
```

One can save the history of the learning by giving a path:

```python
import numpy as np
from adannealing import Annealer

Annealer.set_parallel()

# Define LossFunc2D as in the first example.

bounds, acceptance, n = np.array([[0, 5], [-1, 1]]), 0.01, 5

ann = Annealer(loss=LossFunc2D(), weights_step_size=0.1, bounds=bounds, verbose=True)

# Weights of the best local minimum and loss at the best local minimum
w0, lmin, _, _, _, _ = ann.fit(npoints=n, stopping_limit=acceptance, history_path="logs")
```

In this example, calling fit will produce n directories in logs, each containing 2 files: history.csv and returns.csv. The first is the entire history of the fit; the second contains only the iteration that found the local minimum. If only one point is asked for (either by using npoints=1 or stop_at_first_found=True), it will produce history.csv and returns.csv directly in logs, and will delete the subfolders of the runs that did not produce the local minimum.

One can plot the result of a fit by giving a path:

```python
# figure will be saved in logs/annealing.pdf
fig = ann.plot("logs", nweights=2, weights_names=["A", "B", "C"], do_3d=True)
```

If the argument do_3d=True, then 3-dimensional dynamic figures are produced to inspect the phase space marginalised over different pairs of components.
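As a quick, dependency-free sanity check of the example loss used throughout this README (this sketch is not part of adannealing itself), a brute-force grid search over the stated bounds locates the global minimum that the annealer is expected to converge to:

```python
# Brute-force grid search over the bounds used in the examples above:
# x in [0, 5], y in [-1, 1]. Purely illustrative; no adannealing needed.

def loss(x, y):
    # Same 2D loss as in the README's LossFunc2D
    return (x - 5) * (x - 2) * (x - 1) * x + 10 * y ** 2

def grid_search(steps=500):
    best_loss, best_x, best_y = float("inf"), None, None
    for i in range(steps + 1):
        x = 5 * i / steps           # sample x in [0, 5]
        for j in range(steps + 1):
            y = -1 + 2 * j / steps  # sample y in [-1, 1]
            value = loss(x, y)
            if value < best_loss:
                best_loss, best_x, best_y = value, x, y
    return best_loss, best_x, best_y

lmin, x0, y0 = grid_search()
print(lmin, x0, y0)  # global minimum near x ~ 4.06, y = 0
```

The quartic in x has two basins inside the bounds; the deeper one sits near x ~ 4, which is the point a successful annealing run should report.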
adan-pytorch
No description available on PyPI.
adansonia
Adansonia

Adansonia is a genus of deciduous trees known as baobabs. They are found in arid regions of Madagascar, mainland Africa, Arabia, and Australia. The generic name honours Michel Adanson, the French naturalist and explorer who described Adansonia digitata.

Documentation

License

Apache License 2.0
adansons-base
Adansons Base Document

- Product Concept
- 0. Get Access Key
- 1. Installation
- 2. Configuration
  - 2.1 with CLI
  - 2.2 Environment Variables
- 3. Tutorial 1: Organize metadata and Create a dataset
  - Step 0. prepare sample dataset
  - Step 1. create a new project
  - Step 2. import data files
  - Step 3. import external metadata files
  - Step 4. filter and export dataset with CLI
  - Step 5. filter and export dataset with Python SDK
- 4. API Reference
  - 4.1 Command Reference
  - 4.2 Python Reference

Product Concept

Adansons Base is a data management tool that organizes metadata of unstructured data and creates and organizes datasets. It makes dataset creation more effective, helps find essential insights from training results, and improves AI performance.

More detail:

- Medium: https://medium.com/@KenichiHiguchi/3-things-you-need-to-deal-with-in-data-management-to-create-best-dataset-781177507fc2
- Product Page: https://adansons.wraptas.site

0. Get Access Key

Type your email into the form below to join our Slack and get the access key.

Invitation Form: https://share.hsforms.com/1KG8Hp2kwSjC6fjVwwlklZA8moen

1. Installation

Adansons Base contains a Command Line Interface (CLI) and a Python SDK, and you can install both with the pip command.

```shell
pip install git+https://github.com/adansons/base
```

Note: if you want to use the CLI in any directory, you have to install it with the Python globally installed on your computer.

2. Configuration

2.1 with CLI

When you run any Base CLI command for the first time, Base will ask for the access key provided on our Slack. Base will then verify that the specified access key is correct. If you don't have an access key, please see "0. Get Access Key".

This command will show you what projects you have:

```shell
base list
```

Output:

```
Welcome to Adansons Base!! Let's start with your access key provided on our slack.

Please register your access_key: xxxxxxxxxx

Successfully configured as [email protected]

======== projects ========
```

2.2 Environment Variables

If you don't want to configure interactively, you can use environment variables for configuration. BASE_USER_ID is used for the identification of users; this is the email address you submitted via our form.

```shell
export BASE_ACCESS_KEY=xxxxxxxxxx
export BASE_USER_ID=xxxx@yyyy.com
```

3. Tutorial 1: Organize metadata and Create a dataset

Let's start the Base tutorial with the mnist dataset.

Step 0. prepare sample dataset

First, install the dependencies for downloading the dataset.

```shell
pip install pypng
```

Then, download a script for mnist from our Base repository:

```shell
curl -sSL https://raw.githubusercontent.com/adansons/base/main/download_mnist.py > download_mnist.py
```

Run the download-mnist script. You can specify any folder for downloading as the last argument (default: "~/dataset/mnist"). If you run this command on Windows, please replace it with a Windows path like "C:\dataset\mnist".

```shell
python3 ./download_mnist.py ~/dataset/mnist
```

Note: Base can link the data files wherever you put them on the local computer. So if you already downloaded the mnist dataset, you can use it.

After downloading, you can see the data files in ~/dataset/mnist:

```
~
└── dataset
    └── mnist
        ├── train
        │   ├── 0
        │   │   ├── 1.png
        │   │   ├── ...
        │   │   └── 59987.png
        │   ├── ...
        │   └── 9
        └── test
            ├── 0
            └── ...
```

Step 1. create a new project

Create the mnist project with the base new command.

```shell
base new mnist
```

Output:

```
Your Project UID
----------------
abcdefghij0123456789

save Project UID in the local file (~/.base/projects)
```

Base will issue a Project Unique ID and automatically save it in a local file.

Step 2. import data files

After step 0, you have many png image files in the "~/dataset/mnist" directory. Let's upload the metadata related to their paths into the mnist project with the base import command.

```shell
base import mnist --directory ~/dataset/mnist --extension png --parse "{dataType}/{label}/{id}.png"
```

Note: if you changed the download folder, please replace "~/dataset/mnist" in the above command.

Output:

```
Check datafiles...
found 70000 files with png extension.
Success!
```

Step 3. import external metadata files

If you have external metadata files, you can integrate them into the existing project database with the --external-file option. This time, we use wrongImagesInMNISTTestset.csv, published on GitHub by youkaichao: https://github.com/youkaichao/mnist-wrong-test

This is extra metadata that corrects wrong labels in the mnist test dataset. You can evaluate your model more strictly and correctly by using this extra metadata with Base.

Download the external CSV:

```shell
curl -SL https://raw.githubusercontent.com/youkaichao/mnist-wrong-test/master/wrongImagesInMNISTTestset.csv > ~/Downloads/wrongImagesInMNISTTestset.csv
```

```shell
base import mnist --external-file --path ~/Downloads/wrongImagesInMNISTTestset.csv -a dataType:test
```

Output:

```
1 tables found!
now estimating the rule for table joining...

1 table joining rule was estimated!
Below table joining rule will be applied...

Rule no.1
    key 'index'         -> connected to 'id' key on exist table
    key 'originalLabel' -> connected to 'label' key on exist table
    key 'correction'    -> newly added

1 tables will be applied

Table 1 sample record:
    {'index': 8, 'originalLabel': 5, 'correction': '-1'}

Do you want to perform table join?
Base will join tables with that rule described above.

'y' will be accepted to approve.

Enter a value: y

Success!
```

Step 4. filter and export dataset with CLI

Now, we are ready to create a dataset. Let's pick out a subset of the data files, those whose label is 1, 2, or 3 in the training split, from the project mnist with the base search <project> command.

You can use the --conditions <value-only-search> option as a magical search filter and the --query <key-value-pair-search> option as an advanced filter. Note that the --conditions option can only use values for searching. Be careful: you may get a very large output on your console without the -s, --summary option.

The --query option's grammar is:

--query {KeyName} {Operator} {Values}

- add 1 space between each section
- don't use spaces anywhere else

You can use the operators below in the query option.

```
[operators]
==     : equal
!=     : not equal
>=     : greater than or equal
<=     : less than or equal
>      : greater
<      : less
in     : in list of Values
not in : not in list of Values
```

(Check the search docs for more information.)

```shell
base search mnist --conditions "train" --query "label in ['1','2','3']"
```

Note: in the query option, when you want to use an in or not in query, you have to specify each component as a string in a list without spaces, like "['1','2','3']".

Output:

```
18831 files
========
'/home/xxxx/dataset/mnist/train/1/42485.png'
...
```

Note: if you specify no conditions or query, Base will return all the data files. If you want to use an 'OR search' with the --query option, please use our Python SDK.

Step 5. filter and export dataset with Python SDK

In a Python script, you can filter and export datasets easily and simply with the Project class and the Files class (see the SDK docs).

(If you don't have the packages below, please install them using pip.)

```shell
pip install numpy pillow torch torchvision
```

```python
from base import Project, Dataset
import numpy as np
from PIL import Image

# export the dataset as you want to use it
project = Project("mnist")
files = project.files(conditions="train", query=["label in ['1','2','3']"])

print(files[0])
# this returns a path-like `File` object
# -> '/home/xxxx/dataset/mnist/0/12909.png'
print(files[0].label)
# this returns the value of the attribute 'label' of the first `File` object
# -> '0'

# function to load an image from a path.
# this is necessary if you want to use images in your dataset,
# because the base Dataset class doesn't convert paths to images
def preprocess_func(path):
    image = Image.open(path)
    image = image.resize((28, 28))
    image = np.array(image)
    return image

dataset = Dataset(files, target_key="label", transform=preprocess_func)

# you can also use dataset objects like this
for data, label in dataset:
    # data: an image-data ndarray
    # label: the label of the image, like 0
    pass

x_train, x_test, y_train, y_test = dataset.train_test_split(split_rate=0.2)

# or use it with torch
import torch
import torchvision.transforms as transforms
from PIL import Image

def preprocess_func(path):
    image = transforms.ToTensor()(transforms.Resize((28, 28))(Image.open(path)))
    return image

dataset = Dataset(files, target_key="label", transform=preprocess_func)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
```

Finally, let's try one of the most characteristic use cases of Adansons Base. In the external file you imported in step 3, some mnist test data files are annotated with "-1" in the correction column. This means that those files are difficult to classify even for a human, so you should exclude them from your dataset to evaluate your AI models more properly.

```python
# you can exclude files which have "-1" in "correction" with the code below
eval_files = project.files(conditions="test", query=["correction != -1"])

print(len(eval_files))
# this returns the number of files matching the requested conditions or query
# -> 9963

eval_dataset = Dataset(eval_files, target_key="label", transform=preprocess_func)
```

4. API Reference

4.1 Command Reference: Command Reference

4.2 Python Reference: Python Reference
adan-tensorflow
No description available on PyPI.
adao
About

The ADAO module provides data assimilation and optimization features in a Python or SALOME context (see http://www.salome-platform.org/). Briefly stated, Data Assimilation is a methodological framework to compute the optimal estimate of the inaccessible true value of a system state, possibly over time. It uses information coming from experimental measurements or observations, and from numerical a priori models, including information about their errors. Parts of the framework are also known under the names of calibration, adjustment, state estimation, parameter estimation, parameter adjustment, inverse problems, Bayesian estimation, optimal interpolation, mathematical regularization, meta-heuristics for optimization, model reduction, data smoothing, etc. More details can be found in the full ADAO documentation (see the dedicated User Documentation section at https://www.salome-platform.org/).

Only the use of the ADAO text programming interface (API/TUI) is introduced here. This interface makes it possible to create a calculation object in a way similar to the case building obtained through the graphical interface (GUI). When one wants to elaborate the TUI calculation case directly, it is recommended to make extensive use of the whole ADAO module documentation, and to go back if necessary to the graphical interface (GUI), to get all the elements allowing one to correctly set the commands.

A simple setup example of an ADAO TUI calculation case

To introduce the TUI interface, let's begin with a simple but complete example of an ADAO calculation case. All the data are explicitly defined inside the script in order to make the reading easier. The whole set of commands is the following:

```python
from numpy import array, matrix
from adao import adaoBuilder

case = adaoBuilder.New()
case.set( 'AlgorithmParameters', Algorithm = '3DVAR' )
case.set( 'Background',          Vector = [0, 1, 2] )
case.set( 'BackgroundError',     ScalarSparseMatrix = 1.0 )
case.set( 'Observation',         Vector = array([0.5, 1.5, 2.5]) )
case.set( 'ObservationError',    DiagonalSparseMatrix = '1 1 1' )
case.set( 'ObservationOperator', Matrix = '1 0 0;0 2 0;0 0 3' )
case.set( 'Observer',            Variable = "Analysis", Template = "ValuePrinter" )
case.execute()
```

The result of running these commands in SALOME (either as a SALOME "shell" command, in the Python command window of the interface, or through the script execution entry of the menu) is the following:

```
Analysis [ 0.25000264  0.79999797  0.94999939]
```

More advanced examples of ADAO TUI calculation cases

Real cases involve observations loaded from files, operators explicitly defined as generic functions including physical simulators, and time-dependent information in order to deal with forecast analysis in addition to calibration or re-analysis. More details can be found in the full ADAO documentation (see the documentation on the reference site https://www.salome-platform.org/, with https://docs.salome-platform.org/latest/gui/ADAO/en/index.html for English or https://docs.salome-platform.org/latest/gui/ADAO/fr/index.html for French, both being equivalent).

License and requirements

The license for this module is the GNU Lesser General Public License (Lesser GPL), as stated here and in the source files:

```
<ADAO, a module for Data Assimilation and Optimization>
Copyright (C) 2008-2023 EDF R&D

This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the Free
Software Foundation; either version 2.1 of the License, or (at your option)
any later version.

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
details.

You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

See http://www.salome-platform.org/
```

In addition, it is requested that any publication or presentation describing work using this module, or any commercial or non-commercial product using it, cite at least one of the references below with the current year added:

- ADAO, a module for Data Assimilation and Optimization, http://www.salome-platform.org/
- ADAO, un module pour l'Assimilation de Données et l'Aide à l'Optimisation, http://www.salome-platform.org/
- SALOME The Open Source Integration Platform for Numerical Simulation, http://www.salome-platform.org/

The documentation of the module is also covered by the license and the citation requirement.
adapapi
Module adapapi

Classes

Appen(api_key)

Methods

`bonus_contributor(self, job_id, worker_id, amount_in_cents)`
:   Max at one time is $20.00, or 2000 cents.
    Args:
        job_id (int): ADAP Job ID
        worker_id (int): ADAP Contributor ID
        amount_in_cents (int): amount in cents. The max amount to bonus per API request is $20.00. The function will split your amount into max $20.00 chunks, plus a remainder, if more than $20.00 is to be bonused.

`delete_unit(self, job_id, unit_id)`
:   Units cannot be deleted from a running or paused job.
    Args:
        job_id (int): ADAP Job ID
        unit_id (int): ADAP Unit ID

`deprecate_job(self, job_id)`
:   Deprecates an ADAP job.
    Args:
        job_id (int): ADAP Job ID

`download_jobid(self, job_id, reporttype, to_csv=False)`
:   Recommended to regenerate the report first. Downloads an ADAP report to a DataFrame object or a CSV file.
    Args:
        job_id (int): ADAP Job ID
        reporttype (str): ADAP report type -- full, aggregated, json, gold_report, workset, source
        to_csv (bool): True if the report should be saved to CSV, False if the report should be output as a DataFrame object
    Returns:
        Pandas dataframe: dataframe of the report

`download_jobid_list(self, list_job_ids, reporttype, outfile_concat=False)`
:   Downloads reports from a list of job IDs.
    Args:
        list_job_ids (list): list of job IDs
        reporttype (str): ADAP report type -- full, aggregated, json, gold_report, workset, source
        outfile_concat (bool): option to concatenate all of the downloaded reports into one Pandas dataframe. Default is False.
    Returns:
        Pandas dataframe: depending on the outfile_concat flag, returns a concatenated dataframe of all listed job IDs

`duplicate_job(self, job_id, include_uploaded_rows=False, include_tq=False)`
:   Duplicates an ADAP job.
    Args:
        job_id (int): ADAP Job ID
        include_uploaded_rows (bool): flag to include previously uploaded rows; includes test questions if present. Default is False.
        include_tq (bool): flag to include test questions only
    Returns:
        int: new ADAP Job ID

`get_all_jobs(self)`
:   Retrieves a list of all jobs. More information: https://developer.appen.com/#tag/Account-Info/paths/~1jobs.json/get
    Returns:
        list: list of all jobs

`filter_jobs_by_tag(self, tag)`
:   Retrieves a list of job IDs with the associated tag. More information: https://developer.appen.com/#tag/Account-Info/paths/~1jobs.json/get
    Args:
        tag (str): for multiple tags, delimit by comma, e.g. 'tag1, tag2'
    Returns:
        list: list of job IDs with the tag

`filter_jobs_by_title(self, title)`
:   Retrieves a list of job IDs with the associated title. More information: https://developer.appen.com/#tag/Account-Info/paths/~1jobs.json/get
    Args:
        title (str): keywords to search for in job titles
    Returns:
        list: list of jobs with the title

`filter_jobs_by_copied_from(self, copied_from)`
:   Retrieves a list of job IDs with the associated copied_from job ID. More information: https://developer.appen.com/#tag/Account-Info/paths/~1jobs.json/get
    Args:
        copied_from (int): filter jobs by the job ID they were copied from
    Returns:
        list: list of jobs copied from the copied_from ID

`get_unit_state(self, job_id, unit_id)`
:   Retrieves the current unit state within a job.
    Args:
        job_id (int): ADAP Job ID
        unit_id (int): ADAP Unit ID
    Returns:
        dict: dictionary containing _unit_id and _unit_state

`get_unit_state_row(self, job_id, row)`
:   Retrieves the current unit state within a job. To be used when row data needs to be returned.
    Args:
        job_id (int): ADAP Job ID
        row (Pandas Series or dictionary): row from an ADAP report
    Returns:
        dict: dictionary containing all data within the row and _unit_state

`internal_launch(self, job_id, units_to_launch)`
:   Launches a job internally.
    Args:
        job_id (int): ADAP Job ID
        units_to_launch (int or str): number of units to launch, or "all" to launch all units
    Returns:
        int: number of units launched

`job_json(self, job_id)`
:   Gets the job JSON.
    Args:
        job_id (int): ADAP Job ID
    Returns:
        dict: job JSON

`job_summary(self, job_id)`
:   Gets job stats.
    Args:
        job_id (int): ADAP Job ID
    Returns:
        dict: golden_units, all_units, ordered_units, completed_units_estimate, needed_judgments, all_judgments, tainted_judgments, completed_gold_estimate, completed_non_gold_estimate

`regenerate_jobid(self, job_id, reporttype)`
:   Regenerates an ADAP job report.
    Args:
        job_id (int or list): ADAP Job ID. If a list of job IDs is provided, they will be regenerated sequentially.
        reporttype (str): ADAP report type -- full, aggregated, json, gold_report, workset, source

`split_column(self, job_id, columnname, character)`
:   Corresponds to the "Split Column" button in the platform UI. This operation splits the contents of a column on a certain character, transforming strings into arrays of strings.
    Args:
        job_id (int): ADAP Job ID
        columnname (str): column name
        character (str): delimiting character

`tag_add(self, job_id, tag)`
:   Adds new tags. https://developer.appen.com/#tag/Manage-Job-Settings/paths/~1jobs~1{job_id}~1tags/post
    Args:
        job_id (int): ADAP Job ID
        tag (str): for multiple tags, delimit by comma, e.g. 'tag1, tag2'

`tag_get(self, job_id)`
:   Gets job tags. https://developer.appen.com/#tag/Manage-Job-Settings/paths/~1jobs~1{job_id}~1tags/post
    Args:
        job_id (int): ADAP Job ID
    Returns:
        list: list of tags attached to the ADAP job

`tag_replace(self, job_id, tag)`
:   Replaces existing tags with new tags. https://developer.appen.com/#tag/Manage-Job-Settings/paths/~1jobs~1{job_id}~1tags/post
    Args:
        job_id (int): ADAP Job ID
        tag (str): for multiple tags, delimit by comma, e.g. 'tag1, tag2'

`unit_json(self, job_id, unit_id)`
:   Gets the unit JSON.
    Args:
        job_id (int): ADAP Job ID
        unit_id (int): ADAP Unit ID
    Returns:
        dict: unit JSON

`update_job_json(self, job_id, indict)`
:   Updates job settings.
    Args:
        job_id (int): ADAP Job ID
        indict (dict): dictionary of items to update within the job JSON

`update_unit_state(self, job_id, unit_id, state)`
:   Updates the unit state.
    Args:
        job_id (int): ADAP Job ID
        unit_id (int): ADAP Unit ID
        state (str): one of the following -- new, golden, finalized, canceled, deleted

`upload(self, data_to_upload, job_id=None)`
:   Uploads a CSV (specify path), a list of dictionaries, a single dictionary, or a DataFrame. If no job ID is specified, a new job will be created.
    Args:
        data_to_upload (pd.DataFrame, str, list, or dict): DataFrame object, path to a CSV file, list of dictionaries, or a single dictionary
        job_id (int): ADAP Job ID. If None, a new job will be created.
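The $20.00 cap described for bonus_contributor implies a simple chunking rule. Below is a small, self-contained sketch of that splitting logic, just to illustrate how an amount is broken into max-2000-cent requests; the helper name split_bonus is hypothetical and not part of adapapi:

```python
MAX_CENTS = 2000  # $20.00, the per-request bonus cap described above

def split_bonus(amount_in_cents):
    """Split a bonus amount into chunks of at most MAX_CENTS cents.

    Hypothetical helper illustrating the chunking behaviour described in the
    bonus_contributor docstring; not part of the adapapi package itself.
    """
    full_chunks, remainder = divmod(amount_in_cents, MAX_CENTS)
    chunks = [MAX_CENTS] * full_chunks
    if remainder:
        chunks.append(remainder)
    return chunks

print(split_bonus(4500))  # -> [2000, 2000, 500]
print(split_bonus(2000))  # -> [2000]
```

Each chunk would then correspond to one API request, with the remainder sent last.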
adapay
Adapay Python SDK.

Installation

If already downloaded locally:

```shell
pip install local_sdk_path
```

Download remotely and install:

```shell
pip install adapay
```

API documentation: https://docs.adapay.tech/api/index.html

SDK documentation: https://docs.adapay.tech/sdk/pythonsdkaccess.html
adapay-core
adapay-core, utility classes for the adapay SDK.

Installation

Download remotely and install:

```shell
pip install adapay-core
```

adapay official website: https://www.adapay.tech/
adapay-merchant
Adapay-Merchant Python SDK.

Installation

If already downloaded locally:

```shell
pip install local_sdk_path
```

Download remotely and install:

```shell
pip install adapay-merchant
```

Documentation

For documentation, please refer to the offline files provided officially.
adap-exercice
Exercice AdaptatifCe package permet la résolution d'un exercice. Celui-ci permet aussi de modifier les paramètres de l'exercice pour laisser plus de liberté à l'utilisateur. Les réponses données par le fichier s'adapteront alors aux paramètres.L'exécution du package permet la création d'un fichier PDF à l'aide du langage LaTeX. Le fichier TeX est lui aussi récupérable pour une modification plus précise du fichier.ExerciceA monopoly operates for two periods and produces a homogenous good whose quality is either high or low (the monopoly cannot choose the quality of the good).In the first period, the quality of the good is unobserved by consumers and their demand is $q_1$ = $s_1$ − $p_1$, where $s_1$ is the perceived quality of the good and p1 is the price in period 1.In the second period, the quality of the good becomes common knowledge and the demand for the good is $q_2$ = 4 − $p_2$ if the quality is high and $q_2$ = 2 − $p_2$ if the quality is low, where $p_2$ is the price in the second period.The per unit cost of production is 1 in the first period, and 1 − $\gamma q_1$ in the second period, where $\gamma$ is a positive constant that reflects a learning-by-doing effect : the more the firm produces in period 1, the lower is its per unit cost in period 2. Assume that $\gamma$ = $\frac{1}{4}$ if the monopoly produces a high quality product and $\gamma$ = $\frac{1}{2}$ if the monopoly produces a low quality product. For simplicity, assume that there is no discounting.Questions :Solve the monopoly’s problem in period 2 and compute the monopoly’s profit at the optimum, taking $q_1$ as given (recall that $q_1$ determines the per-unit cost of production in period 2).Write out the sum of the monopoly profits in periods 1 and 2 as a function of $p_1$, given the monopoly type, assuming that consumers believe that (1) $s_1$ = 4 and (2) $s_1$ = 2.Now suppose that in period 1 the monopoly chooses a price, $p_1$, and a level of uninformative advertising, A. 
Solve for the strategy of a low type monopoly in a separating equilibrium.Let A($p_1$) denote, for each period 1 price $p_1$, the minimal amount of advertising required by a high quality monopoly in order to deter a low quality monopoly from mimicking it. Given your answers to parts (2) and (3), compute $A(p_1)$ and show it in a figure. Moreover, compute the prices at which $A(p_1)$ crosses the horizontal axis.Explain the meaning of these crossing points.Solve for the price that a high quality monopoly will charge in a Pareto undominated separating equilibrium (one where a high quality monopoly advertises just enough to induce separation, or more precisely, one where consumers believe that the monopoly must be of a high quality if they observe a pair $(p_1,A)$ which is a weakly dominated strategy for a low quality monopoly) and compute the amount of advertising that it will choose.Compare your answer in part (5) to the optimal strategy of a high quality monopoly in the full information case (the case where the quality is common knowledge even in period 1). Does the monopoly underprice or overprice in equilibrium, relative to the full information case? 
Explain why the price distortion could serve as a signal for quality in this particular case.Module OverviewInstallationThere are two ways to install this module:Install the package from PyPI withpip:py-mpipinstalladap_exerciceInstall the module from the directory downloaded from GitHub (using your own path to where the folder is located):py-mpipinstall"C:/Users/Username/Downloads/adap_exercice"UsageThis module provides two functions the user can run from the command line:The first produces the original version of the exercise's worked solution, with all computations carried out symbolically thanks to thesympymodule.py-madap_exerciceoriginalThe second lets the user modify the parameters of their choice through a prompt:The high and low quality values $s_1$ and $s_2$.The values of the $\gamma$s.Finally, the command:py-madap_exercice--helpprints information about the package, along with auto-completion and descriptions of the commands.FilesThe PDF file will normally open automatically in a window. It is nevertheless possible to retrieve both files from the folderC:/Users/Username.
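For question (1) of the exercise, the period-2 problem has a simple closed form: with demand $q_2$ = $s_2$ − $p_2$ and unit cost c = 1 − $\gamma q_1$, the profit ($p_2$ − c)($s_2$ − $p_2$) is maximized at $p_2^*$ = ($s_2$ + c)/2. The following plain-Python check of this result is an illustrative sketch only, not code from the adap_exercice package (which works symbolically with sympy):

```python
# Closed-form solution of the period-2 monopoly problem from the exercise.
# Demand is q2 = s2 - p2 and the per-unit cost is c = 1 - gamma * q1, so the
# profit (p2 - c) * (s2 - p2) is maximized at p2* = (s2 + c) / 2.

def period2_optimum(s2, gamma, q1):
    """Return (optimal price, optimal quantity, profit) in period 2."""
    c = 1 - gamma * q1          # per-unit cost after learning-by-doing
    p2 = (s2 + c) / 2           # first-order condition of (p2 - c)(s2 - p2)
    q2 = s2 - p2
    profit = (p2 - c) * q2
    return p2, q2, profit

# High-quality monopoly (s2 = 4, gamma = 1/4) with no period-1 output:
print(period2_optimum(4, 0.25, 0))   # (2.5, 1.5, 2.25)
```

With period-1 output the learning-by-doing effect lowers the cost and raises the period-2 profit, e.g. q1 = 2 gives c = 0.5 and a profit of 1.75².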
adapi-sdk
No description provided (generated by Swagger Codegen:https://github.com/swagger-api/swagger-codegen)
adApkTools
An ad-channel APK generator for Chinese Android ad packages.
adapt
ADAPTAwesomeDomainAdaptationPythonToolboxADAPT is an open source library providing numerous tools to perform Transfer Learning and Domain Adaptation.The purpose of the ADAPT library is to facilitate the access to transfer learning algorithms for a large public, including industrial players. ADAPT is specifically designed forScikit-learnandTensorflowusers with a "user-friendly" approach. All objects in ADAPT implement thefit,predictandscoremethods like any scikit-learn object. A very detailed documentation with several examples is provided::arrow_right:DocumentationSample bias correctionModel-based TransferDeep Domain AdaptationMulti-Fidelity TransferInstallation and UsageThis package is available onPypiand can be installed with the following command line:pip install adaptThe following dependencies are required and will be installed with the library:numpyscipytensorflow(>= 2.0)scikit-learncvxoptscikerasIf for some reason, these packages failed to install, you can do it manually with:pip install numpy scipy tensorflow scikit-learn cvxopt scikerasFinally import the module in your python scripts with:importadaptA simple example of usage is given in theQuick-Startbelow.Stable environments [Updated Dec 2023]ADAPT sometimes encounters incompatibility issue after a new Tensorflow release. In this case, you can use the following environment, which has passed all tests. ADAPT should work well on it:OS:ubuntu-22.04, windows-2022, macos-12Python versions:3.8 to 3.11pip install numpy==1.26.2 scipy==1.11.4 tensorflow==2.15.0 scikit-learn==1.3.2 cvxopt==1.3.2 scikeras==0.12.0ADAPT GuidelineThe transfer learning methods implemented in ADAPT can be seen as scikit-learn "Meta-estimators" or tensorflow "Custom Model":Adapt EstimatorAdaptEstimator(estimator="""A scikit-learn estimator(like Ridge(alpha=1.) 
for example)or a Tensorflow Model""",Xt="The target input features",yt="The target output labels (if any)",**params="Hyper-parameters of the AdaptEstimator")Deep Adapt EstimatorDeepAdaptEstimator(encoder="A Tensorflow Model (if required)",task="A Tensorflow Model (if required)",discriminator="A Tensorflow Model (if required)",Xt="The target input features",yt="The target output labels (if any)",**params="""Hyper-parameters of the DeepAdaptEstimator andthe compile and fit params (optimizer, epochs...)""")Scikit-learn Meta-EstimatorSklearnMetaEstimator(base_estimator="""A scikit-learn estimator(like Ridge(alpha=1.) for example)""",**params="Hyper-parameters of the SklearnMetaEstimator")As you can see, the main difference between ADAPT models and scikit-learn and tensorflow objects is the two argumentsXt, ytwhich refer to the target data. Indeed, in classical machine learning, one assumes that the fitted model is applied on data distributed according to the training distribution. This is why, in this setting, one performs cross-validation and splits uniformly the training set to evaluate a model.In the transfer learning framework, however, one assumes that the target data (on which the model will be used at the end) are not distributed like the source training data. Moreover, one assumes that the target distribution can be estimated and compared to the training distribution. Either because a small sample of labeled target dataXt, ytis available or because a large sample of unlabeled target dataXtis at one's disposal.Thus, the transfer learning models from the ADAPT library can be seen as machine learning models that are fitted with a specific target in mind. This target is different but somewhat related to the training data. This is generally achieved by a transformation of the input features (seefeature-based transfer) or by importance weighting (seeinstance-based transfer). 
In some cases, the training data are no more available but one aims at fine-tuning a pre-trained source model on a new target dataset (seeparameter-based transfer).Navigate into ADAPTThe ADAPT library proposes numerous transfer algorithms and it can be hard to know which algorithm is best suited for a particular problem. If you do not know which algorithm to choose, thisflowchartmay help you:Quick StartHere is a simple usage example of the ADAPT library. This is a simulation of a 1D sample bias problem with binary classification task. The source input data are distributed according to a Gaussian distribution centered in -1 with standard deviation of 2. The target data are drawn from Gaussian distribution centered in 1 with standard deviation of 2. The output labels are equal to 1 in the interval [-1, 1] and 0 elsewhere. We apply the transfer methodKMMwhich is an unsupervised instance-based algorithm.# Import standard librariesimportnumpyasnpfromsklearn.linear_modelimportLogisticRegression# Import KMM method form adapt.instance_based modulefromadapt.instance_basedimportKMMnp.random.seed(0)# Create source dataset (Xs ~ N(-1, 2))# ys = 1 for ys in [-1, 1] else, ys = 0Xs=np.random.randn(1000,1)*2-1ys=(Xs[:,0]>-1.)&(Xs[:,0]<1.)# Create target dataset (Xt ~ N(1, 2)), yt ~ ysXt=np.random.randn(1000,1)*2+1yt=(Xt[:,0]>-1.)&(Xt[:,0]<1.)# Instantiate and fit a source only model for comparisonsrc_only=LogisticRegression(penalty="none")src_only.fit(Xs,ys)# Instantiate a KMM model : estimator and target input# data Xt are given as parameters with the kernel parametersadapt_model=KMM(estimator=LogisticRegression(penalty="none"),Xt=Xt,kernel="rbf",# Gaussian kernelgamma=1.,# Bandwidth of the kernelverbose=0,random_state=0)# Fit the model.adapt_model.fit(Xs,ys);# Get the score on target dataadapt_model.score(Xt,yt)>>>0.574Quick-Start Plotting Results.The dotted and dashed lines are respectively the class separation of the "source only" and KMM models. 
Note that the predicted positive class is on the right of the dotted line for the "source only" model but on the left of the dashed line for KMM. (The code for plotting the Figure is availablehere)ContentsADAPT package is divided in three sub-modules containing the following domain adaptation methods:Feature-based methodsFA(Frustratingly Easy Domain Adaptation)[paper]SA(Subspace Alignment)[paper]fMMD(feature Selection with MMD)[paper]DANN(Discriminative Adversarial Neural Network)[paper]ADDA(Adversarial Discriminative Domain Adaptation)[paper]CORAL(CORrelation ALignment)[paper]DeepCORAL(Deep CORrelation ALignment)[paper]MCD(Maximum Classifier Discrepancy)[paper]MDD(Margin Disparity Discrepancy)[paper]WDGRL(Wasserstein Distance Guided Representation Learning)[paper]CDAN(Conditional Adversarial Domain Adaptation)[paper]CCSA(Classification and Contrastive Semantic Alignment)[paper]Instance-based methodsLDM(Linear Discrepancy Minimization)[paper]KMM(Kernel Mean Matching)[paper]KLIEP(Kullback–Leibler Importance Estimation Procedure)[paper]TrAdaBoost(Transfer AdaBoost)[paper]TrAdaBoostR2(Transfer AdaBoost for Regression)[paper]TwoStageTrAdaBoostR2(Two Stage Transfer AdaBoost for Regression)[paper]NearestNeighborsWeighting(Nearest Neighbors Weighting)[paper]WANN(Weighting Adversarial Neural Network)[paper]Parameter-based methodsRegularTransferLR(Regular Transfer with Linear Regression)[paper]RegularTransferLC(Regular Transfer with Linear Classification)[paper]RegularTransferNN(Regular Transfer with Neural Network)[paper]FineTuning(Fine-Tuning)[paper]TransferTreeClassifier(Transfer Tree Classifier)[paper]TransferTreeForest(Transfer Tree Forest)[paper]ReferenceIf you use this library in your research, please cite ADAPT using the following reference:https://arxiv.org/pdf/2107.03049.pdf@article{de2021adapt, title={ADAPT: Awesome Domain Adaptation Python Toolbox}, author={de Mathelin, Antoine and Deheeger, Fran{\c{c}}ois and Richard, Guillaume and Mougeot, Mathilde and 
Vayatis, Nicolas}, journal={arXiv preprint arXiv:2107.03049}, year={2021} }AcknowledgementThis work has been funded by Michelin and the Industrial Data Analytics and Machine Learning chair from ENS Paris-Saclay, Borelli center.
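As an aside on the instance-based setting used in the Quick Start: methods like KMM rest on importance weighting, i.e., reweighting each source sample x by the density ratio w(x) = p_target(x) / p_source(x). For the Quick Start's distributions (source ~ N(-1, 2), target ~ N(1, 2)) this ratio has the closed form exp(x/2). The sketch below illustrates the idea only; it is not part of the adapt API (KMM estimates the weights from samples by matching kernel means, without assuming the densities are known):

```python
import math

# Importance weighting: reweight each source sample x by
# w(x) = p_target(x) / p_source(x). With source ~ N(-1, 2) and
# target ~ N(1, 2), the ratio simplifies to exp(x / 2).

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def importance_weight(x):
    return normal_pdf(x, 1, 2) / normal_pdf(x, -1, 2)

for x in (-2.0, 0.0, 2.0):
    # matches the closed form exp(x / 2)
    print(x, importance_weight(x), math.exp(x / 2))
```

Source samples falling where the target density is higher (here, large x) get weights above 1, which is what shifts the fitted classifier toward the target distribution.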
adapta
AdaptaThis project aims to provide the tools needed for the everyday activities of data scientists and engineers:Connectors for various cloud APIsSecure secret handlers for various remote storagesLogging frameworkMetrics reporting frameworkStorage drivers for various clouds and storage typesDelta LakeThis module provides basic Delta Lake operations without a Spark session, based on thedelta-rsproject.Please refer to themoduledocumentation for examples.Secret StoragesPlease refer to themoduledocumentation for examples.NoSql (Astra DB)Please refer to themoduledocumentation for examples.
adaptable
No description available on PyPI.
adaptation
No description available on PyPI.
adaptationism
### Last Updated: 7/1/2018 # Adaptationism is defined as…_"The belief or assumption, now generally held, that each feature of an organism is the result of evolutionary adaptation for a particular function."_Our use of language is no different. We are constantly evolving our own group-specific languages (as well as learning the languages of others - both groups and individuals), with the catalyst for development rooted in the functions that they serve.Like any scholarly endeavor, the Wikipedia article that I pulled this information from defines the traits of an adaptation to include:The trait is a variation of an earlier form.The trait is heritable through the transmission of genes.The trait enhances reproductive success.The last metaphorical comparison that I'll make to this idea is that, like all of the above, language (and features of language):Can be derived from variations of a broader parent language,Is heritable through the groups, communities, and experiences we partake in, and…Enhances the success of achieving a descriptive and/or actionable outcome through language.## So what is this package _actually_ for…?"Adaptationism" is meant to help answer three questions about how a group:Uses language to describe features of a journey or experience,Differs in its use of language from another group, a broader (parent) group, or a subset of the same group, and finally,Changes its use of language following some event.The gap that I hope to fill through the development of this package is not just a set of tools, but also a framework for interpretation and action.## What is your roadmap?My roadmap for this package is structured as a pyramid (3 levels), flowing from:(level 1) descriptive features of words and phrases, to…(level 2) the analysis and description of meta-language-related features (i.e. POS, polarity, patterns of POS, named entities… etc.), and finally…(level 3) the descriptive statistics of text (word length, statement length, corpus length, average length… etc.).While I primarily spend my time analyzing comments… the types of [corpora](https://wiki.apache.org/spamassassin/PluralOfCorpus) this package can analyze include:CommentsChat / Text ConversationsBooksSpeechesFor a full list of different kinds of corpora, [check this out](https://weblearn.ox.ac.uk/access/content/group/3a217dfd-a8cd-4034-8564-c27a58f89b9b/Handouts/CorpusTypes.pdf).## How did I come up with this idea?This package has a few origins… mostly stemming from unanswered Stack Overflow posts on NLTK. As I continue to add to my own roadmap, I will add the sets of SO posts that I draw inspiration from.Current list of SO questions:[Generating N-Gram Markov Chain Transition Table](https://stackoverflow.com/questions/23374694/n-gram-markov-chain-transition-table)
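The Stack Overflow post linked above concerns building an n-gram Markov-chain transition table. A minimal sketch of that kind of computation in plain Python (illustrative only, and not taken from the adaptationism package itself):

```python
from collections import Counter, defaultdict

# Build a bigram Markov-chain transition table from a token sequence:
# each word maps to the relative frequencies of the words that follow it.

def transition_table(tokens):
    counts = defaultdict(Counter)
    for current, following in zip(tokens, tokens[1:]):
        counts[current][following] += 1
    return {
        word: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
        for word, followers in counts.items()
    }

tokens = "the cat sat on the mat".split()
table = transition_table(tokens)
print(table["the"])   # {'cat': 0.5, 'mat': 0.5}
```

The same structure extends to higher-order n-grams by keying on tuples of the previous n−1 tokens instead of a single word.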
adaptavist
adaptavistThis Python package provides functionality for Jira Test Management (tm4j).Table of ContentsInstallationGetting StartedExamples and FeaturesGeneral WorkflowInstallationTo install adaptavist, you can use the following command(s):python-mpipinstalladaptavistTo uninstall adaptavist, you can use the following command:python-mpipuninstalladaptavistGetting Startedadaptavist uses the REST API of Adaptavist Test Management for Jira Server (seehttps://docs.adaptavist.io/tm4j/server/api/) and Jira's internal REST API, both with HTTP Basic authentication.In order to access Adaptavist/Jira, valid credentials are necessary. In addition,getpass.getuser().lower()must be a known Jira user.Examples and FeaturesGeneral WorkflowfromadaptavistimportAdaptavist# create a new instanceatm=Adaptavist(jira_server,jira_username,jira_password)# create a test plantest_plan_key=atm.create_test_plan(project_key="TEST",test_plan_name="my test plan")# create a test cycle (formerly test run) with a set of test cases and add it to test plantest_run_key=atm.create_test_run(project_key="TEST",test_run_name="my test cycle",test_cases=["TEST-T1"],test_plan_key=test_plan_key)# as test cycle creation also creates/initializes test results, we can just edit theseatm.edit_test_script_status(test_run_key=test_run_key,test_case_key="TEST-T1",step=1,status="Pass")# (optional) edit/overwrite the overall execution status of the test case (by default this is done automatically when editing status of a single step)atm.edit_test_result_status(test_run_key=test_run_key,test_case_key="TEST-T1",status="Pass")There's much more inside (like adding attachments, creating folders and environments, cloning test cycles). Additional code examples will follow.
adaptavist-fixed
adaptavistThis Python package provides functionality for Jira Test Management (tm4j).Table of ContentsInstallationGetting StartedExamples and FeaturesGeneral WorkflowInstallationTo install adaptavist, you can use the following command(s):python-mpipinstalladaptavistTo uninstall adaptavist, you can use the following command:python-mpipuninstalladaptavistGetting Startedadaptavist uses the REST API of Adaptavist Test Management for Jira Server (seehttps://docs.adaptavist.io/tm4j/server/api/) and Jira's internal REST API, both with HTTP Basic authentication.In order to access Adaptavist/Jira, valid credentials are necessary. In addition,getpass.getuser().lower()must be a known Jira user.Examples and FeaturesGeneral WorkflowfromadaptavistimportAdaptavist# create a new instanceatm=Adaptavist(jira_server,jira_username,jira_password)# create a test plantest_plan_key=atm.create_test_plan(project_key="TEST",test_plan_name="my test plan")# create a test cycle (formerly test run) with a set of test cases and add it to test plantest_run_key=atm.create_test_run(project_key="TEST",test_run_name="my test cycle",test_cases=["TEST-T1"],test_plan_key=test_plan_key)# as test cycle creation also creates/initializes test results, we can just edit theseatm.edit_test_script_status(test_run_key=test_run_key,test_case_key="TEST-T1",step=1,status="Pass")# (optional) edit/overwrite the overall execution status of the test case (by default this is done automatically when editing status of a single step)atm.edit_test_result_status(test_run_key=test_run_key,test_case_key="TEST-T1",status="Pass")There's much more inside (like adding attachments, creating folders and environments, cloning test cycles). Additional code examples will follow.
adapt-diagnostics
ADAPT  ·Activity-informed Design with All-inclusive Patrolling of TargetsADAPT efficiently designs activity-informed nucleic acid diagnostics for viruses.In particular, ADAPT designs assays with maximal predicted detection activity, in expectation over a virus's genomic diversity, subject to soft and hard constraints on the assay's complexity and specificity. ADAPT's designs are:Comprehensive. Designs are effective against variable targets because ADAPT considers the full spectrum of their known genomic diversity.Sensitive. ADAPT leverages predictive models of detection activity. It includes a pre-trained model of CRISPR-Cas13a detection activity, trained from ~19,000 guide-target pairs.Specific. Designs can distinguish related species or lineages within a species. The approach accommodates G-U pairing, which is important in RNA applications.End-to-end. ADAPT automatically downloads and curates data from public databases to provide designs rapidly at scale. The input can be as simple as a species or taxonomy in the form of an NCBI taxonomy identifier.ADAPT outputs a list of assay options ranked by predicted performance. In addition to its objective that maximizes expected activity, ADAPT supports a simpler objective that minimizes the number of probes subject to detecting a specified fraction of diversity.ADAPT includes a pre-trained model that predicts CRISPR-Cas13a guide detection activity, so ADAPT is directly suited to detection with Cas13a. ADAPT's output also includes amplification primers, e.g., for use with the SHERLOCK platform. 
The framework and software are compatible with other nucleic acid technologies given appropriate models.For more information, see ourpublicationthat describes ADAPT and evaluates its designs experimentally.Table of contentsSetting up ADAPTDependenciesSetting up a conda environmentDownloading and installingTestingRunning on DockerUsing ADAPTOverviewRequired subcommandsSpecifying the objectiveEnforcing specificitySearching for complete targetsAutomatically downloading and curating dataUsing custom sequences as inputWeighting sequencesMiscellaneous key argumentsOutputExamplesBasic: designing within sliding windowDesigning end-to-end with predictive modelSupport and contributingQuestionsContributingCitationLicenseRelated repositoriesSetting up ADAPTDependenciesADAPT requires:Python== 3.8NumPy>= 1.16.0, < 1.19.0SciPy== 1.4.1TensorFlow== 2.3.2Using the thermodynamic modules of ADAPT requires:Primer3-py== 0.6.1Using ADAPT with AWS cloud features additionally requires:Boto3>= 1.14.54Botocore>= 1.17.54Installing ADAPT withpip, as described below, will install NumPy, SciPy, and TensorFlow if they are not already installed. Installing ADAPT withpipwith the thermodynamic modules, as described below, will install Primer3-py if it is not already installed as well. Installing ADAPT withpipusing the AWS cloud features, as described below, will install Boto3 and Botocore if they are not already installed as well.If using alignment features in subcommands below, ADAPT also requires a path to an executable ofMAFFT.Setting up a conda environmentNote: This section is optional, but may be useful to users who are new to Python.It is generally useful to install and run Python packages inside of avirtual environment, especially if you have multiple versions of Python installed or use multiple packages. 
This can prevent problems when upgrading, conflicts between packages with different requirements, installation issues that arise from having different Python versions available, and more.One option to manage packages and environments is to useconda. A fast way to obtain conda is to install Miniconda: you can download ithereand find installation instructions for ithere. For example, on Linux you would run:wgethttps://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bashMiniconda3-latest-Linux-x86_64.shOnce you have conda, you cancreatean environment for ADAPT with Python 3.8:condacreate-nadaptpython=3.8Then, you can activate theadaptenvironment:condaactivateadaptAfter the environment is created and activated, you can install ADAPT as described below. You will need to activate the environment each time you use ADAPT.Downloading and installingADAPT is available viaBiocondafor GNU/Linux and Windows operating systems and viaPyPIfor all operating systems.Before installing ADAPT via Bioconda, we suggest you follow the instructions inSetting up a conda environmentto install Miniconda and activate the environment. To install via Bioconda, run the following command:condainstall-cbiocondaadaptIf you want to be able to use thermodynamic modules of ADAPT, run the following instead:condainstall-cbioconda"adapt[thermo]"If you want to be able to use AWS cloud features through ADAPT, run the following instead:condainstall-cbioconda"adapt[AWS]"For both AWS and thermodynamics, run the following instead:condainstall-cbioconda"adapt[AWS,thermo]"Before installing ADAPT via PyPI, we suggest you follow the instructions in either thePython documentationorSetting up a conda environmentto set up and activate a virtual environment for ADAPT. 
To install via PyPI, run the following command:pipinstalladapt-diagnosticsIf you want to be able to use thermodynamic modules of ADAPT, run the following instead:pipinstall"adapt-diagnostics[thermo]"If you want to be able to use AWS cloud features through ADAPT, run the following instead:pipinstall"adapt-diagnostics[AWS]"For both AWS and thermodynamics, run the following instead:pipinstall"adapt-diagnostics[AWS,thermo]"If you wish to modify ADAPT's code, ADAPT can be installed by cloning the repository and installing the package withpip:[email protected]:broadinstitute/adapt.gitcdadapt pipinstall-e.Depending on your setup (i.e., if you do not have write permissions in the installation directory), you may need to supply--usertopip install.If you want to be able to use thermodynamic modules of ADAPT, replace the last line with the following:pipinstall-e".[thermo]"If you want to be able to use AWS cloud features through ADAPT, replace the last line with the following:pipinstall-e".[AWS]"For both AWS and thermodynamics, replace the last line with the following:pipinstall-e".[AWS,thermo]"TestingIf you clone this repository, you may want to run tests to ensure your clone is running properly. This package uses Python'sunittestframework. To execute all tests, from the home directory of your ADAPT clone, run:python-munittestdiscoverRunning on DockerNote: This section is optional, but may be useful for more advanced users or developers. 
You will need to installDocker.If you would like to run ADAPT using a Docker container rather than installing it, you may use one of our pre-built ADAPT images.For ADAPT without cloud features, use the image IDquay.io/broadinstitute/adapt.For ADAPT with cloud features, use the image IDquay.io/broadinstitute/adaptcloud.To pull our Docker image to your computer, run:dockerpull[IMAGE-ID]To run ADAPT on a Docker container, run:dockerrun--rm[IMAGE-ID]"[COMMAND]"To run with ADAPT memoizing to a local directory, run:dockerrun--rm-v/path/to/memo/on/host:/memo[IMAGE-ID]"[COMMAND]"To run the container interactively (opening a command line to the container), run:dockerrun--rm-it[IMAGE-ID]Using ADAPTOverviewThe main program for designing assays isdesign.py.Below, we refer toguidesin reference to our pre-trained model for CRISPR-Cas13a guides and our testing of ADAPT's designs with Cas13a. More generally,guidescan be thought of asprobesto encompass other diagnostic technologies.design.pyrequires two subcommands:design.py[SEARCH-TYPE][INPUT-TYPE]...Required subcommandsSEARCH-TYPE is one of:complete-targets: Search for the best assay options, each containing primer pairs and guides between them. This is usually our recommended search type. More information is inSearching for complete targets. (Examplehere.)sliding-window: Search for guides within a sliding window of a fixed length, and output an optimal guide set for each window. This is the much simpler search type and can be helpful when getting started. (Examplehere.)INPUT-TYPE is one of:fasta: The input is one or more FASTA files, each containing sequences for a taxon. If more than one file is provided, the search finds taxon-specific designs meant for differential identification of the taxa. This assumes the FASTA files contain aligned sequences, unless otherwise specified (seeUsing custom sequences as input)auto-from-args: The input is a single NCBI taxonomy ID, and related information, provided as command-line arguments. 
This fetches sequences for the taxon, then curates, clusters and aligns the sequences, and finally uses the generated alignment as input for design. More information is inAutomatically downloading and curating data.auto-from-file: The input is a file containing a list of taxonomy IDs and related information. This operates likeauto-from-args, except ADAPT designs with specificity across the input taxa using a single index for evaluating specificity (as opposed to having to build it separately for each taxon). More information is inAutomatically downloading and curating data.Positional argumentsThe positional arguments — which specify required input to ADAPT — depend on the INPUT-TYPE. These arguments are defined below for each INPUT-TYPE.If INPUT-TYPE isfasta:design.py[SEARCH-TYPE]fasta[fasta][fasta...]-o[out-tsv][out-tsv...]where[fasta]is a path to an aligned FASTA file for a taxon and[out-tsv]specifies the basename of where to write the output TSV file (withoutthe.tsvsuffix). If there are more than one space-separated FASTA, there must be an equivalent number of output TSV files; thei'th output gives designs for thei'th input FASTA.If INPUT-TYPE isauto-from-args:design.py[SEARCH-TYPE]auto-from-args[taxid][segment][out-tsv]where[taxid]is an NCBItaxonomy ID,[segment]is a segment label (e.g., 'S') or 'None' if unsegmented, and[out-tsv]specifies where to write the output TSV file.If INPUT-TYPE isauto-from-file:design.py[SEARCH-TYPE]auto-from-file[in-tsv][out-dir]where[in-tsv]is a path to a file specifying the input taxonomies (rundesign.py [SEARCH-TYPE] auto-from-file --helpfor details) and[out-dir]specifies a directory in which to write the outputs.Details on all argumentsTo see details on all the arguments available, rundesign.py[SEARCH-TYPE][INPUT-TYPE]--helpwith the particular choice of subcommands substituted in for[SEARCH-TYPE]and[INPUT-TYPE].Specifying the objectiveADAPT supports two objective functions, specified using the--objargument:Maximize activity (--obj 
maximize-activity)Minimize complexity (--obj minimize-guides)Details on each are below.Objective: maximizing activitySetting--obj maximize-activitytells ADAPT to design sets of guides having maximal activity, in expectation over the input taxon's genomic diversity, subject to soft and hard constraints on the size of the guide set. This is usually our recommended objective, especially with access to a predictive model. With this objective, the following arguments todesign.pyare relevant:-sgc SOFT_GUIDE_CONSTRAINT: Soft constraint on the number of guides in a design option. There is no penalty for a number of guides ≤ SOFT_GUIDE_CONSTRAINT. Having a number of guides beyond this is penalized linearly according to PENALTY_STRENGTH. (Default: 1.)-hgc HARD_GUIDE_CONSTRAINT: Hard constraint on the number of guides in a design option. The number of guides in a design option will always be ≤ HARD_GUIDE_CONSTRAINT. HARD_GUIDE_CONSTRAINT must be ≥ SOFT_GUIDE_CONSTRAINT. (Default: 5.)--penalty-strength PENALTY_STRENGTH: Importance of the penalty when the number of guides exceeds the soft guide constraint. For a guide set G, the penalty in the objective is PENALTY_STRENGTH*max(0, |G| - SOFT_GUIDE_CONSTRAINT). PENALTY_STRENGTH must be ≥ 0. The value depends on the output values of the activity model and reflects a tolerance for more complexity in the assay; for the default pre-trained activity model included with ADAPT, reasonable values are in the range [0.1, 0.5]. (Default: 0.25.)--maximization-algorithm [greedy|random-greedy]: Algorithm to use for solving the submodular maximization problem. 'greedy' uses the canonical greedy algorithm (Nemhauser 1978) for constrained monotone submodular maximization, which can perform well in practice but has poor worst-case guarantees because the function is not monotone (unless PENALTY_STRENGTH is 0). 
'random-greedy' uses a randomized greedy algorithm (Buchbinder 2014) for constrained non-monotone submodular maximization, which has good worst-case guarantees. (Default: 'random-greedy'.)Note that, when the objective is to maximize activity, this objective requires a predictive model of activity and thus--predict-activity-model-pathor--predict-cas13a-activity-modelshould be specified (details inMiscellaneous key arguments). If you wish to use this objective but cannot use our pre-trained Cas13a model nor another model, see the help message for the argument--use-simple-binary-activity-prediction.Objective: minimizing complexitySetting--obj minimize-guidestells ADAPT to minimize the number of guides in an assay subject to constraints on coverage of the input taxon's genomic diversity. With this objective, the following arguments todesign.pyare relevant:-gm MISMATCHES: Tolerate up to MISMATCHES mismatches when determining whether a guide detects a sequence. This argument is mainly meant to be helpful in the absence of a predictive model of activity. When using a predictive model of activity (via--predict-activity-model-pathor--predict-cas13a-activity-model), this argument serves as an additional requirement for evaluating detection on top of the model; it can be effectively ignored by setting MISMATCHES to be sufficiently high. (Default: 0.)--predict-activity-thres THRES_C THRES_R: Thresholds for determining whether a guide-target pair is active and highly active. THRES_C is a decision threshold on the output of the classifier (in [0,1]); predictions above this threshold are decided to be active. Higher values have higher precision and less recall. THRES_R is a decision threshold on the output of the regression model (at least 0); predictions above this threshold are decided to be highly active. Higher values limit the number of pairs determined to be highly active. 
To count as detecting a target sequence, a guide must be: (i) within MISMATCHES mismatches of the target sequence; (ii) classified as active; and (iii) predicted to be highly active. Using this argument requires also setting--predict-activity-model-pathor--predict-cas13a-activity-model(seeMiscellaneous key arguments). As noted above, MISMATCHES can be set to be sufficiently high to effectively ignore-gm. (Default: use the default thresholds included with the model.)-gp COVER_FRAC: Design guides such that at least a fraction COVER_FRAC of the genomes are detected by the guides. (Default: 1.0.)--cover-by-year-decay YEAR_TSV MIN_YEAR_WITH_COVG DECAY: Group input sequences by year and set a distinct COVER_FRAC for each year. Seedesign.py [SEARCH-TYPE] [INPUT-TYPE] --helpfor details on this argument. Note that when INPUT-TYPE isauto-from-{file,args}, this argument does not accept YEAR_TSV.Enforcing specificityADAPT can enforce strict specificity so that designs will distinguish related taxa.For all INPUT-TYPEs, ADAPT can enforce specificity by parsing the--specific-against-*arguments. When INPUT-TYPE isauto-from-fileorfasta, ADAPT will also automatically enforce specificity between taxa/FASTA files using a single specificity index.To enforce specificity, the following arguments todesign.pyare important:--id-m ID_M/--id-frac ID_FRAC: These parameters specify thresholds for determining specificity. Allow for up to ID_M mismatches when determining whether a guidehits a sequencein a taxon other than the one for which it is being designed, and decide that a guidehits a taxonif it hits at least ID_FRAC of the sequences in that taxon. ADAPT does not design guides that hit a taxon other than the one for which they are being designed. Higher values of ID_M and lower values of ID_FRAC correspond to more strict specificity. 
(Default: 4 for ID_M, 0.01 for ID_FRAC.)
- --specific-against-fastas [fasta] [fasta ...]: Design guides to be specific against the provided sequences (in FASTA format; they do not need to be aligned). That is, the guides should not hit sequences in these FASTA files, as measured by ID_M and ID_FRAC. Each [fasta] is treated as a separate taxon when ID_FRAC is applied.
- --specific-against-taxa SPECIFIC_TSV: Design guides to be specific against the provided taxa. SPECIFIC_TSV is a path to a TSV file where each row specifies a taxonomy with two columns: (1) NCBI taxonomy ID; (2) segment label, or 'None' if unsegmented. That is, the guides should not hit sequences in these taxonomies, as measured by ID_M and ID_FRAC.

Searching for complete targets

When SEARCH-TYPE is complete-targets, ADAPT performs a branch and bound search to find a collection of assay design options. It finds the best N design options for a specified N. Each design option represents a genomic region containing primer pairs and guides between them. There is no set length for the region. The N options are intended to be a diverse (non-overlapping) selection.

Below are key arguments to design.py when SEARCH-TYPE is complete-targets:

- --best-n-targets BEST_N_TARGETS: Only compute and output the best BEST_N_TARGETS design options, where each receives an objective value according to OBJ_FN_WEIGHTS. Note that higher values of BEST_N_TARGETS can significantly increase runtime. (Default: 10.)
- --obj-fn-weights OBJ_FN_WEIGHTS: Coefficients to use in an objective function for each design target. See design.py complete-targets [INPUT-TYPE] --help for details.
- -pl PRIMER_LENGTH: Design primers to be PRIMER_LENGTH nt long. (Default: 30.)
- -pp PRIMER_COVER_FRAC: Same as -gp described above, except for the design of primers. (Default: 1.0.)
- -pm PRIMER_MISMATCHES: Tolerate up to PRIMER_MISMATCHES mismatches when determining whether a primer hybridizes to a sequence.
(Default: 0.)
- --max-primers-at-site MAX_PRIMERS_AT_SITE: Only allow up to MAX_PRIMERS_AT_SITE primers at each primer site. If not set, there is no limit. This argument is mostly intended to improve runtime (smaller values, around 5, can significantly improve runtime on especially diverse viruses), because the number of primers is already penalized in the objective function. Note that this is only an upper bound, and in practice the number will usually be less than it. (Default: not set.)

Automatically downloading and curating data

When INPUT-TYPE is auto-from-{file,args}, ADAPT will run end-to-end. It fetches and curates genomes, clusters and aligns them, and uses the generated alignment as input for design.

Below are key arguments to design.py when INPUT-TYPE is auto-from-file or auto-from-args:

- --mafft-path MAFFT_PATH: Use the MAFFT executable at MAFFT_PATH for generating alignments.
- --prep-memoize-dir PREP_MEMOIZE_DIR: Memoize alignments and statistics on these alignments to the directory specified by PREP_MEMOIZE_DIR. If repeatedly re-running on the same taxonomies, using this argument can significantly improve runtime across runs. ADAPT can save the memoized information to an AWS S3 bucket by using the syntax s3://BUCKET/PATH, though this requires the AWS cloud installation mentioned in Downloading and installing and setting access key information. Access key information can either be set using AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (details below) or by installing and configuring the AWS CLI. If not set (default), do not memoize information across runs.
- --sample-seqs SAMPLE_SEQS: Randomly sample SAMPLE_SEQS accessions with replacement from each taxonomy, and move forward with the design using this sample. This can be useful for measuring some properties of the design, or for faster runtime when debugging.
- --cluster-threshold CLUSTER_THRESHOLD: Use CLUSTER_THRESHOLD as the maximum inter-cluster distance when clustering sequences prior to alignment.
The distance is average nucleotide dissimilarity (1-ANI); higher values result in fewer clusters. (Default: 0.2.)
- --use-accessions USE_ACCESSIONS: Use the specified NCBI GenBank accessions, in a file at the path USE_ACCESSIONS, for generating input. ADAPT uses these accessions instead of fetching neighbors from NCBI, but it will still download the sequences for these accessions. See design.py [SEARCH-TYPE] auto-from-{file,args} --help for details on the format of the file.
- --metadata-filter FILTERS: Filter sequences from the specified taxonomic ID to only those that match this metadata in their NCBI GenBank entries. The format is metadata=value or metadata!=value. metadata can be 'year', 'taxid', or 'country'. Separate multiple values with commas and different filters with spaces (e.g., --metadata-filter year!=2020,2019 taxid=11060). This argument can allow designing for only a specified subspecies: the corresponding species taxonomic ID can be provided in the input argument for [taxid], while the desired subspecies ID can be provided in FILTERS as a 'taxid'. There is a related argument, --specific-against-metadata-filter, to filter the sequences used in the specificity constraint. These arguments are only available when INPUT-TYPE is auto-from-args.

When using AWS S3 to memoize information across runs (--prep-memoize-dir), the following arguments are also important:

- --aws-access-key-id AWS_ACCESS_KEY_ID / --aws-secret-access-key AWS_SECRET_ACCESS_KEY: Use AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to log in to AWS cloud services. Both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are needed to log in. These arguments are only necessary if saving the memoized data to an S3 bucket using PREP_MEMOIZE_DIR and the AWS CLI has not been installed and configured.
If the AWS CLI has been installed and configured and these arguments are passed, they will override the AWS CLI configuration.

Using custom sequences as input

When INPUT-TYPE is fasta, ADAPT will run on only the sequences specified in the FASTA, without curation.

Below are key arguments to design.py when INPUT-TYPE is fasta:

- --unaligned: Specify if any of the input FASTA files are unaligned. This will align them using MAFFT.
- --mafft-path MAFFT_PATH: Use the MAFFT executable at MAFFT_PATH for generating alignments. Required if --unaligned is specified.
- --cluster-threshold CLUSTER_THRESHOLD: Use CLUSTER_THRESHOLD as the maximum inter-cluster distance when clustering sequences prior to alignment. The distance is average nucleotide dissimilarity (1-ANI); higher values result in fewer clusters. (Default: 0.2.)

Weighting sequences

By default, ADAPT bases the "coverage" across a virus's variation on the percent of genome sequences predicted to be detected. Likewise, when maximizing expected (or average) activity across variation, it treats the different genome sequences uniformly. While this works well if the genome sequences represent a random sample of the targeted viral population, that is often not the case owing to sampling biases. We include sequence weighting in ADAPT, allowing the relative importance of sequences to be set.

To manually set sequence weights when INPUT-TYPE is fasta, use --weight-sequences WEIGHT_SEQUENCES. WEIGHT_SEQUENCES should be a file path to a TSV with two columns: (1) a sequence name that matches one in the input FASTA; (2) the weight of that sequence. If more than one input FASTA is given, the same number of input TSVs must be given; each input TSV corresponds to an input FASTA. The input weights will be normalized to sum to 1 and used when calculating objective scores and summary statistics.
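As an illustration of how such a weights TSV behaves (a hypothetical re-implementation, not ADAPT's internal code): listed sequences keep their given pre-normalized weight, unlisted ones default to a pre-normalized weight of 1, and everything is normalized to sum to 1.

```python
def normalize_weights(seq_names, weight_tsv_rows):
    """Combine user-specified weights with the default of 1, then
    normalize so all weights sum to 1.

    weight_tsv_rows: (sequence_name, weight) pairs, as parsed from the
    two-column TSV described above. Names and values are illustrative.
    """
    given = {name: float(w) for name, w in weight_tsv_rows}
    raw = {name: given.get(name, 1.0) for name in seq_names}  # default 1
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

weights = normalize_weights(
    ["seqA", "seqB", "seqC"],
    [("seqA", "3.0")],  # seqB and seqC fall back to weight 1
)
# seqA then carries 3/5 of the total weight; seqB and seqC carry 1/5 each
```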
Any sequence not listed in the input TSV(s) will be assigned, by default, a pre-normalized weight of 1.

When ADAPT designs an assay across multiple subtaxa, each with very different levels of sampling, it may design deficient assays that only detect a highly overrepresented subtaxon and no other subtaxa. While the number of sequences in the database often indicates a subtaxon's relative importance, it should typically not cause other subtaxa to be ignored in practice. As a simple correction for this problem, ADAPT includes the argument --weight-by-log-size-of-subtaxa SUBTAXA for when the INPUT-TYPE is auto-from-args or auto-from-file. SUBTAXA is a taxonomic rank ('genus', 'subgenus', 'species', or 'subspecies') lower than the rank of the taxon being designed for. It works as follows:

1. Each input sequence is associated with one SUBTAXA group.
2. Each SUBTAXA group is assigned a weight equal to the log of the number of sequences in that group plus 1.
3. Each sequence is assigned a weight equal to the weight of its SUBTAXA group divided by the number of sequences in its SUBTAXA group.
4. Weights are normalized across all sequences to sum to 1.

Miscellaneous key arguments

In addition to the arguments above, there are others that are often important when running design.py:

- --predict-cas13a-activity-model: Use ADAPT's pre-trained Cas13 model to predict activity of guide-target pairs. Classification and regression model files can be viewed in models/. (Default: not set, which does not use predicted activity during design.)
- --predict-activity-model-path MODEL_C MODEL_R: Models that predict activity of guide-target pairs. MODEL_C gives a classification model that predicts whether a guide-target pair is active, and MODEL_R gives a regression model that predicts a measure of activity on active pairs. This does not need to be set if --predict-cas13a-activity-model is specified, but it is useful for custom models. Each argument is a path to a serialized model in TensorFlow's SavedModel format.
With --obj maximize-activity, the models are essential because they inform ADAPT of the measurements it aims to maximize. With --obj minimize-guides, the models constrain the design such that a guide must be highly active to detect a sequence (specified by --predict-activity-thres). (Default: not set, which does not use predicted activity during design.)
- -gl GUIDE_LENGTH: Design guides to be GUIDE_LENGTH nt long. (Default: 28.)
- --do-not-allow-gu-pairing: If set, do not count G-U (wobble) base pairs between guide and target sequence as matching. By default, they count as matches. This applies when -gm is used with --obj minimize-guides and when enforcing specificity.
- --require-flanking5 REQUIRE_FLANKING5 / --require-flanking3 REQUIRE_FLANKING3: Require the given sequence on the 5' (REQUIRE_FLANKING5) and/or 3' (REQUIRE_FLANKING3) side of the protospacer for each designed guide. This tolerates ambiguity in the sequence (e.g., 'H' requires 'A', 'C', or 'T'). It can enforce a desired protospacer flanking site (PFS) nucleotide, and it can also accommodate multiple nucleotides (a motif). Note that this is the 5'/3' end in the target sequence (not the spacer sequence). When a predictive model of activity is given, this argument is not needed; it can still be specified, however, as an additional requirement on top of how the model evaluates activity.

Output

The files output by ADAPT are TSV files, but they vary in format depending on SEARCH-TYPE and INPUT-TYPE. There is a separate TSV file for each taxon. In all cases, run design.py [SEARCH-TYPE] [INPUT-TYPE] --help to see details on the output format and on how to specify paths to the output TSV files.

Complete targets

When SEARCH-TYPE is complete-targets, each row gives an assay design option; there are BEST_N_TARGETS of them. Each design option corresponds to a genomic region (amplicon). The columns give the primer and guide sequences as well as additional information about them.
There are about 20 columns; some key ones are:

- objective-value: Objective value based on OBJ_FN_WEIGHTS.
- target-start / target-end: Start (inclusive) and end (exclusive) positions of the genomic region in the alignment generated by ADAPT.
- {left,right}-primer-target-sequences: Sequences of 5' and 3' primers, from the targets (see Complementarity). Within each of the two columns (amplicon endpoints), if there are multiple sequences they are separated by spaces.
- total-frac-bound-by-guides: Fraction of all input sequences predicted to be detected by the guide set.
- guide-set-expected-activity: Predicted activity of the guide set in detecting the input sequences, in expectation over the input sequences. (nan if no predictor is set.)
- guide-set-median-activity / guide-set-5th-pctile-activity: Median and 5th percentile of predicted activity of the guide set over the input sequences. (nan if no predictor is set.)
- guide-expected-activities: Predicted activity of each separate guide in detecting the input sequences, in expectation over the input sequences. They are separated by spaces; if there is only 1 guide, this is equivalent to guide-set-expected-activity. (nan if no predictor is set.)
- guide-target-sequences: Sequences of guides, from the targets (see Complementarity!). If there are multiple, they are separated by spaces.
- guide-target-sequence-positions: Positions of the guides in the alignment, in the same order as they are reported; a guide may come from >1 position, so positions are reported in set notation (e.g., {100}).

The rows in the output are sorted by the objective value: better options are on top.
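Because each output file is an ordinary TSV with the columns described above, it can be post-processed with standard tooling; for example, splitting the space-separated guide sequences out of a design option. The file contents below are made up for illustration:

```python
import csv
import io

# A made-up two-column excerpt of a complete-targets output file.
example_tsv = (
    "objective-value\tguide-target-sequences\n"
    "2.75\tACGTACGT TTTTCCCC\n"
)

with io.StringIO(example_tsv) as f:
    for row in csv.DictReader(f, delimiter="\t"):
        # Multiple guides in one design option are space-separated.
        guides = row["guide-target-sequences"].split(" ")
        print(row["objective-value"], guides)
```

The same approach works for the primer columns, whose multiple sequences are likewise space-separated.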
Smaller values are better with --obj minimize-guides and larger values are better with --obj maximize-activity.

When INPUT-TYPE is auto-from-file or auto-from-args and ADAPT generates more than one cluster of input sequences, there is a separate TSV file for each cluster; the filenames end in .0, .1, etc.

Sliding window

When SEARCH-TYPE is sliding-window, each row gives a window in the alignment and the columns give information about the guides designed for that window. The columns are:

- window-start / window-end: Start (inclusive) and end (exclusive) positions in the alignment.
- count: Number of guide sequences.
- score: Statistic between 0 and 1 that describes the redundancy of the guides in detecting the input sequences (higher is better). This is meant to break ties between windows with the same number of guide sequences, and is not intended to be compared between windows with different numbers of guides.
- total-frac-bound: Total fraction of all input sequences that are detected by a guide. Note that if --cover-by-year-decay is provided, this might be less than COVER_FRAC.
- target-sequences: Sequences of guides, from the targets (see Complementarity!). If there are multiple, they are separated by spaces.
- target-sequence-positions: Positions of the guides in the alignment, in the same order as they are reported; a guide may come from >1 position, so positions are reported in set notation (e.g., {100}).

By default, when SEARCH-TYPE is sliding-window, the rows in the output are sorted by the position of the window. With the --sort argument to design.py, ADAPT sorts the rows so that the "best" choices of windows are on top.
It sorts by count (ascending) followed by score (descending), so that windows with the fewest guides and highest score are on top.

Complementarity

Note that output sequences are all in the same sense as the input (target) sequences. Synthesized guide sequences should be reverse complements of the output sequences! Likewise, synthesized primer sequences should account for this.

Examples

Basic: designing within a sliding window without a predictive model

This is the simplest example. It does not download genomes nor search for genomic regions to target. It also does not use a predictive model of activity, and it seeks to minimize assay complexity rather than maximize activity, which is our usual objective. For these features, see the next example.

The repository includes an alignment of Lassa virus sequences (S segment) from Sierra Leone in examples/SLE_S.aligned.fasta. If you have installed ADAPT via Bioconda or PyPI, you'll need to download the alignment from here. Run:

```
design.py sliding-window fasta FASTA_PATH -o probes --obj minimize-guides -w 200 -gl 28 -gm 1 -gp 0.95
```

From this alignment, ADAPT scans each 200 nt window (-w 200) to find the smallest collection of probes that:

- are all within the window
- are 28 nt long (-gl 28)
- detect 95% of all input sequences (-gp 0.95), tolerating up to 1 mismatch (-gm 1) between a probe and target

ADAPT outputs a file, probes.tsv, that contains the probe sequences for each window. See Output above for a description of this file.

Designing end-to-end with a predictive model

ADAPT can automatically download and curate sequences for its design, and search efficiently across the genome to find primers/amplicons as well as Cas13a guides.
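Recall the Complementarity note above: sequences in files such as probes.tsv are in the target sense, so the guides you synthesize must be their reverse complements. A minimal sketch of that conversion:

```python
# Complement table for unambiguous bases; ADAPT output may also contain
# ambiguity codes (e.g. 'H'), which a fuller implementation must handle.
_COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Return the reverse complement of a target-sense output sequence."""
    return "".join(_COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement("ACGGT"))  # -> "ACCGT"
```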
It identifies Cas13a guides using a pre-trained predictive model of activity. Run:

```
design.py complete-targets auto-from-args 64320 None guides --obj maximize-activity -gl 28 -pl 30 -pm 1 -pp 0.95 --predict-cas13a-activity-model --best-n-targets 5 --mafft-path MAFFT_PATH --sample-seqs 50 --verbose
```

This downloads and designs assays to detect genomes of Zika virus (NCBI taxonomy ID 64320). You must fill in MAFFT_PATH with the path to a MAFFT executable. ADAPT designs primers and Cas13a guides within the amplicons, such that:

- guides have maximal predicted detection activity, in expectation over Zika's genomic diversity (--obj maximize-activity)
- guides are 28 nt long (-gl 28) and primers are 30 nt long (-pl 30)
- primers capture 95% of sequence diversity (-pp 0.95), tolerating up to 1 mismatch for each (-pm 1)

ADAPT outputs a file, guides.0.tsv, that contains the best 5 design options (--best-n-targets 5) as measured by ADAPT's default objective function. See Output above for a description of this file.

This example randomly selects 50 sequences (--sample-seqs 50) prior to design to speed up the runtime; the command should take about 10 minutes to run in full. Using --verbose provides detailed output and is usually recommended, but the output can be extensive. Note that this example does not enforce specificity.

To instead find minimal guide sets, use --obj minimize-guides instead of --obj maximize-activity and set -gm and -gp. With that alternative objective, Cas13a guides are determined to detect a sequence if they (i) satisfy the number of mismatches specified with -gm and (ii) are predicted by the model to be highly active in detecting the sequence; -gm can be set sufficiently high to rely entirely on the predictive model. The output guides will detect a desired fraction of all genomes, as specified by -gp.

Support and contributing

Questions

If you have questions about ADAPT, please create an issue.

Contributing

We welcome contributions to ADAPT.
This can be in the form of an issue or pull request.

Citation

ADAPT was started by Hayden Metsky, and is developed by Priya Pillai and Hayden. If you find ADAPT useful to your work, please cite our paper as:

Metsky HC et al. Designing sensitive viral diagnostics with machine learning. Nature Biotechnology (2022). doi:10.1038/s41587-022-01213-5.

License

ADAPT is licensed under the terms of the MIT license.

Related repositories

There are other repositories on GitHub associated with ADAPT:

- adapt-seq-design: Predictive modeling library, datasets, training, and evaluation (applied to CRISPR-Cas13a).
- adapt-analysis: Analysis of ADAPT's designs and benchmarking of its computational performance, as well as miscellaneous analyses for the ADAPT paper.
- adapt-designs: Designs output by ADAPT, including all experimentally tested designs.
- adapt-pipes: Workflows for running ADAPT on the cloud, tailored for AWS.
adaptdl
No description available on PyPI.
adaptdl-cli
No description available on PyPI.
adaptdl-modified-pandyaka
No description available on PyPI.
adaptdl-ray
No description available on PyPI.
adaptdl-sched
No description available on PyPI.
adaptdl-sched-modified-pandyaka
No description available on PyPI.
adapted-eh
No description available on PyPI.
adapted-estatehunter
No description available on PyPI.
adapted-logger
A helper log library based on the default logging module, permitting a custom format of logs to be redirected to Logstash - Elasticsearch - Kibana.

HOW TO INSTALL

Install adapted_logger using easy_setup or pip:

```
pip install adapted_logger
```

HOW TO USE:

```python
from logger.adapted_logger import AdaptedLogger

logger = AdaptedLogger("project_name", "127.0.0.1")  # specify project_name and ip address of current server
log = logger.get_logger()
log.info("This is an info message")
log.debug("This is a debug message")
log.warn("This is a warning message")
log.error("This is an error message")
```

RESULTS:

```
2015-10-27 17:06:50,176 project_name INFO 127.0.0.1 This is an info message
2015-10-27 17:06:55,552 project_name DEBUG 127.0.0.1 This is a debug message
2015-10-27 17:07:00,863 project_name WARNING 127.0.0.1 This is a warning message
2015-10-27 17:07:05,360 project_name ERROR 127.0.0.1 This is an error message
```

Redirect logs to console:

```python
# Instantiate AdaptedLogger object:
adapted_log = AdaptedLogger("retail_crm_server", "127.0.0.1")
# Redirect logs to console (default behavior):
adapted_log.redirect_to_console()
# Get logger object:
logger = adapted_log.get_logger()
logger.debug("Testing Debug Message")
logger.info("Testing Info Message")
logger.warn("Testing Warn Message")
logger.error("Testing Error Message")
```

Redirect logs to file:

```python
# Instantiate AdaptedLogger object:
adapted_log = AdaptedLogger("retail_crm_server", "127.0.0.1")
# Redirect logs to file:
adapted_log.redirect_to_file("/path/logfile.log")
# Get logger object:
logger = adapted_log.get_logger()
logger.debug("Testing Debug Message")
logger.info("Testing Info Message")
logger.warn("Testing Warn Message")
logger.error("Testing Error Message")
```

Custom adapter

This adds the possibility to configure your logger within a config file like config.yml; inside the /config folder you should specify your configuration in YAML format.

```yaml
formatters:
  simpleFormater:
    format: '%(asctime)s %(name)s %(levelname)s %(message)s'
    datefmt: '%Y/%m/%d %H:%M:%S'
```

Here we specify time,
package name, level name, and the message; at this point there is no injection of the ip address.

handlers

```yaml
console:
  class: logging.StreamHandler
  formatter: simpleFormater
  level: DEBUG
  stream: ext://sys.stdout
file:
  class: logging.FileHandler
  formatter: simpleFormater
  level: WARNING
  filename: filename_or_path.log
```

Here we specify handlers; in this example we use 2 handlers: console and file.

loggers:

```yaml
clogger:
  level: DEBUG
  handlers: [console]
flogger:
  level: WARNING
  handlers: [file]
```

And finally we create 2 loggers for the console and file handlers.

To inject the ip address into context, we have created a CustomAdapter class that creates an adapter and injects the ip in the process.

Usage

```python
logging_config = yaml.load(open('config/config.yml', 'r'))
dictConfig(logging_config)

logger_1 = logging.getLogger("project_name.application_name1")
logger_1 = CustomAdapter(logger_1, {'ip': ip})
logger_2 = logging.getLogger("project_name.application_name2")
logger_2 = CustomAdapter(logger_2, {'ip': ip})
```

Right now, we can call:

```python
logger_1.warning('This is a warning Message')
logger_2.error('This is an error message')
```
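The CustomAdapter used above can be approximated with the standard library's logging.LoggerAdapter. This is an illustrative sketch, not the package's actual implementation; the StringIO capture is only there so the formatted output can be inspected:

```python
import io
import logging

class CustomAdapter(logging.LoggerAdapter):
    """Inject an 'ip' field into every record, so formatters can use %(ip)s."""
    def process(self, msg, kwargs):
        # Merge the adapter's context into the record's 'extra' dict.
        kwargs["extra"] = {**self.extra, **kwargs.get("extra", {})}
        return msg, kwargs

captured = io.StringIO()  # capture the output so it can be inspected
handler = logging.StreamHandler(captured)
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(ip)s %(message)s"))

logger = logging.getLogger("project_name.application_name1")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

adapted = CustomAdapter(logger, {"ip": "127.0.0.1"})
adapted.info("This is an info message")
print(captured.getvalue().strip())
```

With a `%(asctime)s`-prefixed format string, this reproduces the timestamped lines shown in the RESULTS block above.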
adapted-NERDA
NERDA

Not only is NERDA a mesmerizing muppet-like character. NERDA is also a Python package that offers a slick, easy-to-use interface for fine-tuning pretrained transformers for Named Entity Recognition (NER) tasks.

You can also utilize NERDA to access a selection of precooked NERDA models, which you can use right off the shelf for NER tasks. NERDA is built on huggingface transformers and the popular pytorch framework.

Installation guide

NERDA can be installed from PyPI with

```
pip install NERDA
```

If you want the development version, then install directly from GitHub.

Named-Entity Recognition tasks

Named-entity recognition (NER) (also known as (named) entity identification, entity chunking, and entity extraction) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.

Example Task: identify person names and organizations in the text "Jim bought 300 shares of Acme Corp."

Solution:

| Named Entity | Type |
|---|---|
| 'Jim' | Person |
| 'Acme Corp.' | Organization |

Read more about NER on Wikipedia.

Train Your Own NERDA Model

Say we want to fine-tune a pretrained Multilingual BERT transformer for NER in English.

Load the package.

```python
from NERDA.models import NERDA
```

Instantiate a NERDA model (with default settings) for the CoNLL-2003 English NER data set.

```python
from NERDA.datasets import get_conll_data
model = NERDA(dataset_training=get_conll_data('train'),
              dataset_validation=get_conll_data('valid'),
              transformer='bert-base-multilingual-uncased')
```

By default the network architecture is analogous to that of the models in Hvingelby et al.
2020.

The model can then be trained/fine-tuned by invoking the train method, e.g.

```python
model.train()
```

Note: this will take some time depending on the dimensions of your machine (if you want to skip training, you can go ahead and use one of the models that we have already precooked for you instead).

After the model has been trained, it can be used for predicting named entities in new texts.

```python
# text to identify named entities in.
text = 'Old MacDonald had a farm'
model.predict_text(text)
([['Old', 'MacDonald', 'had', 'a', 'farm']], [['B-PER', 'I-PER', 'O', 'O', 'O']])
```

This means that the model identified 'Old MacDonald' as a PERson.

Please note that the NERDA model configuration above was instantiated with all default settings. You can however customize your NERDA model in a lot of ways:

- Use your own data set (fine-tune a transformer for any given language)
- Choose whatever transformer you like
- Set all of the hyperparameters for the model
- You can even apply your own network architecture

Read more about advanced usage of NERDA in the detailed documentation.

Use a Precooked NERDA model

We have precooked a number of NERDA models for Danish and English that you can download and use right off the shelf. Here is an example.

Instantiate a multilingual BERT model that has been fine-tuned for NER in Danish, DA_BERT_ML.

```python
from NERDA.precooked import DA_BERT_ML
model = DA_BERT_ML()
```

Down(load) network from web:

```python
model.download_network()
model.load_network()
```

You can now predict named entities in new (Danish) texts:

```python
# (Danish) text to identify named entities in:
# 'Jens Hansen har en bondegård' = 'Old MacDonald had a farm'
text = 'Jens Hansen har en bondegård'
model.predict_text(text)
([['Jens', 'Hansen', 'har', 'en', 'bondegård']], [['B-PER', 'I-PER', 'O', 'O', 'O']])
```

List of Precooked Models

The table below shows the precooked NERDA models publicly available for download.

| Model | Language | Transformer | Dataset | F1-score |
|---|---|---|---|---|
| DA_BERT_ML | Danish | Multilingual BERT | DaNE | 82.8 |
| DA_ELECTRA_DA | Danish | Danish ELECTRA | DaNE | 79.8 |
| EN_BERT_ML | English | Multilingual BERT | CoNLL-2003 | 90.4 |
| EN_ELECTRA_EN | English | English ELECTRA | CoNLL-2003 | 89.1 |

F1-score is the micro-averaged F1-score across entity tags and is evaluated on the respective test sets (which have not been used for training nor validation of the models).

Note that we have not spent a lot of time on actually fine-tuning the models, so there could be room for improvement. If you are able to improve the models, we will be happy to hear from you and include your NERDA model.

Model Performance

The table below summarizes the performance (F1-scores) of the precooked NERDA models.

| Level | DA_BERT_ML | DA_ELECTRA_DA | EN_BERT_ML | EN_ELECTRA_EN |
|---|---|---|---|---|
| B-PER | 93.8 | 92.0 | 96.0 | 95.1 |
| I-PER | 97.8 | 97.1 | 98.5 | 97.9 |
| B-ORG | 69.5 | 66.9 | 88.4 | 86.2 |
| I-ORG | 69.9 | 70.7 | 85.7 | 83.1 |
| B-LOC | 82.5 | 79.0 | 92.3 | 91.1 |
| I-LOC | 31.6 | 44.4 | 83.9 | 80.5 |
| B-MISC | 73.4 | 68.6 | 81.8 | 80.1 |
| I-MISC | 86.1 | 63.6 | 63.4 | 68.4 |
| AVG_MICRO | 82.8 | 79.8 | 90.4 | 89.1 |
| AVG_MACRO | 75.6 | 72.8 | 86.3 | 85.3 |

'NERDA'?

'NERDA' originally stands for 'Named Entity Recognition for DAnish'. However, this is somewhat misleading, since the functionality is no longer limited to Danish. On the contrary, it generalizes to all other languages, i.e. NERDA supports fine-tuning of transformers for NER tasks for any arbitrary language.

Background

NERDA is developed as a part of Ekstra Bladet's activities on Platform Intelligence in News (PIN). PIN is an industrial research project that is carried out in collaboration between the Technical University of Denmark, University of Copenhagen and Copenhagen Business School with funding from Innovation Fund Denmark.
The project runs from 2020-2023 and develops recommender systems and natural language processing systems geared for news publishing, some of which are open sourced, like NERDA.

Shout-outs

Thanks to the Alexandra Institute for, with the danlp package, having encouraged us to develop this package. Thanks to Malte Højmark-Bertelsen and Kasper Junge for giving feedback on NERDA.

Read more

The detailed documentation for NERDA, including code references and extended workflow examples, can be accessed here.

Contact

We hope that you will find NERDA useful. Please direct any questions and feedback to us! If you want to contribute (which we encourage you to do), open a PR. If you encounter a bug or want to suggest an enhancement, please open an issue.
adapter
adapterPython library that enables you to create an adapter object that converts an input to a desired output
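No further API documentation is given for this package, but the general pattern such a library wraps, composing a conversion function into a reusable adapter object, can be sketched generically. This is a hypothetical illustration, not the package's actual API:

```python
class Adapter:
    """Wrap a conversion function so inputs can be adapted on demand."""

    def __init__(self, convert):
        self._convert = convert

    def __call__(self, value):
        return self._convert(value)

# Adapt comma-separated strings into lists of integers.
to_int_list = Adapter(lambda s: [int(x) for x in s.split(",")])
print(to_int_list("1,2,3"))  # -> [1, 2, 3]
```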
adapter-client
A client for interacting with SMEV3 via the Adapter

Setup

settings:

```python
INSTALLED_APPS = [
    'adapter_client'
]
```

services:

```python
from adapter_client.adapters.smev.adapter import adapter
from adapter_client.adapters.smev.services.base import AbstractService
from adapter_client.core.domain.model import Message


class ROGDINFService(AbstractService):

    """Service that processes messages containing birth-record data."""

    message_type = 'urn://x-artefacts-zags-rogdinf/root/112-51/4.0.1'

    def process_message(self, message: Message):
        # the message this one is a reply to
        reply_to = message.reply.to
        ...


class ApplicationRequestService(AbstractService):

    """Service that processes enrollment requests (acting as a provider)."""

    message_type = (
        'http://epgu.gosuslugi.ru/concentrator/kindergarten/3.2.1'
    )

    def process_message(self, message: Message):
        # process the request message
        ...
        # send a reply to the request
        adapter.send(
            Message(
                # the message must be marked as a reply
                reply_to=message,
                # remaining message fields
                ...
            )
        )
```

apps:

```python
from django.apps.config import AppConfig as AppConfigBase


class AppConfig(AppConfigBase):

    name = __package__

    def ready(self):
        self._init_adapter_client()
        self._register_services()

    def _init_adapter_client(self):
        from adapter_client.config import ProductConfig, set_config
        from tasks import BaseTask
        set_config(ProductConfig(async_task_base=BaseTask))

    def _register_services(self):
        from adapter_client.adapters.smev.adapter import adapter
        from .services import ApplicationRequestService, ROGDINFService
        adapter.register_service(ApplicationRequestService(), ROGDINFService())
```

Running the tests

```
$ tox
```

API

Sending a message

```python
from adapter_client.adapters.smev.adapter import adapter
from adapter_client.core.domain.model import Message

message = Message(
    message_type='Foo',
    body='<foo>bar</foo>',
    attachments=['http://domain.com/attach1', 'http://domain.com/attach2'],
    test=True
)

adapter.send(message)
```

Further processing of messages is performed by Celery in the background.

Receiving a reply to a message

Replies to sent messages are collected by a periodic task and passed to the registered services.
adapter-engine
Adapter for pocsuite3 and nuclei
adapterio
The Adapter Python IO software provides a convenient data table loader for various formats such as xlsx, csv, db (sqlite database), and sqlalchemy. Its main feature is the ability to convert data tables identified in one main and, optionally, one or more additional input files into database tables and Pandas DataFrames for downstream usage in any compatible software. Adapter builds upon the existing Python packages that allow for communication between Python and MS Excel, as well as databases and csv files. It provides inbuilt capabilities to specify the output location path, as well as a version identifier for a research code run. In addition to the loading capability, an instance of the AdapterIO object has a write capability: if invoked, all loaded tables are written as a single database, a set of csv files, or both. The purpose of this software is to support the development of research and analytical software by allowing for simple multi-format IO with versioning and output path specification in the input data itself. The package is supported on Windows and macOS, and on Linux for use without any xlsx inputs.
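The package's exact API is not shown here, but the round trip it describes, tables in, database tables out for downstream queries, can be sketched with the standard library alone. This is a hypothetical illustration using sqlite3 and csv rather than adapterio's own classes:

```python
import csv
import io
import sqlite3

# A small CSV "input file" standing in for one of the data tables.
csv_text = "name,value\nalpha,1\nbeta,2\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE params (name TEXT, value INTEGER)")

# Parse the CSV rows and load them into the database table.
with io.StringIO(csv_text) as f:
    rows = [(r["name"], int(r["value"])) for r in csv.DictReader(f)]
conn.executemany("INSERT INTO params VALUES (?, ?)", rows)

# Downstream code can now query the table like any database table.
total = conn.execute("SELECT SUM(value) FROM params").fetchone()[0]
print(total)  # -> 3
```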
adapterio-test
No description available on PyPI.
adapters
Note: This repository holds the codebase of the Adapters library, which has replaced adapter-transformers. For the legacy codebase, go to: https://github.com/adapter-hub/adapter-transformers-legacy.

Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning

adapters is an add-on to HuggingFace's Transformers library, integrating adapters into state-of-the-art language models by incorporating AdapterHub, a central repository for pre-trained adapter modules.

Installation

adapters currently supports Python 3.8+ and PyTorch 1.10+. After installing PyTorch, you can install adapters from PyPI ...

```
pip install -U adapters
```

... or from source by cloning the repository:

```
git clone https://github.com/adapter-hub/adapters.git
cd adapters
pip install .
```

Quick Tour

Load pre-trained adapters:

```python
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)
print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
```

Learn More

Adapt existing model setups:

```python
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("t5-base")

adapters.init(model)

model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")

# Your regular training loop...
```

Learn More

Flexibly configure adapters:

```python
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel

model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")

adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
```

Learn More

Easily compose adapters in a single
model:fromadaptersimportAdapterSetup,AutoAdapterModelimportadapters.compositionasacmodel=AutoAdapterModel.from_pretrained("roberta-base")qc=model.load_adapter("AdapterHub/roberta-base-pf-trec")sent=model.load_adapter("AdapterHub/roberta-base-pf-imdb")withAdapterSetup(ac.Parallel(qc,sent)):print(model(**tokenizer("What is AdapterHub?",return_tensors="pt")))Learn MoreUseful ResourcesHuggingFace's great documentation on getting started withTransformerscan be foundhere.adaptersis fully compatible withTransformers.To get started with adapters, refer to these locations:Colab notebook tutorials, a series notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHubhttps://docs.adapterhub.ml, our documentation on training and using adapters withadaptershttps://adapterhub.mlto explore available pre-trained adapter modules and share your own adaptersExamples folderof this repository containing HuggingFace's example training scripts, many adapted for training adaptersImplemented MethodsCurrently, adapters integrates all architectures and methods listed below:MethodPaper(s)Quick LinksBottleneck adaptersHoulsby et al. (2019)Bapna and Firat (2019)Quickstart,NotebookAdapterFusionPfeiffer et al. (2021)Docs: Training,NotebookMAD-X,Invertible adaptersPfeiffer et al. (2020)NotebookAdapterDropRücklé et al. (2021)NotebookMAD-X 2.0,Embedding trainingPfeiffer et al. (2021)Docs: Embeddings,NotebookPrefix TuningLi and Liang (2021)DocsParallel adapters,Mix-and-Match adaptersHe et al. (2021)DocsCompacterMahabadi et al. (2021)DocsLoRAHu et al. (2021)Docs(IA)^3Liu et al. (2022)DocsUniPELTMao et al. (2022)DocsPrompt TuningLester et al. 
(2021)DocsSupported ModelsWe currently support the PyTorch versions of all models listed on theModel Overviewpagein our documentation.Developing & ContributingTo get started with developing onAdaptersyourself and learn more about ways to contribute, please seehttps://docs.adapterhub.ml/contributing.html.CitationIf you useAdaptersin your work, please consider citing our library paper:Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning@inproceedings{poth-etal-2023-adapters, title = "Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning", author = {Poth, Clifton and Sterz, Hannah and Paul, Indraneil and Purkayastha, Sukannya and Engl{\"a}nder, Leon and Imhof, Timo and Vuli{\'c}, Ivan and Ruder, Sebastian and Gurevych, Iryna and Pfeiffer, Jonas}, booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-demo.13", pages = "149--160", }Alternatively, for the predecessoradapter-transformers, the Hub infrastructure and adapters uploaded by the AdapterHub team, please consider citing our initial paper:AdapterHub: A Framework for Adapting Transformers@inproceedings{pfeiffer2020AdapterHub, title={AdapterHub: A Framework for Adapting Transformers}, author={Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Poth, Clifton and Kamath, Aishwarya and Vuli{\'c}, Ivan and Ruder, Sebastian and Cho, Kyunghyun and Gurevych, Iryna}, booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, pages={46--54}, year={2020} }
adapter-transformers
IMPORTANT NOTEThis is the legacyadapter-transformerslibrary, which has been replaced by the newAdapters library, found here:https://github.com/adapter-hub/adapters.Install the new library via pip:pip install adapters.This repository is kept for archival purposes, and will not be updated in the future. Please use the new library for all active projects.The documentation of this library can be found athttps://docs-legacy.adapterhub.ml. The documentation of the newAdapterslibrary can be found athttps://docs.adapterhub.ml. For transitioning, please read:https://docs.adapterhub.ml/transitioning.html.adapter-transformersA friendly fork of HuggingFace'sTransformers, adding Adapters to PyTorch language modelsadapter-transformersis an extension ofHuggingFace's Transformerslibrary, integrating adapters into state-of-the-art language models by incorporatingAdapterHub, a central repository for pre-trained adapter modules.💡 Important: This library can be used as a drop-in replacement for HuggingFace Transformers and regularly synchronizes new upstream changes. Thus, most files in this repository are direct copies from the HuggingFace Transformers source, modified only with changes required for the adapter implementations.Installationadapter-transformerscurrently supportsPython 3.8+andPyTorch 1.12.1+. Afterinstalling PyTorch, you can installadapter-transformersfrom PyPI ...pip install -U adapter-transformers... 
or from source by cloning the repository:git clone https://github.com/adapter-hub/adapter-transformers.git cd adapter-transformers pip install .Getting StartedHuggingFace's great documentation on getting started withTransformerscan be foundhere.adapter-transformersis fully compatible withTransformers.To get started with adapters, refer to these locations:Colab notebook tutorials, a series notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHubhttps://docs-legacy.adapterhub.ml, our documentation on training and using adapters withadapter-transformershttps://adapterhub.mlto explore available pre-trained adapter modules and share your own adaptersExamples folderof this repository containing HuggingFace's example training scripts, many adapted for training adaptersImplemented MethodsCurrently, adapter-transformers integrates all architectures and methods listed below:MethodPaper(s)Quick LinksBottleneck adaptersHoulsby et al. (2019)Bapna and Firat (2019)Quickstart,NotebookAdapterFusionPfeiffer et al. (2021)Docs: Training,NotebookMAD-X,Invertible adaptersPfeiffer et al. (2020)NotebookAdapterDropRücklé et al. (2021)NotebookMAD-X 2.0,Embedding trainingPfeiffer et al. (2021)Docs: Embeddings,NotebookPrefix TuningLi and Liang (2021)DocsParallel adapters,Mix-and-Match adaptersHe et al. (2021)DocsCompacterMahabadi et al. (2021)DocsLoRAHu et al. (2021)Docs(IA)^3Liu et al. (2022)DocsUniPELTMao et al. 
(2022)DocsSupported ModelsWe currently support the PyTorch versions of all models listed on theModel Overviewpagein our documentation.CitationIf you use this library for your work, please consider citing our paperAdapterHub: A Framework for Adapting Transformers:@inproceedings{pfeiffer2020AdapterHub, title={AdapterHub: A Framework for Adapting Transformers}, author={Pfeiffer, Jonas and R{\"u}ckl{\'e}, Andreas and Poth, Clifton and Kamath, Aishwarya and Vuli{\'c}, Ivan and Ruder, Sebastian and Cho, Kyunghyun and Gurevych, Iryna}, booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, pages={46--54}, year={2020} }
adaptest
Adaptest - a lightweight YAML wrapper for httest
================================================

## Overview

There are many powerful tools for automated HTTP-based tests, even in Python:

- [httest](http://htt.sourceforge.net/), HTTP Test tool
- [pyresttest](https://github.com/svanoort/pyresttest), Python REST Test tool
- [gabbi](https://github.com/cdent/gabbi), Declarative HTTP Testing tool

But the key features for me were:

- powerful
- easily maintainable config, ideally using YAML or something similar
- Cookies support
- CSRF support

Some of them were HTTP REST and JSON specific. `httest` was the best option, but its .htt files are not very comfortable, especially for testers with little knowledge of the HTTP protocol and programming. Therefore I wrote `Adaptest`, which is basically a `httest` YAML wrapper.

## Features

As `httest` is a really powerful tool, `Adaptest` does not support everything at this stage. But even while in alpha stage it supports:

- Sequential HTTP testing
- Cookies support
- CSRF support
- Any request headers
- Multiple `expect`'s: status, response header tests, body tests
- Capturing response headers or body using regex into variables for use in later tests
- POST (application/x-www-form-urlencoded)
- Auto referer from previous test

## Examples

`Adaptest` turns a YAML config of tests:

```yml
---
config:
  auto_cookie: on

tests:
  - name: user profile without auth
    url: /en/account/
    method: get
    headers:
      - Connection: keep-alive
    expect:
      - scope: .
        value: "302 Found"
      - scope: headers
        value: "Location: /en/account/log-in/"

  - name: login page to get cookie
    url: /en/account/log-in/
    method: get
    expect:
      - scope: .
        value: 200 OK
    match:
      - scope: headers
        pattern: "csrftoken=([^;]+)"
        variable: csrf

  - name: login page
    url: /en/account/log-in/
    method: post
    referer: auto
    headers:
      - Content-Type: application/x-www-form-urlencoded
    data:
      - csrfmiddlewaretoken: $csrf
      - username: [email protected]
      - password: Mys3cr3tp455
    expect:
      - scope: .
        value: "302 Found"

  - name: user profile after auth
    url: /en/account/
    method: get
    headers:
      - Connection: keep-alive
    expect:
      - scope: .
        value: 200 OK
```

into this:

```
CLIENT

_AUTO_COOKIE on

_REQ example.com SSL:443
_DEBUG user profile without auth
__GET /en/account/ HTTP/1.1
__Host: example.com
__Cookie: AUTO
__Connection: keep-alive
__
_EXPECT . "302 Found"
_EXPECT headers "Location: /en/account/log-in/"
_WAIT
_CLOSE

_REQ example.com SSL:443
_DEBUG login page to get cookie
__GET /en/account/log-in/ HTTP/1.1
__Host: example.com
__Cookie: AUTO
__
_EXPECT . "200 OK"
_MATCH headers "csrftoken=([^;]+)" csrf
_WAIT
_CLOSE

_REQ example.com SSL:443
_DEBUG login page
__POST /en/account/log-in/ HTTP/1.1
__Host: example.com
__Cookie: AUTO
__Content-Length: AUTO
__Content-Type: application/x-www-form-urlencoded
__Referer: https://example.com/en/account/log-in/
__
__csrfmiddlewaretoken=$csrf&[email protected]&password=Mys3cr3tp455&
_EXPECT . "302 Found"
_WAIT
_CLOSE

_REQ example.com SSL:443
_DEBUG user profile after auth
__GET /en/account/ HTTP/1.1
__Host: example.com
__Cookie: AUTO
__Connection: keep-alive
__
_EXPECT . "200 OK"
_WAIT
_CLOSE

END
```

## Installation

### From source

```bash
git clone [email protected]:Edke/adaptest.git
cd adaptest
sudo python setup.py install
```

### From PyPI

```bash
pip install adaptest
```

## Testing

```bash
cd tests
pytest
```

## Status

Please consider this tool as early alpha, not ready for production. Testing is more than welcome.

## Contributing

For bugs, feature requests or code contributions please use the [Github project page](https://github.com/Edke/adaptest).
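The YAML-to-httest mapping shown above can be illustrated with a hypothetical mini-converter. The dict below stands in for one parsed YAML test entry, and `to_htt` is an invented name; this is not Adaptest's actual implementation, just a sketch of the translation it performs.

```python
# Hypothetical mini-converter illustrating the YAML -> httest mapping
# described above (not Adaptest's real internals). The dict stands in
# for a test entry parsed from the YAML config.
test = {
    "name": "user profile without auth",
    "url": "/en/account/",
    "method": "get",
    "expect": [{"scope": ".", "value": "302 Found"}],
}


def to_htt(test, host="example.com"):
    """Render one test entry as a block of httest _REQ/_EXPECT commands."""
    lines = [f"_REQ {host} SSL:443", f"_DEBUG {test['name']}"]
    lines.append(f"__{test['method'].upper()} {test['url']} HTTP/1.1")
    lines.append(f"__Host: {host}")
    lines.append("__Cookie: AUTO")
    lines.append("__")  # blank line terminating the request headers
    for exp in test.get("expect", []):
        lines.append(f"_EXPECT {exp['scope']} \"{exp['value']}\"")
    lines += ["_WAIT", "_CLOSE"]
    return "\n".join(lines)


print(to_htt(test))
```

The real tool additionally handles POST bodies, `_MATCH` variable capture, and the auto-referer logic.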
adaptfilt
Adaptfilt is an adaptive filtering module for Python. It includes simple, procedural implementations of the following filtering algorithms:

- Least-mean-squares (LMS) - including traditional and leaky filtering
- Normalized least-mean-squares (NLMS) - including traditional and leaky filtering with recursively updated input energy
- Affine projection (AP) - including traditional and leaky filtering

The algorithms are implemented using NumPy for computational efficiency. Further optimization has also been done, but it is very limited and only on the most computationally intensive parts of the source code. Future implementation of the following algorithms is currently planned:

- Recursive least squares (RLS)
- Steepest descent (SD)

Authors: Jesper Wramberg & Mathias Tausen
Version: 0.2
PyPI: https://pypi.python.org/pypi/adaptfilt
GitHub: https://github.com/Wramberg/adaptfilt
License: MIT

Installation

To install from PyPI using pip, simply run:

sudo pip install adaptfilt

Alternatively, the module can also be downloaded at https://pypi.python.org/pypi/adaptfilt or https://github.com/Wramberg/adaptfilt. The latter is also used for issue tracking. Note that adaptfilt requires NumPy to be installed (tested using version 1.9.0).

Usage

Once installed, the module should be available for import by calling:

import adaptfilt

Following the reference sections, examples are provided to show the module's functionality.

Function Reference

In this section, the functions provided by adaptfilt are described. The descriptions correspond to excerpts from the function docstrings and are only included here for convenience.

y, e, w = lms(u, d, M, step, leak=0., initCoeffs=None, N=None, returnCoeffs=False)

Perform least-mean-squares (LMS) adaptive filtering on u to minimize the error given by e=d-y, where y is the output of the adaptive filter.

Parameters u array-like One-dimensional filter input. d array-like One-dimensional desired signal, i.e., the output of the unknown FIR system which the adaptive filter should identify.
Must have length >= len(u), or N+M-1 if number of iterations are limited (via the N parameter).MintDesired number of filter taps (desired filter order + 1), must be non-negative.stepfloatStep size of the algorithm, must be non-negative.Optional ParametersleakfloatLeakage factor, must be equal to or greater than zero and smaller than one. When greater than zero a leaky LMS filter is used. Defaults to 0, i.e., no leakage.initCoeffsarray-likeInitial filter coefficients to use. Should match desired number of filter taps, defaults to zeros.NintNumber of iterations, must be less than or equal to len(u)-M+1 (default).returnCoeffsbooleanIf true, will return all filter coefficients for every iteration in an N x M matrix. Does not include the initial coefficients. If false, only the latest coefficients in a vector of length M is returned. Defaults to false.Returnsynumpy.arrayOutput values of LMS filter, array of length N.enumpy.arrayError signal, i.e, d-y. Array of length N.wnumpy.arrayFinal filter coefficients in array of length M if returnCoeffs is False. NxM array containing all filter coefficients for all iterations otherwise.RaisesTypeErrorIf number of filter taps M is not type integer, number of iterations N is not type integer, or leakage leak is not type float/int.ValueErrorIf number of iterations N is greater than len(u)-M, number of filter taps M is negative, or if step-size or leakage is outside specified range.y, e, w = nlmsru(u, d, M, step, eps=0.001, leak=0, initCoeffs=None, N=None, returnCoeffs=False)Same as nlms but updates input energy recursively for faster computation. 
Note that this can cause instability due to rounding errors.y, e, w = nlms(u, d, M, step, eps=0.001, leak=0, initCoeffs=None, N=None, returnCoeffs=False)Perform normalized least-mean-squares (NLMS) adaptive filtering on u to minimize error given by e=d-y, where y is the output of the adaptive filter.Parametersuarray-likeOne-dimensional filter input.darray-likeOne-dimensional desired signal, i.e., the output of the unknown FIR system which the adaptive filter should identify. Must have length >= len(u), or N+M-1 if number of iterations are limited (via the N parameter).MintDesired number of filter taps (desired filter order + 1), must be non-negative.stepfloatStep size of the algorithm, must be non-negative.Optional ParametersepsfloatRegularization factor to avoid numerical issues when power of input is close to zero. Defaults to 0.001. Must be non-negative.leakfloatLeakage factor, must be equal to or greater than zero and smaller than one. When greater than zero a leaky LMS filter is used. Defaults to 0, i.e., no leakage.initCoeffsarray-likeInitial filter coefficients to use. Should match desired number of filter taps, defaults to zeros.NintNumber of iterations to run. Must be less than or equal to len(u)-M+1. Defaults to len(u)-M+1.returnCoeffsbooleanIf true, will return all filter coefficients for every iteration in an N x M matrix. Does not include the initial coefficients. If false, only the latest coefficients in a vector of length M is returned. Defaults to false.Returnsynumpy.arrayOutput values of LMS filter, array of length N.enumpy.arrayError signal, i.e, d-y. Array of length N.wnumpy.arrayFinal filter coefficients in array of length M if returnCoeffs is False. 
NxM array containing all filter coefficients for all iterations otherwise.RaisesTypeErrorIf number of filter taps M is not type integer, number of iterations N is not type integer, or leakage leak is not type float/int.ValueErrorIf number of iterations N is greater than len(u)-M, number of filter taps M is negative, or if step-size or leakage is outside specified range.y, e, w = ap(u, d, M, step, K, eps=0.001, leak=0, initCoeffs=None, N=None, returnCoeffs=False)Perform affine projection (AP) adaptive filtering on u to minimize error given by e=d-y, where y is the output of the adaptive filter.Parametersuarray-likeOne-dimensional filter input.darray-likeOne-dimensional desired signal, i.e., the output of the unknown FIR system which the adaptive filter should identify. Must have length >= len(u), or N+M-1 if number of iterations are limited (via the N parameter).MintDesired number of filter taps (desired filter order + 1), must be non-negative.stepfloatStep size of the algorithm, must be non-negative.KintProjection order, must be integer larger than zero.Optional ParametersepsfloatRegularization factor to avoid numerical issues when power of input is close to zero. Defaults to 0.001. Must be non-negative.leakfloatLeakage factor, must be equal to or greater than zero and smaller than one. When greater than zero a leaky LMS filter is used. Defaults to 0, i.e., no leakage.initCoeffsarray-likeInitial filter coefficients to use. Should match desired number of filter taps, defaults to zeros.NintNumber of iterations to run. Must be less than or equal to len(u)-M+1. Defaults to len(u)-M+1.returnCoeffsbooleanIf true, will return all filter coefficients for every iteration in an N x M matrix. Does not include the initial coefficients. If false, only the latest coefficients in a vector of length M is returned. Defaults to false.Returnsynumpy.arrayOutput values of LMS filter, array of length N.enumpy.arrayError signal, i.e, d-y. 
Array of length N.wnumpy.arrayFinal filter coefficients in array of length M if returnCoeffs is False. NxM array containing all filter coefficients for all iterations otherwise.RaisesTypeErrorIf number of filter taps M is not type integer, number of iterations N is not type integer, or leakage leak is not type float/int.ValueErrorIf number of iterations N is greater than len(u)-M, number of filter taps M is negative, or if step-size or leakage is outside specified range.Helper Function Referencemswe = mswe(w, v)Calculate mean squared weight error between estimated and true filter coefficients, in respect to iterations.Parametersvarray-likeTrue coefficients used to generate desired signal, must be a one-dimensional array.warray-likeEstimated coefficients from adaptive filtering algorithm. Must be an N x M matrix where N is the number of iterations, and M is the number of filter coefficients.Returnsmswenumpy.arrayOne-dimensional array containing the mean-squared weight error for every iteration.RaisesTypeErrorIf inputs have wrong dimensionsNoteTo use this function with the adaptive filter functions set the optional parameter returnCoeffs to True. This will return a coefficient matrix w corresponding with the input-parameter w.ExamplesThe following examples illustrate the use of the adaptfilt module. Note that the matplotlib.pyplot module is required to run them.Acoustic echo cancellation""" Acoustic echo cancellation in white background noise with NLMS. Consider a scenario where two individuals, John and Emily, are talking over the Internet. John is using his loudspeakers, which means Emily can hear herself through John's microphone. The speech signal that Emily hears, is a distorted version of her own. This is caused by the acoustic path from John's loudspeakers to his microphone. This path includes attenuated echoes, etc. Now for the problem! Emily wishes to cancel the echo she hears from John's microphone. 
Emily only knows the speech signal she sends to him, call that u(n), and the speech signal she receives from him, call that d(n). To successfully remove her own echo from d(n), she must approximate the acoustic path from John's loudspeakers to his microphone. This path can be approximated by a FIR filter, which means an adaptive NLMS FIR filter can be used to identify it. The model which Emily uses to design this filter looks like this: u(n) ------->->------+----------->->----------- | | +-----------------+ +------------------+ +->-| Adaptive filter | | John's Room | | +-----------------+ +------------------+ | | -y(n) | | | d(n) | e(n) ---+---<-<------+-----------<-<----------+----<-<---- v(n) As seen, the signal that is sent to John is also used as input to the adaptive NLMS filter. The output of the filter, y(n), is subtracted from the signal received from John, which results in an error signal e(n) = d(n)-y(n). By feeding the error signal back to the adaptive filter, it can minimize the error by approximating the impulse response (that is the FIR filter coefficients) of John's room. Note that so far John's speech signal v(n) has not been taken into account. If John speaks, the error should equal his speech, that is, e(n) should equal v(n). For this simple example, however, we assume John is quiet and v(n) is equal to white Gaussian background noise with zero-mean. In the following example we keep the impulse response of John's room constant. This is not required, however, since the advantage of adaptive filters, is that they can be used to track changes in the impulse response. 
""" import numpy as np import matplotlib.pyplot as plt import adaptfilt as adf # Get u(n) - this is available on github or pypi in the examples folder u = np.load('speech.npy') # Generate received signal d(n) using randomly chosen coefficients coeffs = np.concatenate(([0.8], np.zeros(8), [-0.7], np.zeros(9), [0.5], np.zeros(11), [-0.3], np.zeros(3), [0.1], np.zeros(20), [-0.05])) d = np.convolve(u, coeffs) # Add background noise v = np.random.randn(len(d)) * np.sqrt(5000) d += v # Apply adaptive filter M = 100 # Number of filter taps in adaptive filter step = 0.1 # Step size y, e, w = adf.nlms(u, d, M, step, returnCoeffs=True) # Calculate mean square weight error mswe = adf.mswe(w, coeffs) # Plot speech signals plt.figure() plt.title("Speech signals") plt.plot(u, label="Emily's speech signal, u(n)") plt.plot(d, label="Speech signal from John, d(n)") plt.grid() plt.legend() plt.xlabel('Samples') # Plot error signal - note how the measurement noise affects the error plt.figure() plt.title('Error signal e(n)') plt.plot(e) plt.grid() plt.xlabel('Samples') # Plot mean squared weight error - note that the measurement noise causes the # error the increase at some points when Emily isn't speaking plt.figure() plt.title('Mean squared weight error') plt.plot(mswe) plt.grid() plt.xlabel('Samples') # Plot final coefficients versus real coefficients plt.figure() plt.title('Real coefficients vs. estimated coefficients') plt.plot(w[-1], 'g', label='Estimated coefficients') plt.plot(coeffs, 'b--', label='Real coefficients') plt.grid() plt.legend() plt.xlabel('Samples') plt.show()Convergence comparison""" Convergence comparison of different adaptive filtering algorithms (with different step sizes) in white Gaussian noise. 
""" import numpy as np import matplotlib.pyplot as plt import adaptfilt as adf # Generating input and desired signal N = 3000 coeffs = np.concatenate(([-4, 3.2], np.zeros(20), [0.7], np.zeros(33), [-0.1])) u = np.random.randn(N) d = np.convolve(u, coeffs) # Perform filtering M = 60 # No. of taps to estimate mu1 = 0.0008 # Step size 1 in LMS mu2 = 0.0004 # Step size 1 in LMS beta1 = 0.08 # Step size 2 in NLMS and AP beta2 = 0.04 # Step size 2 in NLMS and AP K = 3 # Projection order 1 in AP # LMS y_lms1, e_lms1, w_lms1 = adf.lms(u, d, M, mu1, returnCoeffs=True) y_lms2, e_lms2, w_lms2 = adf.lms(u, d, M, mu2, returnCoeffs=True) mswe_lms1 = adf.mswe(w_lms1, coeffs) mswe_lms2 = adf.mswe(w_lms2, coeffs) # NLMS y_nlms1, e_nlms1, w_nlms1 = adf.nlms(u, d, M, beta1, returnCoeffs=True) y_nlms2, e_nlms2, w_nlms2 = adf.nlms(u, d, M, beta2, returnCoeffs=True) mswe_nlms1 = adf.mswe(w_nlms1, coeffs) mswe_nlms2 = adf.mswe(w_nlms2, coeffs) # AP y_ap1, e_ap1, w_ap1 = adf.ap(u, d, M, beta1, K, returnCoeffs=True) y_ap2, e_ap2, w_ap2 = adf.ap(u, d, M, beta2, K, returnCoeffs=True) mswe_ap1 = adf.mswe(w_ap1, coeffs) mswe_ap2 = adf.mswe(w_ap2, coeffs) # Plot results plt.figure() plt.title('Convergence comparison of different adaptive filtering algorithms') plt.plot(mswe_lms1, 'b', label='LMS with stepsize=%.4f' % mu1) plt.plot(mswe_lms2, 'b--', label='LMS with stepsize=%.4f' % mu2) plt.plot(mswe_nlms1, 'g', label='NLMS with stepsize=%.2f' % beta1) plt.plot(mswe_nlms2, 'g--', label='NLMS with stepsize=%.2f' % beta2) plt.plot(mswe_ap1, 'r', label='AP with stepsize=%.2f' % beta1) plt.plot(mswe_ap2, 'r--', label='AP with stepsize=%.2f' % beta2) plt.legend() plt.grid() plt.xlabel('Iterations') plt.ylabel('Mean-squared weight error') plt.show()Release History0.2Included NLMS filtering function with recursive updates of input energy.Included acoustic echo cancellation example0.1Initial module with LMS, NLMS and AP filtering functions.
adapt-fw
ADAPT

ADAPT stands for ADAptive Picking Toolbox. This package is a library intended for seismologists and scientists who approach seismic phase picking analysis. It aims to ease the creation of seismic catalogues by means of a semi-automated, offline, multi-phase repicking system.

AUTHOR: Matteo Bagagli
VERSION: 0.0.1
DATE: 06.2021 @ ETH-Zurich

Setup

More info coming soon. The software is currently under submission to a peer-reviewed journal. In the meantime, a useful magnitude calculation script/module is distributed, with several functions for local magnitude (ML) calculation [Richter 1935]. For the moment just type:

$ pip install adapt

References

Richter, Charles F. "An instrumental earthquake magnitude scale." Bulletin of the Seismological Society of America 25.1 (1935): 1-32.
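A local magnitude of the kind the distributed module computes can be sketched with the widely used Hutton & Boore (1987) -log10(A0) calibration for Richter's ML. This is a generic sketch under that assumed calibration; the actual functions and calibrations shipped with adapt may differ.

```python
# Generic local-magnitude (ML) sketch using the Hutton & Boore (1987)
# -log10(A0) attenuation calibration; adapt's own implementation and
# calibration may differ.
import math


def local_magnitude(amp_mm, hypo_dist_km):
    """ML from a Wood-Anderson amplitude (mm) and hypocentral distance (km)."""
    log_a0 = (1.110 * math.log10(hypo_dist_km / 100.0)
              + 0.00189 * (hypo_dist_km - 100.0) + 3.0)
    return math.log10(amp_mm) + log_a0


# Richter's anchor point: 1 mm on a Wood-Anderson at 100 km -> ML 3.0
print(local_magnitude(1.0, 100.0))  # 3.0
```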
adaptfx
The adaptfx package

Content

- About
- Installation
- Package Structure
- Description
- 2D Algorithms
- 3D Algorithms
- GUI
- Probability Updating
- Additional Data
- Extended Functionality
- Troubleshooting

About

adaptfx is a Python package to calculate adaptive fractionation schemes. Using magnetic resonance (MR) guidance in radiotherapy, treatment plans can be adapted daily to a patient's geometry, thereby exploiting inter-fractional motion of tumors and organs at risk (OAR). This can improve OAR sparing or tumor coverage, compared to standard fractionation schemes, which simply apply a predefined dose every time.

For this adaptive approach a reinforcement learning algorithm based on dynamic programming was initially developed by Pérez Haas et al. [1]. The package is actively maintained and frequently extended as part of our ongoing research on the topic.

Installation

It is recommended to create a virtual environment using the venv module:

$ python3.10 -m venv adaptfx_env

activate the virtual environment

$ cd adaptfx_env
$ source bin/activate

To install the adaptfx package, use either of the methods below.

Method 1: pip

$ pip install adaptfx

Method 2: install from source

$ git clone https://github.com/openAFT/adaptfx.git
$ cd adaptfx
$ pip3 install .

The command line tool (CLI) is then available and can be used via

$ aft [options] <instructions_file>

For more information on the usage of the CLI, read the manual. The user can also decide to use the scripts from reinforce in their own Python scripts, e.g.

import adaptfx as afx
plan_output = afx.multiple('oar', keys)

adaptfx also provides a GUI. However, it depends on Tkinter. It often comes installed, but if not you can find the relevant installation instructions here. E.g. in Python on Ubuntu, you would install it via

$ sudo apt install python3-tk

Package Structure

The package is organized under the src folder. All relevant scripts that calculate the fractionation schemes are packed as functions in either reinforce.py or reinforce_old.py.
reinforce.py holds the newest functions, supporting more features and faster calculation. Older functions are also integrated with the CLI, but need to be updated.

adaptfx
├── src/adaptfx
│   ├── aft_propmt.py
│   ├── aft_utils.py
│   ├── aft.py
│   ├── constants.py
│   ├── maths.py
│   ├── planning.py
│   ├── radiobiology.py
│   ├── reinforce_old.py
│   ├── reinforce.py
│   └── visualiser.py
└── work

Description

The 2D algorithms

The function max_tumor_bed_old globally tracks OAR BED to satisfy constraints on the dose to the normal tissue, while attempting to maximize the BED delivered to the tumor. min_oar_bed and min_oar_bed_old, on the other hand, track tumor BED to achieve the tumor dose target and, in doing so, minimize the cumulative OAR BED.

Since the state spaces for these two algorithms are essentially two-dimensional, they are the faster algorithms. But they may overshoot w.r.t. the dose delivered to the tumor/OAR: since only one structure's BED can be tracked, one has to decide whether reaching the prescribed tumor dose or staying below the maximum OAR BED is more relevant. Generally, OAR tracking is better suited for patients with anatomies where the OAR and tumor are close to each other and the prescribed dose may not be reached. When the OAR and tumor are farther apart, tracking the tumor BED and minimizing OAR BED can lead to reduced toxicity while achieving the same treatment goals.

frac_min defines the function to track OAR BED and minimize the number of fractions in cases where an exceptionally low sparing factor appears during the course of a treatment.

The 3D algorithms

The 3D algorithms, in the function min_oar_max_tumor_old, track OAR BED and tumor BED simultaneously. In this version a prescribed tumor dose must be provided alongside an OAR BED constraint.
The algorithm then smartly optimizes for a low OAR BED and a high tumor BED at the same time, while never compromising the OAR constraints and always preferring to reduce normal tissue dose when achieving the treatment objectives.

The algorithms are based on an inverse-gamma prior distribution. To set up this distribution, a dataset with prior patient data (sparing factors) from the same population is needed. There is a function to calculate the hyperparameters of the inverse-gamma distribution, but there is also the option to use a fixed probability distribution for the sparing factors. In this case, the probability distribution must be provided with a mean and a standard deviation, and it is not updated as more information becomes available. To check how the hyperparameters influence the prior distribution, the Inverse_gamma_distribution_preview.py file has been included, which allows direct modelling of the distribution.

GUI

A last addition are graphical user interfaces that facilitate the use of the interpolation algorithms. There are two interfaces that can be run. In these interfaces all variables can be given to compute an adaptive fractionation plan for a patient.

:warning: Note: The interfaces are not optimized, and thus it is not recommended to use them to further develop extensions.

Reducing Number of Fractions

For the 2D algorithms there exists the possibility to reduce the number of fractions. A constant $c$ can be chosen which introduces a reward (or rather a cost) linear in the number of fractions used to finish the treatment. The cost is added to the immediate reward returned by the environment in the current fraction. A simulative model helps to estimate how the constant $c$ should be chosen in order for the treatment to finish on some target number of fractions $n_{\text{targ}}$.
The function can be found here in c_calc.

Probability Updating

The DP algorithm relies on a description of the environment to compute an optimal policy, in this case the probability distribution of the sparing factor $P(\delta)$, which we assume to be a Gaussian distribution truncated at $0$, with patient-specific parameters for mean and standard deviation. At the start of a treatment, only two sparing factors are available for that patient, from the planning scan and the first fraction. In each fraction, an additional sparing factor is measured, which can be used to calculate updated estimates $\mu_t$ and $\sigma_t$ for mean and standard deviation, respectively.

No Updating

In the case where the probability distribution is not updated, the parameters $\mu_t$ and $\sigma_t$ of the normal distribution can be fixed.

Maximum a posteriori estimation

In each fraction $t$, a maximum likelihood estimator of the mean of the sparing factor distribution and an estimator for the standard deviation (following a chi-squared distribution) are used. Both estimators are used to constitute the updated normal distribution in fraction $t$. However, the standard deviation may be severely under- or overestimated if calculated from only two samples at the very beginning of the treatment. Therefore, we assume a population-based prior for the standard deviation and compute the maximum a posteriori estimator of $\sigma_t$ via Bayesian inference. As the sparing factors are assumed to follow a normal distribution with unknown variance, a gamma distribution is chosen as prior to estimate the standard deviation $\sigma$.

Posterior predictive distribution

Apart from using a gamma prior for the standard deviation, a full Bayesian approach can be employed with an inverse-gamma distribution as a conjugate prior for the variance. The resulting posterior predictive distribution is a Student t-distribution.
With this approach, instead of using the gamma prior for estimation, the probability distribution is estimated from an updated t-distribution. The results are slightly different compared to the maximum a posteriori estimation.

Additional Data

The two additional folders (`DVH_figures`, `Patientdata_paper`) contain the DVH data and figures of the 10 patients that were included in the paper.

Extended Functionality

The algorithms allow choosing some extra parameters to specify additional constraints. The suggested parameters are specified for a 5-fraction SBRT plan where there are no constraints on the maximum or minimum dose:

- Choose the number of fractions. Instead of just calculating for the case of a 5-fraction SBRT treatment, the number of fractions can be chosen freely (e.g. 30 fractions).
- Fix a minimum and maximum dose: limits the action space by forcing a minimum and maximum dose for each fraction (e.g. 4-16 Gy).
- Calculate the optimal fraction size by tracking tumor BED: the 2D GUI has an additional extension where one can optimize the dose based on the prescribed tumor dose. (E.g., the clinician prescribes a tumor BED of 72 Gy. The program will try to minimize the OAR BED while aiming at the 72 Gy BED prescribed dose.)

Troubleshooting

No module named `_ctypes` on install

Problem: on Linux distributions it can happen that the `pip install .` command fails with the message:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/some/module", line 10, in <module>
    import ctypes
  File "/usr/local/lib/python3.10/ctypes/__init__.py", line 7, in <module>
    from _ctypes import Union, Structure, Array
ImportError: No module named '_ctypes'
```

Solution: with the specific package manager of the Linux distribution, install the `libffi-dev` development package. E.g.
in Fedora Linux and derivatives install this tool with:

```
$ sudo dnf install libffi-devel
```

On Ubuntu:

```
$ sudo apt install libffi-dev
```

No GUI backend for `matplotlib`

Problem: on Linux or macOS it can happen that once `aft` is run the plots are not shown and there is an error message:

```
Collecting tkinter
Could not find a version that satisfies the requirement tkinter (from versions: )
No matching distribution found for tkinter
```

Solution: on Fedora Linux and derivative distributions one could solve this by installing Python tkinter:

```
$ sudo dnf install python3-tkinter
```

on Ubuntu:

```
$ sudo apt-get install python3-tk
```

Solution: on macOS and Linux one could instead use `pip` to install `pyqt`:

```
$ pip install pyqt5
```

References

[1] Yoel Samuel Pérez Haas et al.; Adaptive fractionation at the MR-linac, Physics in Medicine & Biology, Jan. 2023, doi: https://doi.org/10.1088/1361-6560/acafd4
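For illustration, the maximum a posteriori update of the sparing-factor standard deviation described under "Probability Updating" could be sketched as follows. This is a simplified sketch, not the package's actual code; the gamma-prior hyperparameters (`prior_shape`, `prior_scale`) are made up for the example, and the posterior is maximized by a simple grid search.

```python
# Simplified illustration of the MAP update for the sparing-factor standard
# deviation sigma -- NOT the package's actual code. A gamma prior on sigma is
# combined with the normal likelihood of the observed sparing factors, and the
# grid point maximizing the log-posterior is returned. The prior
# hyperparameters below are invented for this example.
import math

def map_sigma(sparing_factors, prior_shape=2.0, prior_scale=0.1):
    n = len(sparing_factors)
    mu = sum(sparing_factors) / n  # maximum likelihood estimate of the mean

    def log_posterior(sigma):
        # log gamma prior (up to an additive constant) + log normal likelihood
        log_prior = (prior_shape - 1) * math.log(sigma) - sigma / prior_scale
        log_lik = -n * math.log(sigma) - sum(
            (x - mu) ** 2 for x in sparing_factors) / (2 * sigma ** 2)
        return log_prior + log_lik

    grid = [0.001 + i * 0.001 for i in range(500)]  # sigma in (0, 0.5]
    return max(grid, key=log_posterior)

# With only two sparing factors at treatment start, the raw sample standard
# deviation would be tiny; the prior pulls the estimate toward more
# population-typical values.
print(map_sigma([0.98, 0.99]))
```

The same idea carries over to the full Bayesian variant: with an inverse-gamma prior on the variance, the posterior predictive t-distribution can be written down in closed form instead of maximizing a grid.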
adaptgym
No description available on PyPI.
adaptisc
# Adapt

FPGA programmer demonstrating how to use the ProteusISC library.

## Installation / Setup

```
sudo pip install .
```

For additional information on setting up supported JTAG controllers, check the gh-pages above.
adaptive
Adaptive: Parallel Active Learning of Mathematical Functions :brain::1234:

Adaptive is an open-source Python library that streamlines adaptive parallel function evaluations. Rather than calculating all points on a dense grid, it intelligently selects the "best" points in the parameter space based on your provided function and bounds. With minimal code, you can perform evaluations on a computing cluster, display live plots, and optimize the adaptive sampling algorithm.

Adaptive is most efficient for computations where each function evaluation takes at least ≈50 ms, due to the overhead of selecting potentially interesting points.

To see Adaptive in action, try the example notebook on Binder or explore the tutorial on Read the Docs.

[ToC] 📚

- :star: Key features
- :rocket: Example usage
- :floppy_disk: Exporting Data
- :test_tube: Implemented Algorithms
- :package: Installation
- :wrench: Development
- :books: Citing
- :page_facing_up: Draft Paper
- :sparkles: Credits

:star: Key features

- 🎯 Intelligent Adaptive Sampling: Adaptive focuses on areas of interest within a function, ensuring better results with fewer evaluations, saving time and computational resources.
- ⚡ Parallel Execution: The library leverages parallel processing for faster function evaluations, making optimal use of available computational resources.
- 📊 Live Plotting and Info Widgets: When working in Jupyter notebooks, Adaptive offers real-time visualization of the learning process, making it easier to monitor progress and identify areas of improvement.
- 🔧 Customizable Loss Functions: Adaptive supports various loss functions and allows customization, enabling users to tailor the learning process according to their specific needs.
- 📈 Support for Multidimensional Functions: The library can handle functions with scalar or vector outputs in one or multiple dimensions, providing flexibility for a wide range of problems.
- 🧩 Seamless Integration: Adaptive offers a simple and intuitive interface, making it easy to integrate with existing Python projects and
workflows.
- 💾 Flexible Data Export: The library provides options to export learned data as NumPy arrays or Pandas DataFrames, ensuring compatibility with various data processing tools.
- 🌐 Open-Source and Community-Driven: Adaptive is an open-source project, encouraging contributions from the community to continuously improve and expand the library's features and capabilities.

:rocket: Example usage

Adaptively learning a 1D function and live-plotting the process in a Jupyter notebook:

```python
from adaptive import notebook_extension, Runner, Learner1D

notebook_extension()

def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)

learner = Learner1D(peak, bounds=(-1, 1))
runner = Runner(learner, loss_goal=0.01)
runner.live_info()
runner.live_plot()
```

:floppy_disk: Exporting Data

You can export the learned data as a NumPy array:

```python
data = learner.to_numpy()
```

If you have Pandas installed, you can also export the data as a DataFrame:

```python
df = learner.to_dataframe()
```

:test_tube: Implemented Algorithms

The core concept in adaptive is the learner. A learner samples a function at the most interesting locations within its parameter space, allowing for optimal sampling of the function. As the function is evaluated at more points, the learner improves its understanding of the best locations to sample next.

The definition of the "best locations" depends on your application domain.
While adaptive provides sensible default choices, the adaptive sampling process can be fully customized.

The following learners are implemented:

- Learner1D: for 1D functions f: ℝ → ℝ^N,
- Learner2D: for 2D functions f: ℝ^2 → ℝ^N,
- LearnerND: for ND functions f: ℝ^N → ℝ^M,
- AverageLearner: for random variables, allowing averaging of results over multiple evaluations,
- AverageLearner1D: for stochastic 1D functions, estimating the mean value at each point,
- IntegratorLearner: for integrating a 1D function f: ℝ → ℝ,
- BalancingLearner: for running multiple learners simultaneously and selecting the "best" one as more points are gathered.

Meta-learners (to be used with other learners):

- BalancingLearner: for running several learners at once, selecting the "most optimal" one each time you get more points,
- DataSaver: for when your function doesn't return just a scalar or a vector.

In addition to learners, adaptive offers primitives for parallel sampling across multiple cores or machines, with built-in support for: concurrent.futures, mpi4py, loky, ipyparallel, and distributed.

:package: Installation

adaptive works with Python 3.7 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.

The recommended way to install adaptive is using conda:

```
conda install -c conda-forge adaptive
```

adaptive is also available on PyPI:

```
pip install "adaptive[notebook]"
```

The [notebook] extra above will also install the optional dependencies for running adaptive inside a Jupyter notebook.

To use Adaptive in JupyterLab, you need to install the following labextensions:

```
jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install @pyviz/jupyterlab_pyviz
```

:wrench: Development

Clone the repository and run pip install -e ".[notebook,testing,other]" to add a link to the cloned repo into your Python path:

```
git clone git@github.com:python-adaptive/adaptive.git
cd adaptive
pip install -e ".[notebook,testing,other]"
```

We recommend using a Conda environment or a virtualenv for package management during
Adaptive development.

To avoid polluting the history with notebook output, set up the git filter by running:

```
python ipynb_filter.py
```

in the repository.

To maintain a consistent code style, we use pre-commit. Install it by running:

```
pre-commit install
```

in the repository.

:books: Citing

If you used Adaptive in a scientific work, please cite it as follows.

```
@misc{Nijholt2019,
  doi = {10.5281/zenodo.1182437},
  author = {Bas Nijholt and Joseph Weston and Jorn Hoofwijk and Anton Akhmerov},
  title = {\textit{Adaptive}: parallel active learning of mathematical functions},
  publisher = {Zenodo},
  year = {2019}
}
```

:page_facing_up: Draft Paper

If you're interested in the scientific background and principles behind Adaptive, we recommend taking a look at the draft paper that is currently being written. This paper provides a comprehensive overview of the concepts, algorithms, and applications of the Adaptive library.

:sparkles: Credits

We would like to give credit to the following people:

- Pedro Gonnet for his implementation of CQUAD, "Algorithm 4" as described in "Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants", P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
- Pauli Virtanen for his AdaptiveTriSampling script (no longer available online since SciPy Central went down), which served as inspiration for adaptive.Learner2D.

For general discussion, we have a Gitter chat channel. If you find any bugs or have any feature suggestions, please file a GitHub issue or submit a pull request.
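To illustrate the core idea behind a 1D learner, the following self-contained sketch picks the interval with the largest "loss" (here simply the Euclidean distance between neighboring points, which favors steep or curved regions) and bisects it. This is an illustration of the concept only, not the library's actual implementation, and the loss choice is deliberately simplistic.

```python
# Conceptual sketch of loss-based 1D adaptive sampling -- NOT the library's
# actual implementation: repeatedly evaluate the function at the midpoint of
# the interval whose endpoints are farthest apart in (x, y).
import math

def sample_adaptively(f, bounds, n_points):
    xs = [bounds[0], bounds[1]]
    ys = [f(xs[0]), f(xs[1])]
    while len(xs) < n_points:
        # "loss" of each interval: distance between its endpoints in (x, y)
        losses = [math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                  for i in range(len(xs) - 1)]
        i = losses.index(max(losses))    # most "interesting" interval
        x_new = (xs[i] + xs[i + 1]) / 2  # bisect it
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys

def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)

xs, ys = sample_adaptively(peak, (-1, 1), 50)
# Points cluster around the narrow peak at x = 0:
near_peak = sum(1 for x in xs if abs(x) < 0.1)
print(f"{near_peak} of {len(xs)} points lie within |x| < 0.1")
```

Unlike a uniform grid of 50 points (of which only a handful would fall inside the narrow peak), this strategy concentrates evaluations where the function changes fastest, which is the behavior Learner1D automates and parallelizes.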
adaptiveagatepy
Early Release Phase - November 2020

This platform is in an early release phase; if you are comfortable working with early-release software and would like access, please email [email protected].

Adaptive Agate is a new, modern platform designed by Adaptive Biotechnologies to share data with our immunoSEQ® research customers and assist with downstream analysis. The platform consists of four components:

- A cloud-based SQL Server database containing metadata about samples, such as the total number of templates found, and sample-level metrics such as clonality.
- Per-sample files containing detailed sequence-level information, stored in and accessed via Azure Blob Storage.
- A set of REST functions providing common algorithms and analyses.
- An easily-installed Python library that provides simple access to all of the above.

Access to these resources is secured using Microsoft Azure Active Directory credentials. Customers can associate their existing email addresses with Agate accounts and, in many cases, use existing enterprise or Microsoft credentials.

While we expect most users will use Python to access Agate data and functions, any tool or development environment that can access SQL, Azure Blob and HTTPS resources using Active Directory authentication is supported. For example, we supply a Microsoft Excel template (Windows only) that can automatically download data into worksheets and pivot tables for direct analysis in that environment. For more information, visit the Agate documentation website at www.adaptiveagate.com.

How to get the Adaptive Agate Python SDK

Packages for the Adaptive Agate Python SDK (adaptiveagatepy) are available from PyPI and Anaconda using the mechanisms below.

To install from PyPI using pip:

.. code-block::

    pip install adaptiveagatepy

To install from Anaconda using conda:

.. code-block::

    conda install -c adaptivebiotech adaptiveagatepy

immunoSEQ is for research use only and not for use in diagnostic procedures.

Copyright 2020 Adaptive Biotechnologies, All Rights Reserved.
adaptive_binning_chisquared_2sam
UNKNOWN
adaptive-boxes
Adaptive-Boxes

Python library for rectangular decomposition of 2D binary images.

See the CUDA GPU version: adaptive-boxes-gpu

Quick Start

Install adabox from PyPI:

```
pip install adaptive-boxes
```

Import the adaptive-boxes library:

```python
from adabox import proc
from adabox.plot_tools import plot_rectangles, plot_rectangles_only_lines
```

Import the other required libraries too:

```python
import numpy as np
import matplotlib.pyplot as plt
```

Load data in `.csv` format. The file should contain data with columns `[x1_position x2_position flag]`. Initially, `flag = 0` (see the `sample_data` folder).

```python
# Input Path
in_path = './sample_data/sample_2.csv'

# Load demo data with columns [x_position y_position flag]
data_2d = np.loadtxt(in_path, delimiter=",")
```

If you want to see the data, plot it using:

```python
# Plot demo data
plt.scatter(data_2d[:, 0], data_2d[:, 1])
plt.axis('scaled')
```

Decompose the data into rectangles; this returns a list of rectangles and a separation value needed to plot them.

```python
rectangles = []
# Number of random searches, more is better!
searches = 2
(rectangles, sep_value) = proc.decompose(data_2d, searches)
print('Number of rectangles found: ' + str(len(rectangles)))
```

Plot the resulting rectangles:

```python
plot_rectangles(rectangles, sep_value)
```

or

```python
plot_rectangles_only_lines(rectangles, sep_value)
```

Output

Adabox applied over the files in `./sample_data/`. Click the images to expand.

Hi-res images:

- File: sample_1.csv
- File: sample_2.csv

Repo Content

Each folder contains the following:

- data: files with voxel information in Blender (`.ply` extension)
- proto: prototype scripts
- results: results of the heuristic process (`.json` extension)
- lib: library scripts

More info
adaptivebridge
Project Name: AdaptiveBridge
License: MIT License
Author: Netanel Eliav
Author Website: https://inetanel.com
Author Email: [email protected]
Documentation: Click Here
Issue Tracker: Click Here

Overview

AdaptiveBridge is a revolutionary adaptive modeling approach for machine learning applications, particularly in the realm of Artificial Intelligence. It tackles a common challenge in AI projects: handling missing features in real-world scenarios. Machine learning models are often trained on specific features, but when deployed, users may not have access to all of those features for predictions. AdaptiveBridge bridges this gap by enabling models to intelligently predict and fill in missing features, similar to how humans handle incomplete data. This ensures that AI models can seamlessly manage missing data and features while providing accurate predictions.

Key Features

- Missing Feature Prediction: AdaptiveBridge empowers AI models to predict and fill in missing features based on the available data.
- Feature Selection for Mapping: You can influence the feature prediction methods by using configurable thresholds for importance, correlation, and accuracy.
- Adaptive Modeling: Utilize machine learning models to predict missing features, maintaining high prediction accuracy even with incomplete data.
- Custom Accuracy Logic: Define your own accuracy calculation logic to fine-tune feature selection.
- Feature Distribution Handling: Automatically determine the best method for handling feature distribution based on data characteristics.
- Dependency Management: Identify mandatory, deviation, and leveled features to optimize AI model performance.

Usage

With AdaptiveBridge, integrating this powerful tool into your AI and machine learning pipelines is easy. Fit the class to your data, and let it handle missing features intelligently.
Detailed comments and comprehensive documentation are provided for straightforward implementation.

Getting Started

Follow these steps to get started with AdaptiveBridge:

```
pip install adaptivebridge

# Alternatively, clone this repository:
git clone https://github.com/inetanel/adaptivebridge.git
pip install -r requirements.txt
```

Dependencies

- Sklearn
- Scipy
- NumPy
- Pandas
- Distfit
- Matplotlib
- Pytest (Production Dependency)
- Tqdm

Contribution

Contributions and feedback are highly encouraged. You can open issues, submit pull requests for enhancements or bug fixes, and be part of the AI community that advances AdaptiveBridge.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Disclaimer

This code is provided as-is, without any warranties or guarantees. Please use it responsibly and review the documentation for usage instructions and best practices.
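To make the missing-feature idea concrete, here is a generic, dependency-free illustration of the underlying concept — this is not AdaptiveBridge's API, just the basic technique of learning how one feature depends on the others from complete training data (here via hand-solved least squares) and filling it in at prediction time:

```python
# Generic illustration of missing-feature prediction -- NOT AdaptiveBridge's
# API. We learn a linear relationship x2 ~ 2*x0 + 3*x1 from complete training
# data, then reconstruct x2 when a prediction request omits it.
import random

random.seed(0)

# Complete training data: feature x2 is (almost exactly) 2*x0 + 3*x1
train = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
targets = [2 * x0 + 3 * x1 + random.gauss(0, 0.01) for x0, x1 in train]

# Solve the 2x2 normal equations of least squares by hand
s00 = sum(x0 * x0 for x0, _ in train)
s01 = sum(x0 * x1 for x0, x1 in train)
s11 = sum(x1 * x1 for _, x1 in train)
t0 = sum(x0 * y for (x0, _), y in zip(train, targets))
t1 = sum(x1 * y for (_, x1), y in zip(train, targets))
det = s00 * s11 - s01 * s01
a = (t0 * s11 - t1 * s01) / det
b = (s00 * t1 - s01 * t0) / det

# At prediction time only x0 and x1 are available; bridge the gap for x2
x0, x1 = 1.0, 2.0
x2_filled = a * x0 + b * x1
print(round(x2_filled, 2))  # close to 2*1 + 3*2 = 8
```

AdaptiveBridge generalizes this idea with configurable importance, correlation, and accuracy thresholds, and with distribution-based fallbacks when a regression mapping is not reliable.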
adaptivecard
A package that helps you design adaptive cards in an object-oriented manner.
adaptivecardbuilder
Python Adaptive Card Builder

Easily build and export multilingual Adaptive Cards through Python:

- Programmatically construct adaptive cards like Lego, without the learning curve of Adaptive Card 'Templating'
- Avoid the curly-braces jungle of traditional JSON editing
- Build pythonically, but with minimal abstraction while preserving readability
- Output built cards to JSON or a Python Dictionary in a single method call
- Auto-translate all text elements in a card with a single method call
- Combine multiple individual cards through the + operator

View this package on PyPI: https://pypi.org/project/adaptivecardbuilder/

Installation via pip:

```
pip install adaptivecardbuilder
```

Learn about Adaptive Cards:

- Home Page: https://adaptivecards.io/
- Adaptive Card Designer: https://adaptivecards.io/designer/
- Schema Explorer: https://adaptivecards.io/explorer/
- Documentation: https://docs.microsoft.com/en-us/adaptive-cards/

Adaptive Card Builder "Hello World":

```python
from adaptivecardbuilder import *

# initialize card
card = AdaptiveCard()

# Add a textblock
card.add(TextBlock(text="0.45 miles away", separator="true", spacing="large"))

# add column set
card.add(ColumnSet())

# First column contents
card.add(Column(width=2))
card.add(TextBlock(text="BANK OF LINGFIELD BRANCH"))
card.add(TextBlock(text="NE Branch", size="ExtraLarge", weight="Bolder"))
card.add(TextBlock(text="4.2 stars", isSubtle=True, spacing="None"))
card.add(TextBlock(text=f"Some review text for illustration", size="Small"))

# Back up to column set
card.up_one_level()

# Second column contents
card.add(Column(width=1))
card.add(Image(url="https://s17026.pcdn.co/wp-content/uploads/sites/9/2018/08/Business-bank-account-e1534519443766.jpeg"))

# Serialize to a json payload with a one-liner
await card.to_json()
```

Output when rendered in https://adaptivecards.io/visualizer/:

A "Visual" Alternative

The AdaptiveCard class also supports a more visual approach to building cards by passing a list of elements to the add() method instead. This allows us to freely indent our code within the method call and better
illustrate card structure. When using this visual alternative approach to building cards, we can use specific strings to execute logic:

- Strings containing "<" move us up/back a level in the tree
- Strings containing "^" will move us back to the top of the tree

```python
card = AdaptiveCard()

# Add a list of elements
card.add([
    TextBlock("Top Level"),
    ColumnSet(),
        Column(),
            TextBlock("Column 1 Top Item"),
            TextBlock("Column 1 Second Item"),
            "<",
        Column(),
            TextBlock("Column 2 Top Item"),
            TextBlock("Column 2 Second Item"),
            "<",
        "<",
    TextBlock("Lowest Level"),
    ActionOpenUrl(title="View Website", url="someurl.com"),
    ActionShowCard(title="Click to Comment"),
        InputText(ID="comment", placeholder="Type Here"),
        ActionSubmit(title="Submit Comment")
])

await card.to_json()
```

Output when rendered in https://adaptivecards.io/visualizer/:

Combining/Chaining Cards

We can also combine the contents of multiple cards through the + operator:

```python
def create_single_card(input_text_id: int):
    card = AdaptiveCard()
    card.add([
        TextBlock("Top Level"),
        ColumnSet(),
            Column(),
                TextBlock("Column 1 Top Item"),
                TextBlock("Column 1 Second Item"),
                "<",
            Column(),
                TextBlock("Column 2 Top Item"),
                TextBlock("Column 2 Second Item"),
                "<",
            "<",
        TextBlock("Lowest Level"),
        ActionOpenUrl(title="View Website", url="someurl.com"),
        ActionShowCard(title="Click to Comment"),
            InputText(ID=f"comment_{input_text_id}", placeholder="Type Here"),
            ActionSubmit(title="Submit Comment")
    ])
    return card

# Use above function to create cards
card1 = create_single_card(1)
card2 = create_single_card(2)

# Add the contents of card1 and card2
combined_card = card1 + card2

await combined_card.to_json()
```

Output when rendered in https://adaptivecards.io/visualizer/:

To preserve the intra-card ordering of elements, AdaptiveCardBuilder moves all actions in the outermost action container of each card into their bodies by placing them in ActionSets instead.
Each constituent card's actions are therefore attached to the appropriate portion of the combined card.

The combine_adaptive_cards function can also be used to combine a list of adaptive cards together, in a left-to-right fashion. The following code essentially produces the same result as the code above, except that an arbitrary-length list of cards can now be passed:

```python
card1 = create_single_card(1)
card2 = create_single_card(2)
card3 = create_single_card(3)

# Add the contents of all above cards
combined_card = combine_adaptive_cards([card1, card2, card3])

await combined_card.to_json()
```

Translating Card Elements

Passing translator arguments to the to_json() method will translate cards. Using the example above, we can translate the created card in the same method call.

To view a list of supported languages and language codes, go to: https://docs.microsoft.com/en-us/azure/cognitive-services/translator/language-support

```python
# Translate all text in card to Malay
await card.to_json(translator_to_lang='ms', translator_key='<YOUR AZURE API KEY>')
```

If any translator_to_lang argument is passed, translation will apply to all elements with translatable text attributes. To specify that a given Adaptive element should not be translated, simply pass the keyworded argument dont_translate=True during the construction of any element, and AdaptiveCardBuilder will leave this specific element untranslated.

Concepts

The AdaptiveCard class centrally handles all construction & element-addition operations:

```python
from adaptivecardbuilder import *

card = AdaptiveCard()  # initialize

# Structure:
# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body=[]
# |  |--Actions=[]

card.add(TextBlock(text="Header", weight="Bolder"))
card.add(TextBlock(text="Subheader"))
card.add(TextBlock(text="*Quote*", isSubtle="true"))

# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |  |--Actions

card_json = await card.to_json()  # output to json
```

When rendered:

Each individual adaptive object (e.g.
TextBlock, Column) is implemented as a class. These are simply Python object representations of the standard Adaptive Card elements that take keyworded arguments as parameters. View the Schema Explorer at https://adaptivecards.io/explorer/ to see which keyword arguments each Adaptive Object is allowed to take.

```python
TextBlock(text="Header", weight="Bolder")

# Internal representation
>>> {
    "type": "TextBlock",
    "text": "Header",
    "weight": "Bolder"
}
```

Pointer Logic

Central to the AdaptiveCard class is an internal _pointer attribute. When we add an element to the card, the element is by default added to the item container of whichever object is being pointed at.

Conceptually, an adaptive object (e.g. Column, Container) can have up to two kinds of containers (Python lists):

- Item containers (these hold non-interactive elements like TextBlocks, Images)
- Action containers (these hold interactive actions like ActionShowUrl, ActionSubmit)

For instance:

- AdaptiveCard objects have both item (body=[]) and action (actions=[]) containers
- ColumnSet objects have a single item (columns=[]) container
- Column objects have a single item (items=[]) container
- ActionSet objects have a single action (actions=[]) container

The card.add() method will add a given Adaptive object to the appropriate container. For instance, if an Action-type object is passed, such as an ActionSubmit or ActionOpenUrl, then this will be added to the parent object's action container. If the parent object does not have the appropriate container for the element being added, then this will throw an AssertionError and a corresponding suggestion.

Recursing Into an Added Element

When adding elements that can themselves contain other elements (e.g.
column sets and columns), the pointer will by default recurse into the added element, so that any elements added thereafter will go straight into the added element's container (making our code less verbose). This is essentially a depth-first approach to building cards:

```python
card = AdaptiveCard()

# |--Card  <- Pointer
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body=[]
# |  |--Actions=[]

card.add(TextBlock(text="Header", weight="Bolder"))

# |--Card  <- Pointer
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock  <- added
# |  |--Actions

card.add(TextBlock(text="Subheader"))
card.add(TextBlock(text="*Quote*", isSubtle="true"))

# |--Card  <- Pointer
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock  <- added
# |      |--TextBlock  <- added
# |  |--Actions

card.add(ColumnSet())

# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet  <- Pointer <- added
# |  |--Actions

card.add(Column(width=1))

# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet
# |          |--Column  <- Pointer <- added
# |  |--Actions

card.add(TextBlock(text="<Column 1 Contents>"))

# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet
# |          |--Column  <- Pointer
# |              |--TextBlock  <- added
# |  |--Actions
```

Rendered:

Observe that when adding a TextBlock to a Column's items, the pointer stays at the Column level, rather than recursing into the TextBlock.
The add() method will only recurse into the added element if it has an item or action container within it. Because of the depth-first approach, we'll need to back ourselves out of a container once we are done adding elements to it. One easy way of doing so is the up_one_level() method, which can be called multiple times and just moves the pointer one step up the element tree.

```python
card = AdaptiveCard()
card.add(TextBlock(text="Header", weight="Bolder"))
card.add(TextBlock(text="Subheader"))
card.add(TextBlock(text="*Quote*", isSubtle="true"))
card.add(ColumnSet())
card.add(Column(width=1))
card.add(TextBlock(text="<Column 1 Contents>"))

# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet
# |          |--Column  <- Pointer
# |              |--TextBlock  <- added
# |  |--Actions

card.up_one_level()

# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet  <- Pointer
# |          |--Column
# |              |--TextBlock
# |  |--Actions

card.add(Column(width=1))

# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet
# |          |--Column
# |              |--TextBlock
# |          |--Column  <- Pointer <- added
# |  |--Actions

card.add(TextBlock(text="Column 2 Contents"))

# |--Card
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet
# |          |--Column
# |              |--TextBlock
# |          |--Column  <- Pointer
# |              |--TextBlock  <- added
# |  |--Actions
```

Rendered:

We can also use the card.save_level() method to create a "checkpoint" at any level if we intend to back ourselves out to the level we are currently at in our code block.
To "reload" to that checkpoint, use card.load_level(checkpoint).

```python
# checkpoints example
card = AdaptiveCard()
card.add(Container())
card.add(TextBlock(text="Text as the first item, at the container level"))

# create checkpoint here
container_level = card.save_level()

# add nested columnsets and columns for fun
for i in range(1, 6):
    card.add(ColumnSet())
    card.add(Column(style="emphasis"))
    card.add(TextBlock(text=f"Nested Column {i}"))
    # our pointer continues to move downwards into the nested structure

# reset pointer back to container level
card.load_level(container_level)

card.add(TextBlock(text="Text at the container level, below all the nested containers"))

await card.to_json()
```

Adding Actions

As previously mentioned, the AdaptiveCard's add() method will automatically add action elements to the appropriate containers. Let's first move our pointer back to the top level using the back_to_top() method:

```python
card.back_to_top()  # back to top of tree

# |--Card  <- Pointer
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet
# |          |--Column
# |              |--TextBlock
# |          |--Column
# |              |--TextBlock
# |  |--Actions
```

Our pointer is now pointing at the main Card object. Because it has an Actions container, the next action element to be added will be sent there.

```python
# Adding single url action
card.add(ActionOpenUrl(url="someurl.com", title="Open Me"))

# |--Card  <- Pointer
# |  |--Schema="XXX"
# |  |--Version="1.0"
# |  |--Body
# |      |--TextBlock
# |      |--TextBlock
# |      |--TextBlock
# |      |--ColumnSet
# |          |--Column
# |              |--TextBlock
# |          |--Column
# |              |--TextBlock
# |  |--Actions
# |      |--ActionOpenUrl  <- added
```
adaptivecards
AdaptiveCards

Author adaptive cards in pure Python.

Introduction

Adaptive Cards are a great way to extend your bot interactions. However, writing the JSON required to specify the card layout by hand can be cumbersome and error prone. And while using a designer is a good way to manually create cards, this does not cover cards that are generated by code. AdaptiveCards allows you to author cards in native Python without ever touching the underlying JSON.

A code sample says more than a thousand words, so the following code snippet ...

```python
from adaptivecards.adaptivecard import AdaptiveCard
from adaptivecards.elements import TextBlock
from adaptivecards.containers import Column, ColumnSet, Container

card = AdaptiveCard()
card.body = [
    Container(items=[
        TextBlock(text='Hello from adaptivecards', font_type='Default', size='Medium'),
        ColumnSet(columns=[
            Column(width='stretch', items=[
                TextBlock(text='author', weight="Bolder", wrap=True),
                TextBlock(text='version', weight="Bolder", wrap=True),
            ]),
            Column(width='stretch', items=[
                TextBlock(text='Huu Hoa NGUYEN', wrap=True),
                TextBlock(text='0.1.0', wrap=True),
            ])
        ])
    ]),
    TextBlock(text='more information can be found at [https://pypi.org/project/adaptivecards/](https://pypi.org/project/adaptivecards/)', wrap=True)
]

json_str = str(card)
print(json_str)
```

...
produces this json ...

```json
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "body": [
    {
      "items": [
        {
          "fontType": "Default",
          "size": "Medium",
          "text": "Hello from adaptivecards",
          "type": "TextBlock"
        },
        {
          "columns": [
            {
              "items": [
                {"text": "author", "type": "TextBlock", "weight": "Bolder", "wrap": true},
                {"text": "version", "type": "TextBlock", "weight": "Bolder", "wrap": true}
              ],
              "type": "Column",
              "width": "stretch"
            },
            {
              "items": [
                {"text": "Huu Hoa NGUYEN", "type": "TextBlock", "wrap": true},
                {"text": "0.1.0", "type": "TextBlock", "wrap": true}
              ],
              "type": "Column",
              "width": "stretch"
            }
          ],
          "type": "ColumnSet"
        }
      ],
      "type": "Container"
    },
    {
      "text": "more information can be found at [https://pypi.org/project/adaptivecards/](https://pypi.org/project/adaptivecards/)",
      "type": "TextBlock",
      "wrap": true
    }
  ],
  "type": "AdaptiveCard",
  "version": "1.2"
}
```

... which, when rendered in Teams, will look like

Features

- Supports all components, options and features of adaptive cards version 1.4
- Create adaptive cards from pure Python

Installation

You can install AdaptiveCards using pip by issuing

```
$ pip install adaptivecards
```

For more information on how to use this package please check the project documentation at https://github.com/huuhoa/adaptivecards.

Authors & Maintainers

Huu Hoa NGUYEN ([email protected])

The following resources were influential in the creation of this project:

- The package's README was copied partially from PyAdaptiveCards

License

This project is licensed to you under the terms of the MIT License.
adaptive-cards-py
Adaptive Cards

A thin Python wrapper for creating Adaptive Cards easily on code level. The deep integration of Python's typing package prevents you from creating invalid schemes and guides you while creating visually appealing cards.

If you are interested in the general concepts of adaptive cards and want to dig a bit deeper, have a look into the official documentation, or jump right in and get used to the schema.

💡 Please note: This library is still in progress and is lacking some features. However, the missing parts are planned to be added soon.

About

This library is intended to provide a clear and simple interface for creating adaptive cards with only a few lines of code in a more robust way. The heavy usage of Python's typing library should prevent one from creating invalid schemes and structures. Instead, creating cards should be intuitive and work like a breeze.

For a comprehensive introduction into the main ideas and patterns of adaptive cards, have a look at the official documentation. I also recommend using the schema explorer page alongside the implementation, since the library's type system relies on these schemes.

💡 Please note: It's highly recommended to turn on the type check capabilities for Python in your editor. This will serve you with direct feedback about the structures you create. If you are trying to assign values of incompatible types, your editor will mark them as such and yell at you right in the moment you are about to do so.

Features

- Type-annotated components based on Python's dataclasses
- Schema validation for version compatibility
- Simple JSON export
- Compliant with the official structures and ideas

Dependencies

- Python 3.10+
- dataclasses-json
- StrEnum

Installation

```
pip install adaptive-cards-py
```

Library structure

Adaptive cards can consist of different kinds of components. The four main categories beside the actual cards are Elements, Containers, Actions and Inputs. You can find all available components for each category within the corresponding file.
The `AdaptiveCard` is defined in `cards.py`. In addition to that, some fields of certain components are of custom types. These types live inside the `card_types.py` file. For instance, if you are about to assign a color to a `TextBlock`, the field `color` will only accept a value of type `Colors`, which is implemented in the aforementioned Python file.

To perform validation on a fully initialized card, one can make use of the `SchemaValidator` class. Similar to the whole library, this class provides a simple interface with only one method. The validation currently checks whether all used fields are compliant with the overall card version.

## Usage

### A simple card

A simple `TextBlock` lives in the `elements` module and can be used after importing it.

```python
from adaptive_cards.elements import TextBlock

text_block: TextBlock = TextBlock(text="It's your first card")
```

For this component, `text` is the only required property. However, if more customization is needed, further available fields can be used.

```python
from adaptive_cards.elements import TextBlock
import adaptive_cards.card_types as types

text_block: TextBlock = TextBlock(
    text="It's your second card",
    color=types.Colors.ACCENT,
    size=types.FontSize.EXTRA_LARGE,
    horizontal_alignment=types.HorizontalAlignment.CENTER,
)
```

An actual card with only this component can be created like this.

```python
from adaptive_cards.card import AdaptiveCard

...

version: str = "1.4"
card: AdaptiveCard = AdaptiveCard.new() \
    .version(version) \
    .add_item(text_block) \
    .create()
```

Find your final layout below.

💡 **Please note**: After building the object is done, the `create(...)` method must be called in order to create the final object. In this case, the object will be of type `AdaptiveCard`.

To directly export your result, make use of the `to_json()` method provided by every card.

```python
with open("path/to/out/file.json", "w+") as f:
    f.write(card.to_json())
```

### Adding multiple elements at once

Assume you have a bunch of elements you want to enrich your card with. There is also a method for doing so.
Let's re-use the example from before, but add anotherImageelement here as well.fromadaptive_cards.elementsimportTextBlock,Imageimportadaptive_cards.card_typesastypestext_block:TextBlock=TextBlock(text="It's your third card"color=types.Colors.ACCENT,size=types.FontSize.EXTRA_LARGE,horizontal_alignment=types.HorizontalAlignment.CENTER,)image:Image=Image(url="https://adaptivecards.io/content/bf-logo.png")version:str="1.4"card:AdaptiveCard=AdaptiveCard.new()\.version(version)\.add_items([text_block,image])\.create()# Alternatively, you can also chain multiple add_item(...) functions:# card = AdaptiveCard.new() \# .version(version) \# .add_item(text_block) \# .add_item(image) \# .create()withopen("path/to/out/file.json","w+")asf:f.write(card.to_json())This will result in a card like shown below.Finally, a more complex cardYou can have a look on the following example for getting an idea of what's actually possible with adaptive cards.Codeimportadaptive_cards.card_typesastypesfromadaptive_cards.actionsimportActionToggleVisibility,TargetElementfromadaptive_cards.validationimportSchemaValidator,Resultfromadaptive_cards.cardimportAdaptiveCardfromadaptive_cards.elementsimportTextBlock,Imagefromadaptive_cards.containersimportContainer,ContainerTypes,ColumnSet,Columncontainers:list[ContainerTypes]=[]icon_source:str="https://icons8.com/icon/vNXFqyQtOSbb/launch"icon_url:str="https://img.icons8.com/3d-fluency/94/launched-rocket.png"header_column_set:ColumnSet=ColumnSet(columns=[Column(items=[TextBlock(text="Your Daily Wrap-Up",size=types.FontSize.EXTRA_LARGE)],width="stretch",),Column(items=[Image(url=icon_url,width="40px")],rtl=True,width="auto"),])containers.append(Container(items=[header_column_set],style=types.ContainerStyle.EMPHASIS,bleed=True))containers.append(Container(items=[TextBlock(text="**Some numbers for you**",size=types.FontSize.MEDIUM,),ColumnSet(columns=[Column(items=[TextBlock(text="_Total_"),TextBlock(text="_Done by you_"),TextBlock(text="_Done by other 
teams_"),TextBlock(text="_Still open_"),TextBlock(text="_Closed_"),]),Column(items=[TextBlock(text="5"),TextBlock(text="4"),TextBlock(text="3"),TextBlock(text="6"),TextBlock(text="1"),],spacing=types.Spacing.MEDIUM,rtl=True,),],separator=True,),],spacing=types.Spacing.MEDIUM,))containers.append(Container(items=[TextBlock(text="**Detailed Results**",size=types.FontSize.MEDIUM,),],separator=True,spacing=types.Spacing.EXTRA_LARGE,))sample_column_set:ColumnSet=ColumnSet(columns=[Column(items=[TextBlock(text="12312")]),Column(items=[TextBlock(text="done",color=types.Colors.GOOD)]),Column(items=[TextBlock(text="abc")]),Column(items=[Image(url="https://adaptivecards.io/content/down.png",width="20px",horizontal_alignment=types.HorizontalAlignment.RIGHT,)],select_action=ActionToggleVisibility(title="More",target_elements=[TargetElement(element_id="toggle-me",)],),),])containers.append(Container(items=[Container(items=[ColumnSet(columns=[Column(items=[TextBlock(text="**Number**")]),Column(items=[TextBlock(text="**Status**")]),Column(items=[TextBlock(text="**Topic**")]),Column(items=[TextBlock(text="")]),],id="headline",),],style=types.ContainerStyle.EMPHASIS,bleed=True,),Container(items=[sample_column_set]),Container(items=[TextBlock(text="_Here you gonna find more information about the whole topic_",id="toggle-me",is_visible=False,is_subtle=True,wrap=True,)]),],))containers.append(Container(items=[TextBlock(text=f"Icon used from:{icon_source}",size=types.FontSize.SMALL,horizontal_alignment=types.HorizontalAlignment.CENTER,is_subtle=True,)]))card=AdaptiveCard.new().version("1.5").add_items(containers).create()validator:SchemaValidator=SchemaValidator()result:Result=validator.validate(card)print(f"Validation was successful:{result==Result.SUCCESS}")Schema{"type":"AdaptiveCard","version":"1.5","schema":"http://adaptivecards.io/schemas/adaptive-card.json","body":[{"items":[{"type":"ColumnSet","columns":[{"items":[{"text":"Your Daily 
Wrap-Up","type":"TextBlock","size":"extraLarge"}],"width":"stretch"},{"items":[{"url":"https://img.icons8.com/3d-fluency/94/launched-rocket.png","type":"Image","width":"40px"}],"rtl":true,"width":"auto"}]}],"type":"Container","style":"emphasis","bleed":true},{"spacing":"medium","items":[{"text":"**Some numbers for you**","type":"TextBlock","size":"medium"},{"separator":true,"type":"ColumnSet","columns":[{"items":[{"text":"_Total_","type":"TextBlock"},{"text":"_Done by you_","type":"TextBlock"},{"text":"_Done by other teams_","type":"TextBlock"},{"text":"_Still open_","type":"TextBlock"},{"text":"_Closed_","type":"TextBlock"}]},{"spacing":"medium","items":[{"text":"5","type":"TextBlock"},{"text":"4","type":"TextBlock"},{"text":"3","type":"TextBlock"},{"text":"6","type":"TextBlock"},{"text":"1","type":"TextBlock"}],"rtl":true}]}],"type":"Container"},{"separator":true,"spacing":"extraLarge","items":[{"text":"**Detailed Results**","type":"TextBlock","size":"medium"}],"type":"Container"},{"items":[{"items":[{"id":"headline","type":"ColumnSet","columns":[{"items":[{"text":"**Number**","type":"TextBlock"}]},{"items":[{"text":"**Status**","type":"TextBlock"}]},{"items":[{"text":"**Topic**","type":"TextBlock"}]},{"items":[{"text":"","type":"TextBlock"}]}]}],"type":"Container","style":"emphasis","bleed":true},{"items":[{"type":"ColumnSet","columns":[{"items":[{"text":"12312","type":"TextBlock"}]},{"items":[{"text":"done","type":"TextBlock","color":"good"}]},{"items":[{"text":"abc","type":"TextBlock"}]},{"items":[{"url":"https://adaptivecards.io/content/down.png","type":"Image","horizontalAlignment":"right","width":"20px"}],"selectAction":{"title":"More","targetElements":[{"elementId":"toggle-me"}],"type":"Action.ToggleVisibility"}}]}],"type":"Container"},{"items":[{"id":"toggle-me","isVisible":false,"text":"_Here you gonna find more information about the whole 
topic_","type":"TextBlock","isSubtle":true,"wrap":true}],"type":"Container"},{"items":[{"text":"Icon used from: https://icons8.com/icon/vNXFqyQtOSbb/launch","type":"TextBlock","horizontalAlignment":"center","isSubtle":true,"size":"small"}],"type":"Container"}]}

But we are still scratching the surface. You can do even better!

### Validate schema

New components and fields are getting introduced every now and then. This means that if you are using an early version for a card and add fields which are not compliant with it, you will have an invalid schema. To prevent you from exporting fields not yet supported by the card and target framework, a schema validation can be performed. It's as simple as that:

```python
from adaptive_cards.validation import SchemaValidator, Result

...

version: str = "1.4"
card: AdaptiveCard = AdaptiveCard.new() \
    .version(version) \
    .add_items([text_block, image]) \
    .create()

validator: SchemaValidator = SchemaValidator()
result: Result = validator.validate(card)

print(f"Validation was successful: {result == Result.SUCCESS}")
```

## Examples

If you are interested in more comprehensive examples or the actual source code, have a look into the `examples` folder.

## Contribution

Feel free to create issues, fork the repository or even come up with a pull request. I am happy about any kind of contribution and would love to hear your feedback!

## Roadmap

- 📕 Comprehensive documentation on code level
- 🐍 Ready-to-use Python package
- 🚀 More and better examples
- 🔎 Comprehensive validation
adaptive-curvefitting
Adaptive Curvefitting ToolAdaptive curvefitting is a tool to find potentially optimal models for your research data. It's based onscipy,numpy, andmatplotlib.Table of contentsWhy is this tool?Installation, update and uninstallationTo installTo updateTo uninstallUsageImport the required moduleDo the curvefittingGenerate a expected modelRe-use the fitted curveShortagesHow to cite?ChangelogWhy is this toolThe very difference of adaptive-curvefitting withnumpy.polyfit,scipy.optimize.curve_fitorscipy.optimize.least_squaresisthe hypothesis you don’t know which model to fit. If you already have the expected model, the methods inscipyandnumpyare fantastic tools and better than this one.When you explore something unknown, this will be a maybe.Installation, update and uninstallationTo installQuick installation withpip:pipinstalladaptive-curvefittingOr from github:pipinstallgit+https://github.com/longavailable/adaptive-curvefittingTo updatepipinstall--upgradeadaptive-curvefittingTo uninstallpipuninstalladaptive-curvefittingUsageImport the required moduleIn general,importlongscurvefittingor import the specified function:fromlongscurvefittingimportoneClickCurveFittingfromlongscurvefittingimportgenerateFunctionfromlongscurvefittingimportgenerateModelsDo the curvefittingoneClickCurveFitting(xdata,ydata)There are some optional arguments ofoneClickCurveFitting.functions: specified or all (default) basic models(name of models) to fit.Type: list of stringDefault: basicModels_nameListpiecewise: if consider custom a piecewise function. 
It is mandatory not to use 'piecewise' when the data size is less than 20. Type: bool. Default: False
- operator: operation between basic models. Type: string. Default: '+'
- maxCombination: max number of combinations of basic models. Type: integer. Default: 2
- plot_opt: the number of plots for optimal models. Type: integer. Default: 10
- xscale: one of {"linear", "log", "symlog", "logit", ...}. Type: string. Default: None
- yscale: one of {"linear", "log", "symlog", "logit", ...}. Type: string. Default: None
- filename_startwith: a custom string used as part of the output filename. Type: string. Default: 'curvefit'
- silent: minimal output to monitor. Type: boolean. Default: False
- feedback: if True, return the optimal model (function object) and parameters. Type: boolean. Default: False
- kwargs: keyword arguments passed to `curve_fit_m`. Note that `bounds` and `p0` will take no effect when multi-models. Type: dict

See the complete example "/tests/curvefitting.py".

### Generate an expected model

Create a model composed of a gaussian and an erf function:

```python
funcs = ['gaussian', 'erf']
myfunc = generateFunction(funcs, functionName='myfunc', operator='+')['model']
```

See the complete example "/tests/custom_a_model.py".

### Re-use the fitted curve

See the complete example "/tests/reuse_the_fitted_model.py".

## Shortages

Based on `scipy.optimize.least_squares`, it cannot enhance the estimate of a specified model. What's more, it has more limitations than `scipy.optimize.least_squares`. For example, the arguments `bounds`, `x0` or `p0` are not supported, due to the basic hypothesis.

## How to cite

If this tool is useful to your research, star and cite it as below:

Xiaolong Liu, & Meixiu Yu. (2020, June 14). longavailable/adaptive-curvefitting. Zenodo. http://doi.org/10.5281/zenodo.3893596

Easily, you can import it into Mendeley.

## Changelog

- v0.1.3: First release.
- v0.1.4: Added `queryModel()` to simplify the reuse of a fitted model. Replaced `from scipy._lib._util import getargspec_no_self as _getargspec` with `from ._helpers import funcArgsNr`.
- v0.1.5: Updated the outdated module of sci.
adaptive-dataset
adaptive-dataloader
adaptive-dbscan
# AdaptiveDBSCAN

This is a normalized form of the DBSCAN algorithm that is based on a varying number of neighbours. This algorithm is useful when your data has different density patterns. To get more information about the algorithm, please refer to the paper.

## Installation

To install the package, you can use pip:

```bash
pip install dadbscan
```

## Getting Started

After installing the package, you can use it as follows by importing the modules:

```python
from dadbscan.density import EQ_Density
from dadbscan.dbscan import EQ_DBSCAN
```

### Phase 1

The first import is used for creating the density map, and the second one for applying the Density-Adaptive DBSCAN algorithm. Now, by defining the N value and having your database as a csv file, you can run the density algorithm.

Initiating the `EQ_Density` class:

```python
N = 65
density = EQ_Density(N, database)
```

To test the program, you can download the test file from the github repo and use decl_cat.csv as the database:

```python
database = 'decl_cat.csv'
```

Running the `calc_density` method:

```python
heat_matrix = density.calc_density()
```

Plotting the density map:

```python
density.plot_density(heat_matrix)
```

A useful feature is smoothing the density map. This can be done with the following method:

```python
smoothed_heat_matrix = density.cell_smoother(apply_smooth=True)
```

All the matrices are saved physically in the folder 'Results'.

### Phase 2

Now that you have the density map, you can run the Density-Adaptive DBSCAN algorithm. To do so, you need to define the following parameters:

```python
radius = density.radius
density_file_name = f"Results/den_decl_cat__65_smooth.csv"
```

As can be seen above, the radius can be derived from the density class. Now it is time to initiate the `EQ_DBSCAN` class and run the algorithm:

```python
clustering = EQ_DBSCAN(radius, density_file_name)
final = clustering.clustering()
clustering.plot_clusters()
final.to_csv(f"Results/R__{density_file_name}")
```

When plotting the clustered data, you have some options:

```python
def plot_clusters(self, **kwargs):
    """Plot the clusters on a map using GeoPandas and matplotlib.

    **kwargs:
        cmap_shp: str, default="grey"
            The colormap to use for the shape file in the background
        cmap_scatter: str, default="turbo"
            The colormap to use for the scatter plot
        shp_linewidth: float, default=2
            The linewidth of the shape file
        save_fig: bool, default=False
            Whether to save the figure or not; if so, it will be saved in the ExampleData folder
        save_fig_format: str, default="pdf"
            The format to save the figure in
        shape_file_address: str, default=False
            The address of the shape file to plot in the background; you can use the
            World_Countries_Generalized.shp file in the ShapeFiles folder, e.g.
            shape_file_address="ShapeFiles/World_Countries_Generalized.shp"
    """
```

## Reference

Sabermahani, S., Frederiksen, A., 2023, Improved earthquake clustering using a Density-Adaptive DBSCAN algorithm: an example from Iran, Seismological Research Letters

## License

This project is licensed under the MIT License - see the MIT Licence file for details.
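For intuition, the density-adaptive idea — giving each point its own neighbourhood radius instead of one global `eps` — can be sketched in plain Python. This is only an illustration of the concept, not dadbscan's implementation: the function names and the k-th-nearest-neighbour rule for picking each radius are our assumptions.

```python
import math

def region(points, i, eps):
    # Indices of all points within eps of point i (including i itself).
    xi, yi = points[i]
    return [j for j, (xj, yj) in enumerate(points)
            if math.hypot(xj - xi, yj - yi) <= eps]

def adaptive_dbscan(points, k=3, min_pts=3):
    # Per-point radius: distance to the k-th nearest neighbour, so dense
    # regions get a tight radius and sparse regions a looser one.
    eps = []
    for xi, yi in points:
        dists = sorted(math.hypot(xj - xi, yj - yi) for xj, yj in points)
        eps.append(dists[k])  # dists[0] == 0.0 is the point itself
    labels = [None] * len(points)  # None = unvisited, -1 = noise
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = region(points, i, eps[i])
        if len(seeds) < min_pts:
            labels[i] = -1  # provisional noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:       # noise absorbed as a border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            neighbours = region(points, j, eps[j])
            if len(neighbours) >= min_pts:
                queue.extend(neighbours)  # j is a core point: keep expanding
    return labels

# Two well-separated blobs end up with two different cluster labels.
blob_a = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (0.5, 0.5)]
blob_b = [(x + 10.0, y + 10.0) for (x, y) in blob_a]
labels = adaptive_dbscan(blob_a + blob_b)
```

Because each point's radius shrinks with local density, a tight cluster and a diffuse one can both be found with the same `k`, which is the situation a single global `eps` handles poorly.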
adaptive-interpolation
No description available on PyPI.
adaptivekde
This package implements adaptive kernel density estimation algorithms for 1-dimensional signals developed by Hideaki Shimazaki. This enables the generation of smoothed histograms that preserve important density features at multiple scales, as opposed to naive single-bandwidth kernel density methods that can either over or under smooth density estimates. These methods are described in Shimazaki's paper:H. Shimazaki and S. Shinomoto, "Kernel Bandwidth Optimization in Spike Rate Estimation," in Journal of Computational Neuroscience 29(1-2): 171–182, 2010http://dx.doi.org/10.1007/s10827-009-0180-4.License: All software in this package is licensed under the Apache License 2.0. See LICENSE.txt for more details.Authors: Hideaki Shimazaki ([email protected]) shimazaki on Github Lee A.D. Cooper ([email protected]) cooperlab on GitHub Subhasis Ray ([email protected])Three methods are implemented in this package:sshist - can be used to determine the optimal number of histogram bins for independent identically distributed samples from an underlying one-dimensional distribution. The principal here is to minimize the L2 norm of the difference between the histogram and the underlying distribution.sskernel - implements kernel density estimation with a single globally-optimized bandwidth.ssvkernel - implements kernel density estimation with a locally variable bandwidth.Dependencies: These functions in this package depend on NumPy for various operations including fast-fourier transforms and histogram generation.
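To illustrate the bin-selection idea behind sshist: Shimazaki and Shinomoto's histogram method scores each candidate bin width w by the cost C(w) = (2·mean − var)/w², where mean and var are the mean and (biased) variance of the bin counts, and keeps the width minimizing it — an estimator of the L2 error mentioned above. The sketch below is our own pure-Python rendering of that rule, not this package's `sshist` implementation (which also works out bin edges and offsets with NumPy); the function name is made up.

```python
import random

def shimazaki_bins(data, max_bins=50):
    # Scan candidate bin counts and keep the one minimizing the
    # Shimazaki-Shinomoto cost C(w) = (2*mean - var) / w**2 computed
    # over the bin counts for bin width w.
    lo, hi = min(data), max(data)
    if hi == lo:
        return 1  # degenerate data: a single bin
    best_n, best_cost = 1, float("inf")
    for n in range(1, max_bins + 1):
        w = (hi - lo) / n
        counts = [0] * n
        for x in data:
            counts[min(int((x - lo) / w), n - 1)] += 1
        mean = sum(counts) / n
        var = sum((c - mean) ** 2 for c in counts) / n  # biased variance
        cost = (2.0 * mean - var) / w ** 2
        if cost < best_cost:
            best_n, best_cost = n, cost
    return best_n

# For unimodal data the cost penalizes both over- and under-smoothing,
# so an intermediate bin count wins.
rng = random.Random(0)
samples = [rng.gauss(0.0, 1.0) for _ in range(1000)]
n_bins = shimazaki_bins(samples)
```

The kernel variants (sskernel, ssvkernel) apply the same risk-minimization idea to a kernel bandwidth — globally in sskernel, locally in ssvkernel — rather than to a bin width.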
adaptivelcbin
# adaptivelcbin

__CAUTION__: this is a simple functional project; although it should work as generally expected, do make sanity checks and make sure you understand what the adaptive binning is.

It performs adaptive binning and computes the hardness ratio of two light curves (XMM-Newton input tested).

## Installation

```bash
$ pip install adaptivelcbin
```

## Help

```bash
$ hratio --help
```

## Example

```bash
$ hratio PNsrc_lc_01s_005-030.fits PNsrc_lc_01s_030-100.fits hratio4.qdp 15.0 --flag_rebin=4
```

```python
import hratio

hratio.hratio_func('data/PNsrc_lc_01s_005-030.fits', 'data/PNsrc_lc_01s_030-100.fits', 'test1.qdp', 15.0)
```
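The concept behind the tool can be sketched as follows: merge consecutive time bins of the two bands until each has accumulated enough counts, then emit one hardness ratio per merged bin, using the common X-ray convention HR = (H − S)/(H + S). This is a simplified illustration only — `hratio`'s actual binning criterion (e.g. a signal-to-noise threshold, error propagation) may differ, and the function name and threshold rule here are assumptions.

```python
def adaptive_hardness_ratio(soft, hard, min_counts=15.0):
    # Walk the two light curves in lockstep, merging consecutive time
    # bins until BOTH bands have accumulated at least min_counts, then
    # emit one hardness ratio HR = (H - S) / (H + S) for the merged bin.
    ratios = []
    s_acc = h_acc = 0.0
    for s, h in zip(soft, hard):
        s_acc += s
        h_acc += h
        if s_acc >= min_counts and h_acc >= min_counts:
            ratios.append((h_acc - s_acc) / (h_acc + s_acc))
            s_acc = h_acc = 0.0
    return ratios

# Six 10-count soft bins vs six 20-count hard bins, merged in pairs:
# each merged bin gives HR = (40 - 20) / (40 + 20) = 1/3.
ratios = adaptive_hardness_ratio([10.0] * 6, [20.0] * 6, min_counts=15.0)
```

Adaptive bins of this kind keep the per-bin uncertainty roughly constant: bright intervals stay finely resolved while faint intervals are merged until the ratio is meaningful.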
adaptive-learning
Adaptive-Learning
adaptive-neighbourhoods
# Adaptive Neighbourhoods for the Discovery of Adversarial Examples

Python API for generating adapted and unique neighbourhoods for searching for adversarial examples.

## Installation & usage

This work is released on PyPI. Installation, therefore, is as simple as installing the package with pip:

```bash
python3 -m pip install adaptive-neighbourhoods
```

At this point, you're free to start generating neighbourhoods for your own dataset:

```python
from adaptive_neighbourhoods import epsilon_expand

neighbourhoods = epsilon_expand(
    x,  # your input data
    y)  # the integer encoded labels for your data
```

More information on the variable parameters and general guidance on using this package can be found at: https://jaypmorgan.github.io/adaptive-neighbourhoods/

## Contributing

All contributions and feedback are welcome!

There are three main remote mirrors used for hosting this project. If you would like to contribute, please submit an issue/pull-request/patch-request to any of these mirrors:

- Github: https://github.com/jaypmorgan/adaptive-neighbourhoods
- Gitlab: https://gitlab.com/jaymorgan/adaptive-neighbourhoods
- Source Hut: https://git.sr.ht/~jaymorgan/adaptive-neighbourhoods

## Citing this work

If you use this work in your research, please consider referencing our article using the following bibtex entry:

```bibtex
@article{DBLP:journals/corr/abs-2101-09108,
  author     = {Jay Morgan and Adeline Paiement and Arno Pauly and Monika Seisenberger},
  title      = {Adaptive Neighbourhoods for the Discovery of Adversarial Examples},
  journal    = {CoRR},
  volume     = {abs/2101.09108},
  year       = {2021},
  url        = {https://arxiv.org/abs/2101.09108},
  eprinttype = {arXiv},
  eprint     = {2101.09108},
  timestamp  = {Sat, 30 Jan 2021 18:02:51 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2101-09108.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
adaptivepatch
# adaptivepatch

adaptivepatch can split an image into patches with automatic detection of the best step, so that no pixels are lost. The overlap between patches depends on the patch size.

## Example

## Installation

```bash
pip install adaptivepatch
```

## How to use it

```python
adaptivepatch(image, patch_size, step=None, verbose)
```

## Licence

MIT Licence
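One way to picture the "best step" detection: take the smallest number of fixed-size patches that can still cover the axis, then spread their start positions evenly so the leftover pixels become overlap instead of being dropped at the border. This is only a sketch of the idea for one axis — adaptivepatch's actual algorithm may differ, and the function name is made up.

```python
import math

def patch_starts(length, patch_size):
    # Smallest number of patches of size patch_size that covers
    # [0, length), with starts spread evenly so consecutive patches
    # overlap rather than leaving uncovered pixels at the end.
    if patch_size >= length:
        return [0]
    n = math.ceil((length - patch_size) / patch_size) + 1
    step = (length - patch_size) / (n - 1)  # the "best step", possibly fractional
    return [round(i * step) for i in range(n)]

# A 100-pixel axis with 40-pixel patches: 3 patches with step 30, so
# consecutive patches overlap by 10 pixels and the last one ends at 100.
starts = patch_starts(100, 40)
```

Applying the same rule to both image axes yields a grid of overlapping patches whose overlap grows as the patch size divides the image size less evenly.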
adaptivepy
UNKNOWN
adaptive-sampling
# Adaptive Sampling

This package implements various sampling algorithms for the calculation of free energy profiles of molecular transitions.

## Available Sampling Methods Include:

- Adaptive Biasing Force (ABF) method [1]
- Extended-system ABF (eABF) [2]
- On-the-fly free energy estimate from the Corrected Z-Averaged Restraint (CZAR) [2]
- Application of the Multistate Bennett Acceptance Ratio (MBAR) [3] to recover full statistical information in post-processing [4]
- (Well-Tempered) Metadynamics (WTM) [5] and WTM-eABF [6]
- Accelerated MD (aMD), Gaussian accelerated MD (GaMD), Sigmoid Accelerated MD (SaMD) [7, 8, 9]
- Gaussian-accelerated WTM-eABF [10]
- Free-energy Nudged Elastic Band Method [11]

## Implemented Collective Variables:

- Distances, angles and torsion angles as well as linear combinations thereof
- Coordination numbers
- Minimized Cartesian RMSD (Kabsch algorithm)
- Adaptive path collective variables (PCVs) [12, 13]

## Install:

To install adaptive_sampling type:

```shell
$ pip install adaptive-sampling
```

## Requirements:

- python >= 3.8
- numpy >= 1.19
- torch >= 1.10
- scipy >= 1.7

## Basic Usage:

To use adaptive sampling with your MD code of choice, add a function called `get_sampling_data()` to the corresponding python interface that returns an object containing all required data.
Hard-coded dependencies can be avoided by wrapping theadaptive_samplingimport in atry/exceptclause:classMD:# Your MD code...defget_sampling_data(self):try:fromadaptive_sampling.interface.sampling_dataimportSamplingDatamass=...coords=...forces=...epot=...temp=...natoms=...step=...dt=...returnSamplingData(mass,coords,forces,epot,temp,natoms,step,dt)exceptImportErrorase:raiseNotImplementedError("`get_sampling_data()` is missing `adaptive_sampling` package")fromeThe bias force on atoms in the N-th step can be obtained by callingstep_bias()on any sampling algorithm:fromadaptive_sampling.sampling_toolsimport*# initialize MD codethe_md=MD(...)# collective variableatom_indices=[0,1]minimum=1.0# Angstrommaximum=3.5# Angstrombin_width=0.1# Angstromcollective_var=[["distance",atom_indices,minimum,maximum,bin_width]]# extended-system eABFext_sigma=0.1# thermal width of coupling between CV and extended variable in Angstromext_mass=20.0# mass of extended variablethe_bias=eABF(ext_sigma,ext_mass,the_md,collective_var,output_freq=10,f_conf=100,equil_temp=300.0)formd_stepinrange(steps):# propagate langevin dynamics and calc forces...bias_force=the_bias.step_bias(write_output=True,write_traj=True)the_md.forces+=bias_force...# finish md_stepThis automatically writes an on-the-fly free energy estimate in the output file and all necessary data for post-processing in a trajectory file. 
For extended-system dynamics unbiased statistical weights of individual frames can be obtained using the MBAR estimator:importnumpyasnpfromadaptive_sampling.processing_toolsimportmbartraj_dat=np.loadtxt('CV_traj.dat',skiprows=1)ext_sigma=0.1# thermal width of coupling between CV and extended variable# grid for free energy profile can be different than during samplingminimum=1.0maximum=3.5bin_width=0.1grid=np.arange(minimum,maximum,bin_width)cv=traj_dat[:,1]# trajectory of collective variablela=traj_dat[:,2]# trajectory of extended system# run MBAR and compute free energy profile and probability density from statistical weightstraj_list,indices,meta_f=mbar.get_windows(grid,cv,la,ext_sigma,equil_temp=300.0)exp_U,frames_per_traj=mbar.build_boltzmann(traj_list,meta_f,equil_temp=300.0,)weights=mbar.run_mbar(exp_U,frames_per_traj,max_iter=10000,conv=1.0e-7,conv_errvec=1.0,outfreq=100,device='cpu',)pmf,rho=mbar.pmf_from_weights(grid,cv[indices],weights,equil_temp=300.0)Documentation:Code documentation can be created with pdoc3:$pipinstallpdoc3 $pdoc--htmladaptive_sampling-odoc/References:Comer et al., J. Phys. Chem. B (2015);https://doi.org/10.1021/jp506633nLesage et al., J. Phys. Chem. B (2017);https://doi.org/10.1021/acs.jpcb.6b10055Shirts et al., J. Chem. Phys. (2008);https://doi.org/10.1063/1.2978177Hulm et al., J. Chem. Phys. (2022);https://doi.org/10.1063/5.0095554Barducci et al., Phys. rev. lett. (2008);https://doi.org/10.1103/PhysRevLett.100.020603Fu et al., J. Phys. Chem. Lett. (2018);https://doi.org/10.1021/acs.jpclett.8b01994Hamelberg et al., J. Chem. Phys. (2004);https://doi.org/10.1063/1.1755656Miao et al., J. Chem. Theory Comput. (2015);https://doi.org/10.1021/acs.jctc.5b00436Zhao et al., J. Phys. Chem. Lett. (2023);https://doi.org/10.1021/acs.jpclett.2c03688Chen et al., J. Chem. Theory Comput. (2021);https://doi.org/10.1021/acs.jctc.1c00103Semelak et al., J. Chem. Theory Comput. (2023);https://doi.org/10.1021/acs.jctc.3c00366Branduardi, et al., J. Chem. 
Phys. (2007); https://doi.org/10.1063/1.2432340
Leines et al., Phys. Rev. Lett. (2012); https://doi.org/10.1103/PhysRevLett.109.020601

## This and Related Work:

If you use this package in your work please cite:

- Hulm et al., J. Chem. Phys., 157, 024110 (2022); https://doi.org/10.1063/5.0095554

Other related references:

- Dietschreit et al., J. Chem. Phys. (2022); https://aip.scitation.org/doi/10.1063/5.0102075
- Hulm et al., J. Chem. Theory Comput. (2023); https://doi.org/10.1021/acs.jctc.3c00938
- Stan et al., ACS Cent. Sci. (2024); https://doi.org/10.1021/acscentsci.3c01403
adaptive-scheduler
No description available on PyPI.
adaptive-stratification
Adaptive StratificationThis package provides an implementation of the adaptive stratification sampling method to estimate quantities of interest of the form Q = E(f(Y)), where the random vector Y follows a d-dimensional uniform distribution on the unit-cube and f is a given function.Example: Using the Sampler# Import the module containing the sampling routinesfromstratificationimportAdaptiveStratification# Create a sampler for function funcsampler=AdaptiveStratification(func,d,N_max,N_new_per_stratum,alpha,type='hyperrect')# Solve (return a tuple)result=sampler.solve()Input arguments:func: implementation of given function of interest that defines the quantity of interest. It needs to be callable, accepting one m-times-n-dimensional numpy array as input and returns a m-dimensional numpy array;d: dimension of the stochastic domain;N_max: number of total samples to be used;N_new_per_stratum: targeted average number of samples per stratum, controlling the adaptation;alpha: number between zero and one, defining the hybrid allocation rule;type: type of tessellation procedure, i.e., via hyper-rectangles (type='hyperrect') or simplices (type='simplex')More InformationSee theGithub repositoryfor more details.
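The core idea — equal-probability strata sampled independently, so the between-strata component of the variance drops out — can be illustrated in one dimension. This is a sketch of plain, non-adaptive stratification under our own names and defaults; the package's adaptive splitting of hyper-rectangles or simplices and its sample-allocation rule go well beyond it.

```python
import random

def stratified_estimate(f, n_strata=10, n_per_stratum=100, seed=0):
    # Estimate Q = E[f(Y)] for Y ~ U(0, 1): split [0, 1] into n_strata
    # equal strata and draw n_per_stratum uniform samples inside each.
    # With equal-probability strata the estimate is the plain average of
    # the per-stratum means, but only within-stratum variance remains.
    rng = random.Random(seed)
    stratum_means = []
    for k in range(n_strata):
        lo = k / n_strata
        vals = [f(lo + rng.random() / n_strata) for _ in range(n_per_stratum)]
        stratum_means.append(sum(vals) / n_per_stratum)
    return sum(stratum_means) / n_strata

# E[Y**2] for Y ~ U(0, 1) is exactly 1/3; the stratified estimate gets
# close because f varies little inside each narrow stratum.
estimate = stratified_estimate(lambda y: y * y)
```

The adaptive version refines exactly this picture: it keeps splitting the strata (hyper-rectangles or simplices) where f varies most, shrinking the remaining within-stratum variance for a fixed budget of N_max samples.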
adaptix
An extremely flexible and configurable data model conversion library.[!IMPORTANT] Adaptix is ready for production! The beta version only means there may be some backward incompatible changes, so you need to pin a specific version.📚DocumentationTL;DRInstallpipinstalladaptix==3.0.0b2Use for model loading and dumping.fromdataclassesimportdataclassfromadaptiximportRetort@dataclassclassBook:title:strprice:intauthor:str="Unknown author"data={"title":"Fahrenheit 451","price":100,}# Retort is meant to be global constant or just one-time createdretort=Retort()book=retort.load(data,Book)assertbook==Book(title="Fahrenheit 451",price=100)assertretort.dump(book)==dataUse for converting one model to another.fromdataclassesimportdataclassfromsqlalchemy.ormimportDeclarativeBase,Mapped,mapped_columnfromadaptix.conversionimportget_converterclassBase(DeclarativeBase):passclassBook(Base):__tablename__='books'id:Mapped[int]=mapped_column(primary_key=True)title:Mapped[str]price:Mapped[int]@dataclassclassBookDTO:id:inttitle:strprice:intconvert_book_to_dto=get_converter(Book,BookDTO)assert(convert_book_to_dto(Book(id=183,title="Fahrenheit 451",price=100))==BookDTO(id=183,title="Fahrenheit 451",price=100))Use casesValidation and transformation of received data for your API.Conversion between data models and DTOs.Config loading/dumping via codec that produces/takes dict.Storing JSON in a database and representing it as a model inside the application code.Creating API clients that convert a model to JSON sending to the server.Persisting entities at cache storage.Implementing fast and primitive ORM.AdvantagesSane defaults for JSON processing, no configuration is needed for simple cases.Separated model definition and rules of conversion that allow preservingSRPand have different representations for one model.Speed. 
It is one of the fastest data parsing and serialization libraries.There is no forced model representation, adaptix can adjust to your needs.Supportdozensof types, including different model kinds:@dataclass,TypedDict,NamedTuple,attrsandsqlalchemyWorking with self-referenced data types (such as linked lists or trees).Savingpathwhere an exception is raised (including unexpected errors).Machine-readableerrorsthat could be dumped.Support for user-defined generic models.Automatic name style conversion (e.g.snake_casetocamelCase).Predicate systemthat allows to concisely and precisely override some behavior.Disabling additional checks to speed up data loading from trusted sources.No auto casting by default. The loader does not try to guess value from plenty of input formats.
adaptkeybert
AdaptKeyBERTKeyBERT is a minimal and easy-to-use keyword extraction technique that leverages BERT embeddings to create keywords and keyphrases that are most similar to a document.AdaptKeyBERT expands the aforementioned library by integrating semi-supervised attention for creating a few-shot domain adaptation technique for keyphrase extraction. Also extended the work by allowing zero-shot word seeding, allowing better performance on topic relevant documentsBasic Use:Take a look atrunner.pyfromadaptkeybertimportKeyBERTdoc="""Supervised learning is the machine learning task of learning a function thatmaps an input to an output based on example input-output pairs. It infers afunction from labeled training data consisting of a set of training examples.In supervised learning, each example is a pair consisting of an input object(typically a vector) and a desired output value (also called the supervisory signal).A supervised learning algorithm analyzes the training data and produces an inferred function,which can be used for mapping new examples. An optimal scenario will allow for thealgorithm to correctly determine the class labels for unseen instances. This requiresthe learning algorithm to generalize from the training data to unseen situations in a'reasonable' way (see inductive bias). 
But then what about supervision and unsupervision, what happens to unsupervised learning."""kw_model=KeyBERT()keywords=kw_model.extract_keywords(doc,top_n=10)print(keywords)kw_model=KeyBERT(domain_adapt=True)kw_model.pre_train([doc],[['supervised','unsupervised']],lr=1e-3)keywords=kw_model.extract_keywords(doc,top_n=10)print(keywords)kw_model=KeyBERT(zero_adapt=True)kw_model.zeroshot_pre_train(['supervised','unsupervised'],adaptive_thr=0.15)keywords=kw_model.extract_keywords(doc,top_n=10)print(keywords)kw_model=KeyBERT(domain_adapt=True,zero_adapt=True)kw_model.pre_train([doc],[['supervised','unsupervised']],lr=1e-3)kw_model.zeroshot_pre_train(['supervised','unsupervised'],adaptive_thr=0.15)keywords=kw_model.extract_keywords(doc,top_n=10)print(keywords)
adaptlm
No description available on PyPI.
adaptmesh
adaptmeshCreate triangular meshes by the adaptive process.The user feeds in a polygon and a low quality mesh is created. Then the low quality mesh gets improved by adaptive finite elements and mesh smoothing. The approach is detailedin the following paper:@article{adaptmesh, title={A simple technique for unstructured mesh generation via adaptive finite elements}, author={Gustafsson, Tom}, volume={54}, doi={10.23998/rm.99648}, number={2}, journal={Rakenteiden Mekaniikka}, year={2021}, pages={69--79} }adaptmeshships with customized versions of the following packages:tri v0.3.1.dev0(ported to Python 3; Copyright (c) 2015 Martijn Meijers; MIT;source)optimesh v0.6.3(trimmed down version with minor changes to the edge flipping; Copyright (c) 2018-2020 Nico Schlömer; the last version with MIT;source)meshplex v0.12.3(trimmed down version with minor changes, i.e. removal of unnecessary imports; Copyright (c) 2017-2020 Nico Schlömer; the last version with MIT;source)Installationpip install adaptmeshDependenciesnumpyscipymatplotlibscikit-femExamplesThe mesh generator is called through the functionadaptmesh.triangulate.Square with default settingsfromadaptmeshimporttriangulatem=triangulate([(0.,0.),(1.,0.),(1.,1.),(0.,1.),])# m.p are the points# m.t are the elementsNon-convex shapefromadaptmeshimporttriangulatem=triangulate([(0.0,0.0),(1.1,0.0),(1.2,0.5),(0.7,0.6),(2.0,1.0),(1.0,2.0),(0.5,1.5),],quality=0.95)# default: 0.9Holesm=triangulate([(0.,0.),(1.,0.),(1.,1.),(0.,1.),],holes=[[(.25,.25),(.75,.25),(.75,.75),(.25,.75)]])Subdomainsm1=triangulate([(0.,0.),(1.,0.),(.7,1.),(0.,1.),],split=[(1,8),(2,6)],quality=0.91)m2=triangulate([(0.,2.),(2.,2.),(2.,0.),(1.,0.),(.7,1.),(0.,1.)],split=[(3,8),(4,6)],quality=0.91)m=m1+m2Multiple meshes can be joined to emulate subdomains. However, the nodes must match. 
Above, segments are split to facilitate the matching, e.g., [(1, 8), (2, 6)] means that the second and the third segments are split using eight and six equispaced extra nodes, respectively.

Licensing

The main source code of adaptmesh is distributed under the MIT License.

The licenses of the included packages can be found also in LICENSE.md and the respective subdirectories, i.e. ./adaptmesh/*/LICENSE. See LICENSE.md for more information.

Changelog

Unreleased

[0.3.3] - 2022-02-04

- Fixed: Properly respect segments in the initial triangulation.

[0.3.2] - 2021-09-28

- Fixed: Rendering of README in pypi.

[0.3.1] - 2021-09-28

- Fixed: Support for scikit-fem>=4.

[0.3.0] - 2021-06-22

- Fixed: Support for scikit-fem>=3. Dependency update broke the mesh refinement.

[0.2.0] - 2021-01-20

- Added: keyword argument split of triangulate allows further splitting the provided segments. This is useful because the segment endpoints are always preserved in the final mesh.
- Added: keyword argument holes of triangulate allows specifying additional polygonal areas inside the domain that will be free of triangles in the final mesh.
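To make the split keyword concrete, here is a small, library-free sketch of what "n equispaced extra nodes" on a segment means. The helper name `equispaced_split_points` is ours for illustration and is not part of adaptmesh's API:

```python
def equispaced_split_points(p0, p1, n):
    """Return the n equispaced extra nodes placed strictly between
    the segment endpoints p0 and p1 (the endpoints themselves are
    always preserved). Illustrative helper, not adaptmesh's code."""
    (x0, y0), (x1, y1) = p0, p1
    return [(x0 + (x1 - x0) * k / (n + 1),
             y0 + (y1 - y0) * k / (n + 1)) for k in range(1, n + 1)]

# Three extra nodes on the unit segment land at the quarter points:
print(equispaced_split_points((0.0, 0.0), (1.0, 0.0), 3))
# -> [(0.25, 0.0), (0.5, 0.0), (0.75, 0.0)]
```

So `split=[(1, 8)]` would place eight such nodes on the second segment before the initial triangulation.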
adaptnlp
Welcome to AdaptNLP

A high level framework and library for running, training, and deploying state-of-the-art Natural Language Processing (NLP) models for end to end tasks.

What is AdaptNLP?

AdaptNLP is a python package that allows users ranging from beginner python coders to experienced Machine Learning Engineers to leverage state-of-the-art Natural Language Processing (NLP) models and training techniques in one easy-to-use python package.

Utilizing fastai with HuggingFace's Transformers library and Humboldt University of Berlin's Flair library, AdaptNLP provides Machine Learning Researchers and Scientists a modular and adaptive approach to a variety of NLP tasks, simplifying what it takes to train, perform inference with, and deploy NLP-based models and microservices.

What is the Benefit of AdaptNLP Rather Than Just Using Transformers?

Despite quick inference functionalities such as the pipeline API in transformers, it still is not quite as flexible nor fast enough. With AdaptNLP's Easy* inference modules, these tend to be slightly faster than the pipeline interface (bare minimum the same speed), while also providing the user with simple intuitive returns to alleviate any unneeded junk that may be returned.

Along with this, with the integration of the fastai library the code needed to train or run inference on your models has a completely modular API through the fastai Callback system. Rather than needing to write your entire torch loop, if there is anything special needed for a model a Callback can be written in less than 10 lines of code to achieve your specific functionalities.

Finally, when training your model, fastai is at the forefront of being a library constantly bringing in the best practices for achieving state-of-the-art training, with new research methodologies heavily tested before integration.
As such, AdaptNLP fully supports training with the One-Cycle policy, and using new optimizer combinations such as the Ranger optimizer with Cosine Annealing training through simple one-line fitting functions (fit_one_cycle and fit_flat_cos).

Installation Directions

PyPi

To install with pypi, please use:

```
pip install adaptnlp
```

Or if you have pip3:

```
pip3 install adaptnlp
```

Conda (Coming Soon)

Developmental Builds

To install any developmental style builds, please follow the below directions to install directly from git:

Stable Master Branch

The master branch generally is not updated much except for hotfixes and new releases. To install please use:

```
pip install git+https://github.com/Novetta/adaptnlp
```

Developmental Branch

Note: Generally this branch can become unstable, and it is only recommended for contributors or those that really want to test out new technology. Please make sure to see if the latest tests are passing (a green checkmark on the commit message) before trying this branch out.

You can install the developmental builds with:

```
pip install git+https://github.com/Novetta/adaptnlp@dev
```

Docker Images

There are actively updated Docker images hosted on Novetta's DockerHub.

The guide to each tag is as follows:

- latest: This is the latest pypi release and installs a complete package that is CUDA capable
- dev: These are occasionally built developmental builds at certain stages. They are built by the dev branch and are generally stable
- *api: The API builds are for the REST-API

To pull and run any AdaptNLP image immediately you can run:

```
docker run -itp 8888:8888 novetta/adaptnlp:TAG
```

Replacing TAG with any of the aforementioned tags earlier.

Afterwards check localhost:8888 or localhost:8888/lab to access the notebook containers.

Navigating the Documentation

The AdaptNLP library is built with nbdev, so any documentation page you find (including this one!) can be directly run as a Jupyter Notebook.
Each page at the top includes an "Open in Colab" button as well that will open the notebook in Google Colaboratory to allow for immediate access to the code.

The documentation is split into six sections, each with a specific purpose:

Getting Started

This group contains quick access to the homepage, what are the AdaptNLP Cookbooks, and how to contribute.

Models and Model Hubs

These contain any relevant documentation for the AdaptiveModel class, the HuggingFace Hub model search integration, and the Result class that various inference API's return.

Class API

This section contains the module documentation for the inference framework, the tuning framework, as well as the utilities and foundations for the AdaptNLP library.

Inference and Training Cookbooks

These two sections provide quick access to single-use recipes for starting any AdaptNLP project for a particular task, with easy to use code designed for that specific use case. There are currently over 13 different tutorials available, with more coming soon.

NLP Services with FastAPI

This section provides directions on how to use the AdaptNLP REST API for deploying your models quickly with FastAPI.

Contributing

There is a contribution guide available here.

Testing

AdaptNLP is run on the nbdev framework. To run all tests please do the following:

```
pip install nbverbose
git clone https://github.com/Novetta/adaptnlp
cd adaptnlp
pip install -e .
nbdev_test_nbs
```

This will run every notebook and ensure that all tests have passed. Please see the nbdev documentation for more information about it.

Contact

Please contact Zachary Mueller at [email protected] with questions or comments regarding AdaptNLP.

Follow us on Twitter at @TheZachMueller and @AdaptNLP for updates and NLP dialogue.

License

This project is licensed under the terms of the Apache 2.0 license.
adaptor
Adaptor: Objective-centric Adaptation library

Adaptor will help you to easily adapt a language model to your own data domain(s), task(s), or custom objective(s).

If you want to jump right in, take a look at the tutorials.

Table of Contents

- Background
- Benefits of Task and Domain Adaptation
- How Can Adaptor Help
- Usage
- Install
- Use-cases
- Tutorials
- How to Contribute
- Cite

Benefits of Task and Domain Adaptation

Both domain adaptation (e.g. Beltagy, 2019) and task adaptation (e.g. Gururangan, 2020) are reported to improve quality of the language models on end tasks, and improve the model's comprehension of more niche domains, suggesting that it's usually a good idea to adapt a pre-trained model before the final fine-tuning. However, it is still not a common practice, maybe because it is still a tedious thing to do. In model-centric training, multi-step or multi-objective training requires a separate configuration of every training step due to the differences in the models' architectures specific to the chosen training objective and data set.

How does Adaptor handle training?

The Adaptor framework abstracts the term of Objective away from the model. With Adaptor, any objective can be applied to any model, for as long as the trained model has some head of a compatible shape.

The ordering in which the Objectives are applied is determined by the given Schedule. In conventional adaptation, the objectives are applied sequentially (that's what SequentialSchedule does), but they might as well be applied in combination (ParallelSchedule), or balanced dynamically, e.g. according to the objectives' losses.

In the Adaptor framework, instead of providing the Trainer with a model and an encoded dataset, both compatible with a specific training task, a user constructs a Schedule composed of the initialised Objectives, where each Objective performs its own dataset sampling and objective-specific feature alignment (compliant with objective.compatible_head).

When training classic transformers models, the selection of objectives is model-agnostic: each objective takes care of resolving its own compatible head within the given LangModule.

How Can Adaptor Help

Adaptor introduces an objective-centric, instead of model-centric, approach to the training process, which makes it easier to experiment with multi-objective training and to create custom objectives. Thanks to that, you can do some things that are difficult, or impossible, in other NLP frameworks (like HF Transformers, FairSeq or NLTK). For example:

- Domain adaptation or Task adaptation: you do not have to handle the model between different training scripts, minimising the chance of error and improving reproducibility
- Seamlessly experiment with different schedule strategies, allowing you, e.g., to backpropagate based on multiple objectives in every training step
- Track the progress of the model concurrently on each relevant objective, allowing you to more easily recognise the weak points of your model
- Easily perform Multi-task learning, which can improve model robustness
- Although Adaptor aims primarily for training the models of the transformer family, the library is designed to work with any PyTorch model

Built upon the well-established and maintained 🤗 Transformers library, Adaptor will automatically support future new NLP models out-of-the-box. The upgrade of Adaptor to a different version of the Hugging Face Transformers library should not take longer than a few minutes.

Usage

First, install the library:

```
pip install adaptor
```

If you clone it, you can also run and modify the provided example scripts.

```
git clone {this repo}
cd adaptor
python -m pip install -e .
```

You can also find and run full examples below, with all the imports, in tests/end2end_usecases_test.py.

Adapted Named Entity Recognition

Say you have nicely annotated entities in a set of news articles, but eventually, you want to use the language model to detect entities in office documents. You can either train the NER model on news articles, hoping that it will not lose much accuracy on other domains. Or you can concurrently train on both data sets:

```python
# 1. pick the model base
lang_module = LangModule("bert-base-multilingual-cased")

# 2. pick objectives
# Objectives take either List[str] for in-memory iteration, or a source file path for streamed iteration
objectives = [MaskedLanguageModeling(lang_module,
                                     batch_size=16,
                                     texts_or_path="tests/mock_data/domain_unsup.txt"),
              TokenClassification(lang_module,
                                  batch_size=16,
                                  texts_or_path="tests/mock_data/ner_texts_sup.txt",
                                  labels_or_path="tests/mock_data/ner_texts_sup_labels.txt")]

# 3. pick a schedule of the selected objectives
# This one will initially fit the first objective until convergence on its eval set, then fits the second one
schedule = ParallelSchedule(objectives, training_arguments)

# 4. run the training using Adapter, similarly to running HF.Trainer, only adding `schedule`
adapter = Adapter(lang_module, schedule, training_arguments)
adapter.train()

# 5. save the trained lang_module (with all heads)
adapter.save_model("entity_detector_model")

# 6. reload and use it like any other Hugging Face model
ner_model = AutoModelForTokenClassification.from_pretrained("entity_detector_model/TokenClassification")
tokenizer = AutoTokenizer.from_pretrained("entity_detector_model/TokenClassification")

inputs = tokenizer("Is there any Abraham Lincoln here?", return_tensors="pt")
outputs = ner_model(**inputs)
ner_tags = [ner_model.config.id2label[label_id.item()] for label_id in outputs.logits[0].argmax(-1)]
```

Try this example on real data, in tutorials/adapted_named_entity_recognition.ipynb.

Adapted Machine Translation

Say you have a lot of clean parallel texts for news articles (like you can find on OPUS), but eventually, you need to translate a different domain, for example chats with a lot of typos, or medicine texts with a lot of latin expressions.

```python
# 1. pick the base model
lang_module = LangModule("Helsinki-NLP/opus-mt-en-de")

# (optional) pick train and validation evaluators for the objectives
seq2seq_evaluators = [BLEU(decides_convergence=True)]

# 2. pick objectives - we use BART's objective for adaptation and mBART's seq2seq objective for fine-tuning
objectives = [BackTranslation(lang_module,
                              batch_size=1,
                              texts_or_path="tests/mock_data/domain_unsup.txt",
                              back_translator=BackTranslator("Helsinki-NLP/opus-mt-de-en"),
                              val_evaluators=seq2seq_evaluators),
              Sequence2Sequence(lang_module,
                                batch_size=1,
                                texts_or_path="tests/mock_data/seq2seq_sources.txt",
                                labels_or_path="tests/mock_data/seq2seq_targets.txt",
                                val_evaluators=seq2seq_evaluators,
                                source_lang_id="en",
                                target_lang_id="cs")]

# 3. this one will shuffle the batches of both objectives
schedule = ParallelSchedule(objectives, adaptation_arguments)

# 4. train using Adapter
adapter = Adapter(lang_module, schedule, adaptation_arguments)
adapter.train()

# 5. save the trained (multi-headed) lang_module
adapter.save_model("translator_model")

# 6. reload and use it like any other Hugging Face model
translator_model = AutoModelForSeq2SeqLM.from_pretrained("translator_model/Sequence2Sequence")
tokenizer = AutoTokenizer.from_pretrained("translator_model/Sequence2Sequence")

inputs = tokenizer("A piece of text to translate.", return_tensors="pt")
output_ids = translator_model.generate(**inputs)
output_text = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
print(output_text)
```

Try this example on real data, in tutorials/unsupervised_machine_translation.ipynb.

Single-objective training

It also makes sense to use the comfort of adaptor's high-level interface in simple use-cases where it's enough to apply one objective.

```python
# 1. pick the base model
lang_module = LangModule(test_base_models["sequence_classification"])

# 2. pick any objective - note that all objectives have almost identical interface
classification = SequenceClassification(lang_module=lang_module,
                                        texts_or_path="tests/mock_data/supervised_texts.txt",
                                        labels_or_path="tests/mock_data/supervised_texts_sequence_labels.txt",
                                        batch_size=1)

# 3. Schedule choice does not matter in single-objective training
schedule = SequentialSchedule(objectives=[classification], args=training_arguments)

# 4. train using Adapter
adapter = Adapter(lang_module=lang_module, schedule=schedule, args=training_arguments)
adapter.train()

# 5. save the trained lang_module
adapter.save_model("output_model")

# 6. reload and use it like any other Hugging Face model
classifier = AutoModelForSequenceClassification.from_pretrained("output_model/SequenceClassification")
tokenizer = AutoTokenizer.from_pretrained("output_model/SequenceClassification")

inputs = tokenizer("A piece of text to translate.", return_tensors="pt")
output = classifier(**inputs)
output_label_id = output.logits.argmax(-1)[0].item()
print("Your new model predicted class: %s" % classifier.config.id2label[output_label_id])
```

Try this example on real data in tutorials/simple_sequence_classification.ipynb.

More examples

You can find more examples in tutorials. Your contributions are welcome :) (see CONTRIBUTING.md)

Motivation for objective-centric training

We've seen that transformers can outstandingly perform on relatively complicated tasks, which makes us think that experimenting with custom objectives can also improve their desperately-needed generalisation abilities (many studies report transformers' inability to generalise the end task, e.g. on language inference, paraphrase detection, or machine translation).

This way, we're also hoping to enable the easy use of the most accurate deep language models for more specialised domains of application, where a little supervised data is available, but much more unsupervised sources can be found (a typical Domain adaptation case). Such applications include for instance machine translation of non-canonical domains (chats or expert texts) or personal name recognition in texts of a domain with none of its own labeled names, but the use-cases are limitless.

How can you contribute?

- If you want to add a new objective or schedule, see CONTRIBUTING.md.
- If you find an issue, please report it in this repository and, if you'd also be able to fix it, don't hesitate to contribute and create a PR.
- If you'd just like to share your general impressions or personal experience with others, we're happy to get into a discussion in the Discussions section.

Citing Adaptor

If you use Adaptor in your research, please cite it as follows.

Text

ŠTEFÁNIK, Michal, Vít NOVOTNÝ, Nikola GROVEROVÁ and Petr SOJKA. Adaptor: Objective-Centric Adaptation Framework for Language Models. In Proceedings of 60th Annual Meeting of the Association for Computational Linguistics: Demonstrations. ACL, 2022. 7 pp.

BibTeX

```
@inproceedings{stefanik2022adaptor,
  author = {\v{S}tef\'{a}nik, Michal and Novotn\'{y}, V\'{i}t and Groverov{\'a}, Nikola and Sojka, Petr},
  title = {Adapt$\mathcal{O}$r: Objective-Centric Adaptation Framework for Language Models},
  booktitle = {Proceedings of 60th Annual Meeting of the Association for Computational Linguistics: Demonstrations},
  publisher = {ACL},
  numpages = {7},
  url = {https://aclanthology.org/2022.acl-demo.26},
}
```

If you have any other question(s), feel free to create an issue.
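As a closing illustration, the difference between the sequential and parallel scheduling strategies described in "How does Adaptor handle training?" can be sketched in a few dependency-free lines. The function names below are ours and do not reproduce Adaptor's actual Schedule API:

```python
def sequential_schedule(objectives):
    """Exhaust each objective's batches in turn (conventional adaptation)."""
    for batches in objectives:
        yield from batches

def parallel_schedule(objectives):
    """Interleave batches round-robin, so a training step can mix objectives."""
    iters = [iter(batches) for batches in objectives]
    while iters:
        for it in list(iters):
            try:
                yield next(it)
            except StopIteration:
                iters.remove(it)  # this objective is exhausted

mlm = ["mlm_batch_1", "mlm_batch_2"]
ner = ["ner_batch_1", "ner_batch_2"]
print(list(sequential_schedule([mlm, ner])))
# -> ['mlm_batch_1', 'mlm_batch_2', 'ner_batch_1', 'ner_batch_2']
print(list(parallel_schedule([mlm, ner])))
# -> ['mlm_batch_1', 'ner_batch_1', 'mlm_batch_2', 'ner_batch_2']
```

A dynamically balanced schedule would replace the round-robin loop with a choice driven by the objectives' recent losses.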
adapt-parser
Adapt Intent Parser

The Adapt Intent Parser is a flexible and extensible intent definition and determination framework. It is intended to parse natural language text into a structured intent that can then be invoked programmatically.

Getting Started

To take a dependency on Adapt, it's recommended to use virtualenv and pip to install source from github.

```
$ virtualenv myvirtualenv
$ . myvirtualenv/bin/activate
$ pip install -e git+https://github.com/mycroftai/adapt#egg=adapt-parser
```

Examples

Executable examples can be found in the examples folder.

Intent Modelling

In this context, an Intent is an action the system should perform. In the context of Pandora, we'll define two actions: List Stations, and Select Station (aka start playback).

With the Adapt intent builder:

```python
list_stations_intent = IntentBuilder('pandora:list_stations')\
    .require('Browse Music Command')\
    .build()
```

For the above, we are describing a "List Stations" intent, which has a single requirement of a "Browse Music Command" entity.

```python
play_music_command = IntentBuilder('pandora:select_station')\
    .require('Listen Command')\
    .require('Pandora Station')\
    .optionally('Music Keyword')\
    .build()
```

For the above, we are describing a "Select Station" (aka start playback) intent, which requires a "Listen Command" entity, a "Pandora Station", and optionally a "Music Keyword" entity.

Entities

Entities are a named value. Examples include:

- Blink 182 is an Artist
- The Big Bang Theory is a Television Show
- Play is a Listen Command
- Song(s) is a Music Keyword

For my Pandora implementation, there is a static set of vocabulary for the Browse Music Command, Listen Command, and Music Keyword (defined by me, a native english speaker and all-around good guy). Pandora Station entities are populated via a "List Stations" API call to Pandora.
Here's what the vocabulary registration looks like.

```python
def register_vocab(entity_type, entity_value):
    pass  # a tiny bit of code

def register_pandora_vocab(emitter):
    for v in ["stations"]:
        register_vocab('Browse Music Command', v)

    for v in ["play", "listen", "hear"]:
        register_vocab('Listen Command', v)

    for v in ["music", "radio"]:
        register_vocab('Music Keyword', v)

    for v in ["Pandora"]:
        register_vocab('Plugin Name', v)

    station_name_regex = re.compile(r"(.*) Radio")
    p = get_pandora()
    for station in p.stations:
        m = station_name_regex.match(station.get('stationName'))
        if not m:
            continue
        for match in m.groups():
            register_vocab('Pandora Station', match)
```

Development

Glad you'd like to help!

To install test and development requirements run

```
pip install -r test-requirements.txt
```

This will install the test-requirements as well as the runtime requirements for adapt.

To test any changes before submitting them run

```
./run_tests.sh
```

This will run the same checks as the Github actions and verify that your code should pass with flying colours.

Reporting Issues

It's often difficult to debug issues with adapt without a complete context. To facilitate simpler debugging, please include a serialized copy of the intent determination engine using the debug dump utilities.

```python
from adapt.engine import IntentDeterminationEngine
engine = IntentDeterminationEngine()
# Load engine with vocabulary and parsers

import adapt.tools.debug as atd
atd.dump(engine, 'debug.adapt')
```

Learn More

Further documentation can be found at https://mycroft-ai.gitbook.io/docs/mycroft-technologies/adapt
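To make the require/optionally semantics above concrete, here is a tiny stand-alone sketch of keyword-based intent matching. This is a toy illustration of the idea, not Adapt's actual determination algorithm, and the vocabulary and station name below are invented for the example:

```python
# Toy vocabulary mapping entity types to known surface forms
vocab = {
    'Listen Command': {'play', 'listen', 'hear'},
    'Music Keyword': {'music', 'radio'},
    'Pandora Station': {'Thumbprint'},  # would normally come from the Pandora API
}

def match_intent(utterance, required, optional=()):
    """Return matched entities if every required entity type is found, else None."""
    words = set(utterance.split())
    found = {}
    for entity_type in list(required) + list(optional):
        hits = words & vocab.get(entity_type, set())
        if hits:
            found[entity_type] = sorted(hits)[0]
        elif entity_type in required:
            return None  # a required entity is missing: no match
    return found

# "Select Station" requires a Listen Command and a Pandora Station:
print(match_intent("play Thumbprint radio",
                   required=['Listen Command', 'Pandora Station'],
                   optional=['Music Keyword']))
```

The real parser additionally weighs competing parses and tolerates word order and noise, but the required/optional distinction works the same way.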
adaptpath
# Description

Convenient script to adapt python's sys.path.

# Install

```
pip install adaptpath
```

# Usage

Say we have the dir-tree below:

```
a / b / c.py
x / y / z.py
```

Now suppose we are in "z.py" and we want to do this:

```python
from a.b import c
```

We can put the lines below ahead of "z.py":

```python
from adaptpath import adaptpath
adaptpath.adapt_from_path(2, __file__)
```

# Test

```
pytest test
```
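The package's implementation is not shown on this page, but the effect of `adapt_from_path(2, __file__)` can be sketched with a few lines of stdlib code: climb the given number of directory levels above the current file's directory and prepend that ancestor to `sys.path`. This is our guess at the behaviour, not adaptpath's actual source:

```python
import os
import sys

def adapt_from_path_sketch(levels, file_path):
    """Prepend the directory `levels` levels above `file_path`'s directory
    to sys.path. Illustrative re-implementation; adaptpath's real code
    may differ in details."""
    target = os.path.dirname(os.path.abspath(file_path))
    for _ in range(levels):
        target = os.path.dirname(target)
    if target not in sys.path:
        sys.path.insert(0, target)
    return target

# From x/y/z.py, two levels up reaches the root that contains both a/ and x/,
# which is what makes `from a.b import c` importable:
print(adapt_from_path_sketch(2, "/project/x/y/z.py"))
```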
adapt.py
adapt.py

Official wrapper around Adapt's API for Python.

Installation

```
pip install adapt.py
```

From GitHub:

```
pip install git+https://github.com/AdaptChat/adapt.py
```

Usage

```python
import adapt

class Client(adapt.Client):
    """My example client"""

    @adapt.once  # This event will only be called once
    async def on_ready(self, ready: adapt.ReadyEvent) -> None:
        print(f"Logged in as {ready.user}!")

    async def on_message(self, message: adapt.Message) -> None:
        if message.content == "!ping":
            await message.channel.send("Pong!")

if __name__ == "__main__":
    client = Client()
    client.run("token")
```

Using adapt.py with a custom Adapt instance

Adapt.py defaults to using the official Adapt instance at https://adapt.chat. If you want to use a custom instance, pass an AdaptServer instance to the server kwarg when constructing the client.

AdaptServer.local() can be used as a shortcut to create a server instance for a local instance of Adapt:

```python
from adapt import AdaptServer, Client

client = Client(server=AdaptServer.local())  # Use a local instance of Adapt
...
```

Or, you can manually pass in URLs:

```python
from adapt import AdaptServer, Client

server = AdaptServer(
    api="https://my-adapt-instance.com/api",
    harmony="https://my-adapt-instance.com/harmony",
    convey="https://my-adapt-instance.com/convey",
)

client = Client(server=server)
...
```
adaptwms
adaptwms

Lightweight OpenStreetMap adapter for WMS services

What is it?

Basically it is an adapter translating requests in the so-called Slippy Map format (used by OpenStreetMap; takes a zoom level and XY tile indexes) into requests to an external WMS service (which takes a bounding box, given as two points in some coordinate system, of the area to render).

To give an example, let's say you need to supply to an external map viewer an URL in the format:

```
https://tile.openstreetmap.org/{z}/{x}/{y}.png
```

All x, y and z are natural numbers. But the service you want to access gives you an interface that requires this format:

```
https://external.service/wms?BBOX=0.0%2C0.0%2C1.0%2C1.0&SRS=EPSG%3A3857&WIDTH=256&HEIGHT=256&SERVICE=WMS
```

However, in this case BBOX contains a set of 4 floating point numbers for the 2 points (x0, y0) and (x1, y1) limiting the area to render. This is the kind of problem this program is trying to solve.

What is supported?

These are the known constraints:

- the WMS service you use must allow unauthenticated access (there is no way of passing credentials in any form at the moment)
- it must support the EPSG:3857 coordinate system

How to use it...

...if I don't have experience with Django

This makes use of this Github project and the demo project it supplies. Best for testing, or temporary deployment.

1. Make sure you have Python and pip installed
2. Clone repo: git clone [email protected]:v3l0c1r4pt0r/adaptwms.git and enter it: cd adaptwms
3. Install all requirements: pip install -r requirements.txt
4. Run development server with: ./manage.py runserver
5. Open http://127.0.0.1:8000/ in your browser

TODO: write

...if I do have some experience with Django

This makes use of the pip package available on PyPI and a Django project that you have, or want to set up.
Best for production environments.

1. Make sure you have Python and pip installed
2. Start a new django project: django-admin startproject demo . (where demo is its name), or use an existing one
3. Install adaptwms from PyPI: pip install adaptwms
4. Add adaptwms to your INSTALLED_APPS in settings.py
5. Add path("adaptwms/", adaptwms.views.adapter_view), to your urlpatterns in urls.py (the same could be done for adaptwms.views.InterfaceView.as_view() if you want an interactive generator for adaptwms URLs)
6. Run development server with: ./manage.py runserver
7. Open http://127.0.0.1:8000/ in your browser
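The Slippy-Map-to-WMS translation at the heart of adaptwms boils down to a small piece of map math. As an illustration (our own sketch, not code taken from adaptwms), converting a z/x/y tile address into an EPSG:3857 BBOX looks like this:

```python
# Half the extent of the EPSG:3857 (Web Mercator) world, in metres
ORIGIN = 20037508.342789244

def tile_to_bbox_3857(z, x, y):
    """Convert a Slippy Map tile address (zoom, x, y) into the EPSG:3857
    bounding box (x0, y0, x1, y1) a WMS request would ask to render.
    Illustrative sketch only -- not adaptwms's actual source."""
    n = 2 ** z                 # tiles per axis at this zoom level
    size = 2 * ORIGIN / n      # side length of one tile, in metres
    x0 = -ORIGIN + x * size    # west edge
    y1 = ORIGIN - y * size     # north edge (tile y grows southward)
    return (x0, y1 - size, x0 + size, y1)

# Zoom 1, tile (0, 0) covers the north-west quadrant of the world:
print(tile_to_bbox_3857(1, 0, 0))
# -> (-20037508.342789244, 0.0, 0.0, 20037508.342789244)
```

The four numbers then only need to be URL-encoded into the BBOX parameter shown in the example request above.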
adapy
No description available on PyPI.
ada-py
ADA - Assembly for Design & Analysis

A python library for working with structural analysis and design. This library should be considered experimental.

The recommended way of installing ada-py is by creating a new isolated environment for the installation, like so:

```
conda create -n adaenv -c conda-forge -c krande ada-py
```

or, if you wish to download the latest build from any branch passing all unittests, you can do

```
conda create -n adaenv -c conda-forge -c krande/label/dev ada-py
```

Here are some of the goals with ada-py:

- Support reading, writing and modifying FE models and post-processing FE results
- Support open source and commercial FE packages (based on what I use/would like to use regularly)
- Support scriptable FE meshing
- Support reading/writing CAD/BIM formats (STEP/IFC) & mesh formats (GLTF)
- Use a CSG (Constructive Solid Geometry) core primitives library for boolean operations based on the IFC/STEP standards
- Provide the building blocks for advanced parametric and procedural 3d model design and simulation workflows

The library should always strive for user ergonomics.

Quick Links

- Try ada-py online with code-aster and calculix pre-installed
- Feel free to start/join any informal topic related to adapy here.
- Issues related to adapy can be raised here

Usage

Some examples of using the ada-py package.

Create an IFC file

The following code

```python
from ada import Assembly, Part, Beam

a = Assembly("MyAssembly") / (Part("MyPart") / Beam("MyBeam", (0, 0, 0), (1, 0, 0), "IPE300"))
a.to_ifc("C:/temp/myifc.ifc")
```

creates an Ifc file containing an IfcBeam with the following hierarchy:

```
MyAssembly (IfcSite)
    MyPart (IfcBuildingStorey)
        MyBeam (IfcBeam)
```

The resulting IfcBeam (and corresponding hierarchy) shown in the figure above is taken from the awesome blender plugin blenderbim.

Convert between FEM formats

Here is an example showing the code for converting a sesam FEM file to abaqus and code aster.

Note! Reading FEM load and step information is not supported, but might be added in the future.

```python
import ada

a = ada.from_fem('path_to_your_sesam_file.FEM')
a.to_fem('name_of_my_analysis_file_deck_directory_abaqus', 'abaqus')
a.to_fem('name_of_my_analysis_file_deck_directory_code_aster', 'code_aster')
```

Current read support is: abaqus, code aster and sesam.

Current write support is: abaqus, code aster, sesam, calculix and usfos.

Create and execute a FEM analysis in Calculix, Code Aster and Abaqus

This example uses a function beam_ex1 from here that returns an Assembly object with a single Beam with a few holes in it (to demonstrate a small portion of the steel detailing capabilities in ada and IFC) converted to a shell element mesh using a FE mesh recipe create_beam_mesh found here.

```python
from ada.param_models.fem_models import beam_ex1

a = beam_ex1()

a.to_fem("MyCantilever_abaqus", "abaqus", overwrite=True, execute=True, run_ext=True)
a.to_fem("MyCantilever_calculix", "calculix", overwrite=True, execute=True)
a.to_fem("MyCantilever_code_aster", "code_aster", overwrite=True, execute=True)
```

after the code is executed you can look at the results using supported post-processing software, or directly in python using Jupyter notebook/lab (currently only supported for Code Aster) for the FEA results.

To access the stress and displacement data directly using python, here is a way you can use meshio to read the results from Calculix and Code Aster (continuing on the previous example).

```python
from ada.config import Settings
import meshio

vtu = Settings.scratch_dir / "MyCantilever_calculix" / "MyCantilever_calculix.vtu"
mesh = meshio.read(vtu)

# Displacements in [X, Y, Z] at point @ index=-1
print('Calculix:', mesh.point_data['U'][-1])

rmed = Settings.scratch_dir / "MyCantilever_code_aster" / "MyCantilever_code_aster.rmed"
ca_mesh = meshio.read(rmed, 'med')

# Displacements in [X, Y, Z] at point @ index=-1
print('Code Aster:', ca_mesh.point_data['DISP[10] - 1'][-1][:3])
```

Note! The above example assumes you have installed Abaqus, Calculix and Code Aster locally on your computer.

To set correct paths to your installations of the FE software you wish to use, there are a few ways of doing so:

1. Add the directory path of the FE executable/batch to your system path.
2. Add directory paths to system environment variables. This can be done by using the control panel or running the following from a cmd prompt with administrator rights:

```
:: Windows
setx ADA_abaqus_exe <absolute path to abaqus.bat>
setx ADA_calculix_exe <absolute path to ccx.exe>
setx ADA_code_aster_exe <absolute path to as_run.bat>

:: Linux?
:: Mac?
```

Note! It is very important that any paths containing whitespaces be converted to "shortened paths". To shorten a path on windows you can use the utility pathcopycopy.

For installation files of open source FEM software such as Calculix and Code Aster, here are some links:

- https://github.com/calculix/cae/releases (calculix CAE for windows/linux)
- https://code-aster-windows.com/download/ (Code Aster for Windows Salome Meca v9.3.0)
- https://www.code-aster.org/spip.php?rubrique21 (Code Aster for Linux)
- https://salome-platform.org/downloads/current-version (Salome v9.6.0 for windows/linux)
- https://prepomax.fs.um.si/downloads/ (PreProMax -> Calculix preprocessor)

Note! pip is not a recommended installation method due to an unstable behaviour often manifested as DLL import errors related to the vtk package.

Acknowledgements

This project would never have been possible without the existing open source python and c++ libraries. Although listed in the package dependencies (which is a long list), here are some of the packages that are at the very core of adapy:

- IfcOpenShell
- OpenCascade
- PythonOCC-Core
- Gmsh
- Trimesh

A huge thanks to all involved in the development of the packages mentioned here and in the list of packages adapy depends on.

If you feel that a certain package listed in the adapy dependencies should be listed here please let me know and I will update the list :)

Project Responsible

Kristoffer H. Andersen
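As a footnote to the FE-executable configuration described in this entry, the `ADA_<tool>_exe` environment-variable convention can be sketched in plain Python. This is our illustration of the lookup idea; ada-py's real resolution logic may differ:

```python
import os
import shutil

def find_fe_exe(tool, env=None):
    """Resolve an FE solver executable following the ADA_<tool>_exe
    environment-variable convention, falling back to searching the
    system PATH. Illustrative only -- not ada-py's actual code."""
    env = os.environ if env is None else env
    exe = env.get("ADA_%s_exe" % tool)
    if exe:
        return exe
    return shutil.which(tool)  # None if the tool is not on PATH either

# With an explicit mapping standing in for the real environment:
print(find_fe_exe("abaqus", {"ADA_abaqus_exe": "C:/SIMULIA/Commands/abaqus.bat"}))
```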
adapya
What is adapya?
---------------

With adapya you can access Adabas databases from Python programs using the Adabas API.

It comes with sample programs to show its use and demonstrates new Adabas features like reading and storing of large binary objects using the extended Adabas API (ACBX).

Adabas is a commercial database system that runs on Windows, Unix and mainframe systems. For more information see http://en.wikipedia.org/wiki/Adabas and http://www.softwareag.com/corporate/products/adabas/default.asp

In the download area you may also find an Adabas community version for Windows.

Here is the Adabas product documentation: http://documentation.softwareag.com/adabas/

Python is a scripting language that allows rapid prototyping, object-oriented or functional style programming. It is open source and is used in a large number of projects.

Note: adapya does not implement a SQL interface as defined with the Python DBAPI. You may do Adabas DBAPI access with the product ADABAS SQL Gateway via the ODBC interface.

adapya requires a good knowledge of the Adabas API.

adapya is a pure Python package: it does not require compilation of extensions. It should work on all platforms where CPython and Adabas are available. It has been tested on Windows, Solaris and z/Linux.

It can access local Adabas databases and, with the product NET-WORK, remote Adabas databases on all platforms, including mainframe (z/OS, VSE, BS2000).

Prerequisite for adapya is Python version 2.5, or Python versions 2.3 and 2.4 where it requires the extra ctypes package.

adapya can be downloaded from http://tech.forums.softwareag.com/viewforum.php?f=171&C=11

adapya license
--------------

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
adapya-adabas
adapya-adabas implements the Adabas database API for Python.

It can access local and remote Adabas databases. adapya-adabas comes with scripts and sample programs to show its features. It is being used on Linux, mainframe z/OS, Solaris and Windows.

Prerequisites for adapya-adabas are Python version 2.7, 3.5 or higher and the adapya-base package.

Installation

    pip install adapya-adabas

Links

- Details of adapya-adabas: https://softwareag.github.io/adapya-adabas/index.html
- Adabas at Software AG: https://www.softwareag.com
- Adabas forum: http://tech.forums.softwareag.com/techjforum/forums/show/171.page
- Adabas documentation: http://techcommunity.softwareag.com

License

Copyright 2004-ThisYear Software AG

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

These tools are provided as-is and without warranty or support. They do not constitute part of the Software AG product suite. Users are free to use, fork and modify them, subject to the license agreement. While Software AG welcomes contributions, we cannot guarantee to include every contribution in the master project.
adapya-base
adapya-base is a Python package that provides the foundations for the other adapya packages, especially for adapya-adabas, the Adabas API for Python.

adapya-base comes with scripts and sample programs to show its features. It is being used on Linux, mainframe z/OS, Solaris and Windows.

Prerequisite for adapya-base is Python version 2.7 or 3.5 and above.

Installation

    pip install adapya-base

Links

- Details of adapya-base: https://softwareag.github.io/adapya-base/index.html
- Adabas at Software AG: https://www.softwareag.com
- Adabas forum: http://tech.forums.softwareag.com/techjforum/forums/show/171.page
- Adabas documentation: http://techcommunity.softwareag.com

License

Copyright 2004-ThisYear Software AG

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

These tools are provided as-is and without warranty or support. They do not constitute part of the Software AG product suite. Users are free to use, fork and modify them, subject to the license agreement. While Software AG welcomes contributions, we cannot guarantee to include every contribution in the master project.
adapya-entirex
adapya-entirex is a services library for persistent messaging using the Advanced Communication Interface (ACI) with the EntireX Broker from Software AG. EntireX is a component in the webMethods high-performance communication infrastructure.

adapya-entirex is part of a set of adapya Python packages that also includes adapya-adabas, a client library for Adabas database access.

adapya-entirex is a pure Python package: it does not require compilation of extensions. It has been used on Linux, Solaris and Windows.

Prerequisite for adapya-entirex is Python version 2.7 or 3.5 and above.

Installation

Install and update with the Python package manager:

    pip install -U adapya-entirex

Links

- Details of adapya-entirex: https://softwareag.github.io/adapya-entirex/index.html
- adapya in the Adabas forum: http://tech.forums.softwareag.com/techjforum/forums/show/171.page
- About webMethods EntireX: https://resources.softwareag.com/application-modernization/webmethods-entirex
- webMethods EntireX documentation: http://techcommunity.softwareag.com/ecosystem/documentation/webmethods/entirex/entirex_vers.htm

License

Copyright 2004-ThisYear Software AG

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

These tools are provided as-is and without warranty or support. They do not constitute part of the Software AG product suite. Users are free to use, fork and modify them, subject to the license agreement. While Software AG welcomes contributions, we cannot guarantee to include every contribution in the master project.
adapya-era
adapya-era is a services library of the Event Replicator for Adabas.

The Event Replicator for Adabas is an add-on product to Adabas that allows replicating database data to other systems. Client programs (also called target adapters) receive event replication data through a messaging system like MQ Series or EntireX Broker.

adapya-era can be used to write target adapters in Python. The package also includes scripts that can send requests to the Replicator and receive event data via the EntireX Broker messaging system.

adapya-era requires the following adapya packages: adapya.adabas, adapya.base and adapya.entirex.

Prerequisite for adapya-era is Python version 2.7 or 3.5 and above.

Installation

    pip install adapya-era

Links

- Documentation for adapya-era: https://softwareag.github.io/adapya-era/index.html
- About Event Replicator for Adabas: https://resources.softwareag.com/adabas-natural/event-replicator-for-adabas-on-the-mainframe
- Community forum: http://tech.forums.softwareag.com/techjforum/forums/show/171.page
- Event Replicator documentation: http://techcommunity.softwareag.com/ecosystem/documentation/adabas/a_distribution/event_replicator_vers.htm (free registered access)

License

Copyright 2004-ThisYear Software AG

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

These tools are provided as-is and without warranty or support. They do not constitute part of the Software AG product suite. Users are free to use, fork and modify them, subject to the license agreement.
While Software AG welcomes contributions, we cannot guarantee to include every contribution in the master project.
adara-privacy
Adara Privacy SDK

The Adara Privacy SDK allows you to tokenize Personally Identifiable Information (PII) within an isolated environment. The tokens produced using this SDK follow a set of simple standards that allow you to interact with other token producers, so that you can participate in meaningful data exchanges without revealing sensitive information about individual users.

ADARA wrote this SDK to offer out-of-the-box support for engagement with the ADARA Privacy Token API, but does not require it. Data partners can use the ADARA Privacy Token Server SDK without using the ADARA Privacy Token API.

NOTE: Any tokenization data generated within this SDK is only transmitted to Adara explicitly as described below.

Getting Started

Download and Install

Download and install the SDK from PyPI (we strongly recommend installing in a virtual environment):

    (venv) % pip install adara-privacy

Setup your local configuration

Configure the ADARA Privacy Token Server SDK using a single JSON configuration file with the following format:

    {
      "client_id": "<optional: your client ID>",
      "client_secret": "<optional: your client secret>",
      "auth_uri": "https://auth.adara.com/oauth/token",
      "privacy": {
        "private_salt": "<!!REQUIRED!!: your PRIVATE salt value>",
        "common_salts": {
          "adara": "7f0587e5843dff04240624af5f215fe57ba9d5841ae25c5a22b0d95900ceb3ed",
          "my_consortium": "34ef58c071c6c125c06103bb8bd1ce239afd4046711676a8d999df0c6a0c0820",
          "my_other_consortium": "3362e7e25a9b2616d954d71bc9fcac87fb6939b85f4cf04e8d926d0e995723bf"
        },
        "audience_uri": "https://api.adara.com",
        "pipeline_id": "<optional: your pipeline ID>"
      }
    }

In the example above, the common salts for my_consortium and my_other_consortium are provided for illustrative purposes: any consortium of interest can be added by providing its salt value in the "common_salts" node.
There will be one token produced per identifier, per common salt, although this behavior can be overridden for specific identifiers at call time.

Set up your configuration file locally and point the environment variable ADARA_SDK_CREDENTIALS to your file location:

    % export ADARA_SDK_CREDENTIALS=<path to your config>/my_config.txt

The file path, name and extension are unimportant as long as they point to a readable file location in your local environment.

Code Quickstart

Identities and Identifiers

The SDK is written to accept the PII you have access to for an individual and transform it into a privacy-safe set of tokens. An important point to remember is that tokens, by themselves, are intentionally pretty useless. They are useful only when maintained as a set of tokens pointing to an individual user. The classes within the SDK reflect this by using a set of Identifiers that belong to an Identity:

    from adara_privacy import Identity, Identifier

    my_identity = Identity(
        # pass the identifier type as an arg (placement doesn't matter)
        Identifier('email', '[email protected]'),
        # or use a named argument
        Identifier(state_id="D1234567"),
    )

Supported identifier types

The ADARA Privacy SDK supports the following identifiers out of the box:

| Type Value | Description | Keyword(s) |
| --- | --- | --- |
| cookie | Persistent cookie identifier | single: cookie |
| customer_id | Internal customer ID | single: customer_id |
| drivers_license | State-issued driver's license number | single: drivers_license |
| email | Clear text email address | single: email |
| hashed_email | Hashed email address | single: hashed_email |
| membership_id | Membership / loyalty ID | single: membership_id |
| passport | Passport number | single: passport |
| social_security | Social security number | single: social_security |
| state_id | Other state ID | single: state_id |
| streetname_zipcode | Street name and zip code | composite: street_name, zip_code |

You can also extend the SDK with identifier types of your own.

Tokens

Each Identifier can be turned into tokens. The tokens are generated using the private salt and one or more common salts defined in your local configuration.
Using these salts and some standard hashing algorithms, the ADARA Privacy SDK turns the raw PII from the identifier into a private token and one or more common tokens. The type of identifier (example: email or driver license number) is also returned with the token, as well as an optional label.

You can see the tokens for an Identity by invoking the to_tokens() method:

    print(json.dumps(my_identity.to_tokens()))

For the first example above, this yields the following output (or something similar, based on your client salt):

    {
      "package_token": "1dec707d16232521608c722299d03e6a34f47b20d3bbacb2a0738384c06fd029",
      "tokens": [
        {
          "private": "15ba6cd3b7f2618e680180706ae65850093ee165d36fb743c4d64ec3a51bd823",
          "adara": "4447b2c72b9aa03977af4b9f085feaf001587b652f36a914363d8eb709bc20bf",
          "my_consortium": "b84057a0bf979e28d53b846c2f3148a1f58e07a282f1bd768ce73a0fce347aef",
          "my_other_consortium": "8077ddc77cf8735dd6143d929f9f04deceec33e53331fe8466ad63934e33be3e",
          "type": "email"
        },
        {
          "private": "8edeb62e51ff5e19bba160b2c00c1747578fc5f3ae0c2f10a1bafd1d3522fbf2",
          "adara": "141dd951d0a54dfb320bdea0f5c35c9b379726780670d3b8cd6dd0d5341bb106",
          "my_consortium": "b0a85ec62d29b137d403155301cdb613dcf7474d40345b9c141cd8d3b3a32dcd",
          "my_other_consortium": "923c5edbd61954b09f20044be534b7a14c3ebae717eb0c25eb81679df064d5fd",
          "type": "state_id"
        }
      ]
    }

Private vs. Common Tokens

Here's a helpful way to consider the difference between private tokens and common tokens:

Private tokens function as your own unique handle on an identifier. Because they are generated with your private salt, and only you should have access to your private salt, no one else is able to create the same tokens for a given identifier. If you use Adara's Privacy APIs, only you (verified through authentication) can use your private tokens to perform lookups. You can store private tokens as you see fit.

Common tokens are shared amongst members of a consortium.
Common tokens are generated with a common salt, and anyone with access to that salt (i.e., the members of a consortium) can generate the same common token. Common tokens are useful for matching against private tokens and are therefore used to build an identity graph. If you use Adara's Privacy APIs, common tokens cannot be used for lookups in any way, so there really isn't a point to storing common tokens yourself. The common tokens are submitted alongside private tokens so that the matching can occur internally.

Package Tokens

Tokens will be returned for each identifier and salt combination. For a given Identity instance, this can result in a large number of individual tokens, which is not necessarily convenient for storing alongside your data. To solve this issue, each tokenization result contains a package token. This is a private token derived from all the identifiers within an identity. Like all tokens, it is deterministic, but if you add or remove identifiers from an identity, the package token will change accordingly (the order in which you add identifiers is not important).

The package token is at least as good as the best identifier token within the result. If you want to store a single token to reference the identity, use the package token.

Root Tokens

Root tokens are used as a deterministic "seed" in the process of generating all subsequent tokens.
If you decide to transmit the root token along with other tokens, ADARA will be able to generate additional common tokens without any action on your end.

By default this feature is turned off; it can be enabled by adding a transmit_root_token flag to your local configuration:

    {
      "client_id": "<optional: your client ID>",
      "client_secret": "<optional: your client secret>",
      "auth_uri": "https://auth.adara.com/oauth/token",
      "privacy": {
        "transmit_root_token": true,
        "private_salt": "<!!REQUIRED!!: your PRIVATE salt value>",
        "common_salts": {
          "adara": "7f0587e5843dff04240624af5f215fe57ba9d5841ae25c5a22b0d95900ceb3ed",
          "my_consortium": "34ef58c071c6c125c06103bb8bd1ce239afd4046711676a8d999df0c6a0c0820",
          "my_other_consortium": "3362e7e25a9b2616d954d71bc9fcac87fb6939b85f4cf04e8d926d0e995723bf"
        },
        "audience_uri": "https://api.adara.com",
        "pipeline_id": "<optional: your pipeline ID>"
      }
    }

Labels

If you are interested in capturing individual identifier tokens, you may find it helpful to label your identifiers. This is because a large number of identifiers in the result may get confusing to associate with specific identifiers, especially if you have more than one identifier of the same type.

To label an identifier, simply use the label option when invoking the call:

    my_identity = Identity(
        # labels help differentiate the tokens in the result
        Identifier('email', '[email protected]', label="personal email"),
        Identifier('email', '[email protected]', label="work email"),
    )

and this would be the output:

    {
      "package_token": "e67529f593120f5b141b0920199ca6aabfea864c735ad6e3a1625227da735137",
      "tokens": [
        {
          "private": "15ba6cd3b7f2618e680180706ae65850093ee165d36fb743c4d64ec3a51bd823",
          "adara": "4447b2c72b9aa03977af4b9f085feaf001587b652f36a914363d8eb709bc20bf",
          "my_consortium": "b84057a0bf979e28d53b846c2f3148a1f58e07a282f1bd768ce73a0fce347aef",
          "my_other_consortium": "8077ddc77cf8735dd6143d929f9f04deceec33e53331fe8466ad63934e33be3e",
          "type": "email",
          "label": "personal email"
        },
        {
          "private": "b790e43743f7db5735e5b77034036bc040656b70dc969230a5ffeec182a10982",
          "adara": "e0732ffd3b6524bc204df41a479f8143ceb7675f03f4152bec4daf36fd920483",
          "my_consortium": "368405c8c20623bd234a5d242ae1b48806f9ee91513735dfc5c66671da7bc858",
          "my_other_consortium": "63ae761a793f944207c6a030c7b355ef9e407196786f071a3d4b0e038dc59d45",
          "type": "email",
          "label": "work email"
        }
      ]
    }

Labels can be any string, so you can use something like a UUID to track tokens programmatically.

Cherry Picking Common Tokens

Your configuration file should contain all the salts you may want to use for token creation. In some cases, however, you may only want to create common tokens for a subset of consortiums. This allows you to submit identity data to some identity graphs while omitting it from others.

To limit token results for any identifier to only a subset of your defined common salts, use the common_tokens keyword argument in the Identifier instantiation:

    my_identity = Identity(
        # no "common_tokens" keyword, so all tokens will be generated
        Identifier('email', '[email protected]', label="personal email"),
        # here, we omit "my_other_consortium"
        Identifier(email='[email protected]', label="work email", common_tokens=['adara', 'my_consortium']),
    )

    {
      "package_token": "8736afea56e62d978360b8f304ea6f33c692203ba91649aaf1226eb0601ef353",
      "tokens": [
        {
          "private": "15ba6cd3b7f2618e680180706ae65850093ee165d36fb743c4d64ec3a51bd823",
          "adara": "4447b2c72b9aa03977af4b9f085feaf001587b652f36a914363d8eb709bc20bf",
          "my_consortium": "b84057a0bf979e28d53b846c2f3148a1f58e07a282f1bd768ce73a0fce347aef",
          "my_other_consortium": "8077ddc77cf8735dd6143d929f9f04deceec33e53331fe8466ad63934e33be3e",
          "type": "email",
          "label": "personal email"
        },
        {
          "private": "b790e43743f7db5735e5b77034036bc040656b70dc969230a5ffeec182a10982",
          "adara": "e0732ffd3b6524bc204df41a479f8143ceb7675f03f4152bec4daf36fd920483",
          "my_consortium": "368405c8c20623bd234a5d242ae1b48806f9ee91513735dfc5c66671da7bc858",
          "type": "email",
          "label": "work email"
        }
      ]
    }

Sending data to Adara

If you want to send your tokens into Adara's Privacy API, you can use the AdaraPrivacyApiStreamer class.

You'll need to specify several of the "optional" settings in the configuration file for this, and you'll get these values from Adara's provisioning team. They'll set up a configuration file for you with everything you need, such as client secrets, pipeline IDs, and API endpoints.

Here's some sample code that creates an Identity instance and sends the tokenized result to Adara's Privacy API (note that tokenization is implicit):

    from adara_privacy import Identity, Identifier, AdaraPrivacyApiStreamer

    # create instance of an API streamer
    adara_api = AdaraPrivacyApiStreamer()

    # create an identity instance
    my_identity = Identity(
        # labels help differentiate the tokens in the result
        Identifier('email', '[email protected]', label="personal email"),
        Identifier(email='[email protected]', label="work email", common_tokens=['adara', 'my_consortium']),
    )

    # push the identity tokens to ADARA
    adara_api.save(my_identity)
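The salted-hash scheme described above (a private salt yielding a private token, shared salts yielding common tokens, and an order-independent package token over all identifiers) can be sketched with the standard library. This is an illustrative model only: the helper names, the "salt:value" recipe, and the placeholder salts below are assumptions, not ADARA's actual implementation or API.

```python
import hashlib

def salted_token(value: str, salt: str) -> str:
    """SHA-256 hex digest of a salted identifier value (illustrative recipe)."""
    return hashlib.sha256(f"{salt}:{value}".encode("utf-8")).hexdigest()

def tokenize(identifier: str, private_salt: str, common_salts: dict) -> dict:
    """One private token plus one common token per configured salt."""
    tokens = {"private": salted_token(identifier, private_salt)}
    for name, salt in common_salts.items():
        tokens[name] = salted_token(identifier, salt)
    return tokens

def package_token(identifiers: list, private_salt: str) -> str:
    """Digest over all identifiers; sorting the per-identifier tokens makes
    the result independent of the order identifiers were added."""
    parts = sorted(salted_token(i, private_salt) for i in identifiers)
    return hashlib.sha256("".join(parts).encode("utf-8")).hexdigest()

common = {"adara": "salt-a", "my_consortium": "salt-b"}  # placeholder salts
tokens = tokenize("user@example.com", "my-private-salt", common)
```

Anyone holding "salt-a" can reproduce `tokens["adara"]`, while `tokens["private"]` is reproducible only with the private salt, which is the matching property the consortium model described above relies on.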
adarnauth-esi
# adarnauth-esi

Django app for accessing the EVE Swagger Interface.

## Quick Start

1. Add `esi` to your `INSTALLED_APPS` setting:

   `INSTALLED_APPS += 'esi'`

2. Include the esi urlconf in your project's urls:

   `url(r'^sso/', include('esi.urls', namespace='esi')),`

3. Register an application with the [EVE Developers site](https://developers.eveonline.com/applications)

   If your application requires scopes, select `Authenticated API Access` and register all possible scopes your app can request. Otherwise `Authentication Only` will suffice.

   Set the `Callback URL` to `https://example.com/sso/callback`

4. Add SSO client settings to your project settings:

   `ESI_SSO_CLIENT_ID = "my client id"`
   `ESI_SSO_CLIENT_SECRET = "my client secret"`
   `ESI_SSO_CALLBACK_URL = "https://example.com/sso/callback"`

5. Run `python manage.py migrate` to create models.

## Usage in Views

When views require a token, wrap with the `token_required` decorator and accept a `token` arg:

    from esi.decorators import token_required

    @token_required()
    def my_view(request, token):
        ...

This will prompt the user to either select a token from their current ones, or if none exist, create a new one via SSO.

To specify scopes, add either a list of names or a space-delimited string:

    @token_required(scopes=['esi-location.read_ship_type.v1', 'esi-location.read_location.v1'])
    @token_required(scopes='esi-location.read_ship_type.v1 esi-location.read_location.v1')

To require a new token, such as for logging in, add the `new` argument:

    @token_required(new=True)

To request all of a user's tokens which have the required scopes, wrap instead with the `tokens_required` decorator and accept a `tokens` arg:

    @tokens_required(scopes='esi-location.read_ship_type.v1')
    def my_view(request, tokens):
        ...

This skips prompting for token selection and instead passes that responsibility to the view.
Tokens are provided as a queryset.

## Accessing the EVE Swagger Interface

adarnauth-esi provides a convenience wrapper around the [bravado SwaggerClient](https://github.com/Yelp/bravado).

### Getting a Client

To get a SwaggerClient configured for ESI, call the factory:

    from esi.clients import esi_client_factory

    client = esi_client_factory()

### Accessing Authenticated Endpoints

To get an authenticated SwaggerClient, add the token argument:

    client = esi_client_factory(token=my_token)

Or, get the client from the specific token model instead:

    client = my_token.get_esi_client()

Authenticated clients will auto-renew tokens when needed, or raise a `TokenExpiredError` if they aren't renewable.

### Specifying Resource Versions

As explained on the [EVE Developers Blog](https://developers.eveonline.com/blog/article/breaking-changes-and-you), it's best practice to call a specific version of the resource and allow the ESI router to map it to the correct route, being `legacy`, `latest` or `dev`.

Client initialization begins with a base swagger spec. By default this is the version defined in settings (`ESI_API_VERSION`), but it can be overridden with an extra argument to the factory:

    client = esi_client_factory(version='v4')
    client = token.get_esi_client(version='v4')

Only resources with the specified version number will be available. For instance, if you specify `v4` but `Universe` does not have a `v4` version, it will not be available to that specific client. Only `legacy`, `latest` and `dev` are guaranteed to have all resources available.

Individual resources are versioned and can be accessed by passing additional arguments to the factory:

    client = esi_client_factory(Universe='v1', Character='v3')
    client = token.get_esi_client(Universe='v1', Character='v3')

A list of available resources is available on the [EVE Swagger Interface browser](https://esi.tech.ccp.is).
If the resource is not available with the specified version, an `AttributeError` will be raised.

This version of the resource replaces the resource originally initialized. If the requested base version does not have the specified resource, it will be added.

Note that only one old revision of each resource is kept available through the legacy route. Keep an eye on the [deployment timeline](https://github.com/ccpgames/esi-issues/projects/2/) for resource updates.

### Using a Local Spec File

Specifying resource versions introduces one major problem for shared code: not all resources nor all their operations are available on any given version. This can be addressed by shipping a copy of the [versioned latest spec](https://esi.tech.ccp.is/_latest/swagger.json) with your app. **This is the preferred method for deployment.**

To build a client using this local spec, pass an additional kwarg `spec_file` which contains the path to your local swagger.json:

    c = esi_client_factory(spec_file='/path/to/swagger.json')

For example, a swagger.json in the current file's directory would look like:

    c = esi_client_factory(spec_file=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'swagger.json'))

If a `spec_file` is specified, all other versioning is unavailable: ensure you ship a spec with resource versions your app can handle.

### Accessing Alternate Datasources

The ESI datasource can also be specified during client creation:

    client = esi_client_factory(datasource='tranquility')

Available datasources are `tranquility` and `singularity`.

## Cleaning the Database

Two tasks are available:

- `cleanup_callbackredirect` removes all `CallbackRedirect` models older than a specified age (in seconds). Default is 300.
- `cleanup_token` checks all `Token` models, and if expired, attempts to refresh.
If a token is expired and cannot refresh, or fails to refresh, the model is deleted.

To schedule these automatically with celerybeat, add them to your settings.py `CELERYBEAT_SCHEDULE` dict like so:

    from celery.schedules import crontab

    CELERYBEAT_SCHEDULE = {
        ...
        'esi_cleanup_callbackredirect': {
            'task': 'esi.tasks.cleanup_callbackredirect',
            'schedule': crontab(hour='*/4'),
        },
        'esi_cleanup_token': {
            'task': 'esi.tasks.cleanup_token',
            'schedule': crontab(day_of_month='*/1'),
        },
    }

Recommended intervals are four hours for callback redirect cleanup and daily for token cleanup (token cleanup can get quite slow with a large database, so adjust as needed). If your app does not require background token validation, it may be advantageous to not schedule the token cleanup task, instead relying on the validation check when using `@token_required` decorators or adding `.require_valid()` to the end of a query.

## Operating on Singularity

By default, adarnauth-esi processes all operations on the tranquility cluster. To operate on singularity instead, two settings need to be changed:

- `ESI_OAUTH_URL` should be set to `https://sisilogin.testeveonline.com/oauth`
- `ESI_API_DATASOURCE` should be set to `singularity`

Note that tokens cannot be transferred between servers. Any tokens in the database before switching to singularity will be deleted on the next refresh.
adarsha-pdf
This is the home page for our project.
adarsh-distributions
No description available on PyPI.
adas
Adas: Adaptive Scheduling of Stochastic Gradients

Status

Table of Contents

- Introduction
- License
- Citing Adas
- Empirical Classification Results on CIFAR10 and CIFAR100
- QC Metrics
- Requirements
  - Software/Hardware
  - Computational Overhead
- Installation
- Usage
- Common Issues (running list)
- TODO
- Pytest

Introduction

Adas is an adaptive optimizer for scheduling the learning rate in training Convolutional Neural Networks (CNN):

- Adas exhibits the rapid minimization characteristics that adaptive optimizers like AdaM are favoured for
- Adas exhibits generalization (low testing loss) characteristics on par with SGD-based optimizers, improving on the poor generalization characteristics of adaptive optimizers
- Adas introduces no computational overhead over adaptive optimizers (see experimental results)
- In addition to optimization, Adas introduces new probing metrics for CNN layer evaluation (quality metrics)

This repository contains a PyTorch implementation of the Adas learning rate scheduler algorithm as well as the Knowledge Gain and Mapping Condition metrics.

Visit the paper branch to see the paper-related code.
You can use that code to replicate experiments from the paper.

License

Adas is released under the MIT License (refer to the LICENSE file for more information):

| Permissions | Conditions | Limitations |
| --- | --- | --- |
| Commercial use | License and Copyright Notice | Liability |
| Distribution | | Warranty |
| Modification | | |
| Private Use | | |

Citing Adas

    @article{hosseini2020adas,
      title={Adas: Adaptive Scheduling of Stochastic Gradients},
      author={Hosseini, Mahdi S and Plataniotis, Konstantinos N},
      journal={arXiv preprint arXiv:2006.06587},
      year={2020}
    }

Empirical Classification Results on CIFAR10, CIFAR100 and Tiny-ImageNet-200

Figure 1: Training performance using different optimizers across three datasets and two CNNs

Table 1: Image classification performance (test accuracy) with fixed budget epoch of ResNet34 training

QC Metrics

Please refer to QC on Wiki for more information on the two metrics of knowledge gain and mapping condition for monitoring training quality of CNNs.

Requirements

Software/Hardware

We use Python 3.7. Please refer to Requirements on Wiki for a complete guideline.

Computational Overhead

Adas introduces no overhead (very minimal) over adaptive optimizers, e.g. mSGD+StepLR, mSGD+Adas and AdaM all consume 40~43 sec/epoch to train ResNet34/CIFAR10 using the same PC/GPU platform.

Installation

You can install Adas directly from PyPI using `pip install adas`, or clone this repository and install from source.

You can also download the files in src/adas into your local code base and use them directly.
Note that you will probably need to modify the imports to be consistent with however you perform imports in your codebase.

All source code can be found in src/adas.

For more information, also refer to Installation on Wiki.

Usage

To use Adas, simply import the Adas (torch.optim.optimizer.Optimizer) class and use it as follows:

    from adas import Adas

    optimizer = Adas(params=list(model.parameters()),
                     lr: float = ???,
                     beta: float = 0.8,
                     step_size: int = None,
                     gamma: float = 1,
                     momentum: float = 0,
                     dampening: float = 0,
                     weight_decay: float = 0,
                     nesterov: bool = False)
    ...
    for epoch in epochs:
        for batch in train_dataset:
            ...
            loss.backward()
            optimizer.step()
        optimizer.epoch_step(epoch)

Note, optimizer.epoch_step() is just to be called at the end of each epoch.

Common Issues (running list)

- None :)

TODO

- Add medical imaging datasets (e.g. digital pathology, x-ray, and CT scans)
- Extension of Adas to Deep Neural Networks

Pytest

Note the following:

- Our Pytests write/download data/files etc. to /tmp, so if you don't have a /tmp folder (i.e. you're on Windows), then correct this if you wish to run the tests yourself
adasamp-pareto
Adaptive optimization algorithm for black-box multi-objective optimization problems with binary constraints, built on the foundation of Bayesian optimization. The algorithm aims to find the Pareto-optimal solution of

\begin{equation*}
\max_x \; y(x) \quad \text{s.t.} \quad f(x) = \text{feasible}
\end{equation*}

in an iterative procedure. Here, \(y(x)\) denotes the multi-dimensional goals and \(f(x)\) the binary feasibility of the problem (in the sense that certain design variables \(x\) lead to invalid goals). All technical details can be found in the paper "Adaptive Sampling of Pareto Frontiers with Binary Constraints Using Regression and Classification" (https://arxiv.org/abs/2008.12005).

Installation

Install via pip or clone this repository. In order to use pip, type:

    $ pip install adasamp-pareto

Usage

The class AdaptiveSampler is used to define and solve a problem instance. Simple example:

    from adasamp import AdaptiveSampler

    # Create instance
    sampler = AdaptiveSampler(func,        # Problem definition: function returns (goals Y, feasibility f)
                              X_limits,    # Design variable limits to search solution in
                              Y_ref,       # Reference point, has to be dominated by any goal Y
                              iterations,  # Number of solver iterations
                              Y_model,     # Regression model to predict goals Y
                              f_model)     # Classification model to predict feasibility f

    # Return the sampling suggestions X, the corresponding goals Y, and the corresponding feasibilities f.
    X, Y, f = sampler.sample()

Demo notebooks can be found in the examples/ directory.

Documentation

Complete documentation is available: https://adasamp-pareto.readthedocs.io/en/latest. 📖

Citation

If you find this code useful in your research, please consider citing:

    @misc{heesebortzCITE2020,
      title={Adaptive Sampling of Pareto Frontiers with Binary Constraints Using Regression and Classification},
      author={Raoul Heese and Michael Bortz},
      year={2020},
      eprint={2008.12005},
      archivePrefix={arXiv},
      primaryClass={stat.ML}
    }
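To make the problem-definition contract concrete, here is a minimal sketch of a toy black-box matching the (goals Y, feasibility f) interface described above. The objective, constraint, limits and reference point are invented for illustration and are not part of the package; the models Y_model and f_model would be supplied separately.

```python
# Toy black-box: given one design point x, return the multi-dimensional goals
# Y (to be maximized) and the binary feasibility f. Everything here is an
# invented example, not code shipped with adasamp-pareto.
def func(x):
    x1, x2 = x
    Y = (-(x1 - 1.0) ** 2, -(x2 + 1.0) ** 2)   # two goals to maximize
    feasible = x1 ** 2 + x2 ** 2 <= 4.0        # binary constraint: inside a disc
    return Y, feasible

X_limits = [(-3.0, 3.0), (-3.0, 3.0)]  # search box per design variable
Y_ref = (-100.0, -100.0)               # reference point dominated by any goal
```

Points outside the disc are reported infeasible, which is exactly the situation the classifier f_model is meant to learn; see the demo notebooks in the examples/ directory for complete runs.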
adase-api
ADA Sentiment Explorer API

Introduction

Alpha Data Analytics ("ADA") is a data analytics company whose core product, the ADA Sentiment Explorer ("ADASE"), is built on an opinion-monitoring technology that intelligently reads news sources and social platforms into machine-readable indicators. It is designed to provide unbiased visibility into people's opinions as a driving force of capital markets, political processes, demand prediction, and marketing.

ADA's vision is to democratise advanced AI systems that support decisions, benefiting data-proficient people and small- and medium-sized quantitative institutions.

ADASE supports keyword and topic engines, as explained below.

To install:

```
pip install adase-api
```

Sentiment Open Query

To use the API you need to provide your API credentials and a search topic:

```python
from adase_api.schemas.sentiment import Credentials, QuerySentimentTopic
from adase_api.sentiment import load_sentiment_topic

credentials = Credentials(username='[email protected]', password='yourpass')
search_topics = ["inflation rates", "OPEC cartel"]
ada_query = QuerySentimentTopic(text=search_topics, credentials=credentials)
sentiment = load_sentiment_topic(ada_query)
sentiment.tail(10)
```

```
                           score                    coverage
query                OPEC cartel inflation rates OPEC cartel inflation rates
date_time
2024-01-12 03:00:00     0.170492       -3.210051   -0.270801        1.600013
2024-01-12 04:00:00     0.184400       -0.621429   -0.270801        1.600013
2024-01-12 05:00:00     0.170492        0.952482   -0.270801        0.414950
2024-01-12 06:00:00     0.170492       -0.114074   -0.270801        0.414950
2024-01-12 07:00:00     0.170492        0.804350   -0.270801        0.414950
2024-01-12 08:00:00     0.170492        0.241445   -0.270801        1.600013
2024-01-12 09:00:00     0.170492        1.548717   -0.270801        3.970140
```

Returns coverage and score (sentiment) as a pandas DataFrame.

- When normalize_to_global=True, the data comes more sparse, since query hits most likely won't be found every hour. In this case missing records, for both coverage and score, are filled with 0's.
- The coverage field is usually seasonal; it is advised to apply a 7-day rolling average.
- By default, live data is queried, which comes on an hourly basis and includes 6 months of history.

Search topic syntax

Plain text

In contrast with keyword search, plain text relies on topics to query data on a wider concept. It works best when 2-5 words describe a concept, for example:

- "stock market": it might also analyse terms such as "Dow Jones", "FAANG", etc.
- "Airline travel demand"
- "Energy disruptions in Europe"
- "President Joe Biden"

The analysed scope depends on how words normally co-occur together.

Boolean search

Searches for exact keyword matches. Each condition is placed inside round brackets (), where + indicates a search term must be found and - excludes it. For example, in "(+Ford +Motor*)" the asterisk * will include both Motor and Motors.

```python
import pandas as pd

search_topics = ["(+inflation)"]
ada_query = QuerySentimentTopic(
    text=search_topics,
    credentials=credentials,
    languages=['de', 'ro', 'pt', 'pl'],
    live=False,
    start_date=pd.to_datetime('2010-01-01')
)
```

This query will do a boolean search on historical data starting from Jan 1, 2010 and include only data in the specified languages.

Mobility Index

Monitor the traffic (on-the-road) situation on city-to-airport pairs. Besides news monitoring, the package also provides an interface to query the worldwide real-time traffic situation. This can be useful in combination with media data or standalone.

```python
from adase_api.schemas.geo import QueryTagGeo, GeoH3Interface, QueryTextMobility, QueryMobility
from adase_api.geo import load_mobility_by_text

q = QueryTextMobility(
    credentials=credentials,
    tag_geo=QueryTagGeo(text=['Gdansk']),
    geo_h3_interface=GeoH3Interface(),
    mobility=QueryMobility(aggregated=True)
)
mobility = load_mobility_by_text(q)
```

API rate limit

All endpoints have a set limit on API calls per minute, by default 10 calls/min. In case you don't have credentials yet, you can sign up for free.

- Data available since January 1, 2001: an easy way to explore or backtest
- In a trial version data lags 24 hours
- Probably something else? Hopefully the data can inspire you for other use cases

You can follow us on LinkedIn.

Questions? For package questions, rate limits or feedback you can reach out to [email protected]
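Since the coverage field is seasonal, the README advises smoothing it with a 7-day rolling average. On the returned pandas DataFrame this would typically be done with its datetime index (e.g. a 7-day rolling window followed by a mean). As a dependency-free illustration of the idea only, here is a minimal sketch of a trailing rolling mean; the helper name and window handling are ours, not part of adase-api:

```python
def rolling_mean(values, window):
    """Trailing rolling mean; yields None until a full window is available."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)  # not enough points for a full window yet
        else:
            chunk = values[i + 1 - window:i + 1]
            out.append(sum(chunk) / window)
    return out

# Smoothing a toy coverage series with a window of 2
print(rolling_mean([1, 2, 3, 4], 2))
```

For hourly ADASE data, a 7-day window corresponds to 168 hourly observations.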
adaseq
AdaSeq: An All-in-One Library for Developing State-of-the-Art Sequence Understanding Models

English | 简体中文

Introduction

AdaSeq (Alibaba Damo Academy Sequence Understanding Toolkit) is an easy-to-use all-in-one library, built on ModelScope, that allows researchers and developers to train custom models for sequence understanding tasks, including part-of-speech tagging (POS tagging), chunking, named entity recognition (NER), entity typing, relation extraction (RE), etc.

🌟 Features:

- Plentiful Models: AdaSeq provides plenty of cutting-edge models, training methods and useful toolkits for sequence understanding tasks.
- State-of-the-Art: We aim to develop the best implementations, which can beat many off-the-shelf frameworks on performance.
- Easy-to-Use: One line of command is all you need to obtain the best model.
- Extensible: It's easy to register a module, or to build a customized sequence understanding model by assembling the predefined modules.

⚠️ Notice: This project is under quick development. This means some interfaces could be changed in the future.

📢 What's New

- 2023-07: [SemEval 2023] Our U-RaNER paper won the Best Paper Award!
- 2023-03: [SemEval 2023] Our U-RaNER won 1st place in 9 tracks at SemEval 2023 Task 2: Multilingual Complex Named Entity Recognition! Model introduction and source code can be found here.
- 2022-12: [EMNLP 2022] Retrieval-augmented Multimodal Entity Understanding Model (MoRe)
- 2022-11: [EMNLP 2022] Ultra-Fine Entity Typing Model (NPCRF)
- 2022-11: [EMNLP 2022] Unsupervised Boundary-Aware Language Model (BABERT)

⚡ Quick Experience

You can try out our models via online demos built on ModelScope: [English NER] [Chinese NER] [CWS]

More tasks, more languages, more domains: all model cards we released can be found on the Modelcards page.

🛠️ Model Zoo

Supported models:

- Transformer-based CRF
- Partial CRF
- Retrieval Augmented NER
- Biaffine NER
- Global-Pointer
- Multi-label Entity Typing
- ...

💾 Dataset Zoo

We collected many datasets for sequence understanding tasks. All can be found on the Datasets page.

📦 Installation

The AdaSeq project is based on Python >= 3.7, PyTorch >= 1.8 and ModelScope >= 1.4. We have verified that AdaSeq runs smoothly with ModelScope == 1.9.5.

Installation via pip:

```
pip install adaseq
```

Installation from source:

```
git clone https://github.com/modelscope/adaseq.git
cd adaseq
pip install -r requirements.txt -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
```

Verify the Installation

To verify whether AdaSeq is installed properly, we provide a demo config for training a model (the demo config will be downloaded automatically):

```
adaseq train -c demo.yaml
```

You will see the training logs on your terminal. Once the training is done, the results on the test set will be printed: test: {"precision": xxx, "recall": xxx, "f1": xxx}. A folder experiments/toy_msra/ will be generated to save all experimental results and model checkpoints.

📖 Tutorials

- Quick Start
- Basics
  - Learning about Configs
  - Customizing Dataset
  - [TODO] Common Architectures
  - [TODO] Useful Hooks
  - Hyperparameter Optimization
  - Training with Multiple GPUs
- Best Practice
  - Training a Model with Custom Dataset
  - Reproducing Results in Published Papers
  - [TODO] Uploading Saved Model to ModelScope
  - [TODO] Customizing your Model
  - [TODO] Serving with AdaLA

📝 Contributing

All contributions are welcome to improve AdaSeq. Please refer to CONTRIBUTING.md for the contributing guideline.

📄 License

This project is licensed under the Apache License (Version 2.0).
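The training command above finishes by printing a line of the form test: {"precision": xxx, "recall": xxx, "f1": xxx}. As a small standalone sketch (the helper below is our own illustration, not an AdaSeq API), such a line can be split into the dataset split name and a metrics dict for logging or comparison across runs:

```python
import json

def parse_metrics(line):
    """Split a line like 'test: {"precision": 0.91, ...}' into (split name, metrics dict)."""
    # partition() splits only at the first ':'; the colons inside the JSON payload are kept
    name, _, payload = line.partition(":")
    return name.strip(), json.loads(payload.strip())

name, metrics = parse_metrics('test: {"precision": 0.91, "recall": 0.89, "f1": 0.90}')
print(name, metrics["f1"])
```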
adash
A utility library.

Install

```
pip install adash
```

Usage

Usage in the style of Lodash is recommended.

```python
import adash as _

s = "abcabc"
obj = {"a": "!", "b": "", "c": "?"}
_.replace_all(s, obj)  # -> !?!?
```

adash API documentation
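For readers without the package installed, here is a rough pure-Python sketch of what the replace_all call above appears to do; the actual adash implementation may differ (for example in the order replacements are applied):

```python
def replace_all(s, mapping):
    """Replace every occurrence of each key of `mapping` in `s` with its value."""
    for old, new in mapping.items():
        s = s.replace(old, new)
    return s

print(replace_all("abcabc", {"a": "!", "b": "", "c": "?"}))  # -> !?!?
```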
adasher
ADasher - Analytics Dash

Dash with analytics-based components.

Installation

```
pip install adasher
```

Dash results

- Stats
- Stats with plots
- Advanced

Run local sample

```
git clone https://github.com/Bhanuchander210/adasher.git
cd adasher
export PYTHONPATH=$PYTHONPATH:$(pwd)
```
adasigpy
No description available on PyPI.