package (string, lengths 1–122)
package-description (string, lengths 0–1.3M)
abqpy2021
abqpy wrapper. A wrapper package for abqpy.
abqpy2022
abqpy wrapper. A wrapper package for abqpy.
abqpy2023
abqpy wrapper. A wrapper package for abqpy.
abqpy2024
abqpy wrapper. A wrapper package for abqpy.
abra
Abra Eye-tracking ver. 1.3
Abra is an open-source Python eye-tracking data analysis tool which provides a simple interface for managing .edf files generated by EyeLink eye trackers.
Installation (Python 3): pip install abra
Check out the documentation here: https://abra-eyetracking.github.io/
abracadabra
✨ABracadabra✨✨ABracadabra✨ is a Python framework consisting of statistical tools and a convenient API specialized for running hypothesis tests on observational experiments (aka “AB Tests” in the tech world). The framework has drivenQuizlet’s experimentation pipeline since 2018.FeaturesOffers a simple and intuitive, yet powerful API for running, visualizing, and interpreting statistically-rigorous hypothesis tests with none of the hastle of jumping between various statistical or visualization packages.Supports most common variable types used in AB Tests inlcuding:ContinuousBinary/ProportionsCounts/RatesImplements many Frequentist and Bayesian inference methods including:Variable TypeModel Classinference_methodparameterContinuousFrequentist'means_delta'(t-test)Bayesian'gaussian','student_t','exp_student_t'Binary / ProportionsFrequentist'proportions_delta'(z-test)Bayesian'binomial','beta_binomial','bernoulli'Counts/RatesFrequentist'rates_ratio'Bayesian'gamma_poisson'Non-parametricBootstrap'bootstrap'Supports multiple customizations:Custom metric definitionsBayesian priorsEasily extendable to support new inference methodsInstallationRequirements✨ABracadabra✨ has been tested onpython>=3.7.Install viapipfrom the PyPI index (recommended)pipinstallabracadabrafrom Quizlet's Github repopipinstallgit+https://github.com/quizlet/abracadabra.gitInstall from sourceIf you would like to contribute to ✨ABracadabra✨, then you'll probably want to install from source (or use the-eflag when installing fromPyPI):mkdir/PATH/TO/LOCAL/ABRACABARA&&cd/PATH/TO/LOCAL/ABRACABARA [email protected]:quizlet/abracadabra.gitcdabracadabra pythonsetup.pydevelop✨ABracadabra✨ BasicsObservations data✨ABracadabra✨ takes as input apandasDataFramecontaining experiment observations data. Each record represents an observation/trial recorded in the experiment and has the following columns:One or moretreatmentcolumns: each treatment column contains two or more distinct, discrete values that are used to identify the different groups in the experimentOne or moremetriccolumns: these are the values associated with each observation that are used to compare groups in the experiment.Zero or moreattributescolumns: these are associated with additional properties assigned to the observations. These attributes can be used for any additional segmentations across groups.To demonstrate, let's generate some artificial experiment observations data. Themetriccolumn in our dataset will be a series of binary outcomes (i.e.True/False, here stored asfloatvalues). This binarymetricis analogous toconversionorsuccessin AB testing. These outcomes are simulated from three different Bernoulli distributions, each associated with thetreatements named"A","B", and"C". and each of which has an increasing average probability ofconversion, respectively. 
The simulated data also contains fourattributecolumns, namedattr_*.fromabra.utilsimportgenerate_fake_observations# generate demo dataexperiment_observations=generate_fake_observations(distribution='bernoulli',n_treatments=3,n_attributes=4,n_observations=120)experiment_observations.head()"""id treatment attr_0 attr_1 attr_2 attr_3 metric0 0 C A0a A1a A2a A3a 1.01 1 B A0b A1a A2a A3a 1.02 2 C A0c A1a A2a A3a 1.03 3 C A0c A1a A2a A3a 0.04 4 A A0b A1a A2a A3a 1.0"""Running an AB test in ✨ABracadabra✨ is as easy as ✨123✨:The three key components of running an AB test are:TheExperiment, which references the observations recorded during experiment (described above) and any optional metadata associated with the experiment.TheHypothesisTest, which defines the hypothesis and statistical inference method applied to the experiment data.TheHypothesisTestResults, which is the statistical artifact that results from running aHypothesisTestagainst anExperiment's observations. TheHypothesisTestResultsare used to summarize, visualize, and interpret the inference results and make decisions based on these results.Thus running an hypothesiss test in ✨ABracadabra✨ follows the basic 123 pattern:Initialize yourExperimentwith observations and (optionally) any associated metadata.Define yourHypothesisTest. This requires defining thehypothesisand a relevantinference_method, which will depend on the support of your observations.Run the test against your experiment and interpret the resultingHypothesisTestResultsWe now demonstrate how to run and analyze a hypothesis test on the artificial observations data generated above. Since this simulated experiment focuses on a binarymetricwe'll want ourHypothesisTestto use aninference_methodthat supports binary variables. The"proportions_delta"inference method, which tests for a significant difference in average probability between two different samples of probabilities is a valid test for our needs. Here our probabilities equal either0or1, but the sample averages will likely be equal to some intermediate value. This is analogous to AB tests that aim to compare conversion rates between a control and a variation group.In addition to theinference_method, we also want to establish thehypothesiswe want to test. In other words, if we find a significant difference in conversion rates, do we expect one group to be larger or smaller than the other. In this test we'll test that thevariationgroup"C"has a"larger"average conversion rate than thecontrolgroup"A".Below we show how to run such a test in ✨ABracadabra✨.# Running an AB Test is as easy as 1, 2, 3fromabraimportExperiment,HypothesisTest# 1. Initialize the `Experiment`# We (optionally) name the experiment "Demo"exp=Experiment(data=experiment_observations,name='Demo')# 2. Define the `HypothesisTest`# Here, we test that the variation "C" is "larger" than the control "A",# based on the values of the "metric" column, using a Frequentist z-test,# as parameterized by `inference_method="proportions_delta"`ab_test=HypothesisTest(metric='metric',treatment='treatment',control='A',variation='C',inference_method='proportions_delta',hypothesis='larger')# 3. 
Run and interpret the `HypothesisTestResults`# Here, we run our HypothesisTest with an assumed# Type I error rate of alpha=0.05ab_test_results=exp.run_test(ab_test,alpha=.05)assertab_test_results.accept_hypothesis# Display resultsab_test_results.display()"""Observations Summary:+----------------+------------------+------------------+| Treatment | A | C |+----------------+------------------+------------------+| Metric | metric | metric || Observations | 35 | 44 || Mean | 0.4286 | 0.7500 || Standard Error | (0.2646, 0.5925) | (0.6221, 0.8779) || Variance | 0.2449 | 0.1875 |+----------------+------------------+------------------+Test Results:+---------------------------+---------------------+| ProportionsDelta | 0.3214 || ProportionsDelta CI | (0.1473, inf) || CI %-tiles | (0.0500, inf) || ProportionsDelta-relative | 75.00 % || CI-relative | (34.37, inf) % || Effect Size | 0.6967 || alpha | 0.0500 || Power | 0.9238 || Inference Method | 'proportions_delta' || Test Statistic ('z') | 3.4671 || p-value | 0.0003 || Degrees of Freedom | None || Hypothesis | 'C is larger' || Accept Hypothesis | True || MC Correction | None || Warnings | None |+---------------------------+---------------------+"""# Visualize Frequentist Test resultsab_test_results.visualize()We see that the Hypothesis test declares that the variation'C is larger'(than the control"A") showing a 43% relative increase in conversion rate, and a moderate effect size of 0.38. This results in a p-value of 0.028, which is lower than the prescribed $\alpha=0.05$.Bootstrap Hypothesis TestsIf your samples do not follow standard parametric distributions (e.g. Gaussian, Binomial, Poisson), or if you're comparing more exotic descriptive statistics (e.g. median, mode, etc) then you might want to consider using a non-parametricBootstrap Hypothesis Test. 
Running bootstrap tests is easy in ✨abracadabra✨, you simply use the"bootstrap"inference_method.# Tests and data can be copied via the `.copy` method.bootstrap_ab_test=ab_test.copy(inference_method='bootstrap')# Run the Bootstrap testbootstrap_ab_test_results=exp.run_test(bootstrap_ab_test)# Display resultsbootstrap_ab_test_results.display()"""Observations Summary:+----------------+------------------+------------------+| Treatment | A | C |+----------------+------------------+------------------+| Metric | metric | metric || Observations | 35 | 44 || Mean | 0.4286 | 0.7500 || Standard Error | (0.2646, 0.5925) | (0.6221, 0.8779) || Variance | 0.2449 | 0.1875 |+----------------+------------------+------------------+Test Results:+-----------------------------------------+-------------------+| BootstrapDelta | 0.3285 || BootstrapDelta CI | (0.1497, 0.5039) || CI %-tiles | (0.0500, inf) || BootstrapDelta-relative | 76.65 % || CI-relative | (34.94, 117.58) % || Effect Size | 0.7121 || alpha | 0.0500 || Power | 0.8950 || Inference Method | 'bootstrap' || Test Statistic ('bootstrap-mean-delta') | 0.3285 || p-value | 0.0020 || Degrees of Freedom | None || Hypothesis | 'C is larger' || Accept Hypothesis | True || MC Correction | None || Warnings | None |+-----------------------------------------+-------------------+"""## Visualize Bayesian AB test results, including samples from the modelbootstrap_ab_test_results.visualize()Notice that the"bootstrap"hypothesis test results above--which are based on resampling the data set with replacent--are very similar to the results returned by the"proportions_delta"parametric model, which are based on descriptive statistics and model the data set as a Binomial distribution. The results will converge as the sample sizes grow.Bayesian AB TestsRunning Bayesian AB Tests is just as easy as running a Frequentist test, simply change theinference_methodof theHypothesisTest. Here we run Bayesian hypothesis test that is analogous to"proportions_delta"used above for conversion rates. 
The Bayesian test is based on theBeta-Binomial model, and thus called with the argumentinference_method="beta_binomial".# Copy the parameters of the original HypothesisTest,# but update the `inference_method`bayesian_ab_test=ab_test.copy(inference_method='beta_binomial')bayesian_ab_test_results=exp.run_test(bayesian_ab_test)assertbayesian_ab_test_results.accept_hypothesis# Display resultsbayesian_ab_test_results.display()"""Observations Summary:+----------------+------------------+------------------+| Treatment | A | C |+----------------+------------------+------------------+| Metric | metric | metric || Observations | 35 | 44 || Mean | 0.4286 | 0.7500 || Standard Error | (0.2646, 0.5925) | (0.6221, 0.8779) || Variance | 0.2449 | 0.1875 |+----------------+------------------+------------------+Test Results:+----------------------+-------------------------------+| Delta | 0.3028 || HDI | (0.0965, 0.5041) || HDI %-tiles | (0.0500, 0.9500) || Delta-relative | 76.23 % || HDI-relative | (7.12, 152.56) % || Effect Size | 0.6628 || alpha | 0.0500 || Credible Mass | 0.9500 || p(C > A) | 0.9978 || Inference Method | 'beta_binomial' || Model Hyperarameters | {'alpha_': 1.0, 'beta_': 1.0} || Inference Method | 'sample' || Hypothesis | 'C is larger' || Accept Hypothesis | True || Warnings | None |+----------------------+-------------------------------+"""# Visualize Bayesian AB test results, including samples from the modelbayesian_ab_test_results.visualize()Above we see that the Bayesian hypothesis test provides similar results to the Frequentist test, indicating a 45% relative lift in conversion rate when comparing"C"to"A". Rather than providing p-values that are used to accept or reject a Null hypothesis, the Bayesian tests provides directly-interpretable probability estimatesp(C > A) = 0.95, here indicating that there is 95% chance that thevariation"C"is larger than thecontrol"A".Additional Documentation and TutorialsCHANGELOG
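Beyond the binary metric used throughout the examples above, the same Experiment / HypothesisTest pattern covers continuous metrics via the Frequentist 'means_delta' (t-test) inference method listed in the table. A minimal sketch; the 'gaussian' value passed to the fake-data helper is an assumption for illustration, not confirmed API:

```python
from abra import Experiment, HypothesisTest
from abra.utils import generate_fake_observations

# Simulate a continuous metric; 'gaussian' as a distribution name is an
# assumption mirroring the documented 'bernoulli' example above.
observations = generate_fake_observations(
    distribution='gaussian',
    n_treatments=3,
    n_attributes=4,
    n_observations=120
)

# 1. Initialize the Experiment
exp = Experiment(data=observations, name='Continuous Demo')

# 2. Define the HypothesisTest using the Frequentist t-test for continuous data
ab_test = HypothesisTest(
    metric='metric',
    treatment='treatment',
    control='A',
    variation='C',
    inference_method='means_delta',
    hypothesis='larger'
)

# 3. Run the test and inspect the results
results = exp.run_test(ab_test, alpha=0.05)
results.display()
```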
abrade
Abrade is a simple Python web scraper/parser that uses BS4 and requests.
abraham
# abraham
DBC files for Lincoln MKZ and Ford Fusion

This is a crowdsourced repository to decode the Ford Fusion's and Lincoln MKZ's CAN bus. We will share data dumps for the different models here and create issues for each of the ID codes. Issues will have the following format: <model acronym><year>x<can id>. For example, for a Ford Fusion 2017 and ID 0x320: FF2017x320. We will share links to data dumps that contain that code and use the ticket description to explain it fully. Once it is explained and added to the .dbc file, we can close the ticket.

# How do I help?

Read this before you start: http://www.ioactive.com/pdfs/IOActive_Adventures_in_Automotive_Networks_and_Control_Units.pdf

- Help close tickets by creating lines in the DBC that can correctly parse each ID.
- Add your own data dumps as pull requests and reference interesting bits in the pull requests.
abraham3k
abrahamAlgorithmically predict public sentiment on a topic using flair sentiment analysis.InstallationInstallation is simple; just install via pip.$pip3installabraham3kBasic UsageThe most simple way of use is to use the_summaryfunctions.fromabraham3k.prophetsimportAbrahamfromdatetimeimportdatetime,timedeltawatched=["amd","tesla"]darthvader=Abraham(news_source="newsapi",newsapi_key="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",bearer_token="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",weights={"desc":0.33,"text":0.33,"title":0.34},)scores=darthvader.news_summary(watched,start_time=datetime.now()-timedelta(days=1)end_time=datetime.now(),)print(scores)'''{'amd': (56.2, 43.8), 'tesla': (40.4, 59.6)} # returns a tuple (positive count : negative count)'''scores=darthvader.twitter_summary(watched,start_time=datetime.now()-timedelta(days=1)end_time=datetime.now(),)print(scores)'''{'amd': (57, 43), 'tesla': (42, 58)} # returns a tuple (positive count : negative count)'''You can run the functionnews_sentimentto get the raw scores for the news. This will return a nested dictionary with keys for each topic.fromabraham3k.prophetsimportAbrahamfromdatetimeimportdatetime,timedeltadarthvader=Abraham(news_source="google")scores=darthvader.news_sentiment(["amd","microsoft","tesla","theranos"],)print(scores['tesla']['text'])'''desc datetime probability sentiment0 The latest PassMark ranking show AMD Intel swi... 2021-04-22T18:45:03Z 0.999276 NEGATIVE1 The X570 chipset AMD offer advanced feature se... 2021-04-22T14:33:07Z 0.999649 POSITIVE2 Apple released first developer beta macOS 11.4... 2021-04-21T19:10:02Z 0.990774 POSITIVE3 Prepare terror PC. The release highly anticipa... 2021-04-22T18:00:02Z 0.839055 POSITIVE4 Stressing ex x86 Canadian AI chip startup Tens... 2021-04-22T13:00:07Z 0.759295 POSITIVE.. ... ... ... ...95 Orthopaedic Medical Group Tampa Bay (OMG) exci... 2021-04-21T22:46:00Z 0.979155 POSITIVE96 OtterBox appointed Leader, proudly 100% Austra... 2021-04-21T23:00:00Z 0.992927 POSITIVE97 WATG, world's leading global destination hospi... 2021-04-21T22:52:00Z 0.993889 POSITIVE98 AINQA Health Pte. Ltd. (Headquartered Singapor... 2021-04-22T02:30:00Z 0.641172 POSITIVE99 Press Release Nokia publish first-quarter repo... 2021-04-22T05:00:00Z 0.894449 NEGATIVE'''The same way works for the twitter API (see below for integrating twitter usage).fromabraham3k.prophetsimportAbrahamfromdatetimeimportdatetime,timedeltadarthvader=Abraham(news_source="google")scores=darthvader.twitter_sentiment(["amd","microsoft","tesla","theranos"])You can also just use a one-off function to get the sentiment from both the news and twitter combined.fromabraham3k.prophetsimportAbrahamfromdatetimeimportdatetime,timedeltadarthvader=Abraham(news_source="google")scores=darthvader.summary(["tesla","amd"],weights={"news":0.5,"twitter":0.5})print(scores)'''{'amd': (59.0, 41.0), 'tesla': (46.1, 53.9)}'''There's also a built-in function for building a dataset of past sentiments. 
This follows the same format as the non-interval functions (twitter_summary_interval,news_summary_interval,summary_interval).fromabraham3k.prophetsimportAbrahamfromdatetimeimportdatetime,timedelta# this works best using the offical twitter api rather than twintdarthvader=Abraham(bearer_token="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx")scores=twitter_summary_interval(self,["tesla","amd"],oldest=datetime.now()-timedelta(days=1),newest=datetime.now(),interval=timedelta(hours=12),offset=timedelta(hours=1),size=100,)print(scores)'''timestamp positive negative lag0 2021-05-08 11:46:57.033549 61.0 39.0 0 days 12:00:001 2021-05-08 10:46:57.033549 54.0 46.0 0 days 12:00:002 2021-05-08 09:46:57.033549 68.0 32.0 0 days 12:00:003 2021-05-08 08:46:57.033549 78.0 22.0 0 days 12:00:004 2021-05-08 07:46:57.033549 71.0 29.0 0 days 12:00:005 2021-05-08 06:46:57.033549 74.0 26.0 0 days 12:00:006 2021-05-08 05:46:57.033549 63.0 37.0 0 days 12:00:007 2021-05-08 04:46:57.033549 74.0 26.0 0 days 12:00:008 2021-05-08 03:46:57.033549 53.5 46.5 0 days 12:00:009 2021-05-08 02:46:57.033549 51.0 49.0 0 days 12:00:0010 2021-05-08 01:46:57.033549 61.0 39.0 0 days 12:00:0011 2021-05-08 00:46:57.033549 46.9 53.1 0 days 12:00:0012 2021-05-07 23:46:57.033549 54.0 46.0 0 days 12:00:0013 2021-05-07 22:46:57.033549 52.0 48.0 0 days 12:00:0014 2021-05-07 21:46:57.033549 58.0 42.0 0 days 12:00:0015 2021-05-07 20:46:57.033549 46.0 54.0 0 days 12:00:0016 2021-05-07 19:46:57.033549 40.0 60.0 0 days 12:00:0017 2021-05-07 18:46:57.033549 40.0 60.0 0 days 12:00:0018 2021-05-07 17:46:57.033549 51.0 49.0 0 days 12:00:0019 2021-05-07 16:46:57.033549 21.0 79.0 0 days 12:00:0020 2021-05-07 15:46:57.033549 52.5 47.5 0 days 12:00:0021 2021-05-07 14:46:57.033549 36.0 64.0 0 days 12:00:0022 2021-05-07 13:46:57.033549 42.0 58.0 0 days 12:00:0023 2021-05-07 12:46:57.033549 40.0 60.0 0 days 12:00:0024 2021-05-07 11:46:57.033549 32.0 68.0 0 days 12:00:00'''Google trends is also in the process of being added. Currently, there's support for interest over time. You can access it like this.fromabraham3k.prophetsimportAbrahamfromdatetimeimportdatetime,timedeltadarthvader=Abraham()results=darthvader.interest_interval(["BTC USD","buy bitcoin"],start_time=(datetime.now()-timedelta(days=52)),end_time=datetime.now())print(results)'''BTC USD buy bitcoindate2021-03-24 62 182021-03-25 68 162021-03-26 58 122021-03-27 47 152021-03-28 48 15...2021-05-08 48 272021-05-09 38 252021-05-10 43 202021-05-11 44 242021-05-12 38 20'''Numbers represent search interest relative to the highest point on the chart for the given region and time. A value of 100 is the peak popularity for the term. A value of 50 means that the term is half as popular. A score of 0 means there was not enough data for this term.Changing News SourcesAbrahamsupports two news sources:Google NewsandNewsAPI. Default isGoogle News, but you can change it toNewsAPIby passingAbraham(news_source='newsapi', api_key='<your api key')when instantiating. I'd highly recommend usingNewsAPI. It's much better than theGoogle NewsAPI. Setup is really simple, just head to theregisterpage and sign up to get your API key.Twitter FunctionalityI'd highly recommend integrating twitter. It's really simple; just head toTwitter Developerto sign up and get your bearer token. If you don't want to sign up, you can actually use it free with the twint API (no keys needed). This is the default.UpdatesI've made it pretty simple (at least for me) to push updates. 
Once I'm in the directory, I can run `$ ./build-push 1.2.0 "update install requirements"`, where 1.2.0 is the version and "update install requirements" is the git commit message. It will push the update to PyPI and to the GitHub repository.

Notes: Currently, there's another algorithm in progress (SALT), including salt.py and salt.ipynb in the abraham3k/ directory and the entire models/ directory. They're not ready for use yet, so don't worry about importing them or anything.

Contributions: Pull requests welcome!

Detailed Usage: Coming soon. However, there is heavy documentation in the actual code.
abraia
Abraia-Multiple image analysis toolboxThe Abraia-Multiple image analysis toolbox provides and easy and practical way to analyze and classify multispectral and hyperspectral images directly from your browser. You just need to click on the open in Colab button to start with one of the available Abraia-Multiple notebooks:Hyperspectral image analysisHyperspectral image classificationThe Abraia-Multiple SDK has being developed byABRAIAin theMultiple projectto extend the Abraia Cloud Platform providing support for straightforward HyperSpectral Image (HSI) analysis and classification.InstallationAbraia-Multiple is a Python SDK and CLI which can be installed on Windows, Mac, and Linux:python-mpipinstall-UabraiaTo use the SDK you have to configure yourId and Keyas environment variables:exportABRAIA_ID=user_idexportABRAIA_KEY=user_keyOn Windows you need to usesetinstead ofexport:setABRAIA_ID=user_idsetABRAIA_KEY=user_keyHyperspectral image analysis toolboxMULTIPLE provides seamless integration of multiple HyperSpectral Image (HSI) processing and analysis tools, integrating starte-of-the-art image manipulation libraries to provide ready to go scalable multispectral solutions.For instance, you can directly load and save ENVI files, and their metadata.fromabraiaimportMultiplemultiple=Multiple()img=multiple.load_image('test.hdr')meta=multiple.load_metadata('test.hdr')multiple.save_image('test.hdr',img,metadata=meta)Upload and load HSI dataTo start with, we mayupload some datadirectly using the graphical interface, or using the multiple api:multiple.upload_file('PaviaU.mat')Now, we can load the hyperspectral image data (HSI cube) directly from the cloud:img=multiple.load_image('PaviaU.mat')Basic HSI visualizationHyperspectral images cannot be directly visualized, so we can get some random bands from our HSI cube, and visualize these bands as like any other monochannel image.fromabraiaimporthsiimgs,indexes=hsi.random(img)hsi.plot_images(imgs,cmap='jet')Pseudocolor visualizationA common operation with spectral images is to reduce the dimensionality, applying principal components analysis (PCA). We can get the first three principal components into a three bands pseudoimage, and visualize this pseudoimage.pc_img=hsi.principal_components(img)hsi.plot_image(pc_img,'Principal components')Classification modelTwo classification models are directly available for automatic identification on hysperspectral images. One is based on support vector machines ('svm') while the other is based on deep image classification ('hsn'). Both models are available under a simple interface like bellow:n_bands,n_classes=30,17model=hsi.create_model('hsn',(25,25,n_bands),n_classes)model.train(X,y,train_ratio=0.3,epochs=5)y_pred=model.predict(X)Image analysis toolboxAbraia provides a direct interface to load and save images as numpy arrays. 
You can easily load the image data and the file metadata, show the image, or save the image data as a new one.fromabraiaimportMultiplefromabraia.plotimportplot_imagemultiple=Multiple()img=multiple.load_image('usain.jpg')multiple.save_image('usain.png',img)plot_image(img,'Image')Read the image metadata and save it as a JSON file.importjsonmetadata=multiple.load_metadata('usain.jpg')multiple.save_file('usain.json',json.dumps(metadata)){'FileType': 'JPEG', 'MIMEType': 'image/jpeg', 'JFIFVersion': 1.01, 'ResolutionUnit': 'None', 'XResolution': 1, 'YResolution': 1, 'Comment': 'CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), quality = 80\n', 'ImageWidth': 640, 'ImageHeight': 426, 'EncodingProcess': 'Baseline DCT, Huffman coding', 'BitsPerSample': 8, 'ColorComponents': 3, 'YCbCrSubSampling': 'YCbCr4:2:0 (2 2)', 'ImageSize': '640x426', 'Megapixels': 0.273}Upload and list filesUpload a localsrcfile to the cloudpathand return the list offilesandfolderson the specified cloudfolder.importpandasaspdfolder='test/'multiple.upload_file('images/usain-bolt.jpeg',folder)files,folders=multiple.list_files(folder)pd.DataFrame(files)To list the root folder just omit the folder value.Download and remove filesYou can download or remove an stored file just specifying itspath.path='test/birds.jpg'dest='images/birds.jpg'multiple.download_file(path,dest)multiple.remove_file(path)Command line interfaceThe Abraia CLI provides access to the Abraia Cloud Platform through the command line. It provides a simple way to manage your files and enables the resize and conversion of different image formats. It is an easy way to compress your images for web - JPEG, WebP, or PNG -, and get then ready to publish on the web.To compress an image you just need to specify the input and output paths for the image:abraiaconvertimages/birds.jpgimages/birds_o.jpgTo resize and optimize and image maintaining the aspect ratio is enough to specify thewidthor theheightof the new image:abraiaconvert--width500images/usain-bolt.jpegimages/usaint-bolt_500.jpegYou can also automatically change the aspect ratio specifying bothwidthandheightparameters and setting the resizemode(pad, crop, thumb):abraiaconvert--width333--height333--modepadimages/lion.jpgimages/lion_333x333.jpg abraiaconvert--width333--height333images/lion.jpgimages/lion_333x333.jpgSo, you can automatically resize all the images in a specific folder preserving the aspect ration of each image just specifying the targetwidthorheight:abraiaconvert--width300[path][dest]Or, automatically pad or crop all the images contained in the folder specifying bothwidthandheight:abraiaconvert--width300--height300--modecrop[path][dest]LicenseThis software is licensed under the MIT License.View the license.
abrain
Artificial Brains (ABrain) for Python
C++/Python implementation of fully evolvable Artificial Neural Networks. Uses the ES-HyperNEAT algorithms to indirectly encode ANNs with bio-mimetic patterns (repetitions, symmetry...), a large number of neurons and relative robustness to input/output variations between generations. The API is served in Python and computations are performed in C++. Tested on latest.

Development
Optional dependencies:
- Graphviz (dot): to generate directed graphs for the genomes. Can only be fetched by the system installer (apt-get, yum, ...). See https://graphviz.org/download/ for instructions.
- Kaleido: to generate non-interactive images of ANNs (through plotly). Due to inconsistent support, left as an optional dependency. Use pip install abrain[...,kaleido] to get it.

Todo list:
Functionalities:
- Order-independent ANN evaluation (with back buffer)?
- Crossover / historical markings. Actually needed?
- MANN Integration: easy extraction, built-in testing, C++ wrapper, Visu
Misc:
- Documentation: advanced usage
- Move to scikit/poetry/... ?
- CI/CD
- Recent install gives no loadimage plugin for "svg:cairo" for pdf output
abrakadabra
Failed to fetch description. HTTP Status Code: 404
abramyanSolver
This is an example library which is published on PyPI.
abrax
abraxasA tiny DSL to compile to quantum circuits. The goal is to speed up the time it takes to write small stupid circuits. Anything beyond a certain complexity should be written in the respective languages directlt directly.Qiskit•CudaQ•PennylaneInstallpipinstallabraxSyntaxStart with a-to denote a wire (you can also count-0,-1,-2as wires, these are just comments)All gates are case insensitive with NO SPACES BETWEEN GATE AND ARGUMENTSArguments are in parenthesis()and separated by commas,ex:H CX(2) CRX(3.1415,3)Abraxas is only the circuit part parser. All the other gymnastics of creating circuits/allocating memory/running them is still up to you.ExamplestoQiskitfromqiskitimportQuantumCircuitfromabraximporttoQiskitqc=QuantumCircuit(3)qc=toQiskit(qc,f"""- H CX(2) RX({3.1415})- H - CX(2)- H X RY(55)""")# IS THE SAME AS# ┌───┐┌───┐┌────────────┐# q_0: ┤ H ├┤ X ├┤ Rx(3.1415) ├────────────────# ├───┤└─┬─┘└────────────┘┌───┐# q_1: ┤ H ├──┼────────────────┤ X ├───────────# ├───┤ │ ┌───┐ └─┬─┘┌────────┐# q_2: ┤ H ├──■──────┤ X ├───────■──┤ Ry(55) ├─# └───┘ └───┘ └────────┘toPennylaneimportpennylaneasqmlfromabraximporttoPennyLaneCIRC=f"""- H CX(2) RX(θ1)- H - CX(2)- H X RY(θ2)"""maker,params=toPennylane(CIRC)defcirc():# 0.0, 0.1 since 2 paramsparams=[0.1*iforiinrange(len(params))]maker(qml,params)returnqml.probs()circuit=qml.QNode(circ,qml.device('default.qubit',wires=3))# IS THE SAME AS# 0: ──H─╭X──RX(0.00)───────────────┤ Probs# 1: ──H─│────────────╭X────────────┤ Probs# 2: ──H─╰●──X────────╰●──RY(0.01)──┤ ProbstoCudaqfromcudaqimportmake_kernel,samplefromabraximporttoCudaqCIRC=f"""-0 H CX(2) RX(θ1)-1 H - CX(2)-2 H X RY(θ2)"""kernel,thetas=make_kernel(list)qubits=kernel.qalloc(3)cudaO={'kernel':kernel,'qubits':qubits,'quake':thetas,# this gets overwritten by the parser'params':0,}kernel=toCudaq(cudaO,CIRC)# expect 0.0, 0.1 since 2 paramsvals=[0.1*iforiinrange(cudaO['params'])]result=sample(kernel,vals)print(result)toPrimeThe prime string acts as a translation intermediate between various libraries. You can come to prime from Qiskit and go anywhere. (Coming to Prime from Pennylane/CudaQ is not supported yet)fromqiskit.circuit.libraryimportEfficientSU2fromabraximporttoPrimeqc=EfficientSU2(3,reps=1).decompose()string=toPrime(qc)# IS THE SAME AS# -0 ry(θ[0]) rz(θ[3]) cx(1) ry(θ[6])# -1 ry(θ[1]) rz(θ[4]) cx(2) ry(θ[7])# -2 ry(θ[2]) rz(θ[5]) ry(θ[8]) rz(θ[11])You can now even take this string and pass intotoPennylaneortoCudaqto convert to run it in them. Ex.# string from abovemaker,params=toPennylane(string)defcirc():# 12 params so 0.0, 0.1...1.1params=[0.1*iforiinrange(len(params))]maker(qml,params)returnqml.probs()circuit=qml.QNode(circ,qml.device('default.qubit',wires=3))AppendAbraxas can also add to an existing circuit since it takes in your circuit and simply appends to it. So you can pass in existing QuantumCircuit/CUDA Kernel, or add more operations in the Pennylane circ wrapper.Supported conversions:graph TD A[Qiskit] -->|"toPrime()"| B[String] B -->|"toQiskit()"| C[Qiskit] B -->|"toPennylane()"| D[Pennylane] B -->|"toCudaq()"| E[CudaQuantum]
abraxas
Abraxas Collaborative Password UtilityIntroductionAbraxas is powerful password utility that can store or generate your passwords and produce them from the command line. It can also be configured to autotype your username and password into the current window so that you can log in with a simple keystroke.Abraxas is an alternative to the traditional password vault. The intent is not to store passwords, but rather to regenerate them as needed. This is done with the aid of two files. The first is an accounts file that contains useful information about each account along with the parameters that control how the password is generated for that account (which style of password to generate, how many characters or words to include, what alphabet to use, etc.) The second is the master password file. When you go to use the password generator, you will first need to unlock the master password file. You do so by providing its pass phrase, which only you should know. Thus, only you will be capable of generating the passwords associated with your accounts. Once generated, you can specify that they be displayed on the standard output, you can specify that they be copied to the clipboard, or you can specify that they be typed into some other program.In your master password file you can store more than one master password (the password used to generate the passwords for your accounts). In this way this password generator makes it easy to collaborate with friends and colleagues. Simply start by sharing a master password that you only use for shared accounts. A password generated for a particular account is computed from the name of the account and the master password. Since your partner and you are sharing the master password, you will both generate the same password for an account as long as you both use the same name for the account. In other words, if Alice and Bob share a master password, and if Alice wants to create a Google Docs account for sharing documents with Bob, she need only create the account using the password generated by Abraxas using the shared master password, and then simply tells Bob that she has created a Google Docs account with the name “abdocs” and uploaded several documents. Without actually sharing the password, Bob uses the shared master password and the account name to regenerate the account’s password himself and downloads the documents.Installing Prerequisites in Fedora with YumAbraxas is compatible with both python 2.6 and beyond or python 3.3 and beyond. 
It requires the following packages to fully function (run these commands as root):yum install python yum install python-setuptools yum install libyaml-devel yum install PyYAML yum install pygobject3 (if using python2) yum install python3-gobject (if using python3) yum install python-docutils yum install xdotool yum install xsel easy_install python-gnupgOn Centos you will also need:yum install python-argparseOn Redhat-based systems you can get these dependencies by running ./yum.sh.If you would like to run the tests, you will also need the inform package from my github account (https://github.com/KenKundert/inform.git).Installing Prerequisites in Arch Linux with PacmanAbraxas requires the following Arch Linux packages to fully function (run these commands as root):pacman -S git pacman -S python pacman -S python-setuptools pacman -S python-docutils pacman -S python-gobject pacman -S libyaml pacman -S xdotool pacman -S xsel easy_install python-gnupg easy_install PyYAMLYou can install these prerequisites by running ./pacman.sh.Installing Prerequisites in Ubuntu with Apt-GetAbraxas requires the following Ubuntu packages to fully function (run these commands as root):apt-get install git apt-get install libyaml-dev apt-get install python3 apt-get install python3-setuptools apt-get install python3-docutils apt-get install python3-gi apt-get install python3-yaml apt-get install xdotool apt-get install xsel easy_install3 python-gnupgYou can install these prerequisites by running ./ubuntu.sh. Ubuntu does not provide gpg2, so you will need to change GPG_BINARY inabraxas/prefs.pytogpg.Installing Prerequisites from SourceOr, you can install Python from source. First get and install Python using:$ cd ~/packages/python $ wget http://www.python.org/download/releases/3.3.2/Python-3.3.2.tgz $ tar zxf Python-3.3.2.tgz $ cd Python-3.3.2 $ ./configure --prefix=$HOME/.local $ make $ make installNow get easy_install:$ wget -O http://python-distribute.org/distribute_setup.py $ python3.3 distribute_setup.pyThen you can use easy_install to install python-gnupg, argparse, docutils, and PyYAML as above.Configuring GPG AgentIf you do not yet have a GPG key, you can get one using:$ gpg --gen-keyYou should probably choose 4096 RSA keys. Now, edit ~/.gnupg/gpg-conf and add the line:use-agentThat way, if you have an agent running (and most login environments such as Gnome or KDE will start an agent for you; if you do not have an agent running you can generally have one started for you when you login by configuring your Session settings) then you can just give your GPG key pass phrase once per login session.The ultimate in convenience is to use Gnome Keyring to act as the GPG agent because it allows you to unlock the agent simply by logging in. To do so, make sure Keyring is installed:yum install gnome-keyring gnome-keyring-pamIf you are using Gnome, it will start Keyring for you. Otherwise, you should modify your .xinitrc or .xsession file to add the following:# Start the message bus if it is not already running if test -z "$DBUS_SESSION_BUS_ADDRESS"; then eval $(dbus-launch --sh-syntax --exit-with-session) fi # Set ssh and gpg agent environment variables export $(gnome-keyring-daemon --start)GnuPG IssuesIf abraxas crashes with the message:ValueError: Unknown status message: u'PROGRESS'you have encountered a bug in python-gnupg. 
I can be resolved by adding “PROGRESS” to line 219 of gnupg.py in the python-gnupg install (the path varies based on the version and where you install it, but you might try something like: /usr/lib/python3.3/site-packages/python_gnupg-0.3.6-py3.3.egg/gnupg.py).If you use Gnome Keyring, you should be aware the Werner Koch is very annoyed at it and the latest versions of gnupg will emit a warning that Gnome Keyring has hijacked the GnuPG agent if you try to use Gnome Keyring as the GnuPG agent. You can safely ignore this message. The only way to use Gnome Keyring and avoid the message is to download the GnuPG source, delete the message, and compile it by hand.InstallingTo test the program, run:$ ./testor:$ ./test3if you plan to use python3 and have both python2 and python3 installed.Once you are comfortable that everything is in order, you should install the program. To do so, first open the install file and make sure your version of python is given in theset pythonline. Then run:$ ./installThe program along with the man pages should end up in ~/.local.Once installed, you should be able to get information as follows:$ man abraxas (information on how to use abraxas from the command line) $ man 3 abraxas (information on how to use the abraxas API) $ man 5 abraxas (information about the configuration files)Configuring VimTo be able to easily edit encrypted files (such as the Abraxas master password file), download the gnupg vim plugin from:http://www.vim.org/scripts/script.php?script_id=3645Then copy it into:cp gnupg.vim ~/.vim/pluginConfiguring AbraxasTo start using Abraxas you need to do a one-time setup to create your account directory (~/.config/abraxas):$ abraxas -I <GPG-Key>where<GPG-Key>would be replaced by the email you provided to GPG when you created your key.You will need to edit ~/.config/abraxas to add your accounts (seeman 5 abraxasfor the details). For example, to add a gmail accounts, add the following toaccounts:"gmail-derrickAsh": { 'aliases': ['gmail', 'google'], 'template': "=words", 'username': "derrickAsh", 'url': 'https://accounts.google.com', 'window': [ 'Gmail*', '*Google Accounts*', ], 'autotype': "{username}{tab}{password}{return}", },You can now test this account using:$ abraxas gmail PASSWORD: fallacy derby twinge cloneYou would then change your gmail password to the generated pass phrase. Alternatively, you can simply enter your existing password intopassword_overridesin~/.config/abraxas/master.gpguntil the next time you get around to changing your password.Configuring the Window Manager for Abraxas AutotypeIf you use Firefox or Thunderbird, I recommend you install the ‘Hostname in Titlebar’ add-on to both so that Abraxas can recognize the account to use purely from the URL.Finally, you will want to chose a keystroke sequence and configure the window manager to run the password generator when you trigger it with that keystroke. How you do that depends on your window manager. With Gnome, it requires that you open your Keyboard Shortcuts preferences and create a new shortcut. I recommendAlt-pas a reasonable keystroke sequence. Enter:$HOME/.local/bin/abraxas --autotypeas the command to run. Then, when you create your accounts, you should add the appropriate window titles to the account entry so that the appropriate account can be determined automatically from the window title. 
For example, with the gmail account entered above, you can go to gmail.com, select the username field and then type Alt-p to log in.

Enjoy,
-Ken

Changelog

1.7 (2014-01-24)
- Replaced Zenity as the dialog tool for the account picker with an internal version that supports navigating and selecting with the keyboard (j, k, return, esc).
- Fixed a bug in --init (-I). Program alternately worked on either python 2 or 3, but not both.
- Provided an expanded set of templates.
- Refactored the abraxas test suite.
- Added --stateless option.

1.6 (2014-01-13)
- Changed the name to Abraxas
abrconnection
# abrconnection

An interface between Python and Autonomous Battle Royale.

Documentation (ABR v0.0.3):

Coordinate system is left-handed, with x being east, y being upwards, and z being north.

- RobotConnection(): class which handles the connection to the game. Should be instantiated at the beginning of the script, and the method connect() should be called immediately after. disconnect() ends the connection.
- RobotConnection.set_tire_torque(tire_name, torque): sets the torque of tire tire_name to torque. Current tire names are "BackLeft", "BackRight", "FrontLeft", and "FrontRight".
- RobotConnection.set_tire_steering(tire_name, bearing): sets tire tire_name to bearing. All angles/bearings are clockwise off of vertical (Unity's coordinate system is left-handed).
- RobotConnection.state_dict: Dictionary/Hashtable containing information about the state of the robot. Vectors are stored as dictionaries with keys "x", "y", and "z".
  - state_dict["gps"]: Sensor containing position information of the robot.
    - state_dict["gps"]["position"]: Vector containing the current position of the robot relative to its starting point.
  - state_dict["gyroscope"]: Sensor containing rotation information of the robot:
    - state_dict["gyroscope"]["right"]: Unit vector pointing right RELATIVE to the robot. For example, if the robot was facing in the default direction, its right vector would be <1, 0, 0> because its right direction is east. If the robot turned 90 degrees counterclockwise, its right vector would be <0, 0, 1>. If the robot was facing a bearing of 45 degrees and was climbing a 20 degree grade, its right vector would be <cos(45), sin(20), sin(45)> / sqrt(cos(45)^2 + sin(20)^2 + sin(45)^2).
    - state_dict["gyroscope"]["up"]: Unit vector pointing up RELATIVE to the robot. Same idea as before.
    - state_dict["gyroscope"]["forward"]: Unit vector pointing forward RELATIVE to the robot. Same idea as before.
  - state_dict["lidar"]: Array containing the distance to any object at 1 degree increments. state_dict["lidar"][0] would describe how many meters of clearance the robot has in front of itself, state_dict["lidar"][90] would describe its clearance to the right, and so on. If the robot has more than 100 meters of clearance in a particular direction, the value will be capped at 100. In future updates, lidar upgrades might include an increase in range or density for in-game currency. Vertical FOV will be coming soon.
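Putting the documented pieces together, a minimal control-loop sketch (the import path is assumed from the package name, and the torque/bearing values are arbitrary):

```python
from abrconnection import RobotConnection  # import path assumed from the package name

robot = RobotConnection()
robot.connect()  # connect() should be called immediately after instantiation

# Apply torque to every tire and steer the front tires to a 45-degree bearing
for tire in ("BackLeft", "BackRight", "FrontLeft", "FrontRight"):
    robot.set_tire_torque(tire, 100)   # torque value is arbitrary
robot.set_tire_steering("FrontLeft", 45)
robot.set_tire_steering("FrontRight", 45)

# Read sensors from state_dict
position = robot.state_dict["gps"]["position"]   # {"x": ..., "y": ..., "z": ...}
clearance_ahead = robot.state_dict["lidar"][0]   # meters of clearance ahead (capped at 100)
print(position, clearance_ahead)

robot.disconnect()
```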
abreai
Install
PyPI: pip install abreai
GitHub: pip install git+https://github.com/1Marcuth/encurtanet-py.git

Simple use example

from abreai import AbreAi

shortener = AbreAi()

url_info = shortener.shorten(
    url="https://google.com",  # Your url
    alias="url-alias"          # Alias of the url
)

shortened_url = url_info.get_shortened_url()
print(url_info.get_shortened_url())
abrechnung
Abrechnung
The Abrechnung (German for reckoning, settlement, revenge) aims to be a versatile and user-centric payment, transaction and bookkeeping management tool for human groups and events. You can simply try our demo instance!

Abrechnung is a tool to track money, purchases (and their items) and debtors for: group life, events, travelling, flat share roommates, cooking revelries, holiday trips, your Hackerspace, LAN parties, business trips, family life, regular parties, adventures, ...

All this is possible through the basic blocks:
- Purchase & transaction tracking
- Accounts with balance compensation
- Invoice handling + optional position specification
- Assignment (by fractions or counts) of positions and invoices
- Clearing accounts for merging transactions
- Multitenant access

Documentation
To help you set up your instance or understand the inner workings: read the documentation!

Technical foundation
| Technology | Component |
| --- | --- |
| Python | Backend logic |
| React | Web UI framework |
| PostgreSQL | Database |
| Homo Sapiens | Magic sauce |

Contributing
If there is that feature you really want to see implemented, you found a bug or would like to help in some other way, the project of course benefits from your contributions! Contribution guide, issue tracker, code contributions, development roadmap.

Contact
To directly reach developers and other users, we have chat rooms. For questions, suggestions, problem support, please join and just ask!
| Contact | Where? |
| --- | --- |
| Issue Tracker | SFTtech/abrechnung |
| Matrix Chat | #sfttech:matrix.org |

Support us

License
Released under the GNU Affero General Public License version 3 or later; see COPYING and LICENSE for details.
ab_resbase
No description available on PyPI.
abricot
abricot
"Abricot" is the first word in the French vocabulary book, just like "abanbon". Meaning, this is my first project on PyPI.
abridge
abridge
Effortlessly shorten videos.

About
abridge can automatically shorten video files by removing parts from the video where not much happens. This is great for making timelapse videos more engaging and removes the need for manual editing to cut these dead spots from the videos.

Installation
pip install abridge
abridge makes use of moviepy, which relies on ffmpeg. ffmpeg should be installed when the package is installed, but this may not work on some systems.

Docker
abridge can be run as a docker image, which guarantees it will run on all systems.
docker pull freshollie/abridge:latest
docker run freshollie/abridge

Usage
usage: abridge [-h] [-w workers] [-o outdir] [-t diff-threshold] [-r repetition-threshold] clip [clip ...]

Effortlessly shorten videos

positional arguments:
  clip                     Clip to cut or glob group

optional arguments:
  -h, --help               show this help message and exit
  -w workers               Number of clip processors
  -o outdir
  -t diff-threshold        Difference threshold required between frames for frames to be considered different
  -r repetition-threshold  Number of frames in a row required to make a cut

API
from abridge import abridge_clip
abridge_clip("/path/to/clip")

Developing
The abridge project is managed and packaged by poetry. Use poetry install to download the required packages for development. poetry run pre-commit install should be run to install the pre-commit scripts which help with ensuring code is linted before push.

Tests
Tests are written with pytest and can be run with make test

Linting
abridge is linted with pylint and formatted with black and isort. mypy is used throughout the project to ensure consistent types.
make lint will check linting, code formatting, and types
make format will format code to required standards

TODO:
- Test coverage on processor

License
MIT
abridger
UNKNOWN
abrije
# abrije
abrije is a generic log parser and summariser.
abrilskopsorting
This is a library for running sorting methods such as insertion sort, merge sort, selection sort, quick sort, bubble sort, and binary selection.

Change Log
0.0.1 (03/04/2023)
- First release
abris
UNKNOWN
abritamr
logo by Charlie Higgs (PhD candidate)Taming the AMR beastabriTAMR is an AMR gene detection pipeline that runs AMRFinderPlus on a single (or list ) of given isolates and collates the results into a table, separating genes identified into functionally relevant groups.abriTAMR is accredited by NATA for use in reporting the presence of reportable AMR genes in Victoria Australia.Acquired resistance mechanims in the form of point mutations (restricted to subset of species)Streamlined output.Presence of virulence factorsInstallCondaabritAMR is best installed withcondaas described below (~2 minutes on laptop)conda create -n abritamr -c bioconda abritamr conda activate abritamrA note on dependenciesabriTAMR requiresAMRFinder Plus, this can be installed separately withcondaif required.abriTAMR comes packaged with a version of the AMRFinder DB consistent with current NATA accreditation. If you would like to use another DB please download it usingamrfinder -Uand use the-dflag to point to your database.Current version of AMRFinder Plus compatible with abritAMR 3.10.42 (tested on versions down to 3.10.16)Command-line toolabritamr run --help optional arguments: -h, --help show this help message and exit --contigs CONTIGS, -c CONTIGS Tab-delimited file with sample ID as column 1 and path to assemblies as column 2 OR path to a contig file (used if only doing a single sample - should provide value for -pfx). (default: ) --prefix PREFIX, -px PREFIX If running on a single sample, please provide a prefix for output directory (default: abritamr) --jobs JOBS, -j JOBS Number of AMR finder jobs to run in parallel. (default: 16) --identity IDENTITY, -i IDENTITY Set the minimum identity of matches with amrfinder (0 - 1.0). Defaults to amrfinder preset, which is 0.9 unless a curated threshold is present for the gene. (default: ) --amrfinder_db AMRFINDER_DB, -d AMRFINDER_DB Path to amrfinder DB to use (default: /<path_to_installation>/abritamr/abritamr/db/amrfinderplus/data/2021-09-30.1) --species {Neisseria,Clostridioides_difficile,Acinetobacter_baumannii,Campylobacter,Enterococcus_faecalis,Enterococcus_faecium,Escherichia,Klebsiella,Salmonella,Staphylococcus_aureus,Staphylococcus_pseudintermedius,Streptococcus_agalactiae,Streptococcus_pneumoniae,Streptococcus_pyogenes}, -sp {Neisseria,Clostridioides_difficile,Acinetobacter_baumannii,Campylobacter,Enterococcus_faecalis,Enterococcus_faecium,Escherichia,Klebsiella,Salmonella,Staphylococcus_aureus,Staphylococcus_pseudintermedius,Streptococcus_agalactiae,Streptococcus_pneumoniae,Streptococcus_pyogenes} Set if you would like to use point mutations, please provide a valid species. (default: )You can also run abriTAMR inreportmode, this will output a spreadsheet which is based on reportable/not-reportable requirements in Victoria. You will need to supply a quality control file (comma separated) (-q), with the following columns:ISOLATESPECIES_EXP (the species that was expected)SPECIES_OBS (the species that was observed during the quality control analysis)TEST_QC (PASS or FAIL)--soprefers to the type of collation and reporting pipelinegeneralstandard reporting structure for aquired genes, output as reportable and non-reportableplusInferred AST based on validation undertaken at MDUabritamr report --help optional arguments: -h, --help show this help message and exit --qc QC, -q QC Name of checked MDU QC file. 
(default: ) --runid RUNID, -r RUNID MDU RunID (default: Run ID) --matches MATCHES, -m MATCHES Path to matches, concatentated output of abritamr (default: summary_matches.txt) --partials PARTIALS, -p PARTIALS Path to partial matches, concatentated output of abritamr (default: summary_partials.txt) --sop {general,plus} The MDU pipeline for reporting results. (default: general)OutputabritAMR runOutputs 4 summary files and retains the raw AMRFinderPlus output for each sequence input.amrfinder.outraw output from AMRFinder plus (per sequence). For more information please see AMRFinderPlus helpheresummary_matches.txtTab-delimited file, with a row per sequence, and columns representing functional drug classesOnly genes recovered from sequence which have >90% coverage of the gene reported and greater than the desired identity threshold (default 90%).I. Genes annotated with*indicate >90% coverage and > identity threshold < 100% identity.II. No further annotation indicates that the gene recovered exhibits 100% coverage and 100% identity to a gene in the gene catalog.III. Point mutations detected (if--speciessupplied) will also be present in this file in the form ofgene_AAchange.summary_partials.txtTab-delimited file, with a row per sequence, and columns representing functional drug classesGenes recovered from sequence which have >50% but <90% coverage of the gene reported and greater than the desired identity threshold (default 90%).summary_virulence.txtTab-delimited file, with a row per sequence, and columns representing AMRFinderPlus virulence gene classificationGenes recovered from sequence which have >50% coverage of the gene reported and greater than the desired identity threshold (default 90%).Genes recovered with >50% but <90% coverage of a gene in the gene catalog will be annotated with^.Genes annotated with*indicate >90% coverage and > identity threshold < 100% identity.abritamr.txtTab-delimited file, combiningsummary_matches.txt,summary_partials.txt,summary_virulence.txtwith a row per sequence, and columns representing AMRFinderPlus virulence gene classification and/or functional drug classes.Genes recovered from sequence which have >50% coverage of the gene reported and greater than the desired identity threshold (default 90%).Genes recovered with >50% but <90% coverage of a gene in the gene catalog will be annotated with^.Genes annotated with*indicate >90% coverage and > identity threshold < 100% identity.abritamr reportwill output spreadsheetsgeneral_runid.xlsx(NATA accredited) orplus_runid.xlsx(validated - not yet accredited) depending upon the sop chosen.general_rundid.xlsxhas two tabs, one for matches and one for partials (corresponding to genes reported in thesummary_matches.txtandsummary_partials.txt). 
Each tab has 7 columns:

| Column | Interpretation |
| --- | --- |
| MDU sample ID | Sample ID |
| Item code | suffix (MDU specific) |
| Resistance genes (alleles) detected | genes detected that are reportable (based on species and drug classification) |
| Resistance genes (alleles) det (non-rpt) | other genes detected that are not reportable for the species detected |
| Species_obs | Species observed (supplied in input file) |
| Species_exp | Species expected (supplied in input file) |
| db_version | Version of the AMRFinderPlus DB used |

plus_runid.xlsx output is a spreadsheet with the different drug resistance mechanisms and the corresponding interpretation (based on validation of genotype and phenotype) for drug classes relevant to reporting of anti-microbial resistance in Salmonella enterica (other species will be added as validation of genotype vs phenotype is performed):
- Ampicillin
- Cefotaxime (ESBL)
- Cefotaxime (AmpC)
- Tetracycline
- Gentamicin
- Kanamycin
- Streptomycin
- Sulfathiazole
- Trimethoprim
- Trim-Sulpha
- Chloramphenicol
- Ciprofloxacin
- Meropenem
- Azithromycin
- Aminoglycosides (RMT)
- Colistin
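As a concrete end-to-end example, the documented flags combine like this (file names and the run ID are placeholders):

```bash
# Run AMRFinderPlus via abriTAMR on a batch of assemblies listed in isolates.tab
# (sample ID in column 1, path to assembly in column 2), with Salmonella point mutations
abritamr run --contigs isolates.tab --species Salmonella --jobs 16

# Collate the concatenated outputs into the reportable/non-reportable spreadsheet
abritamr report --qc qc_results.csv --runid RUN001 \
    --matches summary_matches.txt --partials summary_partials.txt --sop general
```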
abroca
abroca"abroca" is a python package to provide basic functionality for computing and visualizing the Absolute Between-ROC Area (ABROCA).InstallThe source code is currently hosted ongithuband Python package atPyPi# PyPIpipinstallabrocaExampleYou can find the .ipynb file under example folder. It is a basic example which demonstrates the use of theabrocapackage to compute the ABROCA for a simple logistic regression classifier.#Compute Abrocaslice=compute_abroca(df_test,pred_col='pred_proba',label_col='returned',protected_attr_col='Gender',compare_type='binary',n_grid=10000,plot_slices=True)The plot is automatically saved to a file and is displayed on-screen. The link to download the data is given in the comments in the example file. Parameters are self explainatory through the example file. Parameter details below.df - dataframe containing colnames matching pred_col, label_col and protected_attr_colpred_col - name of column containing predicted probabilities (string)label_col - name of column containing true labels (should be 0,1 only) (string)protected_attr_col - name of column containing protected attribute (string)compare_type - comparison group being 'overall', 'multiple' (compare one majority with all classes) or 'binary'majority_protected_attr_val(optional) - name of 'majority' group with respect to protected attribute (string)n_grid (optional) - number of grid points to use in approximation (numeric) (default of 10000 is more than adequate for most cases)plot_slices (optional) - if TRUE, ROC slice plots are generated and saved to file_name (boolean)lb (optional) - Lower limit of integration (use -numpy.inf for -infinity) Default is 0ub (optional) - Upper limit of integration (use -numpy.inf for -infinity) Default is 1limit (optional) - An upper bound on the number of subintervals used in the adaptive algorithm.Default is 1000Reference Paper: Josh Gardner, Christopher Brooks, and Ryan Baker (2019). Evaluating the Fairness of Predictive Student Models Through Slicing Analysis. Proceedings of the 9th International Conference on Learning Analytics and Knowledge (LAK19); March 4-8, 2019; Tempe, AZ, USA.https://doi.org/10.1145/3303772.3303791Getting HelpIf you encounter a clear bug, please file a minimal reproducible example ongithub, or contact the package maintainers directly (see the package documentation).
abroute
abroute | Autobahn experiments in router-to-router delivery
More information: abroute Site
abrox
![Logo](abrox/gui/icons/readme_logo.png)# Approximate Bayes rocks!`ABrox` is a python package for Approximate Bayesian Computation accompanied by a user-friendly graphical interface.## Features* Model comparison via approximate Bayes factors+ rejection+ random forest* Parameter inference+ rejection+ MCMC* Cross-validation## InstallationNote that `ABrox`only works with Python 3.`ABrox` can be installed via pip. Simply open a terminal and type:```bashpip install abrox```It might take a few seconds since there are several dependencies that you might have to install as well.### MacPortsIf you installed Python via MacPorts, the `abrox-gui` command after installation of `abrox` does not work.You can alternatively start the GUI via (assuming Python version 3.5):```bashcd /Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/abrox/gui/python3.5 main.py```### WindowsUnfortunately, the installation under Windows is a bit cumbersome. We explain the relevant steps below.If not already done, install a Python3 version from [here](https://www.python.org/).Check the version of Python that is installed by typing `python` into the console.![Python on Windows](abrox/gui/icons/python_windows2.png)Now, install Visual Studio Build Tools from:1. [here](http://landinghub.visualstudio.com/visual-cpp-build-tools)Now visit the following page to install the Scipy wheel. Choose the link that fitsyour Python version (see picture above). `cp` should be followed by the actual version (e.g. `cp36`) whilethe last part of the link should match the bit-version (e.g. `win32`).2. [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy)After the installation, open a console in the directory the wheel has been downloaded into and type:```bashpython -m pip install #name_of_the_whl_file```Repeat the same steps for the Numpy wheel:3. [here](http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy)Now, open a terminal and type:```bashpython -m pip install abrox```You are now ready to use `ABrox`!## ABrox using the GUIAfter `ABrox` has been installed, you can start the user interface by typing `abrox-gui`.We provide several templates in order to get more familiar with the GUI.## ABrox using PythonIf you are more comfortable with plain Python, you can run your project once from the GUI andcontinue working with the Python-file that has been generated in the output folder.## TemplatesWe provide a few example project files so you can see how `ABrox` works ([here](https://github.com/mertensu/ABrox/tree/master/project_files)).Currently, we provide:* Two-sample t-test* Levene-Test### Contributors* [Ulf Mertens](http://www.psychologie.uni-heidelberg.de/ae/meth/team/mertens/)* Stefan Radev
abrsh-mac
Get current MAC address and change it if you want.
abrupt
No description available on PyPI.
abrviz
abrvizInstallationpip3 install abrvizExemple 0CodefromabrvizimportArbre,Noeudvaleurs=[3,4,7,9,5,8,1,0,6,2]noeuds=[Noeud(i)foriinvaleurs]a=Arbre()forninnoeuds:a.inserer(n)texte=f"""Affichage imbriquée :{a}Hauteur :{a.hauteur()}Taille :{len(a)}Les parcours fournissent des listes d'objets :Parcours en largeur :{a.largeur}Pour obtenir les clés, on demande les valeurs :Parcours en largeur :{[n.valeurfornina.largeur]}Parcours prefixe :{[n.valeurfornina.prefixe]}Parcours infixe :{[n.valeurfornina.infixe]}Parcours suffixe :{[n.valeurfornina.suffixe]}Une liste 'complète' en largeur peut être obtenue :{[nifnotnelsen.valeurfornina.liste_aplatie()]}Recherche du noeud de clé 5 :le noeud :{a.rechercher(5).__repr__()}son arborescence :{a.rechercher(5)}sa valeur :{a.rechercher(5).valeur}son contenu (éventuellement transporté dans sa structure) :\{a.rechercher(5).contenu}le chemin qui y mène :{[n.valeurfornina.chemin_vers(a.rechercher(5))]}Si on connaît une référence vers le noeud, on peut l'utiliser.le noeud de clé 5 est le 4ème de la liste noeuds :\{noeuds[4]}"""print(texte)a.supprimer(noeuds[0])texte=f"""Nouveau parcours en largeur après suppression de la racine :{[n.valeurfornina.largeur]}"""print(texte)Sortie consoleAffichageimbriquée:(((None--0--None)--1--(None--2--None))--3--(None--4--((None--5--(None--6--None))--7--((None--8--None)--9--None))))Hauteur:5Taille:10Lesparcoursfournissentdeslistesd'objets :Parcoursenlargeur:[<abrviz.abrviz.Noeudobjectat0x7fec0c283eb0>,<abrviz.abrviz.Noeudobjectat0x7fec0c283880>,<abrviz.abrviz.Noeudobjectat0x7fec0c283f10>,<abrviz.abrviz.Noeudobjectat0x7fec0c38c0d0>,<abrviz.abrviz.Noeudobjectat0x7fec0c14da90>,<abrviz.abrviz.Noeudobjectat0x7fec0c283f40>,<abrviz.abrviz.Noeudobjectat0x7fec0c283fa0>,<abrviz.abrviz.Noeudobjectat0x7fec0c283d30>,<abrviz.abrviz.Noeudobjectat0x7fec0c14d760>,<abrviz.abrviz.Noeudobjectat0x7fec0c283c70>]Pourobtenirlesclés,ondemandelesvaleurs:Parcoursenlargeur:[3,1,4,0,2,7,5,9,6,8]Parcoursprefixe:[3,1,0,2,4,7,5,6,9,8]Parcoursinfixe:[0,1,2,3,4,5,6,7,8,9]Parcourssuffixe:[0,2,1,6,5,8,9,7,4,3]Uneliste'complète'enlargeurpeutêtreobtenue:[3,1,4,0,2,None,7,None,None,None,None,None,None,5,9,None,None,None,None,None,None,None,None,None,None,None,None,None,6,8]Recherchedunoeuddeclé5:lenoeud:<abrviz.abrviz.Noeudobjectat0x7fec0c283fa0>sonarborescence:(None--5--(None--6--None))savaleur:5soncontenu(éventuellementtransportédanssastructure):Nonelecheminquiymène:[3,4,7,5]Sionconnaîtuneréférenceverslenoeud,onpeutl'utiliser.lenoeuddeclé5estle4èmedelalistenoeuds:(None--5--(None--6--None))Nouveauparcoursenlargeuraprèssuppressiondelaracine:[2,1,4,0,7,5,9,6,8]Exemple 1CodefromabrvizimportArbre,Noeuda=Arbre()liste=[Noeud(i)foriin[3,2,1,5,4,6]]foriinliste:a.inserer(i)# on visualise l'arbrea.sortie(a.racine,"exemple1_0","png")# on demande une visualisation en arbre binaire complet : un peu# plus d'espace dans la dernière lignea.sortie(a.racine,"exemple1_1","png",style="complet")# on demande une visualisation à partir du 2ème noeud rentréa.sortie(liste[1],"exemple1_2","png")# on supprime la racine puis on visualise le nouvel arbre en pdfa.supprimer(liste[0])a.sortie(a.racine,"exemple1_3","pdf")Sortie ImagesExemple 2CodefromabrvizimportArbre,Noeudimportrandomliste=list(range(15))random.shuffle(liste)a=Arbre()foreinliste:a.inserer(Noeud(e))# Visualisation de l'arbremon_noeud=a.racinea.sortie(mon_noeud,"exemple2_0","png")# on peut demander une version "complète" avec les noeuds# invisibles d'un arbre binaire complet : l'apparence est très# 
largea.sortie(mon_noeud,"exemple2_1","png",style="complet")# s'il y a un sous-arbre gauche, on le visualiseifmon_noeud.gaucheisnotNone:a.sortie(mon_noeud.gauche,"exemple2_2","png")Sortie ImagesExemple 3CodefromabrvizimportArbre,Noeudimportrandomliste=list(range(20))random.shuffle(liste)a=Arbre()# on change la fonction de la relation d'ordrea.fonction_ordre=lambdax,y:str(x.valeur)<str(y.valeur)foreinliste:a.inserer(Noeud(e))a.sortie(a.racine,"exemple3_0","png")# on change le styleArbre.options('node',{"style":"filled"})Arbre.options('edge',{"arrowhead":"diamond","arrowsize":"1"})a.sortie(a.racine,"exemple3_1","png")# les flèches se courbentArbre.options('graph',{"splines":"true"})a.sortie(a.racine,"exemple3_2","png")Sortie ImagesExemple 4CodefromabrvizimportArbre,Noeud# dictionnaire, les valeurs seront les clés de l'arbredico_contenu={"abricot":2,"poire":5,"pomme":1,"ananas":7,"kiwi":0}a=Arbre()forkindico_contenu:a.inserer(Noeud(dico_contenu[k],k))# les noeuds montreront le contenu du noeud et non la clé ("valeur")Arbre.etiquette="contenu"a.sortie(a.racine,"exemple4_0","png")# pour visualiser un mix des deux (clés et valeurs du dictionnaire)# on redéfinit l'arbre et les noeuds :a=Arbre()forkindico_contenu:noeud=Noeud(dico_contenu[k])# le contenu du noeud reprend les données complètes du dictionnairenoeud.contenu=f"{k}({dico_contenu[k]})"a.inserer(noeud)Arbre.etiquette="contenu"a.sortie(a.racine,"exemple4_1","png")Sortie ImagesExemple 5CodefromabrvizimportArbre,Noeudliste=[2,3,6,0,4,5,1]liste_noeuds=[Noeud(i)foriinliste]a=Arbre()foreinliste_noeuds:a.inserer(e)# On peut effectuer des mouvements de rotation à droite ou à gauche# L'arbre reste un ABRa.sortie(a.racine,"exemple5_0","png")# le noeud "racine" du changement est passé en argumenta.rotation_gauche(a.rechercher(2))a.sortie(a.racine,"exemple5_1","png")# on peut alors équilibrer l'arbrea.rotation_gauche(a.rechercher(0))a.rotation_droite(a.rechercher(2))a.rotation_gauche(a.rechercher(4))a.rotation_droite(a.rechercher(6))a.sortie(a.racine,"exemple5_2","png")Sortie ImagesLicenceCC-BY-NC-SA
abs
No description available on PyPI.
abs2rel
abs2rel
A python script that traverses a whole package turning local absolute imports into relative ones.

It can substantially reduce the time needed to convert local absolute imports in a Python package into local relative imports. Additionally, because the conversion is automated, it is less error-prone than having someone make the changes manually.

This command only recognizes local absolute imports which are in the from ... import ... format, where the from and import keywords are on the same line.

This script was originally tested on the nodezator app at commit be4c17f, managing to convert all 1685 existing absolute local imports across 270 python files into relative imports successfully, without needing any additional manual work. If you want to try this yourself, just clone nodezator, check out the mentioned commit and, inside the nodezator/nodezator folder, execute the abs2rel command. Then, go up one folder level and launch the app with python -m nodezator and you'll see that the local relative imports work properly, launching the app without problems.

Installation
pip install abs2rel

Usage
After installing, just go to your package top-level directory and execute the abs2rel command. Before actually changing the files, the script presents the number of imports and files that are about to be changed and asks the user to confirm in order to proceed. Remember to make a backup of your package before executing this script, just in case.

License
abs2rel is dedicated to the public domain with The Unlicense.
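To make the conversion concrete, here is a hypothetical before/after sketch; the package name mypkg and its modules are invented for illustration and are not part of the abs2rel documentation:

# mypkg/app/main.py (hypothetical module inside a package named "mypkg")
- from mypkg.utils import load_config   # local absolute import, "from ... import ..." form
- from mypkg.app import helpers
+ from ..utils import load_config       # rewritten as relative imports by abs2rel
+ from . import helpers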
absa
count-line
Usage
A cross-platform command line tool to do sentiment analysis tasks.
Installation
You can install, upgrade or uninstall SATK with these commands (without $):
$ pip install count-line
$ pip install --upgrade count-line
$ pip uninstall count-line
absample
No description available on PyPI.
absarsnew
Getting Started with APIMATIC CalculatorGetting StartedIntroductionSimple calculator API hosted on APIMATICInstall the PackageThe package is compatible with Python versions2 >=2.7.9and3 >=3.4. Install the package from PyPi using the following pip command:pipinstallabsarsnew==1.1You can also view the package at:https://pypi.python.org/pypi/absarsnewInitialize the API ClientThe following parameters are configurable for the API Client:ParameterTypeDescriptiontimeoutfloatThe value to use for connection timeout.Default: 60max_retriesintThe number of times to retry an endpoint call if it fails.Default: 3backoff_factorfloatA backoff factor to apply between attempts after the second try.Default: 0The API client can be initialized as follows:fromapimaticcalculator.apimaticcalculator_clientimportApimaticcalculatorClientfromapimaticcalculator.configurationimportEnvironmentclient=ApimaticcalculatorClient(environment=,)Client Class DocumentationAPIMATIC Calculator ClientThe gateway for the SDK. This class acts as a factory for the Controllers and also holds the configuration of the SDK.ControllersNameDescriptionsimple_calculatorGets SimpleCalculatorControllerAPI ReferenceList of APIsSimple CalculatorSimple CalculatorOverviewGet instanceAn instance of theSimpleCalculatorControllerclass can be accessed from the API Client.simple_calculator_controller = client.simple_calculatorGet CalculateCalculates the expression using the specified operation.:information_source:NoteThis endpoint does not require authentication.defget_calculate(self,operation,x,y)ParametersParameterTypeTagsDescriptionoperationOperationTypeEnumTemplate, RequiredThe operator to apply on the variablesxfloatQuery, RequiredThe LHS valueyfloatQuery, RequiredThe RHS valueResponse TypefloatExample Usageoperation=OperationTypeEnum.MULTIPLYx=222.14y=165.14result=simple_calculator_controller.get_calculate(operation,x,y)Model ReferenceEnumerationsOperation TypeOperation TypePossible operators are sum, subtract, multiply, divideClass NameOperationTypeEnumFieldsNameSUMSUBTRACTMULTIPLYDIVIDEUtility Classes DocumentationApiHelperA utility class for processing API Calls. Also contains classes for supporting standard datetime formats.MethodsNameDescriptionjson_deserializeDeserializes a JSON string to a Python dictionary.ClassesNameDescriptionHttpDateTimeA wrapper for datetime to support HTTP date format.UnixDateTimeA wrapper for datetime to support Unix date format.RFC3339DateTimeA wrapper for datetime to support RFC3339 format.Common Code DocumentationHttpResponseHttp response received.ParametersNameTypeDescriptionstatus_codeintThe status code returned by the server.reason_phrasestrThe reason phrase returned by the server.headersdictResponse headers.textstrResponse body.requestHttpRequestThe request that resulted in this response.HttpRequestRepresents a single Http Request.ParametersNameTypeTagDescriptionhttp_methodHttpMethodEnumThe HTTP method of the request.query_urlstrThe endpoint URL for the API request.headersdictoptionalRequest headers.query_parametersdictoptionalQuery parameters to add in the URL.parametersdict | stroptionalRequest body, either as a serialized string or else a list of parameters to form encode.filesdictoptionalFiles to be sent with the request.
absarsnewer
Getting Started with Zype-V1IntroductionTODO: Add a descriptionInstall the PackageThe package is compatible with Python versions2 >=2.7.9and3 >=3.4. Install the package from PyPi using the following pip command:pipinstallabsarsnewer==2.1You can also view the package at:https://pypi.python.org/pypi/absarsnewerTest the SDKYou can test the generated SDK and the server with test cases.unittestis used as the testing framework andnoseis used as the test runner. You can run the tests as follows:Navigate to the root directory of the SDK and run the following commandspip install -r test-requirements.txt nosetestsInitialize the API ClientNote:Documentation for the client can be foundhere.The following parameters are configurable for the API Client:ParameterTypeDescriptionhttp_client_instanceHttpClientThe Http Client passed from the sdk user for making requestsoverride_http_client_configurationboolThe value which determines to override properties of the passed Http Client from the sdk usertimeoutfloatThe value to use for connection timeout.Default: 60max_retriesintThe number of times to retry an endpoint call if it fails.Default: 0backoff_factorfloatA backoff factor to apply between attempts after the second try.Default: 2retry_statusesArray of intThe http statuses on which retry is to be done.Default: [408, 413, 429, 500, 502, 503, 504, 521, 522, 524]retry_methodsArray of stringThe http methods on which retry is to be done.Default: ['GET', 'PUT']The API client can be initialized as follows:fromzypev1.zypev_1_clientimportZypev1Clientfromzypev1.configurationimportEnvironmentclient=Zypev1Client(environment=Environment.PRODUCTION,)List of APIsVideosPlaylistsManaging Playlist RelationshipsCategoriesAppsDevicesDevice CategoriesPlayersMaster ManifestsSegmentsVideo ImportsVideo SourcesCategory Content RulesPlaylist Content RulesVideo Content RulesConsumersDevice LinkingO AuthVideo EntitlementsPlaylist EntitlementsSubscription EntitlementsSubscriptionsPlansTransactionsRedemption CodesAd TagsRevenue ModelsVideo FavoritesZobject TypesZobjectClasses DocumentationUtility ClassesHttpResponseHttpRequest
absarv3testing8
Getting Started with RESTAPI SDKInstall the PackageThe package is compatible with Python versions2 >=2.7.9and3 >=3.4. Install the package from PyPi using the following pip command:pipinstallabsarv3testing8==1.1.1You can also view the package at:https://pypi.python.org/pypi/absarv3testing8Test the SDKYou can test the generated SDK and the server with test cases.unittestis used as the testing framework andnoseis used as the test runner. You can run the tests as follows:Navigate to the root directory of the SDK and run the following commandspip install -r test-requirements.txt nosetestsInitialize the API ClientNote:Documentation for the client can be foundhere.The following parameters are configurable for the API Client:ParameterTypeDescriptioncontent_typestringbody content type for post requestDefault:'application/json'accept_languagestringresponse formatDefault:'application/json'base_urlstringDefault:'https://localhost:443'environmentEnvironmentThe API environment.Default:Environment.PRODUCTIONhttp_client_instanceHttpClientThe Http Client passed from the sdk user for making requestsoverride_http_client_configurationboolThe value which determines to override properties of the passed Http Client from the sdk usertimeoutfloatThe value to use for connection timeout.Default: 60max_retriesintThe number of times to retry an endpoint call if it fails.Default: 0backoff_factorfloatA backoff factor to apply between attempts after the second try.Default: 2retry_statusesArray of intThe http statuses on which retry is to be done.Default: [408, 413, 429, 500, 502, 503, 504, 521, 522, 524]retry_methodsArray of stringThe http methods on which retry is to be done.Default: ['GET', 'PUT']The API client can be initialized as follows:fromrestapisdk.restapisdk_clientimportRestapisdkClientfromrestapisdk.configurationimportEnvironmentclient=RestapisdkClient(content_type='application/json',accept_language='application/json',environment=Environment.PRODUCTION,base_url='https://localhost:443',)AuthorizationThis API usesOAuth 2 Bearer token.List of APIsAPISessionUserGroupMetadataDatabaseConnectionClasses DocumentationUtility ClassesHttpResponseHttpRequest
absbox
AbsBox
A structured finance cashflow engine wrapper for structured credit professionals:
- transparency -> open source for both the wrapper and the backend engine.
- human readable waterfall -> no more coding/scripting, just lists and maps in Python!
- easy interaction with Python numeric libraries as well as databases/Excel to accommodate daily work.

Installation
pip install absbox

Documentation
- English -> https://absbox-doc.readthedocs.io
- Chinese -> https://absbox.readthedocs.io

Goal
- Structuring: an easy way to create different pool assets/deal capital structures and waterfalls. Users can tell how key variables (service fee/bond WAL/bond cashflow etc.) change under different transaction structures.
- Investor: given a powerful modeling language to build cashflow models, users can price the bonds of a transaction after setting pool performance assumptions.

What it does
- Provide building blocks to create cashflow models for ABS/MBS
- Adapt to multiple asset classes
  - Residential Mortgage / Adjustable-Rate Mortgage / Auto Loans
  - Corp Loans
  - Consumer Credit
  - Lease
  - Fix Asset

Features
- Sensitivity analysis on different scenarios or deal structures
  - sensitivity analysis on pool performance assumptions
  - sensitivity analysis on capital structures or any deal components
- Bond Cashflow/Pool Cashflow Forecast, Pricing

Data flow

Community & Support
Discussion

Misc
Proposed Rule regarding Asset-Backed Securities: File No. S7-08-10
abscab
Accurate Biot-Savart routines with Correct Asymptotic Behaviour

This library can be used to compute the magnetic field and the magnetic vector potential of filamentary current carriers in the form of a circular loop and straight segments. Arbitrary geometries of conductors can be approximated by a polygon along its contour and the connecting segments between the polygon vertices are modeled by straight segments. Finite-width conductors can be approximated by arranging multiple filaments throughout the cross section of the current carrier.

Please consider leaving a GitHub star if you like this software.

If you use this software for scientific work, we kindly ask you to cite the corresponding article:

@article{abscab_2023,
  title   = {{Biot-Savart routines with minimal floating point error}},
  author  = {Jonathan Schilling and Jakob Svensson and Udo Höfel and Joachim Geiger and Henning Thomsen},
  journal = {Computer Physics Communications},
  pages   = {108692},
  year    = {2023},
  issn    = {0010-4655},
  doi     = {10.1016/j.cpc.2023.108692}
}

This is the Python implementation of ABSCAB.

| description         | link to file             |
| ------------------- | ------------------------ |
| main implementation | abscab.py                |
| unit tests          | test_abscab.py           |
| demo code           | demo_abscab.py           |
| parallelized        | :heavy_multiplication_x: |
absdata
No description available on PyPI.
absdataset
SUMMARY:Just some datasets from KaggleINSTALLATION:$pipinstall-UabsdatasetUSAGE:importabsdatasetabsdataset.help()LICENSE:MIT License Copyright (c) 2022 AbsoluteWinter Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
absdga
Brief Description
This is an introductory level project that teaches RubyLearning's "An Introduction to Python" course students how to upload their modules to the Python Package Index.

Usage
import absdga
print(absdga.absdga(-20))

or

from absdga import *
print(absdga(-20))
absense
abSENSE: a method to interpret undetected homologsINTRODUCTIONabSENSE is a method that calculates the probability that a homolog of a given gene would fail to be detected by a homology search (using BLAST or a similar method) in a given species, even if the homolog were present and evolving normally.The result of this calculation informs how one interprets the result of a homology search failing to find homologs of a gene in some species. One possibility to explain such a result is that the gene isactually absentfrom the genome in that species: a biological, and potentially interesting (e.g. if due to a gene loss or the birth of a new gene), result.A second explanation, often ignored, is that the homologispresent in the genome of that species, but that the homology search merely lacks statistical power to detect it. Here, the apparent absense of the homolog is a technical/statistical limitation, and does not reflect underlying biology.By calculating the probability that your homology search would fail to detect a homologeven if one were presentandeven if it were evolving normally(e.g. no rate accelerations on a specific branch, potentially suggestive of biologically interesting changes), abSENSE informs the interpretation of a negative homology search result. If abSENSE finds that there is a high probability of a homolog being undetected even if present, you may not be as inclined to invoke a biological explanation for the result: the null model of a failure of the homology search is sufficient to explain what you observe.The method is explained in complete detail in the paper in which it's introduced. This is the paper that is referred to in the rest of this README:Weisman CM, Murray AW, Eddy SR (2020) Many, but not all, lineage-specific genes can be explained by homology detection failure. PLoS Biol 18(11): e3000862.https://doi.org/10.1371/journal.pbio.3000862There, it is applied to the specific case of lineage-specific genes, for which homologs appear absent in all species outside of a narrow lineage. The method itself is applicable to any case in which a homolog appears absent (e.g. a single species missing a homolog that one might interpret as a gene loss), and likewise, this code is applicable to all such cases.This is a rewrite of theoriginal repositoryUsageabSENSE THE BASICSQuickstart: main analysisThe main analysis script,Run_abSENSE.py, calculates the probabilities of homologs of genes from some "focal" species being undetected in a set of N other species. It can perform this analysis for an arbitrary number of genes in the taxon at one time. It requires a minimum of two input files:i) A file containing the bitscores of homologs of each gene to be analyzed in at least three of the species (including the focal species itself, so two others).ii) A file containing the N evolutionary distances, in substitutions/site, between the focal species and each other species. The distance between the focal species and itself should be 0. (If you don't already have such distances, a description of how to calculate them relatively painlessly can be found in Weisman et al 2020).Examples of both of these files for a subset of genes from S. cerevisiae and their orthologs 11 other fungal species (the same species analyzed in Weisman et al 2020) can be found in the folder Quickstart_Examples: the bitscore file is Fungi_Example_Bitscores, and the distance file is Fungi_Distances. 
They exemplify the formatting required for abSENSE to run (explained in more detail below).To run abSENSE on a given bitscore and distance file:python Run_abSENSE.py \ --distfile <distance_file> \ --scorefile <score_file>For example, to run abSENSE on the example fungal genes, type:python Run_abSENSE.py \ --distfile Quickstart_Examples/Fungi_Distances \ --scorefile Quickstart_Examples/Fungi_Example_BitscoresFor each gene in the input bitscore file, the following will be computed:a) The probabilities of a homolog being undetected in each species (in the fileDetection_Failure_Probabilities);b) The expected bitscores of homologs in each species (in the filePredicted_bitscores);c) The 99% confidence interval around this bitscore in each species (low and high bounds listed in separate files:Bitscore_99PI_lowerbound_predictionsandBitscore_99PI_higherbound_predictions)These results will be output to a set of tab-delimited files in a separate directory, by default named with the start time of the analysis. (You can specify the name with a command line option, see below). Additional information on output files is below.Quickstart: visualizationA supplemental visualization script,Plot_abSENSE.py, performs the same analysis as above, but for one gene at a time, and also produces a visualization of the results (see Weisman et al 2020). It is run in the same way, except that it also requires specifying which single gene in the bitscore input file you wish to analyze.To run abSENSE on gene GENEID contained in a given bitscore with a given distance file:python Plot_abSENSE.py \ --distfile <distance_file> \ --scorefile <score_file> \ --gene <GENEID>For example, to analyze the S. cerevisiae gene Uli1, listed in the bitscore file under its RefSeq ID (NP_116682.3), type:python Plot_abSENSE.py \ --distfile Quickstart_Examples/Fungi_Distances \ --scorefile Quickstart_Examples/Fungi_Example_Bitscores \ --gene NP_116682.3The same results as above will be computed, but now they will be output to the terminal, and then the visualization will be shown.All OptionsYou can specify advanced options with the additional command line options. You can view them all withpython Run_abSENSE.py --helpThey are:--out: The prefix of the directory to which your results will be output. Default is the time at which you ran the analysis (to avoid overwriting of results).--Eval: The E-value threshold to be used (above this value, homologs will be considered undetected). Default is 0.001 (fairly permissive).--genelenfile: Allows you to specify a file containing the lengths (in aa) of all genes in the bitscore file to be analyzed. Default is 400 amino acids (~average protein size in many species) for all proteins.abSENSE predicts a bitscore, which is then converted to an E-value to determine detectability; this conversation technically requires knowledge of both the size of the database in which the search occurs (see below) and the length of the gene being searched. Because the conversion between these values and E-value is logarithmic, though, only fairly large changes in these values substantially affect results.Examples of such files containing the lengths of all S. cerevisiae and D. melanogaster genes can be found in Fungi_Data/S_cer_Protein_Lengths and Insect_Data/D_mel_Protein_Lengths. The format required is described more below.--dblenfile: Allows you to specify a file containing the sizes (in aa) of the database in which the homology search for each of your N species is performed. 
Default is 400 amino acids * 20,000 amino acids / gene = 8,000,000 amino acids (~average protein and proteome size in many species) for all species. abSENSE predicts a bitscore, which is then converted to an E-value to determine detectability; this conversion technically requires knowledge of both the size of the database in which the search occurs and the length of the gene being searched (see above). Because the conversion between these values and E-value is logarithmic, though, only fairly large changes in these values substantially affect results. Examples containing the lengths of all S. cerevisiae and D. melanogaster genes can be found in Fungi_Data/Fungi_Database_Lengths and Insect_Data/Insect_Database_Lengths. The format required is described more below.

--predall: Default is False. When True, causes abSENSE to calculate the probability of homologs being undetected, the expected bitscores, and 99% confidence intervals not only in species in which homologs were actually undetected, but also for species in which homologs have been found. This is obviously not the main use case, and is especially uninformative when those homologs and their bitscores have been used in the prediction itself (see below). May potentially be useful to see if a homolog in one species, although detected, seems to be behaving anomalously compared to those in other species (e.g. due to rate acceleration).

--includeonly: Allows you to restrict the species whose bitscores are used in predicting bitscores in other species. Mainly useful to do control-type analyses, such as Figure 5 in Weisman et al 2020, to show that predictions made from only a subset of the data are nonetheless reliable. If not specified, abSENSE uses all available bitscores in the prediction. The format here should be a list of the species names as they are written in the distance and bitscore files, separated by commas and NO SPACES. For example, to include only data from the yeasts S. cerevisiae, S. paradoxus, and S. kudriavzevii in an analysis based on the files in Fungal_Data, you would type:

--includeonly S_cer,S_par,S_kud

For example, to run an analysis on all S. cerevisiae proteins in the selected fungal species in which the lengths of each S. cerevisiae protein and the sizes of each species' search database (their annotated proteomes as indicated in the supplement of Weisman et al 2020) are specified:

python Run_abSENSE.py \
  --distfile Fungi_Data/Fungi_Distances \
  --scorefile Fungi_Data/Fungi_Bitscores \
  --genelenfile Fungi_Data/S_cer_Protein_Lengths \
  --dblenfile Fungi_Data/Fungi_Database_Lengths

The visualization script Plot_abSENSE.py takes all of the same options, with the exception of again requiring that the gene to be analyzed is specified by the --gene option, and also that the --genelenfile option is instead --genelen, after which should be entered an integer corresponding to the length of the gene. (With a single gene, it's hardly worth requiring a whole file: just give the number.)

Output Files
Run_abSENSE.py outputs six output files. Examples resulting from running abSENSE on the provided insect and fungal data can be found in Fungi_Data/Fungi_abSENSE_Results/ and Insect_Data/Insect_abSENSE_Results/ respectively.

Detection_failure_probabilities
The central output of the program.
For each gene in the analysis, this contains the predicted probability that a homolog in each species would be undetected at the specified E-value by a homology search, even if the homolog were present.By default, this is only calculated in species in which the gene was not detected. Results for species in which homologs were detected are therefore listed as "detected". The setting --predall will calculate this value for all species, even those in which a homolog was in fact detected.If not enough data for a gene was provided to generate a bitscore prediction (bitscores of homologs from at least three species are needed), the results will read "not_enough_data".Predicted_bitscoresFor each gene in the analysis, this contains the predicted (maximum likelihood) bitscore of a homolog in each species.By default, bitscores are only predicted in species in which the gene was not detected. Results for species in which homologs were detected are therefore listed as "detected". The setting --predall will calculates this value for all species, even those for which a homolog was in fact detected. Here, the known bitscore (often used in the prediction process; see the option --includeonly) will be shown alongside the prediction. If the known bitscore was used in the fitting process, of course, these will usually be quite similar!If not enough data for a gene was provided to generate a bitscore prediction (bitscores of homologs from at least three species are needed), the results will read "not_enough_data".Bitscore_99PI_upperbound_predictionsFor each gene in the analysis, this contains the upper bound of the 99% confidence interval for the bitscore of a homolog in each species.By default, this is only calculated in species in which the gene was not detected. Results for species in which homologs were detected are therefore listed as "detected". The setting --predall will calculates this value for all species, even those for which a homolog was in fact detected. Here, the known bitscore (often used in the prediction process; see the option --includeonly) will be shown alongside the prediction. If the known bitscore was used in the fitting process, of course, these will usually be quite similar!If not enough data for a gene was provided to generate a bitscore prediction (bitscores of homologs from at least three species are needed), the results will read "not_enough_data".Bitscore_99PI_lowerbound_predictionsFor each gene in the analysis, this contains the lower bound of the 99% confidence interval for the bitscore of a homolog in each species.By default, this is only calculated in species in which the gene was not detected. Results for species in which homologs were detected are therefore listed as "detected". The setting --predall will calculates this value for all species, even those for which a homolog was in fact detected. Here, the known bitscore (often used in the prediction process; see the option --includeonly) will be shown alongside the prediction. If the known bitscore was used in the fitting process, of course, these will usually be quite similar!If not enough data for a gene was provided to generate a bitscore prediction (bitscores of homologs from at least three species are needed), the results will read "not_enough_data".Parameter_valuesFor each gene in the analysis, this contains the best-fit (maximum likelihood) values of the a and b parameters. 
(See Weisman et al 2020 for full explanation.)These a and b parameters are calculated from bitscores of homologs in species included in the prediction process. If the command line option --includeonly is used, this will be only the species specified by that option. By default, all provided bitscores are used.If not enough data for a gene was provided to generate a bitscore prediction (bitscores of homologs from at least three species are needed), the results will read "not_enough_data".Run_infoContains information about the analysis, including names of input files, options/settings used, and the analysis time.Input File FormatsRequired files:The bitscore fileFor an analysis of M genes in N species (including the focal species), the bitscore file should be a tab-delimited file of N+1 columns by M+1 rows. The first row should begin with a blank entry, and should be followed by N entries containing the names of the N species in your analysis. These names should match those in the distance file (below) exactly. The remaining M rows should each begin with the name/identifier of the gene from the focal species to be analyzed, followed by the bitscore of that gene against its homolog in the species indicated at the top of the given column. For species in which homologs are undetected, this value should be0. For species in which a homolog is detected, but the orthology is unclear and so you wish to exclude it from being used in the fit (see Weisman et al 2020), this value should be'N/A'. Two examples are provided: Fungi_Data/Fungi_\Bitscores and Insect_\Data/Insect_Bitscores.The distance fileFor an analysis with N species (including the focal species), the distance file should be a tab-delimited file of 2 columns by N rows. Entries in the first column should contain the name of each species in your analysis. These names should match those in the bitscore file (above) exactly. Entries in the second column should contain the evolutionary distance between each species in the indicated column and the focal species. (The distance between the focal species and itself should always be 0.) Two examples are provided: Fungi_Data/Fungi_Distances and Insect_Data/Insect_Distances.Optional files:The gene length fileFor an analysis of M genes, the gene length file should be a tab-delimited file of 2 columns by M rows. The first column should contain the names/identifiers of the gene from the focal species to be analyzed. These should match exactly the names in the bitscore file (above). The second column should contain the length in amino acids of that gene.If you don't already have such a file, here is a command to make one (named OUTPUTFILENAME), from a FASTA file (SEQFILENAME) containing all of the sequences: (It requires the easel package, which comes with the HMMER software, available athttp://hmmer.org/documentation.html.)esl-seqstat -a (SEQFILENAME) | \ awk '{print $2 "\t" $3}' | \ tac | \ sed -e '1,7d' | \ tac > (OUTFILENAME)The database length fileFor an analysis of N species, the database length file should be a tab-delimited file of 2 columns by N rows. The first column should contain the names of the N species. These should match exactly the names in the bitscore and distance files (above). The second column should contain the sizes, in amino acids, of the database in which the homology search for each species is performed. For example, if you are searching for a gene in each of the species' annotated proteomes, it should be the size of that species' proteome in amino acids. 
If instead you are searching against a pan-specific database, like for example NR, for all species, it should be the size of that database in amino acids.If you don't already have such a file, you can make one easily if you have the databases themselves in eg FASTA format (again requiring the easel package, downloadable with HMMER as in c) above: just run the commandesl-seqstat (FASTA FILE)on each database; this will report the total length in aa of each database file. You can then put these into a tab-delimited file manually.If your BLAST search is done on a nucleotide genome of the outgroup species via TBLASTN, the database length that you should use for each genome is 2N, where N is the genome length in nucleotides. (For a genome of N nucleotides, there are ~N/3 codons in it, which can be read in each of 6 possible reading frames, for a total of 6N/3 = 2N amino acids.)
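To make the two required inputs concrete, here is a small sketch of what a distance file and the matching first lines of a bitscore file could look like. The species names, gene IDs and numbers below are invented for illustration and do not come from the abSENSE example data; only the layout follows the format described above (tab-delimited columns, focal species at distance 0, a blank leading entry in the bitscore header row, 0 for undetected homologs, N/A for excluded ones).

# hypothetical distance file (2 tab-delimited columns; focal species first, distance 0)
S_cer	0
S_par	0.05
S_mik	0.13

# hypothetical bitscore file (first row: blank entry, then the species names)
	S_cer	S_par	S_mik
GENE1	500	450	300
GENE2	210	150	0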
absent
No description available on PyPI.
abseqPy
Table of contentsIntroductionAbSeqDevelopersPrerequisitesSeamless installation of dependenciesManual installation of dependenciesabseqPy InstallationInstall frompipInstall from sourceUsageBasic usageAdvanced usageGotchasHelpSupported platformsIntroductionAbSeqAbSeqis a comprehensive bioinformatic pipeline for the analysis of sequencing datasets generated from antibody libraries andabseqPyis one of its packages. Given FASTQ or FASTA files (paired or single-ended),abseqPygenerates clonotypes tables,V-(D)-J germline annotations,functional rates, anddiversity estimatesin a combination of csv and HDF files. More specialized analyses for antibody libraries likeprimer specificity,sequence motif analysis, andrestriction sites analysisare also on the list.This program is intended to be used in conjunction withabseqR, a reporting and statistical analysis package for the data generated byabseqPy. AlthoughabseqPyworks finewithoutabseqR, it is highly recommended that users also install the R package in order to take the advantage of the interactive HTML reporting capabilities of the pipeline.abseqR's project page shows a fewexamplesof the type of analysisAbSeqprovides; the full documentation can be found inabseqR's vignette.DevelopersAbSeqis developed by Monther Alhamdoosh and JiaHong FongFor comments and suggestions, email m.hamdoosh <at> gmail <dot> comPrerequisitesabseqPydepends on a few external software to work and they should be properly installed and configured before runningabseqPy.abseqPyruns on Python 2.7. Python 3.6 support is underway.Seamless installation of dependenciesThis is therecommended wayof installing abseqPy's external dependencies.A python script is availableherewhich downloads and installs all the necessary external dependencies.This script assumes the following is already available:perlgitpythonJava JREversion 1.6 or higherC/C++ compilers(not required for Windows)make(not required for Windows)CMake(not required for Windows)To install external dependencies into a folder named~/.local/abseq:$mkdir-p~/.local/abseq $pythoninstall_dependencies.py~/.local/abseqThis script doesnotinstallabseqPyitself, only its external dependencies.This script works with Python 2 and 3, and~/.local/abseqcan be replaced with any directory. However:this directory will be there to stay, so choose wiselythe installation script will dump more than just binaries in this directory, it will contain databases and internal filesas soon as the installation succeeds, users will be prompted with an onscreen message to update their environment variables to include installed dependencies in~/.local/abseq.Manual installation of dependenciesThis section is for when one:finds that the installation script failedis feeling adventurousrefer tothis documentfor a detailed guide.abseqPy installationThis section demonstrates how to installabseqPy.Install frompip$pipinstallabseqPyInstall from source$gitclonehttps://github.com/malhamdoosh/abseqPy.git $cdabseqPy $pipinstall. $abseq--versionTheabseqcommand should now be available on your command line.installingabseqPyalso installs other python packages, consider using a python virtual environment to prevent overriding existing packages. 
Seevirtualenvorconda.UsageBasic usageTo get up and running, the following command is often sufficient:$abseq-f1<read1>-f2<read2>-oresults--threads4--taskall-f2is only required if it is a paired-end sequencing experiment.Advanced usageBesides callingabseqwith command line options,abseqalso supports-y <file>or--yaml <file>that reads parameters defined infile. This enables multiple samples to be analyzed at the same time, each having shared or independentabseqparameters.The basic YAML syntax offileiskey: valwherekeyis anabseq"long"1option (seeabseq --helpfor all the "long" option names) andvalis the value supplied to the "long" option. Additional samples are specified one after another separated by triple dashes---.ExampleAssuming a file namedexample.ymlhas the following content:# sample one, PCR1name:PCR1file1:fastq/PCR1_R1.fastq.gzfile2:fastq/PCR1_R2.fastq.gz---# sample two, PCR2name:PCR2file1:fastq/PCR2_R1.fastq.gzfile2:fastq/PCR2_R2.fastq.gzbitscore:300# override the defaults' 350 for this sample onlytask:abundance# override the defaults' "all" for this sample onlydetailedComposition:~# enables detailedComposition (-dc) for this sample only---# more samples can go here---# "defaults" is the only special key allowed.# It is not in abseq's options, but is used here# to denote default values to be used for ALL samples# if they're not specified.defaults:task:alloutdir:resultsthreads:7bitscore:350sstart:1-3then executingabseq -y example.ymlis equivalent to simultaneously running 2 instances ofabseqwith the parameters in thedefaultsfield applied to both samples. Here's an equivalent:$abseq--taskall--outdirresults--threads7--bitscore350--sstart1-3\>--namePCR1--file1fastq/PCR1_R1.fastq.gz--file2fastq/PCR1_R2.fastq.gz $abseq--taskabundance--outdirresults--threads7--bitscore300--sstart1-3\>--namePCR2--file1fastq/PCR2_R1.fastq.gz--file2fastq/PCR2_R2.fastq.gz\>--detailedCompositionUsing--yamlis recommended because it is self-documenting, reproducible, and simple to run.GotchasIn the above example, specifyingthreads: 7in thedefaultskey ofexample.ymlwill runeachsample with 7 threads, that is,abseqPywill be running with 7 *number of samplestotal processes.HelpInvokingabseq -hin the command line will display the optionsabseqPyuses.Supported platformsabseqPyworks on most Linux distros, macOS, and Windows.Some features aredisabledwhen running inWindowsdue to software incompatibility, they are:Upstream clustering in--task 5utrSequence logo generation in--task diversity1long option names are option names with a double dash prefix, for example,--helpis a long option while-his not↩
abserdes
No description available on PyPI.
abses
An Agent-Based computational framework makes modeling artificialSocial-ecological systemseasier.WhyABSESpy?Agent-based model (ABM) is essential for social-ecological systems (SES) research.ABSESpyis designed for modelingcouples humans and nature systemsby:Architectural Elegance for Modular Socio-Ecological Systems Modeling.Unveiling Human Decision Dynamics in SES Modeling by an advanced human behavior simulating framework.Mastering Time in SES Modeling with Real-world Precision and Dynamic Updates.Basic Usage & DocumentsInstall with pip or your favorite PyPI package manager.pipinstallabsesAccess theDocumentation here.Get in touchFor enthusiastic developers and contributors, all contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.SES researchersare welcome to use this package in social-ecological system (SES) studies. It would be appreciated if you contribute a published model to our gallery.If you need any help when usingABSESpy, don't hesitate to get in touch with us through:Ask usage questions ("How to do?") onGitHubDiscussions.Report bugs, suggest features, or view the source codeonGitHubIssues.Use themailing listfor less well-defined questions or ideas or to announce other projects of interest toABSESpyusers.LicenseCopyright 2023,ABSESpyShuang SongLicensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License athttps://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.ABSESpybundles portions ofMesa,mesa-geo,pandas,NumPy, andXarray; the full text of these licenses are included in the licenses directory.Thanks to all contributors
absfuyu
SUMMARY:TL;DR: A collection of codeINSTALLATION:$pipinstall-UabsfuyuUSAGE:importabsfuyuhelp(absfuyu)DOCUMENTATION:hereDEV SETUPCreate virtual environmentpython-mvenvenvNote: Might need to run this in powershell (windows)Set-ExecutionPolicy-ExecutionPolicyUnrestricted-ScopeCurrentUserInstall all required packagespython-mpipinstall-e.[all]orpython-mpipinstall-e.[dev]LICENSE:MIT License Copyright (c) 2022-2023 AbsoluteWinter Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
absfuyu-res
SUMMARY:TL;DR: A collection of codeINSTALLATION:$pipinstall-Uabsfuyu-resLICENSE:MIT License Copyright (c) 2023 AbsoluteWinter Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
absgd
This is a package for the paper "Attentional Biased Stochastic Gradient for Imbalanced Classification."
abs-import
No description available on PyPI.
abs-imports
abs-importsA pre-commit hook to automatically convert relative imports to absolute.Installationpip install abs-importsUsage as a pre-commit hookSeepre-commitfor instructionsSample.pre-commit-config.yaml:-repo:https://github.com/MarcoGorelli/abs-importsrev:v0.2.1hooks:-id:abs-importsCommand-line example$catmypackage/myfile.pyfrom . import __version__$abs-importsmypackage/myfile.py$catmypackage/myfile.pyfrom mypackage import __version__If your package follows the popular./srclayout, you can pass your application directories via--application-directories, e.g.$catsrc/mypackage/myfile.pyfrom . import __version__$abs-importssrc/mypackage/myfile.py--application-directoriessrc$catsrc/mypackage/myfile.pyfrom mypackage import __version__Multiple application directories should be comma-separated, e.g.--application-directories .:src. This is the same as inreorder-python-imports.See alsoCheck outpyupgrade, which I learned a lot from when writing this.
absio
Protect your application’s sensitive data with Absio’s Secured Containers.DocumentationFull documentation isavailable.Obtaining an API KeyTheabsiolibrary requires a valid API Key that must be passed into theabsio.initialize(...)function. Obtain an API Key by contacting ushereor sending an email [email protected]. An API key should be considered private and protected as such.Quick StartInstallation:pipinstallabsiocryptography--no-binary=cryptographyImport and initialize the module:importabsioabsio.initialize(api_key='your api key')Create accounts:alice=absio.user.create('password','reminder','passphrase')bob=absio.user.create('password','reminder','passphrase')Log in with an account:absio.login(alice.id,'password','passphrase')Create and share an Absio Secured Container:container=absio.container.create(header={'some sensitive metadata':None},content=open('/some/sensitive/data.bin','rb').read(),access=[bob.id,alice.id],)Securely access this container from another system:absio.login(bob.id,'password','passphrase')# Access the container with the container ID returned during creation, or a Container Event.container=absio.container.get('container_id')
absl
Abseil Python.
absl-extra
ABSL-ExtraA collection of utils I commonly use for running my experiments. It will:Notify on execution start, finish or failed.By default, Notifier will just log those out tostdout.I prefer receiving those in Slack, though (see example below).Log parsed CLI flags fromabsl.flags.FLAGSand config values fromconfig_file:get_config()Select registered task to run based on --task= CLI argument.Minimal exampleimportosfromabslimportloggingimporttensorflowastffromabsl_extraimporttf_utils,tasks,[email protected]_task(notifier=notifier.SlackNotifier(slack_token=os.environ["SLACK_BOT_TOKEN"],channel_id=os.environ["CHANNEL_ID"]))@tf_utils.requires_gpudefmain()->None:iftf_utils.supports_mixed_precision():tf.keras.mixed_precision.set_global_policy("mixed_float16")withtf_utils.make_gpu_strategy().scope():logging.info("Doing some heavy lifting...")if__name__=="__main__":tasks.run()flax_utils.pyCommon utilities used for training flax models, which I got tired of copy-pasting in every project.
abslib
No description available on PyPI.
absl-py
Abseil Python Common LibrariesThis repository is a collection of Python library code for building Python applications. The code is collected from Google's own Python code base, and has been extensively tested and used in production.FeaturesSimple application startupDistributed commandline flags systemCustom logging module with additional featuresTesting utilitiesGetting StartedInstallationTo install the package, simply run:pipinstallabsl-pyOr install from source:pythonsetup.pyinstallRunning TestsTo run Abseil tests, you can clone the git repo and runbazel:gitclonehttps://github.com/abseil/abseil-py.gitcdabseil-py bazeltestabsl/...Example CodePlease refer tosmoke_tests/sample_app.pyas an example to get started.DocumentationSee theAbseil Python Developer Guide.Future ReleasesThe current repository includes an initial set of libraries for early adoption. More components and interoperability with Abseil C++ Common Libraries will come in future releases.LicenseThe Abseil Python library is licensed under the terms of the Apache license. SeeLICENSEfor more information.
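The entry only points at smoke_tests/sample_app.py for example code; for readers without the repository at hand, a minimal sketch in the same spirit (the flag names and messages below are made up, only the absl.app, absl.flags and absl.logging APIs themselves come from the library) could look like:

from absl import app, flags, logging

FLAGS = flags.FLAGS
# Hypothetical example flags; any names and defaults would do.
flags.DEFINE_string("name", "world", "Who to greet.")
flags.DEFINE_integer("times", 1, "How many greetings to print.", lower_bound=1)

def main(argv):
    del argv  # unused; absl passes the remaining command-line args here
    for _ in range(FLAGS.times):
        logging.info("Hello, %s!", FLAGS.name)

if __name__ == "__main__":
    app.run(main)

Running it as, e.g., python hello.py --name=Abseil --times=2 would parse the flags and emit two log lines through absl's logging module.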
abso
No description available on PyPI.
absolang
A sentiment analysis library for detecting absolutist language.QuickstartInstallation:$ pip install absolang $ python -m spacy download en_core_web_smDetermining absolutist index for text:>>> from absolang import absolutist, absolutist_index >>> absolutist_index("The bigger dog is running.") 0.0 >>> absolutist("The bigger dog is running.") False >>> absolutist_index("He was completely bowled over.") 0.2 >>> absolutist("He was completely bowled over.") TrueAlgorithmParse text into tokens using Spacy’s en_core_web_sm language model.Count the number of word tokens (a token is considered a word if it consists solely of characters from the alphabet).Count the number of absolutist word tokens (a token is considered absolutist if its stem word is in the dictionary of absolutist words and it is not preceded by a negation, modifier or interjection).The absolutist index is the number of absolutist words divided by the total number of words.Text is considered absolutist if the index is greater than 1.1 percent.CaveatsThe frequency of absolutist words in control texts (ones written by people presumed not to suffer from anxiety or depression more than the average person) is about 1%, so one needs a few hundred words of texts before results start becoming meaningful.ReferencesIn an Absolute State: Elevated Use of Absolutist Words Is a Marker Specific to Anxiety, Depression, and Suicidal IdeationFCA Occasional Paper No. 8: Consumer Vulnerability
absolufy-imports
absolufy-importsA tool and pre-commit hook to automatically convert relative imports to absolute.Installation$pipinstallabsolufy-importsUsage as a pre-commit hook (recommended)Seepre-commitfor instructionsSample.pre-commit-config.yaml:-repo:https://github.com/MarcoGorelli/absolufy-importsrev:v0.3.0hooks:-id:absolufy-importsCommand-line example$absolufy-importsmypackage/myfile.py- from . import __version__+ from mypackage import __version__ConfigurationApplication directoriesIf your package follows the popular./srclayout, you can pass your application directories via--application-directories, e.g.$absolufy-importssrc/mypackage/myfile.py--application-directoriessrc- from . import __version__+ from mypackage import __version__Multiple application directories should be colon-separated, e.g.--application-directories .:src. This is the same as inreorder-python-imports.Only use relative importsUse the--neverflag, e.g.$absolufy-importsmypackage/myfile.py--never- from mypackage import __version__+ from . import __version__
absolute
UNKNOWN
absolute32
UNKNOWN
absolute-control
Absolute ControlIntroductionA library for assisting with control of computer processes using python.Install via pip through PyPipipinstallabsolute-controlUsageGet all processes running on the systemfromabsolute_controlimportget_all_processesprocesses=get_all_processes()You may also filter by pid, or by name.fromabsolute_controlimportget_process_by_id,get_process_by_namepid_processes=get_process_by_id(12345)name_processes=get_process_by_name('chrome')Kill all processes on the system.Note: This will kill all processes on the system, including the ones you have not started provided you have permission to do so.fromabsolute_controlimportkill_all_processeskill_all_processes()You may kill processes by name or pid.fromabsolute_controlimportkill_process_by_id,kill_processes_by_namekill_process_by_id(12345)kill_processes_by_name('chrome')Start a new process.fromabsolute_controlimportopen_process_using_commandprocess=open_process_using_command('chrome')TestingTo test the library, run the following commands:cdsrc pythonrun_tests.pyLicenseThis project is licensed under the MIT license.ContributionThis project is open source. You can contribute by making a pull request or by sending an email toJohnny Irvin
absolute-import
absolute-import

About
Package providing a one-line function to set up a project so that absolute imports work both after setup (installation) and from a plain download. Additionally, provides namespaces support.

Usage

Absolute import
In the root __init__.py:
from absolute_import import absolute_import
absolute_import(file=__file__)

In the root __main__.py or root directly executable python files:
from absolute_import import absolute_import
if __name__ == "__main__":
    absolute_import(file=__file__)

Absolute import and namespace
In the root __init__.py:
from absolute_import import absolute_import
absolute_import(file=__file__, name=__name__, path=__path__)

In the root __main__.py or root directly executable python files:
from absolute_import import absolute_import
if __name__ == "__main__":
    absolute_import(file=__file__, name=__name__, path=__path__)
absolutely
absolutelyA/B testing complete experimentation arena in Python
absolutely-nothing
No description available on PyPI.
absolutlynothing
jhbjhbghujhbgyuhgvhyhbvgyujhbvgyuujhbghyjnbvgyjhgyujhgyujhgtyuiujhyikjhuy78uhgyt7yt67ygftygtyuygtyyug
absolutlyuseless
Absolutly uselessAn absolutly useless package which your lonely or curious self decided to install
absolutly-useless
Failed to fetch description. HTTP Status Code: 404
absolutly-useless-thinkerdesigns
Failed to fetch description. HTTP Status Code: 404
absorb
No description available on PyPI.
absorbing_centrality
This is an implementation of theabsorbing random-walk centralitymeasure for nodes in graphs. For the definition of the measure, as well as a study of the related optimization problem and algorithmic techniques, please see the pre-print publication onarXiv. A short version of this paper will appear in theICDM 2015.To cite this work, please useMavroforakis, Charalampos, Michael Mathioudakis, and Aristides Gionis. "Absorbing random-walk centrality: Theory and algorithms" Data Mining (ICDM), 2015 IEEE International Conference on. IEEE, 2015.InstallationYou can install theabsorbing_centralitypackage by executing the following command in a terminal.pip install git+https://github.com/harrymvr/absorbing-centrality#Egg=absorbing_centralityDocumentationFor instructions on how to use the package, consultits documentation.DevelopmentTo run all the tests for the code, you will needtox– check its webpage for instructions on how to install it.Oncetoxis installed, use your terminal to enter the directory with the local copy of the code (here it’s named ‘absorbing-centrality’) and simply type the following command.absorbing-centrality $ toxIf everything goes well, you’ll receive a congratulatory message.Note that the code is distributed under the Open Source Initiative (ISC) license. For the exact terms of distribution, see theLICENSE.Copyright (c) 2015, absorbing-centrality contributors, Charalampos Mavroforakis <[email protected]>, Michael Mathioudakis <[email protected]>, Aristides Gionis <[email protected]>Changelog0.1.0 (2015-08-31)Working version of the package.
absotone-melz
# absotone-melzA Machine Learning Library for Everyone!Change Log0.0.1 (Oct 3, 2021) 0.0.2 (Oct 3, 2021) 0.0.3 (Oct 3, 2021) 0.0.4 (Oct 3, 2021)
absp
welcome to my module
abspath
abspathis a command line tool that prints the absolute paths of all given files. File names can be piped viaSTDINor given as arguments.Usageabspath file1.txt path/to/file2.pdf abspath Desktop/* find . -name *.pdf | abspathInstallationInstallation from PyPIsudo pip install abspathInstallation from the repositorygit clone https://github.com/arne-cl/abspath.git cd abspath sudo python setup.py installLicense3-clause BSD.
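The core behaviour is easy to picture; a rough sketch (not the package's actual source) that prints the absolute path of every name passed as an argument or piped on STDIN:

import os
import sys

# take names from the argument list, or fall back to reading them from STDIN
names = sys.argv[1:] or (line.strip() for line in sys.stdin)
for name in names:
    if name:
        print(os.path.abspath(name))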
abspath-notfresh
IntroThis is a command tool like linux commandpwdbut stronger.It's really efficient.When you want to copy an absolute path of some file in your shell, you have to use pwd and then append the file name, and select it with your mouse or usecontrol + c, but now, just use abspathfilenameand it will copy the absolute path to your clipboard.This just saves you 3 seconds at least per operation.Requirementsjust require python3.6+ and a pip packagepyperclip.InstallIn a python3 environment, usepip install abspath-notfreshto install.UsageYou can use the commandabspathwithout param to copy the current directory path orabspath file_or_dir_in_this_directoryto copy the full path to your clipboard.Just so easy and enjoy it!Maintaineremail:[email protected]: github.com/notfresh
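A rough sketch of the idea, using the pyperclip dependency mentioned above (illustrative only, not the package's actual implementation):

import os
import sys
import pyperclip

# resolve the given name (or the current directory) to an absolute path
target = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()
path = os.path.abspath(target)
pyperclip.copy(path)          # put the absolute path on the clipboard
print(f"copied: {path}")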
absplit
ABSplitSplit your data into matching A/B/n groupsTable of ContentsAbout The ProjectCalculationGetting StartedInstallationTutorialsDo it yourselfUsageAPI ReferenceContributingLicenseContactAbout the projectABSplit is a python package that uses a genetic algorithm to generate as equal as possible A/B, A/B/C, or A/B/n test splits.The project aims to provide a convenient and efficient way for splitting population data into distinct groups (ABSplit), as well as and finding matching samples that closely resemble a given original sample (Match).Whether you have static population data or time series data, this Python package simplifies the process and allows you to analyze and manipulate your population data.This covers the following use cases:ABSplit class: Splitting an entire population into n groups by given proportionsMatch class: Finding a matching group in a population for a given sampleCalculationABSplit standardises the population data (so each metric is weighted as equally as possible), then pivots it into a three-dimensional array, by metrics, individuals, and dates.The selection from the genetic algorithm, along with its inverse, is applied across this array with broadcasting to compute the dot products between the selection and the population data.As a result, aggregated metrics for each group are calculated. The Mean Squared Error is calculated for each metric within the groups and then summed for each metric. The objective of the cost function is to minimize the overall MSE between these two groups, ensuring the metrics of both groups track each other as similarly across time as possible.(back to top)Getting StartedUse the package managerpipto install ABSplit and it's prerequisites.ABSplit requirespygad==3.0.1Installationpipinstallabsplit(back to top)TutorialsPlease seethis colabfor a range of examples on how to use ABSplit and MatchDo it yourselfSeethis colabto learn how ABSplit works under the hood, and how to build your own group splitting tool usingPyGAD,(back to top)UsagefromabsplitimportABSplitimportpandasaspdimportdatetimeimportnumpyasnp# Synthetic datadata_dct={'date':[datetime.date(2030,4,1)+datetime.timedelta(days=x)forxinrange(3)]*5,'country':['UK']*15,'region':[itemforsublistin[[x]*6forxin['z','y']]foriteminsublist]+['x']*3,'city':[itemforsublistin[[x]*3forxin['a','b','c','d','e']]foriteminsublist],'metric1':np.arange(0,15,1),'metric2':np.arange(0,150,10)}df=pd.DataFrame(data_dct)# Identify which columns are metrics, which is the time period, and what to split onkwargs={'metrics':['metric1','metric2'],'date_col':'date','splitting':'city'}# Initialiseab=ABSplit(df=df,split=[.5,.5],# Split into 2 groups of equal size**kwargs,)# Generate splitab.run()# Visualise generation fitnessab.fitness()# Visualise dataab.visualise()# Extract bin splitsdf=ab.results# Extract data aggregated by binsdf_agg=ab.aggregations# Extract summary statisticsdf_dist=ab.distributions# Population counts between groupsdf_rmse=ab.rmse# RMSE between groups for each metricdf_mape=ab.mape# MAPE between groups for each metricdf_totals=ab.totals# Total sum of each metric for each group(back to top)API ReferenceAbsplitABSplit(df, metrics, splitting, date_col=None, ga_params={}, metric_weights={}, splits=[0.5, 0.5], size_penalty=0)Splits population into n groups. 
Mutually exclusive, completely exhaustiveArguments:df(pd.DataFrame): Dataframe of population to be splitmetrics(str, list): Name of, or list of names of, metric columns in DataFrame to be considered in splitsplitting(str): Name of column that represents individuals in the population that is getting split. For example, if you wanted to split a dataframe of US counties, this would be the county name columndate_col(str, optional): Name of column that represents time periods, if applicable. If left empty, it will perform a static split, i.e. not across timeseries, (defaultNone)ga_params(dict, optional): Parameters for the genetic algorithmpygad.GAmodule parameters, seeherefor arguments you can pass (default:{})splits(list, optional): How many groups to split into, and relative size of the groups (default:[0.5, 0.5], 2 groups of equal size)size_penalty(float, optional): Penalty weighting for differences in the population count between groups (default:0)sum_penalty(float, optional): Penalty weighting for the sum of metrics over time. If this is greater than zero, it will add a penalty to the cost function that will try and make the sum of each metric the same for each group (default:0)cutoff_date(str, optional): Cutoff date between fitting and validation data. For example, if you have data between 2023-01-01 and 2023-03-01, and the cutoff date is 2023-02-01, the algorithm will only perform the fit on data between 2023-01-01 and 2023-02-01. IfNone, it will fit on all available data. If cutoff date is provided, RMSE scores (gotten by using theab.rmseattribute) will only be for validation period (i.e., from 2023-02-01 to end of timeseries)missing_dates(str, optional): How to deal with missing dates in time series data, options:['drop_dates', 'drop_population', '0', 'median'](default:median)metric_weights(dict, optional): Weights for each metric in the data. If you want the splitting to focus on one metrics more than the other, you can prioritise this here (default:{})MatchMatch(population, sample, metrics, splitting, date_col=None, ga_params={}, metric_weights={})Takes DataFramesampleand finds a comparable group inpopulation.Arguments:population(pd.DataFrame): Population to search for comparable group (Must exclude sample data)sample(pd.DataFrame): Sample we are looking to find a match for.metrics(str, list): Name of, or list of names of, metric columns in DataFramesplitting(str): Name of column that represents individuals in the population that is getting splitdate_col(str, optional): Name of column that represents time periods, if applicable. If left empty, it will perform a static split, i.e. not across timeseries, (defaultNone)ga_params(dict, optional): Parameters for the genetic algorithmpygad.GAmodule parameters, seeherefor arguments you can pass (default:{})splits(list, optional): How many groups to split into, and relative size of the groups (default:[0.5, 0.5], 2 groups of equal size)metric_weights(dict, optional): Weights for each metric in the data. If you want the splitting to focus on one metrics more than the other, you can prioritise this here (default:{})(back to top)ContributingI welcome contributions to ABSplit! For major changes, please open an issue first to discuss what you would like to change.Please make sure to update tests as appropriate.(back to top)LicenseMIT(back to top)
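Circling back to ABSplit's Calculation section above, here is a minimal numpy sketch of the group-balance cost it describes, in which each candidate group's metrics are aggregated over time and the mean squared error between the two aggregates is summed per metric (illustrative only; the real cost also adds the size and sum penalties):

import numpy as np

def group_mse(data: np.ndarray, selection: np.ndarray) -> float:
    # data: standardised array of shape (individuals, dates, metrics)
    # selection: 0/1 vector assigning each individual to group A (1) or group B (0)
    group_a = np.einsum("idm,i->dm", data, selection)
    group_b = np.einsum("idm,i->dm", data, 1 - selection)
    # MSE over dates for each metric, then summed across metrics
    return float(((group_a - group_b) ** 2).mean(axis=0).sum())

rng = np.random.default_rng(0)
data = rng.normal(size=(10, 5, 2))          # 10 individuals, 5 dates, 2 metrics
print(group_mse(data, rng.integers(0, 2, size=10)))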
absplots
Matplotlibsubplots with absolute margins in mm or inches.Installation:pip install absplotsExample usage:import absplots as apl # init 150x100 mm figure with 5 mm margins between and around subplots fig, grid = apl.subplots_mm(figsize=(150, 100), nrows=2, ncols=2, gridspec_kw=dict(left=5, right=5, wspace=5, bottom=5, top=5, hspace=5))
absp-socket
server using :import socket-client-msgserver = socket-client-msg.connect(ip,port) server.server()client using :import socket-client-msgserver = socket-client-msg.connect(ip,port) server.client()
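The usage snippet above is not valid Python as written (hyphens cannot appear in module names); here is a standard-library sketch of the same send-and-receive idea, not this package's actual API:

import socket

def run_server(ip: str, port: int) -> None:
    # accept one connection and print the message it sends
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((ip, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            print(conn.recv(1024).decode())

def run_client(ip: str, port: int, msg: str) -> None:
    # connect to the server and send a single message
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((ip, port))
        cli.sendall(msg.encode())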
abspy
Introductionabspyis a Python tool for 3Dadaptive binary space partitioningand beyond: an ambient 3D space is adaptively partitioned to form a linear cell complex with planar primitives, where an adjacency graph is dynamically obtained. The tool is designed primarily to support compact surface reconstruction and other applications as well.Key featuresManipulation of planar primitives from point cloud or reference meshLinear cell complex creation with adaptive binary space partitioning (a-BSP)Dynamic BSP-tree (NetworkXgraph) updated locally upon primitive insertionSupport of polygonal surface reconstruction with graph cutCompatible data structure withEasy3Don point cloud, primitive, mesh and cell complexRobust spatial operations underpinned by the rational ring fromSageMath's exact kernelInstallationAll-in-one installationCreate a conda environment with the latestabspyrelease and all its dependencies installed:gitclonehttps://github.com/chenzhaiyu/abspy&&cdabspy condaenvcreate-fenvironment.yml&&condaactivateabspyManual installationStill easy! Create a conda environment and enter it:condacreate--nameabspypython=3.10&&condaactivateabspyInstall the dependencies:condainstall-cconda-forgenetworkxnumpytqdmscikit-learnmatplotlibcolorlogscipytrimeshrtreepygletsage=10.0Alternatively, you can usemambafor faster package parsing installation:condainstallmamba-cconda-forge mambainstall-cconda-forgenetworkxnumpytqdmscikit-learnmatplotlibcolorlogscipytrimeshrtreepygletsage=10.0Preferably, the latestabspyrelease can be found and installed viaPyPI:pipinstallabspyOtherwise, you can install the latest version locally:gitclonehttps://github.com/chenzhaiyu/abspy&&cdabspy pipinstall.Quick startExample 1 - Reconstruction from point cloudThe example loads a point cloud toVertexGroup(.vg), partitions ambient space into a cell complex, creats the adjacency graph, and extracts the object's outer surface.fromabspyimportVertexGroup,AdjacencyGraph,CellComplex# load a point cloud in VertexGroupvertex_group=VertexGroup(filepath='points.vg')# normalise point cloudvertex_group.normalise_to_centroid_and_scale()# additional planes to append (e.g., bounding planes)additional_planes=[[0,0,1,-1],[1,2,3,-4]]# initialise cell complexcell_complex=CellComplex(vertex_group.planes,vertex_group.bounds,vertex_group.obbs,vertex_group.points_grouped,build_graph=True,additional_planes=additional_planes)# refine planar primitivescell_complex.refine_planes()# prioritise certain planes (e.g., vertical ones)cell_complex.prioritise_planes(prioritise_verticals=True)# construct cell complexcell_complex.construct()# print info about cell complexcell_complex.print_info()# build adjacency graph from cell complexadjacency_graph=AdjacencyGraph(cell_complex.graph)# assign weights (e.g., occupancy by neural network prediction) to graphadjacency_graph.assign_weights_to_n_links(cell_complex.cells,attribute='area_overlap',factor=0.001,cache_interfaces=True)adjacency_graph.assign_weights_to_st_links(...)# perform graph cut to extract surface_,_=adjacency_graph.cut()# save surface model to an OBJ fileadjacency_graph.save_surface_obj('surface.obj',engine='rendering')Example 2 - Convex decomposition from meshThe example loads a mesh toVertexGroupReference, partitions ambient space into a cell complex, identify cells inside reference mesh, and visualize the cells.fromabspyimportVertexGroupReferencevertex_group_reference=VertexGroupReference(filepath='mesh.obj')# initialise cell 
complexcell_complex=CellComplex(vertex_group_reference.planes,vertex_group_reference.bounds,vertex_group_reference.obbs,build_graph=True)# construct cell complexcell_complex.construct()# cells inside reference meshcells_in_mesh=cell_complex.cells_in_mesh('mesh.obj',engine='distance')# save cell complex filecell_complex.save('complex.cc')# visualise the inside cellsiflen(cells_in_mesh):cell_complex.visualise(indices_cells=cells_in_mesh)Please find the usage ofabspyatAPI reference. For the data structure of a.vg/.bvgfile, please refer toVertexGroup.FAQHow can I installabspyon Windows?For Windows users, you may need to buildSageMath from sourceor install all other dependencies into apre-built SageMath environment. Otherwise, virtualization withdockermay come to the rescue.How can I useabspyfor surface reconstruction?As demonstrated inExample 1, the surface can be addressed by graph cut — in between adjacent cells where one beinginsideand the other beingoutside— exactly where the cut is performed. For more information, please refer toPoints2Polywhich wrapsabspyfor building surface reconstruction.LicenseMITCitationIf you useabspyin a scientific work, please consider citing the paper:@article{chen2022points2poly,title={Reconstructing compact building models from point clouds using deep implicit fields},journal={ISPRS Journal of Photogrammetry and Remote Sensing},volume={194},pages={58-73},year={2022},issn={0924-2716},doi={https://doi.org/10.1016/j.isprsjprs.2022.09.017},url={https://www.sciencedirect.com/science/article/pii/S0924271622002611},author={Zhaiyu Chen and Hugo Ledoux and Seyran Khademi and Liangliang Nan}}
absract
Failed to fetch description. HTTP Status Code: 404
absresgetter
absresgetterGet absolute resource path of exterior packageSetuppython -m pip install absresgetterDescriptionThere is only one method.getabsres(res: str) -> strThis package finds the module in the stack that includes the relative resource path like 'ico/sample.png'.You can get the absolute path like 'C:/.../sample_exists_directory/ico/sample.png'.Exampleprint(getabsres('ico/dark-notepad.svg'))# C:/.../ico_dark_notepad_exists_directory/ico/dark-notepad.svg
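A rough sketch of the stack-inspection idea described above (illustrative only, not the package's actual code): walk the call stack and return the first caller directory that actually contains the relative resource.

import inspect
import os

def getabsres_sketch(res: str) -> str:
    # look at each calling module, newest first, and test whether it owns the resource
    for frame in inspect.stack()[1:]:
        base = os.path.dirname(os.path.abspath(frame.filename))
        candidate = os.path.join(base, res)
        if os.path.exists(candidate):
            return candidate.replace(os.sep, "/")
    raise FileNotFoundError(res)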
abssmt
Brief DescriptionThis is an introductory level project that teaches RubyLearning’s “An Introduction to Python” course students how to upload their modules to the Python Package Index.Usageimport abssmtprint(abssmt.abssmt(-20))
absstream
# AbsStreamAbsStream is an abstract stream for string and list and tuple.# Install$ pip install absstream# Usagefrom absstream.stream import Streamstrm = Stream(‘text’)while not strm.eof():c = strm.get() print(c)
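A minimal sketch of the abstract-stream idea (illustrative only, not the package's code): the same eof()/get() interface works for a string, a list, or a tuple.

class SketchStream:
    def __init__(self, data):
        self._data = data      # any indexable sequence: str, list, or tuple
        self._pos = 0

    def eof(self) -> bool:
        return self._pos >= len(self._data)

    def get(self):
        item = self._data[self._pos]
        self._pos += 1
        return item

strm = SketchStream("text")
while not strm.eof():
    print(strm.get())          # prints t, e, x, t, one per line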
abst
Auto BastionManage your bastion sessions automatically without the pain of creating them by clicking and copy pasting commandsSupportedOSNo specific requirements here, whatever runs PythonCloud ProvidersOracle CloudRequirementsPython3.7+Configuration of OCI SDKAdd key in your Oracle cloud profileGenerate new keyCopy configuration file generated in your profile to~/.oci/configReplace path of your*.pemkey on line with#TODOInstallingMacOSOn most MacOS machines there ispip3instead ofpip, use pip3 for installInstall and update usingpip:pipinstallabst or pip3installabstHow to set upUseabst config fill {context}to fill your credentials for usage, you can find all the credentials on cloud provider site, leave context empty if you want to fill defaultUseabst generate {context}to generate default config for context, leave context empty if you want to generate to defaultUsageBoth commands do automatic reconnect on idle SSH Tunnel terminationabst create forward/managed {context}for automatic Bastion session creation once deleted; it will keep your connection alive till you kill this script; leave context empty if you want to use the defaultabst cleanfor removal of all the saved credentialsabst use {context}for using a different config that you had filled,defaultis the default context increds.jsonUseabst locate {context}to locate your configs, leave context empty if you want to locate defaultParallel executionIf you are more demanding and need to have connection to your SSH Tunnels ready at all times, you can use parallel executed Bastions that currently work for the full-auto forwarded settingChange the local port in the setting to a port that is unique to other configs, and it will be running on all the added ports until you kill theabstcommand; it will automatically remove all generated Bastion sessions by this programabst parallel add {context}will add context from yourcontext folderto the stack that will be executedabst parallel remove {context}will remove context from yourcontext folderfrom the stack that will be executedabst parallel run {context}will run all the stacked contextsabst parallel displaywill display current stacked contextsHelm registry commandsabst helm loginwill log you in with credentials set in config.json; you set these credentials when running this command the first time. Edit with flag --edit; the 1-n number is the index of the credential in the listabst helm push <chart-name>will push to the specified remote branch; if more credentials are present, it will let you pick which one to useKubectl commandsabst cp secret secret_namespace target_namespace source_namespace(optional)this will copy a secret to the target namespace; without providing a source namespace it will use the current one as the sourceDid I make your life less painful?Support my coffee addiction ;)
abstar
abstarVDJ assignment and antibody sequence annotation. Scalable from a single sequence to billions of sequences.Source code:github.com/briney/abstarDocumentation:abstar.readthedocs.orgDownload:pypi.python.org/pypi/abstarDocker:hub.docker.com/r/briney/abstar/installpip install abstaruseTo run abstar on a single FASTA or FASTQ file:abstar -i <input-file> -o <output-directory> -t <temp-directory>To iteratively run abstar on all files in an input directory:abstar -i <input-directory> -o <output-directory> -t <temp-directory>To run abstar using the included test data as input:abstar -o <output-directory> -t <temp-directory> --use-test-dataWhen using the abstar test data, note that although the test data file contains 1,000 sequences, one of the test sequences is not a valid antibody recombination. Only 999 sequences should be processed successfully.When using BaseSpace as the input data source, you can optionally provide all of the required directories:abstar -i <input-directory> -o <output-directory> -t <temp-directory> -bOr you can simply provide a single project directory, and all required directories will be created in the project directory:abstar -p <project_directory> -badditional options-l LOG_LOCATION, --log LOG_LOCATIONChange the log directory location. Default is the parent directory of<output_directory>.-m, --mergeInput directory should contain paired FASTQ (or gzipped FASTQ) files. Paired files will be merged with PANDAseq prior to processing with abstar. Note that when using the BaseSpace option (-b, --basespace), this option is implied.-b, --basespaceDownload a sequencing run from BaseSpace, which is Illumina's cloud storage environment. Since Illumina sequencers produce paired-end reads,--mergeis implied.-u N, --uaid NSequences contain a unique antibody ID (UAID, or molecular barcode) of length N. The uaid will be parsed from the beginning of each input sequence and added to the JSON output. Negative values result in the UAID being parsed from the end of the sequence.-s SPECIES, --species SPECIESSelect the species from which the input sequences are derived. Supported options are 'human', 'mouse', and 'macaque'. Default is 'human'.-c, --clusterRuns abstar in distributed mode on a Celery cluster.-h, --helpPrints detailed information about all runtime options.-D --debugMuch more verbose logging.apiMost core abstar functions are available through a public API, making it easier to run abstar as a component of integrated analysis pipelines. See the abstardocumentationfor more detail about the API.helper scriptsA few helper scripts are included with abstar:batch_mongoimportautomates the import of multiple JSON output files into a MongoDB database.build_abstar_germline_dbcreates abstar germline databases from IMGT-gapped FASTA files of V, D and J gene segments.make_basespace_credfilemakes a credentials file for BaseSpace, which is required if downloading sequences from BaseSpace with abstar. 
Developer credentials are required, and the process for obtaining them is explainedheretestingTo run the test suite, clone or download the repository and runpytest ./from the top-level directory.requirementsPython 3.8+abutilsbiopythoncelerynwalign3pymongopytestscikit-bioAll of the above dependencies can be installed with pip, and will be installed automatically when installing abstar with pip.If you're new to Python, a great way to get started is to install theAnaconda Python distribution, which includes pip as well as a ton of useful scientific Python packages.sequence merging requiresPANDAseqbatch_mongoimport requiresMongoDBBaseSpace downloading requires theBaseSpace Python SDK
abstcal
Abstinence CalculatorA Python package to calculate abstinence results using the timeline followback interview dataInstallationUse the package in PythonInstall the package using the pip tool. If you need instruction on how to install Python, you can find information at thePythonwebsite. The pip tool is the most common Python package management tool, and you can find information about its use instruction at thepypawebsite. The pip tool will be pre-installed if you download Python 3.6+ frompython.orgdirectly.Once your computer has Python and pip installed, you can run the following command in your command line tool, which will install abstcal and its required dependencies, mainly thepandas(for data processing) and thestreamlit(for the web app development).pip install abstcalIf you're not familiar with Python coding, you can run theJupyter Notebookincluded on this page onGoogle Colab, which is an online platform to run your Python code remotely on a server hosted by Google without any cost. With this option, you don't have to worry about installing Python and any Python packages on your local computer.Web App InterfaceIf you don't want to use the non-GUI environment, you can use the package's web app, please go toabstcal Hosted by Streamlit. The web app provides the core functionalities of the package. You can find more detailed instructions on the web page.If you're concerned about data privacy and security associated with using the web app hosted online, you can use the web app hosted locally on your computer. However, it requires the installation of Python and Python packages on your computer. Here's the overall instruction.Install Python 3.6+ on your computer (Official Python Downloads).Install abstcal on your computer (Using the command line tool, runpip install abstcal).Download the entire package's zip file from the GitHub page.Unzip the file to the desired directory on your computer.Locate the web app file namedcalculator_web_app.pyand get its full path.Launch the web app locally (Using your command line tool, runstreamlit run the_full_path_to_the_web_app.Overview of the PackageThis package is developed to score abstinence using the Timeline Followback (TLFB) and visit data in clinical substance use research. It provides functionalities to preprocess the datasets to remove duplicates and outliers. In addition, it can impute missing data using various criteria.It supports the calculation of abstinence of varied definitions, including continuous, point-prevalence, and prolonged using either intent-to-treat (ITT) or responders-only assumption. It can optionally integrate biochemical verification data.Required DatasetsThe Timeline Followback Data (Required)The dataset should have three columns:id,date, andamount. The id column stores the subject ids, each of which should uniquely identify a study subject. The date column stores the dates when daily substance uses are collected. The date column can also use day counters using an anchor date for each subject. The amount column stores substance uses for each day.Using the raw dateiddateamount100002/03/201910100002/04/20198100002/05/201912100002/06/20199100002/07/201910100002/08/20198Using the day counteriddateamount100011010002810003121000491000510100068The Biochemical Measures Dataset (Optional)The dataset should have three columns:id,date, andamount. The id column stores the subject ids, each of which should uniquely identify a study subject. The date column stores the dates when daily substance uses are collected. 
Similar to the TLFB dataset, the biochemical measures dataset can also use day counters for the date column. The amount column stores the biochemical measures that verify substance use status.iddateamount100002/03/20194100002/11/20196100003/04/201910100003/22/20198100003/28/20196100004/15/20195The Visit Data (Required)It needs to be in one of the following two formats.The long format.The dataset should have three columns:id,visit, anddate. The id column stores the subject ids, each of which should uniquely identify a study subject. The visit column stores the visits. The date column stores the dates for the visits.idvisitdate1000002/03/20191000102/10/20191000202/17/20191000303/09/20191000404/07/20191000505/06/2019The wide format.The dataset should have the id column and additional columns with each representing a visit.idv0v1v2v3v4v5100002/03/201902/10/201902/17/201903/09/201904/07/201905/06/2019100102/05/201902/13/201902/20/201903/11/201904/06/201905/09/2019For both formats, the date values can be day counters, just as the TLFB dataset.Supported Abstinence DefinitionsThe following abstinence definitions have both calculated as the intent-to-treat (ITT) or responders-only options. By default, the ITT option is used.Continuous Abstinence: No substance use in the defined time window. Usually, it starts with the target quit date.Point-Prevalence Abstinence: No substance use in the defined time window preceding the assessment time point.Prolonged Abstinence Without Lapses: No substance use after the grace period (usually 2 weeks) until the assessment time point.Prolonged Abstinence With Lapses: Lapses are allowed after the grace period until the assessment time point.Use ExampleOnce you have installed abstcal and prepared your datasets according to the format requirements listed above, you can start to use the tool.1. Import the PackagefromabstcalimportTLFBData,VisitData,AbstinenceCalculator,abstcal_utilsTheabstcal_utilsis an optional module, which provides some utility functions as discussed in the Optional Features section.2. Process the TLFB Data2a. Read the TLFB dataYou can either specify the full path of the TLFB data or just the filename if the dataset is in your current work directory. Supported file formats include comma-separated (.csv), tab-delimited (.txt), and Excel spreadsheets (.xls, .xlsx).Note:If the date column uses day counters, don't forget to setFalseto theuse_raw_dateparameter.# Use the default settingstlfb_data=TLFBData('path_to_tlfb.csv')# Use additional parameterstlfb_data=TLFBData('path_to_tlfb.csv',abst_cutoff=0,included_subjects="all",use_raw_date=True)# abst_cutoff: set the custom abstinence cutoff, default=0# included_subjects: set the list of subject ids to include in the processed data, default using all subjects# use_raw_date: the TLFB dataset uses the raw dates for the date column when True, if the date column uses day counters, set it to False2b. Profile the TLFB dataIn this step, you will see a report of the data summary, such as the number of records, the number of subjects, and any applicable abnormal data records, including duplicates and outliers. In terms of outliers, you can specify the minimal and maximal values for the substance use amounts. 
Those values outside the range are considered outliers and are shown in the summary report.# No outlier identificationtlfb_data.profile_data()# Identify outliers that are outside of the rangetlfb_data.profile_data(0,100)# Use the returned values of the functiontlfb_summary_overall,tlfb_summary_subject,tlfb_hist_plot=tlfb_data.profile_data()# tlfb_summary_overall: the overall summary of the TLFB data# tlfb_summary_subject: the data summary by subject# tlfb_hist_plot: a histogram of the TLFB amount records# to show the histogram, you can use the utility function, which is just a convenience method to use matplotlib to show the imageabstcal_utils.show_figure()2c. Drop data records with any missing valuesThose records with missingid,date, oramountwill be removed. The number of removed records will be reported.tlfb_data.drop_na_records()2d. Check and remove any duplicate recordsDuplicate records are identified based onidanddate. There are different ways to remove duplicates:min,max, ormean, which keep the minimal, maximal, or mean of the duplicate records. You can also have the options to remove all duplicates. You can also simply view the duplicates and handle these duplicates manually.# Check only, no actions for removing duplicatestlfb_data.check_duplicates(None)# Check and remove duplicates by keeping the minimaltlfb_data.check_duplicates("min")# Check and remove duplicates by keeping the maximaltlfb_data.check_duplicates("max")# Check and remove duplicates by keeping the computed mean (all originals will be removed)tlfb_data.check_duplicates("mean")# Check and remove all duplicatestlfb_data.check_duplicates(False)Thecheck_duplicatesfunction will return any duplicate records.2e. Recode outliers (optional)Those values outside the specified range are considered outliers. All these outliers will be removed by default. However, if the users set the drop_outliers argument to be False, the values lower than the minimal will be recoded as the minimal, while the values higher than the maximal will be recoded as the maximal.# Set the minimal and maximal values for outlier detection, by default, the outliers will be droppedtlfb_data.recode_outliers(0,100)# Alternatively, we can recode outliers by replacing them with bounding valuestlfb_data.recode_outliers(0,100,False)Therecode_outliersfunction returns the summary of the identified outliers.2f. Impute the missing TLFB dataTo calculate the ITT abstinence, the TLFB data will be imputed for the missing records. All contiguous missing intervals will be identified. Each of the intervals will be imputed based on the two values, the one before and the one after the interval.You can choose to impute the missing values for the interval using the mean of these two values or interpolate the missing values for the interval using the linear values generated from the two values. Alternatively, you can specify a fixed value, which will be used to impute all missing values.Imputation ModeParametersImputed ValuesUniformuniformQt= (Q0+ Q1) / 2LinearlinearQt= m * (t - t0) + Q0where m is (Q1- Q0) / (t1- t0)Fixeda numeric valueUse the numeric value to fill all missing gapsNote. Q0and Q1represent the substance use amount before (t0) and after (t1) the missing TLFB interval. 
Qtrepresents the interpolated substance use amount at the time t.The following figure shows you some examples of these different imputation modes.# Use the meantlfb_data.impute_data("uniform")# Use the linear interpolationtlfb_data.impute_data("linear")# Use a fixed value, whichever is appropriate to your research questiontlfb_data.impute_data(1)tlfb_data.impute_data(5)# A calling that uses all possible featurestlfb_data.impute_data("linear",last_record_action="ffill",maximum_allowed_gap_days=30,biochemical_data=bio_data,overridden_amount="infer")# last_record_action: how you interpolate TLFB records using each subject's last record, default="ffill", fill forward# maximum_allowed_gap_days: the maximum allowed days for TLFB data imputation# biochemical_data: the biochemical dataset for abstinence verification (details will be provided later)# overridden_amount: with the presence of biochemical data, how false negative TLFB records will be overridden3. Process the Visit Data3a. Read the visit dataSimilar to reading the TLFB data, you can read files in .csv, .txt, .xls, or .xlsx format. It's also supported if your visit dataset is in the univariate format, which means that each subject has only one row of data, and the columns are the visits and their dates.Importantly, it will also detect if any subjects have their visits with the dates that are out of the order. By default, the order is inferred using the numeric or alphabetic order of the visits. These records with incorrect data may result in wrong abstinence calculations.# Read the visit data in the long format (the default option)visit_data=VisitData("file_path.csv")# Read the visit data in the wide formatvisit_data=VisitData("file_path.csv","wide")# Read the visit data and specify the order of the visitvisit_data=VisitData("file_path.csv",expected_ordered_visits=[1,2,3,5,6])Note:The name of this visit dataset is nominal. It does not only refer to actual in-person and telephone visits, it also refers to other important milestones or timepoints (e.g., Target Quit Day) in clinical cessation trials. Thus, the visit dataset should incluse all these visits that you need to calculate abstinence. Relatedly, this package has a pre-processing tool that allows you to create "virtual" visits based on existing visits. You can find the instruction on this feature at the end of this page.If you prefer referring to the visit data as time points or milestones, you can do so by creating the visit dataset as following:# If you prefer using time pointstimepoint_data=TimePointData("file_path.csv")# If you prefer using milestonesmilestone_data=MilestoneData("file_path.csv")Note:If the date column uses the day counters, you'll have to set theuse_raw_datetoFalse, just as processing the TLFB data.# When the dates are day countersvisit_data=VisitData("file_path.csv",expected_ordered_visits=[1,2,3,5,6],use_raw_data=False)3b. Profile the visit dataYou will see a report of the data summary, such as the number of records, the number of subjects, and any applicable abnormal data records, including duplicates and outliers. In terms of outliers, you can specify the minimal and maximal values for the dates. The dates will be inferred from strings. 
Please use the formatmm/dd/yyyy.# No outlier identificationvisit_data.profile_data()# Outlier identificationvisit_data.profile_data("07/01/2000","12/08/2020")# Use the returned values of the functionvisit_summary_overall,visit_summary_subject,visit_hist_plot=visit_data.profile_data()# visit_summary_overall: the overall summary of the TLFB data# visit_summary_subject: the data summary by subject# visit_hist_plot: a histogram of the visit records# to show the histogram, you can use the utility function, which is just a convenience method to use matplotlib to show the imageabstcal_utils.show_figure()3c. Drop data records with any missing valuesThose records with missingid,visit, ordatewill be removed. The number of removed records will be reported.visit_data.drop_na_records()3d. Check and remove any duplicate recordsDuplicate records are identified based onidandvisit. There are different ways to remove duplicates:min,max, ormean, which keep the minimal, maximal, or mean of the duplicate records. The options are the same as how you deal with duplicates in the TLFB data. Calling this function will return the duplicate records.# Check only, no actions for removing duplicatesvisit_data.check_duplicates(None)# Check and remove duplicates by keeping the minimalvisit_data.check_duplicates("min")# Check and remove duplicates by keeping the maximalvisit_data.check_duplicates("max")# Check and remove duplicates by keeping the computed mean (all originals will be removed)visit_data.check_duplicates("mean")# Check and remove all duplicatesvisit_data.check_duplicates(False)3e. Recode outliers (optional)Those values outside the specified range are considered outliers. The syntax and usage is the same as what you deal with the TLFB dataset# Set the minimal and maximal, and outliers will be removed by defaultvisit_data.recode_outliers("07/01/2000","12/08/2020")# Set the minimal and maximal, but keep the outliers by replacing them with bounding valuesvisit_data.recode_outliers("07/01/2000","12/08/2020",False)3f. Impute the missing visit dataTo calculate the ITT abstinence, the visit data will be imputed for the missing records. The program will first find the earliest visit date as the anchor visit, which should be non-missing for all subjects. Then it will calculate the difference in days between the later visits and the anchor visit. Based on these difference values, the following two imputation options are available. The"freq"option will use the most frequent difference value, which is the default option. The"mean"option will use the mean difference value.Imputation ModeParametersInterpolated ValuesFrequentfreqReference visit’s date + The most frequent intervalMeanmeanReference visit’s date + The mean intervalDictionarya dict objectReference visit’s date + The specified days of intervalNote. The reference visit is specified by the user, for which all subjects have valid dates. When it is not specified, the calculator will infer the earliest visit as the anchor visit.The following figure illustrates the different options for imputation. For the sake of a better illustration, the tables use the wide format of the visit data. 
You don't need to transform you visit data, and everything will be handled under the hood for you.# Use the most frequent difference value between the missing visit and the anchor visitvisit_data.impute_data(impute="freq")# Use the mean difference value between the missing visit and the anchor visitvisit_data.impute_data(impute="mean")# Specify which visit should serve as the anchor or reference visitvisit_data.impute_data(anchor_visit=1)4. Calculate Abstinence4a. Create the abstinence calculator using the TLFB and visit dataTo calculate abstinence, you instantiate the calculator by setting the TLFB and visit data. By default, only those who have both TLFB and visit data will be scored.abst_cal=AbstinenceCalculator(tlfb_data,visit_data)4b. Check data availability (optional)You can find out how many subjects have the TLFB data and how many have the visit data.abst_cal.check_data_availability()Thecheck_data_availabilityfunction returns the data availablility summary.4c. Calculate abstinenceFor all the function calls to calculate abstinence, you can request the calculation to be ITT (intent-to-treat) or RO (responders-only). You can optionally specify the calculated abstinence variable names. By default, the abstinence names will be inferred. Another shared argument is whether you want to include the ending date. Notably, each method will generate the abstinence dataset and a dataset logging first lapses that make a subject nonabstinent for a particular abstinence calculation.shared parameterdefault valueimplicationabst_var_names'infer'calculated abstinence variables will have names generated automatically based on inputincluding_endFalsethe time window used for abstinence calculation will not include the end visit datemode'itt'use ITT assumption, if set as 'ro', the responders-only assumption will be usedContinuous abstinenceTo calculate the continuous abstinence, you need to specify the visit when the window starts and the visit when the window ends. To provide greater flexibility, you can specify a series of visits to generate multiple time windows.# Calculate only one windowabst_df,lapse_df=abst_cal.abstinence_cont(2,5)# Calculate two windowsabst_df,lapse_df=abst_cal.abstinence_cont(2,[5,6])# Calculate three windows with abstinence names specifiedabst_df,lapse_df=abst_cal.abstinence_cont(2,[5,6,7],["abst_var1","abst_var2","abst_var3"])Point-prevalence abstinenceTo calculate the point-prevalence abstinence, you need to specify the visits. You'll need to specify the number of days preceding the time points. To provide greater flexibility, you can specify multiple visits and multiple numbers of days.# Calculate only one time point, 7-d point-prevalenceabst_df,lapse_df=abst_cal.abstinence_pp(5,7)# Calculate multiple time points, multiple day conditionsabst_df,lapse_df=abst_cal.abstinence_pp([5,6],[7,14,21,28])Prolonged abstinenceTo calculate the prolonged abstinence, you need to specify the quit visit and the number of days for the grace period (the default length is 14 days). You can calculate abstinence for multiple time points. There are several options regarding how a lapse is defined. 
See below for some examples.# Lapse isn't allowedabst_df,lapse_df=abst_cal.abstinence_prolonged(3,[5,6],False)# Lapse is defined as exceeding a defined amount of substance useabst_df,lapse_df=abst_cal.abstinence_prolonged(3,[5,6],'5 cigs')# Lapse is defined as exceeding a defined number of substance use daysabst_df,lapse_df=abst_cal.abstinence_prolonged(3,[5,6],'3 days')# Lapse is defined as exceeding a defined amount of substance use over a time windowabst_df,lapse_df=abst_cal.abstinence_prolonged(3,[5,6],'5 cigs/7 days')# Lapse is defined as exceeding a defined number of substance use days over a time windowabst_df,lapse_df=abst_cal.abstinence_prolonged(3,[5,6],'3 days/7 days')# Combination of these criteriaabst_df,lapse_df=abst_cal.abstinence_prolonged(3,[5,6],('5 cigs','3 days/7 days'))4d. Responders-only abstinence calculationBy default, the calculation of the above-mentioned abstinence is based on the ITI assumption. To calculate responders-only abstinence, you need to set the mode parameter to "ro" when you call these calculation-related functions.abst_cal.abstinence_pp(5,7,mode="ro")The above function call will calculate visit=5's 7-day point-prevalance abstinence with the assumption of responders-only. Under the hood, the calculator will consider abstinent only if 1) the subject had 7 TLFB data records before v5 2) the subject did not smoke at all in these 7 days. If a subject had less than 7 TLFB data records before v5, he or she is considered a non-responder, and the abstinence outcome will be N/A. If a subject had 7 TLFB data records and smoked any day, he or she is considered non-abstinent.5. Output Datasets5a. The abstinence datasetsTo output the abstinence datasets that you have created from calling the abstinence calculation methods, you can use the following method to create a combined dataset, something like below.iditt_abst_cont_v5_v2itt_abst_cont_v6_v2itt_abst_pp7_v5itt_abst_pp7_v6100011111001101010021111100300111004001010050001# The output data will merge these individual DataFrame objects, and save it to the file that you specify.abst_cal.merge_abst_data([abst_df0,abst_df1,abst_df2],"merged_abstinence_data.csv")# Merge DataFrame objects only, no data will be saved to your computerabst_cal.merge_abst_data([abst_df0,abst_df1,abst_df2])5b. The lapse datasetsTo output the lapse datasets that you have created from calling the abstinence calculation methods, you can use the following method to create a combined dataset, something like below.iddateamountabst_name100002/03/201910itt_abst_cont_v5100103/05/20198itt_abst_cont_v5100204/06/201912itt_abst_cont_v5100002/06/20199itt_abst_cont_v6100104/07/201910itt_abst_cont_v6100205/08/20198itt_abst_cont_v6# The output data will merge these individual DataFrame objects, and save it to the file that you specify.abst_cal.merge_lapse_data([lapse_df0,lapse_df1,lapse_df2],"merged_lapse_data.csv")# Merge DataFrame objects only, no data will be saved to your computerabst_cal.merge_abst_data([abst_df0,abst_df1,abst_df2])Additional FeaturesI. Integration of Biochemical Verification DataIf your study has collected biochemical verification data, such as carbon monoxide for smoking or breath alcohol concentration for alcohol intervention, these biochemical data can be integrated into the TLFB data. 
In this way, non-honest reporting can be identified (e.g., self-reported of no use, but biochemically un-verified), the self-reported value will be overridden, and the updated record will be used in later abstinence calculation.The following code shows you a possible work flow. Please note that the biochemical measures dataset should have the same data structure as you TLFB dataset. In other words, it should have three columns:id,date, andamount. The biochemical data model shares the same data model with the TLFB data, both of which uses the TLFBData class.Note:If day counters are used for the date column, please setuse_raw_datetoTruewhen you create thebiochemical_datavariable below.Ia. Prepare the Biochemical DatasetA key operation to prepare the biochemical dataset is to interpolate extra meaningful records based on the exiting records using theinterpolate_biochemical_datafunction, as shown below.# First read the biochemical verification databiochemical_data=TLFBData("test_co.csv",included_subjects=included_subjects,abst_cutoff=4)biochemical_data.profile_data()# Interpolate biochemical records based on the half-lifebiochemical_data.interpolate_biochemical_data(half_life_in_days=0.5,maximum_days_to_interpolate=1)# half_life_in_days: the half life of the biochemical measure in days# maximum_days_to_interpolate: the maximum number of days to interpolate before the measurement day# Other data cleaning stepsbiochemical_data.drop_na_records()biochemical_data.check_duplicates()Ib. Integrate the Biochemical Dataset with the TLFB dataThe following code shows you how the integration can be performed. Everything else stays the same, except that in theimpute_datamethod, you need tospecify thebiochemical_dataargument.tlfb_data=TLFBData("test_tlfb.csv",included_subjects=included_subjects)tlfb_sample_summary,tlfb_subject_summary,tlfb_hist_plot=tlfb_data.profile_data()tlfb_data.drop_na_records()tlfb_data.check_duplicates()tlfb_data.recode_data()tlfb_data.impute_data(biochemical_data=biochemical_data)II. Calculate Retention RatesYou can also calculate the retention rate with the visit data with a simple function call, as shown below. If a filepath is specified, it will write to a file.# Just show the retention rates resultsvisit_data.get_retention_rates()# Write the retention rates to an external filevisit_data.get_retention_rates('retention_rates.csv')III. Calculate Abstinence RatesYou can calculate the computed abstinence by providing the list of pandas DataFrame objects.# Calculate abstinence by various definitionsabst_pp,lapses_pp=abst_cal.abstinence_pp([9,10],7,including_end=True)abst_pros,lapses_pros=abst_cal.abstinence_prolonged(4,[9,10],'5 cigs')abst_prol,lapses_prol=abst_cal.abstinence_prolonged(4,[9,10],False)# Calculate abstinence rates for eachabst_cal.calculate_abstinence_rates([abst_pp,abst_pros,abst_prol])abst_cal.calculate_abstinence_rates([abst_pp,abst_pros,abst_prol],'abstinence_results.csv')It will create the following DataFrame as the output. If a filepath is specified, it will write to a file.Abstinence NameAbstinence Rateitt_pp7_v90.159091itt_pp7_v100.170455itt_prolonged_5_cigs_v90.159091itt_prolonged_5_cigs_v100.113636itt_prolonged_False_v90.102273itt_prolonged_False_v100.068182Pre-Processing ToolsData Converision Tool (wide to long format)The package is best to work with datasets in the long format. 
If your datasets are in the wide format (one subject per row with columns storing data), you can use the following function.# import the module if you've not done this yetfromabstcalimportabstcal_utilslong_df=abstcal_utils.from_wide_to_long("filepath_to_wide.csv",data_source_type="tlfb",subject_col_name="id")# data_source_type: specify the data source is tlfb or visit, using which the function will use the desired column names after the transformation# subject_col_name: the original name for the subject columnThefrom_wide_to_longfunction will return the DataFrame in the long format with correctly named columns.Date Masking ToolFor privacy concerns, you may want to mask the dates in the datasets. To provide consistent mapping between all related datasets, you need to map TLFB, Visit, and Biochemical (optional) datasets altogether.# Use a particular visit as reference (each subject's date for the visit will be used)abstcal_utils.mask_dates("path_to_tlfb.csv","path_to_bio.csv","path_to_visit.csv",0)# Use a date (mm/dd/yyyy) as reference for all subjectsabstcal_utils.mask_dates("path_to_tlfb.csv","path_to_bio.csv","path_to_visit.csv","12/29/2020")# If you don't have biochemical data, please specify the second parameter as Noneabstcal_utils.mask_dates("path_to_tlfb.csv",None,"path_to_visit.csv",0)Themask_datesfunction returns the masked datasets.Visit Date Creation ToolSometimes, we need to create extra "virtual visit" dates that use existing visits plus a specific number of days' difference. This is possible with thatadd_additional_visit_datesfunction.abstcal_utils.add_additional_visit_dates("path_to_visit.csv",[('TQD','v0',7),('v7','v8',-5)],use_raw_date=True)The above example will read the long-format visit data from the specified path and add two new visit variables. The first one will be named TQD, which is equal to each subject's v0 date plus 7 days, and the other will be named v7, which is each subject's v8 date plus -5 days. Theuse_raw_dateparameter just specifies whether the visit data uses raw dates or day counters.Output DataFrame to FilesMany of these data processing functions produce DataFrame objects as the return value. If you want to save these DataFrame objects to external files on your computer, use thewrite_data_to_pathfunction.abstcal_utils.write_data_to_path(df,"filepath_to_output.csv",index=False)# index: when True, the output speadsheet will keep the index column, while False, it won'tQuestions or CommentsIf you have any questions about this package or would like to contribute to this project, please feel free to leave comments here or send me an email [email protected] License
abstention
No description available on PyPI.
abstochkin
AbStochKin: Agent-based Stochastic KineticsAlternate name: PyStochKin (Particle-based Stochastic Kinetics)AbStochKinis an agent-based (or particle-based) Monte-Carlo simulator of the time evolution of systems composed of species that participate in coupled processes. The population of a species is considered as composed of distinct individuals, termedagents, orparticles. This allows for the specification of the kinetic parameters describing the propensity ofeach agentto participate in a given process.Although the algorithm was originally conceived for simulating biochemical systems, it is applicable to other disciplines where there is a need to model how populations change over time and to study the effects of heterogeneity, or diversity, in the composition of species populations on the dynamics of a system.InstallationTheabstochkinpackage can be installed viapipin an environment with Python 3.10+.$ pip install abstochkinFor an overview of installing packages in Python, see thePython packaging user guide.RequirementsThe package relies only on Python's scientific ecosystem libraries (numpy,scipy,matplotlib,sympy) and the standard library for implementing the core components of the algorithm. These requirements can be easily met in any Python (3.10+) environment.What processes can be modeled?Simple processes (0th, 1st, 2nd order).Processes obeying Michaelis-Menten kinetics (1st order).Processes that are regulated by one or more species through activation or repression (0th, 1st, 2nd order).Processes that are regulatedandobey Michaelis-Menten kinetics (1st order).UsageHere is a simple example of how to run a simulation: consider the process $A \rightarrow B$, the conversion of agents of species $A$ to agents of species $B$. Notice that we represent the process in standard chemical notation, therefore there are 'reactants' and 'products' and each species has a stoichiometric coefficient associated with it (implied to be $1$ if it is not explicitly written). The rate constant for this process is specified to be $k=0.2$ and has units of reciprocal seconds. Here, we assume a homogeneous population; that is, all agents of species $A$ have the same propensity to 'transition' to species $B$. Thus, the value $k=0.2$ applies to all $A$ agents when determining the transition probability within a given time step.We then run an ensemble of simulations by specifying the initial population sizes ($A$: $100$ agents, $B$: $0$ agents) and the simulated time of $10$ seconds. Behind the scenes, default values for unspecified but necessary arguments are used (specifically, the number of simulations that comprise the ensemble, $n=100$, and the duration of the fixed time interval for each step in the simulation, $dt=0.01$ seconds).fromabstochkinimportAbStochKinsim=AbStochKin()sim.add_process_from_str('A -> B',k=0.2)sim.simulate(p0={'A':100,'B':0},t_max=10)When the simulation is completed, the results are presented in graphical form.ConcurrencyThe algorithm performs an ensemble of simulations to obtain the mean time trajectory of all species and statistical measures of the uncertainty thereof. To facilitate the rapid execution of the simulation,multithreadingis enabled by default. This is done becausenumpy, whose core algorithms can bypass the Global Interpreter Lock (GIL), is used extensively during the algorithm's runtime. 
For instance, the simple usage example presented above uses multithreading.When running a series of jobs (each with its own ensemble of simulations) where a parameter is varied (e.g., a parameter sweep),process-based parallelismcan be used. The user does not have to worry about the details of setting up the code for multiprocessing. Instead, they can simply call a method of the base class.fromabstochkinimportAbStochKinsim=AbStochKin()# Define a process that obeys Michaelis-Menten kinetics:sim.add_process_from_str("A -> B",k=0.3,catalyst='E',Km=10)# Vary the initial population size of species A:series_kwargs=[{"p0":{'A':a,'B':0,'E':10},"t_max":10}forainrange(40,51)]sim.simulate_series_in_parallel(series_kwargs)DocumentationSee the documentationhere.A monograph detailing the theoretical underpinnings of theAgent-based Kineticsalgorithm and a multitude of case studies highlighting its use can be foundhere.ContributingWe welcome any contributions to the project in the form of bug reports, feature requests, and pull requests. Feel free to contact the core developer and maintainer at alex dot plaka at alumni dot princeton.edu to introduce yourself and discuss possible ways to contribute.Financial contribution or supportIf you would like to financially contribute to or further support the development of this project, please contact the author.
absTools
absTools - An abstraction of miscellaneous functionality for generic reuse
abstra
✨ Abstra ✨

Abstra is a simple way to build business processes in Python, without engineering overhead or complexity. It's a powerful backoffice engine with:

- drag'n drop workflow builder
- dynamic forms
- serverless endpoints
- script schedulers
- zero-config authentication
- one-click scalable deploy
- cloud managed database
- plug'n play api integrations
- automatic audit logging
- access control
- and much more! ⚡️

🚦 Getting started

This package is compatible with Python >= 3.8. To install, run the following:

```
pip install abstra
```

Run the CLI server in the directory where you'd like to create your Abstra project. This can be any folder:

```
abstra serve ./your-project-directory
```

🧩 Workflow builder for Python

Use Workflows to automate processes that require a mix of manual steps and integrations between systems. A Workflow is made up of Python-coded steps, which are then assembled visually in the editor. All steps share an environment and can share variables and functions.

📝 Scriptable forms

Forms are Python scripts that allow for user interaction. They are the quickest way to build interactive UIs on the web. With a Form, you can collect user input and use Python code to work with that information however you need. Some examples are making calculations with specialized libs, generating documents and graphs, and sending the results to other systems via Requests.

🛟 Useful links

Website | Docs | Cloud | Youtube | Privacy
abstrackr
UNKNOWN
abstra-cli
Abstra CLICommand Line Interface for Abstra CloudGetting StartedInstallationDownload preferably using pipx:pipxinstallabstra-cliOr using pip:pipinstallabstra-cliAuthenticationYou will need to be authenticated to run most commands.Run command below:abstraloginCredentials are stored at.abstra/credentialspath inside current directory.Alternatively you can set theABSTRA_API_TOKENenvironment variable.Alternatively you can runabstra configure <token>if you already have a token.CRUD CommandsThe general structure of the commands follows the pattern below:abstra<command><resource>[<argument>...][--<optional-argument-name><optional-argument-value>...]The available commands are:listaddupdateremoveplaylogsRemote resources can be:formshooksjobsfilesvarspackagessubdomaindashList resourcesList remote resources on your workspace.abstralistRESOURCE{forms,hooks,jobs,files,vars,packages}Examples:abstralistpackages abstralistvars abstralistfiles abstralistforms abstralisthooks abstralistjobsabstralistsubdomain# Saving cloud packages to a requirements.txt fileabstralistpackages>requirements.txt# Saving cloud environment variables to a .env fileabstralistvars>.envAdd resourceAdds remote resources on your workspace.abstraaddRESOURCE[...OPTIONS]The current options for each resource are:forms:--nameor--nor--title: string--path: string--fileor--f: file_path*--codeor--c: string*--enabled: boolean--background: image_path or string--main-color: string--start-message: string--error-message: string--end-message: string--start-button-text: string--restart-button-text: string--timeout-message: string--logo-url: string--show-sidebar: boolean--log-messages: boolean--font-color: string--auto-start: boolean--allow-restart: boolean--welcome-title: string--brand-name: string--upsert: boolean* Note: set either file or code, not both.Examples:abstraaddform--name="my_form"-fmain.py--background'#fffeee'abstraaddform--path=test-ftest.py--enabled=False abstraaddform--name="Form Name"--code"from hackerforms import * \n\ndisplay('hello_world')"--background'#fffeee'--main-colorred--start-message'start message'--error-message'error-message'--end-message'end message'--start-button-text'start button text'--show-sidebar--allow-restarthooks--nameor--nor--title: string--path: string--fileor--f: file_path*--codeor--c: string*--enabled: boolean--upsert: boolean* Note: set either file or code, but not both.Examples:abstraaddhook--name="test hook"-fmain.py--upsert abstraaddhook--path=test-ftest.py--enabled=Falsejobs--nameor--nor--title: string--identifieror--idt: string--scheduleor--crontab: string--fileor--f: file_path--codeor--c: string--enabled: boolean--upsert: boolean* Note: set either file or code, but not both.Examples:abstraaddjob--idtnew-job--noenabled--name="Test Job"--upsert abstraaddjob--idtdaily--schedule="00 00 1 * *"--name="Every midnight"filesfile_path[]: list of file or directory paths. Defaults to.Examples:abstraaddfilesfoo.txtbar.log abstraaddfilesfoo/./varsenvironment_variable[]: list of Key=Value env vars-f or --file: file_path (ex. -f .env)Examples:abstraaddvarsENVIRONMENT=productionVERSION=1.0.0 abstraaddvars-f.env abstraaddvars--file.envpackagespackage_name[]: list of packages with optional version (ex. numpy=1.0.1)-f or --file: file_path (ex. --file requirements.txt). 
Defaults torequirements.txtExamples:abstraaddpackagespandasnumpy=1.0.1scipy>=1.0.1 abstraaddpackages-frequirements.txt abstraaddpackages-rrequirements.txt abstraaddpackages--filerequirements.txt abstraaddpackages--requirementrequirements.txtUpdate resourceUpdates remote resources on your workspace.Currently only available for forms, hooks and jobsabstraupdate[IDENTIFIERORPATH][...OPTIONS]The options for each resource are:formsform_path: string (required parameter)--name: string--path: string--file: file_path--code: string--enabled: boolean--background: image_path or string--main-color: string--start-message: string--error-message: string--end-message: string--start-button-text: string--restart-button-text: string--timeout-message: string--logo-url: string--show-sidebar: boolean--log-messages: boolean--font-color: string--auto-start: boolean--allow-restart: boolean--welcome-title: string--brand-name: stringExamples:abstraupdateformnew-onboarding--name="Another name"--allow-restartjobs:identifier: string (required parameter)--nameor--nor--title: string--identifieror--idt: string--scheduleor--crontab: string--fileor--f: file_path--codeor--c: string--enabled: booleanExample:abstraupdatejobdaily--schedule="00 00 5 * *"hookshook_path: string (required parameter)--nameor--nor--title: string--path: string--fileor--f: file_path--codeor--c: string--enabled: booleanExamples:abstraupdatehookstripe-callback--enabledsubdomain--name: string (required parameter)Examples:abstraupdatesubdomainnew-subdomain-nameRemove resourceRemove remote resources from your workspace.abstraremoveRESOURCE[...OPTIONS]Examples:abstraremoveformsales-onboarding abstraremovejobmonthly abstraremovehookstripe-test abstraremovefilesfoo.txtbar.log abstraremovevarsENVIRONMENTVERSION abstraremovepackagespandasnumpyscipyPlay resourceRun the resource on Abstra Cloud.Currently only available for forms.abstraplayRESOURCE[...OPTIONS]Examples:abstraplayformb2b-ingestionDeploy CommandThis command allows you to specify several resources in a JSON file and deploy them in one command (great for CI/CD workflows).The default path isabstra.jsonin the root directory.abstradeploy[--fileor-f]Examples:abstradeploy-fprod.jsonThe file shoud follow a structure similar to what you can pass in each resource add command (using deploy the upsert flag will be added).Example file:{"forms":[{"name":"Main Form","path":"main","file":"forms/main.py"},{"name":"Secondary Form","path":"secondary","code":"forms/secondary.py","enabled":false}],"hooks":[{"name":"Test","path":"test","file":"hooks/test.py"},{"name":"Stripe","path":"stripe","file":"hooks/stripe.py"}],"jobs":[{"name":"Monthly","idt":"month","file":"jobs/month.py","schedule":"00 00 1 * *","enabled":false},{"name":"Weekly","idt":"week","file":"jobs/week.py","schedule":"00 00 * * 1"}],"files":["root.json","files/"],"packages":{"file":"requirements.txt"},"vars":{"file":".env"}}For packages and vars you can also specify manually:{"packages":["pydash","stripe==1.1.0"],"vars":["ABSTRA_CLOUD=test","STRIPE_KEY=foobar"]}Logs CommandThis command allows you to see the logs of one resource in your workspace.abstralogsRESOURCE[...OPTIONS]The options for each resource are:dash--path: string (optional)--limit: integer (optional, default to 20. To disable set to 0)--offset: integer (optional, default to 0)Examples:abstralogsdash--pathmy-dash--limit10--offset0form--id: string (optional)--limit: integer (optional, default to 20. 
To disable set to 0)--offset: integer (optional, default to 0)abstralogsform--limit10--offset0hooksIt lists all the logs in your workspace.--limit: integer (optional, default to 20. To disable set to 0)--offset: integer (optional, default to 0)abstralogshooks--limit10--offset0hook--idor--log_id: string--limit: integer (optional, default to 20. To disable set to 0)--offset: integer (optional, default to 0)abstralogshook--id6a7788c1-7eaf-46a6-93d5-13dfba962e90--limit10--offset0jobsIt lists all the logs in your workspace.--limit: integer (optional, default to 20. To disable set to 0)--offset: integer (optional, default to 0)abstralogsjobs--limit10--offset0jobIt lists all the logs within a specific job or a specific log.--idor--log_id: string--limit: integer (optional, default to 20. To disable set to 0)--offset: integer (optional, default to 0)abstralogsjob--id6a7788c1-7eaf-46a6-93d5-13dfba962e90--limit10--offset0workspaceIt lists the logs of all resources in your workspace.--limit: integer (optional, default to 20. To disable set to 0)--offset: integer (optional, default to 0)abstralogsworkspaceAliasesSome commands have aliases.upload# Alias for `abstra add files` with default argument `.`abstraupload[FILESorDIRECTORIES,default:.]ls# Alias for `abstra list files`abstralsrm# Alias for `abstra remove files`abstrarminstall# Alias for `abstra add packages`abstrainstall[PACKAGES]Ignoring filesYou can ignore files placing a text file named.abstraignoreat the target directory. The file.abstraignoreitself will always be ignored.Example:__pycache__ tests/ *.ipynb
abstract
Abstract

Abstract is a Python library for creating and drawing graphs and taking advantage of graph properties.

Installation

```
pip install abstract
```

Graph Introduction

In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics; specifically, the field of graph theory. [1] A graph data structure consists of a finite (and possibly mutable) set of vertices or nodes or points, together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges, arcs, or lines for an undirected graph and as arrows, directed edges, directed arcs, or directed lines for a directed graph. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references. [1]

Usage

The Graph class allows you to create nodes and edges and visualize the resulting graph. Edges can have a direction, which indicates a parent-child relationship. To construct a new graph, use Graph().

```python
from abstract import Graph

graph = Graph(direction='LR')  # default direction is 'LR'; other options are: 'TB', 'BT', 'RL'
```

add_node(...)

The add_node method creates a node in the graph and returns a Node object. It takes the following arguments:

- name: name of the new node (should be unique); snake case is recommended
- label (optional): it can be any string; if it is missing, the name will be displayed
- value (optional): can be any object
- style (optional): it should be a NodeStyle object and is only used for rendering
- if_node_exists (optional): what to do if a node with this name exists; can be 'warn', 'error', or 'ignore'; default is 'warn'

Let's use the Rock, Paper, Scissors, Lizard, Spock game to show how Graph works. The following list shows the order in which an object in the game beats the object to the right of it in the list and gets beaten by the object to the left of it. Please note that there are only five objects and they are repeated to illustrate all possible pairs.

```python
node_list = ['scissors', 'paper', 'rock', 'lizard', 'Spock', 'scissors', 'lizard', 'paper', 'Spock', 'rock', 'scissors']
```

Now let's create nodes with the same names:

```python
# create a set to avoid duplicates
for node in set(node_list):
    node = graph.add_node(name=node)
graph.display(direction='TB')  # left-right direction is too tall
```

Note: by default, Graph uses a colour theme from the colouration library for roots and uses the directionality of edges to determine the colour of other nodes. In the above example, without any edges, all nodes are roots.

connect(...) (add an edge)

The connect method creates an edge from a start node to an end node. The start and end arguments can be either names of nodes or the Node objects.

```python
for i in range(len(node_list) - 1):
    edge = graph.connect(start=node_list[i], end=node_list[i + 1])
graph.display(direction='LR')  # top-bottom direction is too tall
```

Note: nodes that form a loop are coloured differently (red circles with yellow colour inside).

get_node

To retrieve a node from the graph you can use the get_node method, which returns a Node object.

```python
rock = graph.get_node('rock')
```

display(...)

The display method visualizes the graph and, if a path is provided, saves it to an image file that can be a pdf or png; you can also provide the resolution with the dpi argument. The file format is inferred from the path argument.

```python
# save as a png file and view the file
graph.draw(path='my_graph.png', view=True)
```

Graph(obj=...)

You can create a graph from any object that has a __graph__() method. Examples of such objects are:

- Graph class from this library
- Pensieve class from the pensieve library
- Page class from the internet.wikipedia submodule

```python
from pensieve import Pensieve
from abstract import Graph

pensieve = Pensieve()
pensieve['two'] = 2
pensieve['three'] = 3
pensieve['four'] = lambda two: two * two
pensieve['five'] = 5
pensieve['six'] = lambda two, three: two * three
pensieve['seven'] = 7
pensieve['eight'] = lambda two, four: two * four
pensieve['nine'] = lambda three: three * three
pensieve['ten'] = lambda two, five: two * five
graph = Graph(obj=pensieve, direction='TB')  # or Graph(pensieve)
graph.display()
```

random(...)

The random method creates a random Graph.

```python
g1 = Graph.random(num_nodes=8, connection_probability=0.4, seed=6)
g1
```

Adding Two Graphs: +

You can easily add two graphs using the + operator. The result will have the union of nodes and edges in both graphs.

```python
g2 = Graph.random(num_nodes=7, start_index=3, connection_probability=0.4, seed=41)
g2
g3 = g1 + g2
g3
```

Finding Loops

The node's is_in_loop method helps you find nodes that form a loop; i.e., nodes that have at least one descendant which is also an ancestor.

```python
graph_with_loop = Graph()
for letter in 'abcdef':
    graph_with_loop.add_node(letter)
for start, end in [('a', 'b'), ('b', 'c'), ('c', 'a'), ('c', 'd'), ('d', 'e'), ('e', 'f'), ('f', 'e')]:
    graph_with_loop.connect(start, end)
graph_with_loop

for node in graph_with_loop.nodes:
    if node.is_in_loop_with(other='a') and node.name != 'a':
        print(node.name, 'is in the same loop as a')
    elif node.is_in_loop():
        print(node.name, 'is in a loop')
    else:
        print(node.name, 'is not in a loop')
```

output:

```
a is in a loop
b is in the same loop as a
c is in the same loop as a
d is not in a loop
e is in a loop
f is in a loop
```

Future Features

- Create a graph from: list of dictionaries, dataframe
- Create a new graph by filtering a graph
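The optional add_node arguments are described above but not demonstrated. Here is a small hedged sketch using only the documented parameters (the label and value contents are arbitrary, and NodeStyle's constructor is not documented here, so styling is omitted):

```python
from abstract import Graph

graph = Graph(direction='LR')
# name is required and unique; label is what gets rendered; value can hold any payload
spock = graph.add_node(name='spock', label='Spock', value={'beats': ['scissors', 'rock']})
# if_node_exists controls what happens on a duplicate name: 'warn' (default), 'error', or 'ignore'
spock_again = graph.add_node(name='spock', if_node_exists='ignore')
```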
abstract-ai
Abstract AITable of ContentsAbstract AITable of ContentsOverviewImagesInstallationUsageAbstract AI ModuleGptManager OverviewPurposeMotivationObjectiveExtended OverviewDetailed Components DocumentationGptManagerModelManagerPromptManagerInstructionManagerResponseManagerApiManagerDependenciesDetailed Components DocumentationModelManagerInstructionManagerPromptManagerAdditional InformationContactLicenseabstract_aiModuleabstract_aiis a feature-rich Python module for interacting with the OpenAI GPT-3 and 4 API. It provides an easy way to manage requests and responses with the AI and allows for detailed interaction with the API, giving developers control over the entire process. This module manages data chunking for large files, allowing the user to process large documents in a single query. It is highly customizable, with methods allowing modules to maintain context, repeat queries, adjust token sizes, and even terminate the query loop if necessary.InstallationTo utilize theapi_calls.pymodule, install the necessary dependencies and configure your OpenAI API key:Install the required Python packages:pipinstallabstract_aiSet your OpenAI API key as an environment variable. By default, the module searches for an environment variable namedOPENAI_API_KEYfor API call authentication. Ensure your.envis saved inhome/envy_all,documents/envy_all, within thesource_folder, or specify the.envpath in the GUI settings tab.Abstract Ai Overciew# Dynamic Data Chunking & API Query Handler## OverviewThis repository presents a sophisticated code example engineered to efficiently process extensive datasets via an intelligent chunking algorithm, tailored for API interactions where data size and query constraints are predominant. It assures a smooth operation with minimal user input.## Key Features### Dual Input System-`Request`and`Prompt Data`sections for straightforward data incorporation.-Automatic division of prompt data into manageable chunks based on user-specified parameters.### Intelligent Chunking-Dynamically segments data considering the set percentage for expected completion per API query and the maximum token limit.-Executes iterative queries through a response handler class until data processing completes.### Iterative Query Execution-Handles documents split into multiple chunks (e.g., 14 chunks result in at least 14 API queries), with real-time adaptive query decisions.### Instruction Set-`bot_notation`: allows the module to create notes about the current data chunk to be recieved upon the next query, this is such that they can keep context, and understand why the previous selections were made.-`additional_response`: Allows repeated query execution until a specified condition is met, bypassing token limitations.-`select_chunks`: allows the module to review either the previous or next chunk of data alongside the current or by itself, if needed, the loop will essentially impliment additional_response for this.-`token_size_adjustment`: allows the module to adjust the size of the chunks being sent, this is a neat feature because they do get finicky about this and it can be used in combination with any of the above.-`abort`: Authorizes termination of the query loop to conserve resources.-`suggestions`: Provides a system for leaving future improvement notes.## Autonomy & EfficiencyEmpowers modules with significant autonomy for managing large data volumes efficiently, ensuring the output is streamlined and user post-processing is minimal.## User ConvenienceSimplifies user involvement by automating data chunking and 
handling multiple prompts in a single operation. The modules are also equipped to independently address typical query-related issues.## ConclusionDevelopers seeking to automate and refine data handling for API-centric applications will find this repository a valuable asset. It's crafted to mitigate common data processing challenges and implement proactive solutions for enhanced user and module performance. --- Your journey towards seamless data handling starts here! Dive into the code, and feel free to contribute or suggest improvements.Usageimportos##ModelBuilder.pyfromabstract_aiimportModelManagermodel='gpt-4'model_mgr=ModelManager(input_model_name=model)model_mgr.selected_endpoint#output: https://api.openai.com/v1/chat/completionsmodel_mgr.selected_max_tokens#output: 8192#ApiBuilder.py#you can put in either your openai key directly or utilize an env value# the env uses abstract_security module, it will automatically search the following folders for a .env to matcha that value# - current_directory# - home/env_all# - documents/env_allfromabstract_aiimportApiManagerapi_env='OPENAI_API_KEY'api_mgr=ApiManager(api_env=api_env,content_type=None,header=None,api_key=None)api_mgr.content_type#output application/jsonapi_mgr.header#output: {'Content-Type': 'application/json', 'Authorization': 'Bearer ***private***'}api_mgr.api_key#output: ***private***#InstructionBuilder.pyfromabstract_aiimportInstructionManager#Each of these methods, with their signature features, enhances the usability and functionality of the Abstract_AI system,#ensuring optimized interactions, easy navigation through data chunks, and adept handling of responses.notation=True# allows the module a method of notation that it can utilize to maintain comtext and contenuity from one prompt query to the nextsuggestions=True# encourages suggestions on the users implimentation of the current queryabort=True# allows for the module to put a full stop to the query loop if the goal is unattainable or an anamolous instance occursgenerate_title=True# the module wil generate a title for the response fileadditional_responses=True# allows for the module to delegate the relooping of a prompt interval, generally to form a complete response if token length is insuficcient, or if context is too much or too littleadditional_instruction="please place any iterable data inside of this key value unless otherwise specified"request_chunks=True# allows for the module to add an interval to the query loop to retrieve the previous prompt the previous promptinstruction_mgr=InstructionManager(notation=notation,suggestions=suggestions,abort=abort,generate_title=generate_title,additional_responses=additional_responses,additional_instruction=additional_instruction,request_chunks=self.request_chunks)instruction_mgr.instructions#output:"""your response is expected to be in JSON format with the keys as follows:0) api_response - place response to prompt here1) notation - A useful parameter that allows a module to retain context and continuity of the prompts. These notations can be used to preserve relevant information or context that should be carried over to subsequent prompts.2) suggestions - ': A parameter that allows the module to provide suggestions for improving efficiency in future prompt sequences. 
These suggestions will be reviewed by the user after the entire prompt sequence is fulfilled.3) additional_responses - This parameter, usually set to True when the answer cannot be fully covered within the current token limit, initiates a loop that continues to send the current chunk's prompt until the module returns a False value. This option also enables a module to have access to previous notations4) abort - if you cannot fullfil the request, return this value True; be sure to leave a notation detailing whythis was5) generate_title - A parameter used for title generation of the chat. To maintain continuity, the generated title for a given sequence is shared with subsequent queries.6) request_chunks - you may request that the previous chunk data be prompted again, if selected, the query itterate once more with the previous chunk included in the prompt. return this value as True to impliment this option; leave sufficient notation as to why this was neccisary for the module recieving the next prompt7) additional_instruction - please place any iterable data inside of this key value unless otherwise specifiedbelow is an example of the expected json dictionary response format, with the default inputs:{'api_response': '', 'notation': '', 'suggestions': '', 'additional_responses': False, 'abort': False, 'generate_title': '', 'request_chunks': False, 'additional_instruction': '...'}"""#PromptBuilder.pyfromabstract_aiimportPromptManager#Calculates the token distribution between prompts, completions, and chunks to ensure effective token utilization.completion_percentage=40#allows the user to specify the completion percentage they are seeking for this prompt(s) currently at 40% of the token allotmentrequest="thanks for using abstract_ai the description youre looking for is in the prompt_data"prompt_data="""The code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT').Depending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring eachchunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. This is crucial to maintainthe context and readability within a code snippet.\n\n`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions andclasses from the given source code. 
The extracted functions and classes are then used to chunk source code accordingly.\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system."""chunk_type="Code"#parses the chunks based on your input types, ['HTML','TEXT','CODE']prompt_mgr=PromptManager(instruction_mgr=instruction_mgr,model_mgr=model_mgr,completion_percentage=completion_percentage,prompt_data=prompt_data,request=request,token_dist=None,bot_notation=None,chunk=None,role=None,chunk_type=chunk_type)prompt_mgr.token_dist=[{'completion':{'desired':3276,'available':3076,'used':200},'prompt':{'desired':4915,'available':4035,'used':880},'chunk':{'number':0,'total':1,'length':260,'data':"\nThe code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT').\nDepending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring each\nchunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. This is crucial to maintain\nthe context and readability within a code snippet.\n\n`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions and\nclasses from the given source code. The extracted functions and classes are then used to chunk source code accordingly.\n\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system."}}]completion_percentage=90# completion percentage now set to 90% of the token allotmentrequest="ok now we chunk it up with abstract_ai the description youre looking for is still in the prompt_data"prompt_mgr=PromptManager(instruction_mgr=instruction_mgr,model_mgr=model_mgr,completion_percentage=completion_percentage,prompt_data=prompt_data,request=request)prompt_mgr.token_dist=[{'completion':{'desired':7372,'available':7172,'used':200},'prompt':{'desired':819,'available':7,'used':812},'chunk':{'number':0,'total':2,'length':192,'data':"\n\nThe code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT').Depending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring eachchunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. 
This is crucial to maintainthe context and readability within a code snippet."}},{'completion':{'desired':7372,'available':7172,'used':200},'prompt':{'desired':819,'available':135,'used':684},'chunk':{'number':1,'total':2,'length':64,'data':'`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions andclasses from the given source code. The extracted functions and classes are then used to chunk source code accordingly.\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system.'}}]fromabstract_aiimportResponseManager#The `ResponseManager` class handles the communication process with AI models by managing the sending of queries and storage of responses. It ensures that responses are correctly interpreted, errors are managed, and that responses are saved in a structured way, facilitating easy retrieval and analysis.#It leverages various utilities from the `abstract_utilities` module for processing and organizing the data and interacts closely with the `SaveManager` for persisting responses.response_mgr=ResponseManager(prompt_mgr=prompt_mgr,api_mgr=api_mgr,title="Chunking Strategies in PromptBuilder.py",directory='response_data')response_mgr.initial_query()response_mgr.output=[{'prompt':{'model':'gpt-4','messages':[{'role':'assistant','content':"\n-----------------------------------------------------------------------------\n#instructions#\n\nyour response is expected to be in JSON format with the keys as follows:\n\n0) api_response - place response to prompt here\n1) notation - A useful parameter that allows a module to retain context and continuity of the prompts. These notations can be used to preserve relevant information or context that should be carried over to subsequent prompts.\n2) suggestions - ': A parameter that allows the module to provide suggestions for improving efficiency in future prompt sequences. These suggestions will be reviewed by the user after the entire prompt sequence is fulfilled.\n3) additional_responses - This parameter, usually set to True when the answer cannot be fully covered within the current token limit, initiates a loop that continues to send the current chunk's prompt until the module returns a False value. This option also enables a module to have access to previous notations\n4) abort - if you cannot fullfil the request, return this value True; be sure to leave a notation detailing whythis was\n5) generate_title - A parameter used for title generation of the chat. To maintain continuity, the generated title for a given sequence is shared with subsequent queries.\n6) request_chunks - you may request that the previous chunk data be prompted again, if selected, the query itterate once more with the previous chunk included in the prompt. 
return this value as True to impliment this option; leave sufficient notation as to why this was neccisary for the module recieving the next prompt\n7) additional_instruction - please place any iterable data inside of this key value unless otherwise specified\n\nbelow is an example of the expected json dictionary response format, with the default inputs:\n{'api_response': '', 'notation': '', 'suggestions': '', 'additional_responses': False, 'abort': False, 'generate_title': '', 'request_chunks': False, 'additional_instruction': '...'}\n-----------------------------------------------------------------------------\n#prompt#\n\nok now we chunk it up with abstract_ai the description youre looking for is still in the prompt_data\n-----------------------------------------------------------------------------\n\n-----------------------------------------------------------------------------\n#data chunk#\n\nthis is chunk 0 of 2\n\n\n\nThe code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT').Depending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring eachchunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. This is crucial to maintainthe context and readability within a code snippet.\n-----------------------------------------------------------------------------\n"}],'max_tokens':7172},'response':{'id':'**id**','object':'chat.completion','created':1699054738,'model':'gpt-4-0613','choices':[{'index':0,'message':{'role':'assistant','content':"{'api_response': 'This chunk describes a part of the `PromptBuilder.py` from the `abstract_ai` module. It underlines the significance of chunking methods in the `abstract_ai` operations. The mentioned `chunk_data_by_type` function accepts data, a maximum token limit, and a type of chunk to appropriately divide the data into chunks. The chunking process depends on the specified type, and in case the type is not identified, the data is divided based on line breaks. The `chunk_text_by_tokens` and `chunk_source_code` functions are used when the chunk type is 'TEXT' and 'CODE' respectively. 
The former chunks the data considering the max token limit, while the latter chunks source code as per individual classes and functions, maintaining the context and readability of a code snippet.',\n'notation': 'The chunked data tells about the `abstract_ai` chunking mechanisms, which include `chunk_data_by_type`, `chunk_text_by_tokens` and `chunk_source_code` functions.',\n'suggestions': 'It can be improved by providing specific examples of how each chunking function works.',\n'additional_responses': True,\n'abort': False,\n'generate_title': 'Review of `abstract_ai` Chunking Mechanisms',\n'request_chunks': False,\n'additional_instruction': ''}"},'finish_reason':'stop'}],'usage':{'prompt_tokens':621,'completion_tokens':271,'total_tokens':892}},'title':'Chunking Strategies in PromptBuilder.py'},{'prompt':{'model':'gpt-4','messages':[{'role':'assistant','content':"\n-----------------------------------------------------------------------------\n#instructions#\n\nyour response is expected to be in JSON format with the keys as follows:\n\n0) api_response - place response to prompt here\n1) notation - A useful parameter that allows a module to retain context and continuity of the prompts. These notations can be used to preserve relevant information or context that should be carried over to subsequent prompts.\n2) suggestions - ': A parameter that allows the module to provide suggestions for improving efficiency in future prompt sequences. These suggestions will be reviewed by the user after the entire prompt sequence is fulfilled.\n3) additional_responses - This parameter, usually set to True when the answer cannot be fully covered within the current token limit, initiates a loop that continues to send the current chunk's prompt until the module returns a False value. This option also enables a module to have access to previous notations\n4) abort - if you cannot fullfil the request, return this value True; be sure to leave a notation detailing whythis was\n5) generate_title - A parameter used for title generation of the chat. To maintain continuity, the generated title for a given sequence is shared with subsequent queries.\n6) request_chunks - you may request that the previous chunk data be prompted again, if selected, the query itterate once more with the previous chunk included in the prompt. return this value as True to impliment this option; leave sufficient notation as to why this was neccisary for the module recieving the next prompt\n7) additional_instruction - please place any iterable data inside of this key value unless otherwise specified\n\nbelow is an example of the expected json dictionary response format, with the default inputs:\n{'api_response': '', 'notation': '', 'suggestions': '', 'additional_responses': False, 'abort': False, 'generate_title': '', 'request_chunks': False, 'additional_instruction': '...'}\n-----------------------------------------------------------------------------\n#prompt#\n\nok now we chunk it up with abstract_ai the description youre looking for is still in the prompt_data\n-----------------------------------------------------------------------------\n\n-----------------------------------------------------------------------------\n#data chunk#\n\nthis is chunk 1 of 2\n\n`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions andclasses from the given source code. 
The extracted functions and classes are then used to chunk source code accordingly.\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system.\n-----------------------------------------------------------------------------\n"}],'max_tokens':7172},'response':{'id':'**id**','object':'chat.completion','created':1699054759,'model':'gpt-4-0613','choices':[{'index':0,'message':{'role':'assistant','content':"{'api_response': '`extract_functions_and_classes` is a helper function used within `chunk_source_code`. It extracts all the functions and classes from the given source code. These extracted elements are then used to chunk source code accordingly.', 'notation': 'Explanation about `extract_functions_and_classes` and `chunk_source_code` functions.', 'suggestions': '', 'additional_responses': True, 'abort': False, 'generate_title': '', 'request_chunks': False, 'additional_instruction': ''}"},'finish_reason':'stop'}],'usage':{'prompt_tokens':494,'completion_tokens':99,'total_tokens':593}},'title':'Chunking Strategies in PromptBuilder.py_0'}]#now lets give it the whole things!current_file_full_data=read_from_file(os.path.abspath(__file__))completion_percentage=60# completion percentage now set to 90% of the token allotmentrequest="here is the entire example use case, please review the code and thank you for your time with abstract_ai"prompt_data=current_file_full_datatitle="Chunking Example Review.py"response_mgr=ResponseManager(prompt_mgr=PromptManager(prompt_data=prompt_data,request=request,completion_percentage=completion_percentage,model_mgr=model_mgr,instruction_mgr=instruction_mgr),api_mgr=api_mgr,title=title,directory='response_data')response_mgr.initial_query()output=response_mgr.initial_query()fromabstract_aiimportcollate_response_quecolate_responses=collate_response_que(output)#api_response:"""The provided code is a walkthrough of how the `abstract_ai` module can be used for text processing tasks. It first demonstrates how to initialize the `ModelManager`, `ApiManager`, and `InstructionManager` classes. These classes are responsible for managing model-related configurations, API interactions, and instruction-related configurations, respectively.The `PromptManager` class is initialized to manage prompt-related operations. This class handles the calculation of token distribution for various component parts of the prompt (i.e., the prompt itself, the completion, and the data chunk). In the example, the completion percentage is adjusted from 40% to 90%, demonstrating the flexibility of the `abstract_ai` module in shaping the AI's response. Additionally, the module supports different types of chunks, such as code, text, URL, etc., which adds further flexibility to the text processing.The `ResponseManager` class interacts with AI models to handle the sending of queries and storing of responses. This class ensures responses are processed correctly and saved in a structured way for easy retrieval and analysis.Overall, this use case provides a comprehensive view of how `abstract_ai` can be used for sophisticated and efficient management of AI-based text processing tasks."All of those functions enable `abstract_ai` to manage large volumes of data effectively, ensuring legibility and context preservation in the chunks produced by recognizing their critical role within the platform.Your request to review the provided example was processed successfully. 
The entire operation was completed in a single iteration, consuming about 60% of the allotted tokens."""#suggestions:"Please provide more context or specific requirements if necessary. Additional details about the specific use case or problem being solved could also be helpful for a more nuanced and tailored review."#suggested_title:"Review and Analysis of Chunking Strategies in the `abstract_ai` Framework"#notation:"The provided example included the usage of `PromptBuilder.py` from the `abstract_ai` module and the necessary chunking functions."#abort"false"#additional_responses:"false"ImagesURL grabber component: Allows users to add URL source code or specific portions of the URL source code to the prompt data, this feature is imported from theUrlGrabbercomponent of theAbstract_webtoolsmodule.Prompt Data: The left pane showcases the prompt data. All PromptData are derived from thePromptManagerclass.Response: The right pane contains the response intake. All Responses are derived from theResponseManagerclass.Instructions Display: Showcases all default instructions, which are customizable in the same pane. All instructions are derived from theInstructionManagerclass.Settings Tab: Contains all selectable settings, including available, desired, and used prompt and completion tokens, most of which is derived from thePromptManmagerclass.File Browser Component: Enables users to add the contents of files or specific portions of file content to the prompt data, this feature is imported from theAbstractBrowsercomponent of theAbstract_guimodule.Detailed Components DocumentationTheGptManager.pymodule provides an extensive class management to interact with the GPT-3 model conveniently. This module combines various functionalities into a unified and structured framework. Some of the significant classes it encapsulates are as follows:GptManager: This is the central class that administers the interactions and data flow amongst distinct components.ApiManager: This manages the OpenAI API keys and headers.ModelManager: This takes care of model selection and querying.PromptManager: This takes care of the generation and management of prompts.InstructionManager: This encapsulates the instructions for the GPT model.ResponseManager: This administers the responses received from the model.These classes work collectively to simplify the task of sending queries, interpreting responses, and managing the interactions with the GPT-3 model. The module heavily depends on packages likeabstract_webtools,abstract_gui,abstract_utilities, andabstract_ai_gui_layout.Dependenciesabstract_webtools: Provides web-centric tools.abstract_gui: Houses GUI-related tools and components.abstract_utilities: Contains general-purpose utility functions and classes.abstract_ai_gui_layout: Lays out the AI GUI.#abstract_ai_gui_backend.pyOverviewTo use theabstract_ai_gui_backend.py, first, initialize the GptManager class. Following this step, use the update methods to set or change configurations. Finally, use theget_query()method to query the GPT model and retrieve a response. This chunk of code contains several methods for the abstract_ai_gui_backend module of the Abstract AI system:Class: GptManagerupdate_response_mgr: This method updates the ResponseManager instance used by the module, linking it with the existing instances of PromptManager and ApiManager. The ResponseManager generates AI response to prompts that are sent.get_query: The method is used for making respective calls to get responses. 
If a response is already computed, it stops the existing request (usingThreadManager) and starts a new thread for response retrieval.update_all: This method is used to update all the managers used by the module, to keep their data in sync.get_new_api_call_name: This method generates a new unique name for API call and appends it in the API call list.get_remainder: It returns what's left when 100 is subtracted from the value from a specific key.check_test_bool: This method checks if a test run is initiated. According to the result, it sets different statuses and updates the GUI accordingly.get_new_line: The method simply returns a newline character(s).update_feedback: This method fetches a value from the last response, based on the given key, and updates the GUI accordingly.update_text_with_responses: This method loads the last response and if it contains API response, it updates the GUI with the Title, Request text, and Response. For other feedback, it appends the information to the output.get_bool_and_text_keys: This method simply returns a list of keys formed by the given key_list and sections_list.text_to_key: The static method uses the function from utilities to generate a key name from text.get_dots: This method is used for decorating progress status.update_progress_chunks: This method provides a visual representation of the overall progress with respect to total chunks.check_response_mgr_status: This method checks whether the response manager's query process is finished by checking thequery_doneattribute.submit_query: This method controls the sending and receiving of queries and updating the GUI with the AI response.update_chunk_info: This method updates information related to the chunk based on the progress.adjust_chunk_display: This method modifies the GUI value responsible for displaying the chunk number in GUI based on the navigation number provided.The above chunk contains a sequence of function definitions that manage different parts of the AI interaction process within the abstract_ai module. Here's a concise guide about each method:get_chunk_display_numbers: Retrieves the display number for the current data chunk.determine_chunk_display: Determines if a data chunk should be displayed based on the input event.append_output: Appends new content to a particular key in the output.add_to_chunk: Appends new content to the current data chunk.clear_chunks: Resets the current chunk data.get_url: Retrieves the URL for a script.get_url_manager: Checks if a URL manager exists for a particular URL.test_files: Performs tests on the AI model with a hard-coded query.update_last_response_file: Stores the path of the most recent response file, and updates the related GUI elements.update_response_list_box: Updates the list box displaying response files.aggregate_conversations: Aggregates all conversations from JSON files in a specified directory.initialize_output_display: Initializes the display for the output and sets the data to the first item of the latest output. This code chunk contains four methods for managing output display in the application.get_output_display_numbers: This functions fetches fromwindow_mgrthe value associated with the-RESPONSE_TEXT_NUMBER-key and stores it inresponse_text_number_display. Additionally, it calculatesresponse_text_number_actualby subtracting 1 fromresponse_text_number_display.determine_output_display: This function checks the currenteventand decides whether the output display needs to be adjusted and in which direction (back or forward). 
It checks the conditions and, if valid, calls theadjust_output_displaymethod.adjust_output_display: This function accepts one argumentnumwhich is used to adjustresponse_text_number_actualandresponse_text_number_display. It updates the output display and updates the-RESPONSE_TEXT_NUMBER-key in thewindow_mgrwith the new value ofresponse_text_number_display.update_output: This function acceptsoutput_iterationas an argument and checks if it lies within the valid range. If valid, it assigns the designated output toself.latest_output[output_iteration]and then callsupdate_text_with_responses.This code manages the logic for traversing through the returned responses. It correctly fetches the number of responses, decides whether to go back or forward based on the event, adjusts the display accordingly and updates it. It is important to update the documentation with these details to allow the users to understand the flow of control in the script.ModelManagerThe ModelManager class, as part of theabstract_aimodule, provides functionalities for selecting, querying, and managing information about the available GPT models. The class initializes a list of models and their related information like endpoint and token limits. It also has methods to get model-specific details like endpoints, names, and token limits.Below are the method descriptions:__init__: Initialises the ModelManager instance with information about all available models. It also sets up the default model, endpoints, and maximum tokens.get_all_values: Takes a key as an input parameter and returns all unique values associated with this key in theall_modelslist._get_all_values: An alternate private method toget_all_valuesthat performs the same functionality with a reduced number of lines by using a list comprehension._get_endpoint_by_model: Returns the endpoint associated with the input model name._get_models_by_endpoint: Returns a list of models associated with a given endpoint._get_max_tokens_by_model: Returns the maximum tokens that can be processed by a given model.Note: In the__init__function, depending on the given inputs, the function prioritizes model_name over endpoint in setting the selected model, endpoint, and maximum tokens.PromptBuilder.pyPromptBuilder is a sophisticated module within the Abstract AI's ApiConsole, designed to handle the intricacies of token distribution, chunking of data, and prompt construction necessary for interfacing with language model APIs.OverviewPromptBuilder.py specializes in calculating and managing token allocations for prompt and completion outputs, considering the user's specifications and the constraints of the API's token limits. It ensures that queries are not only well-formed but also optimally structured for the language model to understand and respond effectively. 
The module's responsibilities extend to sizing the current query, evaluating the total prompt data, and segmenting it into processable chunks before final prompt assembly.PromptManagerKey FeaturesToken Distribution:Allocates tokens between prompts and completions based on instruction weight, verbosity, and available token quota.Data Chunking:Separates prompt data into manageable chunks, conforming to the calculated token budget.Prompt Construction:Assembles the full prompt incorporating user instructions, chunk data, and module notation, ready for API interaction.Integration:Serves as a foundational tool for the system, called upon in nearly every significant interaction with Abstract AI.DependenciesPromptBuilder.py relies on thenltkandtiktokenlibraries for accurate tokenization and text encoding, ensuring precise calculations and data handling.UsageTo utilize the PromptBuilder.py, import the module and instantiate the required class. Utilize its methods by adjusting parameters to fit the needs of your query. Here's an example usingcalculate_token_distribution:fromPromptBuilderimportPromptBuilder# Create an instance of the PromptBuilder classprompt_builder=PromptBuilder()# Example usage of calculate_token_distributiontoken_distribution=prompt_builder.calculate_token_distribution(user_query)Integration with Abstract AIPromptBuilder.py is deeply integrated with the Abstract AI suite, often collaborating withApiBuilder.py,ModelBuilder.py, andabstract_ai_gui_backend.pyfor a cohesive and efficient API interaction experience.Methods Overviewget_token_calcs: Evaluates token distribution for each chunk, ensuring balance between prompts and completions.get_token_distributions: Distributes tokens optimally across chunks based on the prompt and completion needs.Helpful Methods in PromptBuilder.pyAmong the functions in this module, the token calculation functionsget_token_calcsandget_token_distributionsare quite crucial. They carefully calculate the tokens used and available for prompts and completion. If, at any point, available prompt tokens fall below zero, they get added to the available completion tokens leading to a balance distribution.Here's an overview of two prominent methods:get_token_calcs: Evaluates individual token calculations for prompt and completion data. On detecting a shortage in available tokens, it redistributes tokens from the completion pool.get_token_distributions: It distributes tokens between the prompt and completion parts of the GPT-3 model query, ensuring a smooth and balanced query and keeping within the maximum token limit for the task.#api_response#{"title":"Chunking Strategies in PromptBuilder.py","prompt_type":"Python Code","prompt":"The code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT'). Depending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring each chunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. 
This is crucial to maintain the context and readability within a code snippet.\n\n`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions and classes from the given source code. The extracted functions and classes are then used to chunk source code accordingly.\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system.","suggested_formatting":[{"code":{"language":"python","content":["def chunk_data_by_type(data, max_tokens, chunk_type=None):","...","def chunk_text_by_tokens(prompt_data, max_tokens):","...","def extract_functions_and_classes(source_code):","...","def chunk_source_code(source_code, max_tokens):"]}}]}#InstructonBuilder.py##OverViewThis module is a segment of the Abstract AI system that manages the creation and modifications of instructions used by the GPT-3 model. It composes a significant component ofabstract_ai_gui_backend.py, working closely with classes like GptManager, ApiManager, ModelManager, PromptManager, and ResponseManager to ensure efficient and structured interactions with the model.Class: InstructionManagerMain Methods:update_response_mgr: Links the existing instances of PromptManager and ApiManager, updating the ResponseManager used by the module.get_query: Controls interrupting ongoing requests and starting a new thread for response retrieval through theThreadManager.update_all: Keeps the model, API, and other managers up to date and synchronized.get_new_api_call_name: Generates unique ID for each API call and maintains them in a list.get_remainder: Works on value subtraction for a particular key.get_new_line: A simple method for returning newline characters.update_feedback: Fetches a key-value pair from the prior response and updates the GUI.update_text_with_responses: Loads the latest response, extracts necessary information, and updates the GUI accordingly.update_chunk_info: Manages updates relating to the chunk based on the progression of the query.adjust_chunk_display: Alters the GUI value responsible for showing the chunk number based on the provided navigation number.Each of these methods, with their signature features, enhances the usability and functionality of the Abstract_AI system, ensuring optimized interactions, easy navigation through data chunks, and adept handling of responses.ApiManagerOverview The ApiBuilder.py is a component of the abstract_ai GPT API console that streamlines the usage of the OpenAI API. It serves as a utility module for managing API keys and constructing request headers required for API interactions.Features API Key Retrieval: Securely fetches the OpenAI API key from environmental storage, ensuring that sensitive data is not hardcoded into the application. Header Construction: Automates the creation of the necessary authorization headers for making requests to the OpenAI API. Prerequisites To use ApiBuilder.py, ensure that you have an OpenAI API key stored in your environment variables under the name OPENAI_API_KEY.Getting Started To begin with ApiBuilder.py, you can create an instance of ApiManager and then use it to perform operations requiring API access. The ApiManager class encapsulates all you need to manage API keys and headers for the OpenAI GPT API requests.Usage python Copy code from ApiBuilder import ApiManagerResponseBuilder.pyOverviewThe Purpose of theResponseBuilder.pyis to handle interactions and communications with AI models primarily through API endpoints. 
It ensures that responses are correctly interpreted, any errors are handled gracefully, and the responses are saved in a structured manner to facilitate easy retrieval and analysis at a later stage. It’s a core part of theabstract_ai, module we are currently communicating through.Class: ResponseManagerTheResponseManagerclass is a single handler of all in-query events and module interactions. It leverages various utilities from theabstract_utilitiesmodule to process and organise the data, interacting closely with theSaveManagerto persist responses.This class can be initialised with an instance ofprompt manager' andAPI manager, optionaltitlefor the session or the file saved, anddirectorypath where responses are to be saved.Below is a brief description of the methods of this class:re_initialize_query: Resets query-related attributes to their default state for a new query cycle.post_request: Sends a POST request with the current prompt and headers to the AI model and handles the response.get_response: Extracts and formats the response from the API call.try_load_response: Attempts to load the response content into a structured format.extract_response: Processes the response and manages the creation of a save point throughSaveManager.get_last_response: Retrieves the last response from the save location.get_response_bools: Checks and sets boolean flags based on the content of the latest response.send_query: Prepares and sends a new query to the AI model, then processes the response.test_query: Simulates sending a query for testing purposes.prepare_response: Handles the response after a query has been sent.initial_query: Manages the initial sequence of sending queries and processing responses.Benefits and FeaturesTheResponseBuilderautomatically chunks large data sets into manageable segments based on the percentage delegated for expected completion per query relative to the max tokens limit. This prevents wasting compute cycles on unnecessarily large queries and helps ensure more efficient responses.It handles the communication with the AI model, sending queries and storing/interpreting responses, significantly simplifying the interaction process for the end-users.It also gives the chat modules sufficient autonomy to efficiently handle the requests, preserving context and continuity in the responses. This helps avoid the need for users to stitch together the responses manually, leading to a more seamless interaction experience.ApiManagerapi_manager = ApiManager()API key and headers are set up and ready to be used for requestsprint(api_manager.api_key) # Displays the loaded API key print(api_manager.header) # Displays the generated headers Class ApiManager ApiManager is responsible for handling API keys and headers. It comes with a default configuration but can be customized during instantiation.Attributes: content_type (str): The MIME type of the request content. Defaults to 'application/json'. api_env (str): The environment variable name where the API key is stored. Defaults to 'OPENAI_API_KEY'. api_key (str): The actual API key used for authentication with the OpenAI API. header (dict): The authorization headers used in API requests. Methods: get_openai_key(): Retrieves the API key from the environment variable. load_openai_key(): Loads the API key into the OpenAI library for authenticating requests. get_header(): Constructs the headers required for making API requests. Security The ApiManager leverages environment variables to manage the API key, which is a secure practice. 
Ensure that you do not expose your API key in the codebase or in any version control system.

Additional Information
Author: putkoff
Date: 10/29/2023
Version: 1.0.0

Contact
For issues, suggestions, or contributions, open a new issue on our Github repository.

License
abstract_ai is distributed under the MIT License.
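For illustration, the key-retrieval and header-construction behaviour described above can be sketched roughly as follows. This is a simplified stand-in and not the actual abstract_ai implementation: the class and method names (get_openai_key, get_header) are taken from the description, but their bodies here are assumptions.

import os

class ApiManagerSketch:
    """Hedged illustration of the ApiManager described above; not the real implementation."""

    def __init__(self, api_env='OPENAI_API_KEY', content_type='application/json', api_key=None):
        self.api_env = api_env
        self.content_type = content_type
        self.api_key = api_key or self.get_openai_key()
        self.header = self.get_header()

    def get_openai_key(self):
        # Read the key from the environment rather than hard-coding it.
        key = os.environ.get(self.api_env)
        if not key:
            raise RuntimeError(f"No API key found in environment variable {self.api_env}")
        return key

    def get_header(self):
        # Matches the header shape shown in the usage examples above.
        return {'Content-Type': self.content_type, 'Authorization': f'Bearer {self.api_key}'}

# Usage (requires OPENAI_API_KEY to be set in the environment):
# api_mgr = ApiManagerSketch()
# print(api_mgr.header)   # {'Content-Type': 'application/json', 'Authorization': 'Bearer ...'}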
abstract-ai-test
Abstract AI

Table of Contents: Overview, Images, Installation, Usage, Abstract AI Module, GptManager Overview, Purpose, Motivation, Objective, Extended Overview, Detailed Components Documentation, GptManager, ModelManager, PromptManager, InstructionManager, ResponseManager, ApiManager, Dependencies, Detailed Components Documentation, ModelManager, InstructionManager, PromptManager, Additional Information, Contact, License

abstract_ai Module

abstract_ai is a feature-rich Python module for interacting with the OpenAI GPT-3 and GPT-4 APIs. It provides an easy way to manage requests and responses with the AI and allows for detailed interaction with the API, giving developers control over the entire process. This module manages data chunking for large files, allowing the user to process large documents in a single query. It is highly customizable, with methods that allow modules to maintain context, repeat queries, adjust token sizes, and even terminate the query loop if necessary.

Installation

To utilize the api_calls.py module, install the necessary dependencies and configure your OpenAI API key:

Install the required Python packages: pip install abstract_ai

Set your OpenAI API key as an environment variable. By default, the module searches for an environment variable named OPENAI_API_KEY for API call authentication. Ensure your .env is saved in home/envy_all, documents/envy_all, within the source_folder, or specify the .env path in the GUI settings tab.

Abstract AI Overview

# Dynamic Data Chunking & API Query Handler
## Overview
This repository presents a sophisticated code example engineered to efficiently process extensive datasets via an intelligent chunking algorithm, tailored for API interactions where data size and query constraints are the predominant concerns. It assures smooth operation with minimal user input.
## Key Features
### Dual Input System
- `Request` and `Prompt Data` sections for straightforward data incorporation.
- Automatic division of prompt data into manageable chunks based on user-specified parameters.
### Intelligent Chunking
- Dynamically segments data considering the set percentage for expected completion per API query and the maximum token limit.
- Executes iterative queries through a response handler class until data processing completes.
### Iterative Query Execution
- Handles documents split into multiple chunks (e.g., 14 chunks result in at least 14 API queries), with real-time adaptive query decisions.
### Instruction Set
- `bot_notation`: allows the module to leave notes about the current data chunk that are returned with the next query, so it can keep context and understand why the previous selections were made.
- `additional_response`: allows repeated query execution until a specified condition is met, bypassing token limitations.
- `select_chunks`: allows the module to review the previous or next chunk of data, alongside the current chunk or on its own; when needed, the loop essentially implements `additional_response` for this.
- `token_size_adjustment`: allows the module to adjust the size of the chunks being sent; models can be finicky about chunk size, and this option can be combined with any of the above.
- `abort`: authorizes termination of the query loop to conserve resources.
- `suggestions`: provides a system for leaving future improvement notes.
## Autonomy & Efficiency
Empowers modules with significant autonomy for managing large data volumes efficiently, ensuring the output is streamlined and user post-processing is minimal.
## User Convenience
Simplifies user involvement by automating data chunking and
handling multiple prompts in a single operation. The modules are also equipped to independently address typical query-related issues.## ConclusionDevelopers seeking to automate and refine data handling for API-centric applications will find this repository a valuable asset. It's crafted to mitigate common data processing challenges and implement proactive solutions for enhanced user and module performance. --- Your journey towards seamless data handling starts here! Dive into the code, and feel free to contribute or suggest improvements.Usageimportos##ModelBuilder.pyfromabstract_aiimportModelManagermodel='gpt-4'model_mgr=ModelManager(input_model_name=model)model_mgr.selected_endpoint#output: https://api.openai.com/v1/chat/completionsmodel_mgr.selected_max_tokens#output: 8192#ApiBuilder.py#you can put in either your openai key directly or utilize an env value# the env uses abstract_security module, it will automatically search the following folders for a .env to matcha that value# - current_directory# - home/env_all# - documents/env_allfromabstract_aiimportApiManagerapi_env='OPENAI_API_KEY'api_mgr=ApiManager(api_env=api_env,content_type=None,header=None,api_key=None)api_mgr.content_type#output application/jsonapi_mgr.header#output: {'Content-Type': 'application/json', 'Authorization': 'Bearer ***private***'}api_mgr.api_key#output: ***private***#InstructionBuilder.pyfromabstract_aiimportInstructionManager#Each of these methods, with their signature features, enhances the usability and functionality of the Abstract_AI system,#ensuring optimized interactions, easy navigation through data chunks, and adept handling of responses.notation=True# allows the module a method of notation that it can utilize to maintain comtext and contenuity from one prompt query to the nextsuggestions=True# encourages suggestions on the users implimentation of the current queryabort=True# allows for the module to put a full stop to the query loop if the goal is unattainable or an anamolous instance occursgenerate_title=True# the module wil generate a title for the response fileadditional_responses=True# allows for the module to delegate the relooping of a prompt interval, generally to form a complete response if token length is insuficcient, or if context is too much or too littleadditional_instruction="please place any iterable data inside of this key value unless otherwise specified"request_chunks=True# allows for the module to add an interval to the query loop to retrieve the previous prompt the previous promptinstruction_mgr=InstructionManager(notation=notation,suggestions=suggestions,abort=abort,generate_title=generate_title,additional_responses=additional_responses,additional_instruction=additional_instruction,request_chunks=self.request_chunks)instruction_mgr.instructions#output:"""your response is expected to be in JSON format with the keys as follows:0) api_response - place response to prompt here1) notation - A useful parameter that allows a module to retain context and continuity of the prompts. These notations can be used to preserve relevant information or context that should be carried over to subsequent prompts.2) suggestions - ': A parameter that allows the module to provide suggestions for improving efficiency in future prompt sequences. 
These suggestions will be reviewed by the user after the entire prompt sequence is fulfilled.3) additional_responses - This parameter, usually set to True when the answer cannot be fully covered within the current token limit, initiates a loop that continues to send the current chunk's prompt until the module returns a False value. This option also enables a module to have access to previous notations4) abort - if you cannot fullfil the request, return this value True; be sure to leave a notation detailing whythis was5) generate_title - A parameter used for title generation of the chat. To maintain continuity, the generated title for a given sequence is shared with subsequent queries.6) request_chunks - you may request that the previous chunk data be prompted again, if selected, the query itterate once more with the previous chunk included in the prompt. return this value as True to impliment this option; leave sufficient notation as to why this was neccisary for the module recieving the next prompt7) additional_instruction - please place any iterable data inside of this key value unless otherwise specifiedbelow is an example of the expected json dictionary response format, with the default inputs:{'api_response': '', 'notation': '', 'suggestions': '', 'additional_responses': False, 'abort': False, 'generate_title': '', 'request_chunks': False, 'additional_instruction': '...'}"""#PromptBuilder.pyfromabstract_aiimportPromptManager#Calculates the token distribution between prompts, completions, and chunks to ensure effective token utilization.completion_percentage=40#allows the user to specify the completion percentage they are seeking for this prompt(s) currently at 40% of the token allotmentrequest="thanks for using abstract_ai the description youre looking for is in the prompt_data"prompt_data="""The code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT').Depending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring eachchunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. This is crucial to maintainthe context and readability within a code snippet.\n\n`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions andclasses from the given source code. 
The extracted functions and classes are then used to chunk source code accordingly.\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system."""chunk_type="Code"#parses the chunks based on your input types, ['HTML','TEXT','CODE']prompt_mgr=PromptManager(instruction_mgr=instruction_mgr,model_mgr=model_mgr,completion_percentage=completion_percentage,prompt_data=prompt_data,request=request,token_dist=None,bot_notation=None,chunk=None,role=None,chunk_type=chunk_type)prompt_mgr.token_dist=[{'completion':{'desired':3276,'available':3076,'used':200},'prompt':{'desired':4915,'available':4035,'used':880},'chunk':{'number':0,'total':1,'length':260,'data':"\nThe code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT').\nDepending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring each\nchunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. This is crucial to maintain\nthe context and readability within a code snippet.\n\n`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions and\nclasses from the given source code. The extracted functions and classes are then used to chunk source code accordingly.\n\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system."}}]completion_percentage=90# completion percentage now set to 90% of the token allotmentrequest="ok now we chunk it up with abstract_ai the description youre looking for is still in the prompt_data"prompt_mgr=PromptManager(instruction_mgr=instruction_mgr,model_mgr=model_mgr,completion_percentage=completion_percentage,prompt_data=prompt_data,request=request)prompt_mgr.token_dist=[{'completion':{'desired':7372,'available':7172,'used':200},'prompt':{'desired':819,'available':7,'used':812},'chunk':{'number':0,'total':2,'length':192,'data':"\n\nThe code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT').Depending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring eachchunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. 
This is crucial to maintainthe context and readability within a code snippet."}},{'completion':{'desired':7372,'available':7172,'used':200},'prompt':{'desired':819,'available':135,'used':684},'chunk':{'number':1,'total':2,'length':64,'data':'`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions andclasses from the given source code. The extracted functions and classes are then used to chunk source code accordingly.\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system.'}}]fromabstract_aiimportResponseManager#The `ResponseManager` class handles the communication process with AI models by managing the sending of queries and storage of responses. It ensures that responses are correctly interpreted, errors are managed, and that responses are saved in a structured way, facilitating easy retrieval and analysis.#It leverages various utilities from the `abstract_utilities` module for processing and organizing the data and interacts closely with the `SaveManager` for persisting responses.response_mgr=ResponseManager(prompt_mgr=prompt_mgr,api_mgr=api_mgr,title="Chunking Strategies in PromptBuilder.py",directory='response_data')response_mgr.initial_query()response_mgr.output=[{'prompt':{'model':'gpt-4','messages':[{'role':'assistant','content':"\n-----------------------------------------------------------------------------\n#instructions#\n\nyour response is expected to be in JSON format with the keys as follows:\n\n0) api_response - place response to prompt here\n1) notation - A useful parameter that allows a module to retain context and continuity of the prompts. These notations can be used to preserve relevant information or context that should be carried over to subsequent prompts.\n2) suggestions - ': A parameter that allows the module to provide suggestions for improving efficiency in future prompt sequences. These suggestions will be reviewed by the user after the entire prompt sequence is fulfilled.\n3) additional_responses - This parameter, usually set to True when the answer cannot be fully covered within the current token limit, initiates a loop that continues to send the current chunk's prompt until the module returns a False value. This option also enables a module to have access to previous notations\n4) abort - if you cannot fullfil the request, return this value True; be sure to leave a notation detailing whythis was\n5) generate_title - A parameter used for title generation of the chat. To maintain continuity, the generated title for a given sequence is shared with subsequent queries.\n6) request_chunks - you may request that the previous chunk data be prompted again, if selected, the query itterate once more with the previous chunk included in the prompt. 
return this value as True to impliment this option; leave sufficient notation as to why this was neccisary for the module recieving the next prompt\n7) additional_instruction - please place any iterable data inside of this key value unless otherwise specified\n\nbelow is an example of the expected json dictionary response format, with the default inputs:\n{'api_response': '', 'notation': '', 'suggestions': '', 'additional_responses': False, 'abort': False, 'generate_title': '', 'request_chunks': False, 'additional_instruction': '...'}\n-----------------------------------------------------------------------------\n#prompt#\n\nok now we chunk it up with abstract_ai the description youre looking for is still in the prompt_data\n-----------------------------------------------------------------------------\n\n-----------------------------------------------------------------------------\n#data chunk#\n\nthis is chunk 0 of 2\n\n\n\nThe code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT').Depending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring eachchunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. This is crucial to maintainthe context and readability within a code snippet.\n-----------------------------------------------------------------------------\n"}],'max_tokens':7172},'response':{'id':'**id**','object':'chat.completion','created':1699054738,'model':'gpt-4-0613','choices':[{'index':0,'message':{'role':'assistant','content':"{'api_response': 'This chunk describes a part of the `PromptBuilder.py` from the `abstract_ai` module. It underlines the significance of chunking methods in the `abstract_ai` operations. The mentioned `chunk_data_by_type` function accepts data, a maximum token limit, and a type of chunk to appropriately divide the data into chunks. The chunking process depends on the specified type, and in case the type is not identified, the data is divided based on line breaks. The `chunk_text_by_tokens` and `chunk_source_code` functions are used when the chunk type is 'TEXT' and 'CODE' respectively. 
The former chunks the data considering the max token limit, while the latter chunks source code as per individual classes and functions, maintaining the context and readability of a code snippet.',\n'notation': 'The chunked data tells about the `abstract_ai` chunking mechanisms, which include `chunk_data_by_type`, `chunk_text_by_tokens` and `chunk_source_code` functions.',\n'suggestions': 'It can be improved by providing specific examples of how each chunking function works.',\n'additional_responses': True,\n'abort': False,\n'generate_title': 'Review of `abstract_ai` Chunking Mechanisms',\n'request_chunks': False,\n'additional_instruction': ''}"},'finish_reason':'stop'}],'usage':{'prompt_tokens':621,'completion_tokens':271,'total_tokens':892}},'title':'Chunking Strategies in PromptBuilder.py'},{'prompt':{'model':'gpt-4','messages':[{'role':'assistant','content':"\n-----------------------------------------------------------------------------\n#instructions#\n\nyour response is expected to be in JSON format with the keys as follows:\n\n0) api_response - place response to prompt here\n1) notation - A useful parameter that allows a module to retain context and continuity of the prompts. These notations can be used to preserve relevant information or context that should be carried over to subsequent prompts.\n2) suggestions - ': A parameter that allows the module to provide suggestions for improving efficiency in future prompt sequences. These suggestions will be reviewed by the user after the entire prompt sequence is fulfilled.\n3) additional_responses - This parameter, usually set to True when the answer cannot be fully covered within the current token limit, initiates a loop that continues to send the current chunk's prompt until the module returns a False value. This option also enables a module to have access to previous notations\n4) abort - if you cannot fullfil the request, return this value True; be sure to leave a notation detailing whythis was\n5) generate_title - A parameter used for title generation of the chat. To maintain continuity, the generated title for a given sequence is shared with subsequent queries.\n6) request_chunks - you may request that the previous chunk data be prompted again, if selected, the query itterate once more with the previous chunk included in the prompt. return this value as True to impliment this option; leave sufficient notation as to why this was neccisary for the module recieving the next prompt\n7) additional_instruction - please place any iterable data inside of this key value unless otherwise specified\n\nbelow is an example of the expected json dictionary response format, with the default inputs:\n{'api_response': '', 'notation': '', 'suggestions': '', 'additional_responses': False, 'abort': False, 'generate_title': '', 'request_chunks': False, 'additional_instruction': '...'}\n-----------------------------------------------------------------------------\n#prompt#\n\nok now we chunk it up with abstract_ai the description youre looking for is still in the prompt_data\n-----------------------------------------------------------------------------\n\n-----------------------------------------------------------------------------\n#data chunk#\n\nthis is chunk 1 of 2\n\n`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions andclasses from the given source code. 
The extracted functions and classes are then used to chunk source code accordingly.\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system.\n-----------------------------------------------------------------------------\n"}],'max_tokens':7172},'response':{'id':'**id**','object':'chat.completion','created':1699054759,'model':'gpt-4-0613','choices':[{'index':0,'message':{'role':'assistant','content':"{'api_response': '`extract_functions_and_classes` is a helper function used within `chunk_source_code`. It extracts all the functions and classes from the given source code. These extracted elements are then used to chunk source code accordingly.', 'notation': 'Explanation about `extract_functions_and_classes` and `chunk_source_code` functions.', 'suggestions': '', 'additional_responses': True, 'abort': False, 'generate_title': '', 'request_chunks': False, 'additional_instruction': ''}"},'finish_reason':'stop'}],'usage':{'prompt_tokens':494,'completion_tokens':99,'total_tokens':593}},'title':'Chunking Strategies in PromptBuilder.py_0'}]#now lets give it the whole things!current_file_full_data=read_from_file(os.path.abspath(__file__))completion_percentage=60# completion percentage now set to 90% of the token allotmentrequest="here is the entire example use case, please review the code and thank you for your time with abstract_ai"prompt_data=current_file_full_datatitle="Chunking Example Review.py"response_mgr=ResponseManager(prompt_mgr=PromptManager(prompt_data=prompt_data,request=request,completion_percentage=completion_percentage,model_mgr=model_mgr,instruction_mgr=instruction_mgr),api_mgr=api_mgr,title=title,directory='response_data')response_mgr.initial_query()output=response_mgr.initial_query()fromabstract_aiimportcollate_response_quecolate_responses=collate_response_que(output)#api_response:"""The provided code is a walkthrough of how the `abstract_ai` module can be used for text processing tasks. It first demonstrates how to initialize the `ModelManager`, `ApiManager`, and `InstructionManager` classes. These classes are responsible for managing model-related configurations, API interactions, and instruction-related configurations, respectively.The `PromptManager` class is initialized to manage prompt-related operations. This class handles the calculation of token distribution for various component parts of the prompt (i.e., the prompt itself, the completion, and the data chunk). In the example, the completion percentage is adjusted from 40% to 90%, demonstrating the flexibility of the `abstract_ai` module in shaping the AI's response. Additionally, the module supports different types of chunks, such as code, text, URL, etc., which adds further flexibility to the text processing.The `ResponseManager` class interacts with AI models to handle the sending of queries and storing of responses. This class ensures responses are processed correctly and saved in a structured way for easy retrieval and analysis.Overall, this use case provides a comprehensive view of how `abstract_ai` can be used for sophisticated and efficient management of AI-based text processing tasks."All of those functions enable `abstract_ai` to manage large volumes of data effectively, ensuring legibility and context preservation in the chunks produced by recognizing their critical role within the platform.Your request to review the provided example was processed successfully. 
The entire operation was completed in a single iteration, consuming about 60% of the allotted tokens."""#suggestions:"Please provide more context or specific requirements if necessary. Additional details about the specific use case or problem being solved could also be helpful for a more nuanced and tailored review."#suggested_title:"Review and Analysis of Chunking Strategies in the `abstract_ai` Framework"#notation:"The provided example included the usage of `PromptBuilder.py` from the `abstract_ai` module and the necessary chunking functions."#abort"false"#additional_responses:"false"ImagesURL grabber component: Allows users to add URL source code or specific portions of the URL source code to the prompt data.Settings Tab: Contains all selectable settings, including available, desired, and used prompt and completion tokens.Instructions Display: Showcases all default instructions, which are customizable in the same pane. All instructions are derived from theinstruction_managerclass.File Browser Component: Enables users to add the contents of files or specific portions of file content to the prompt data.Detailed Components DocumentationTheGptManager.pymodule provides an extensive class management to interact with the GPT-3 model conveniently. This module combines various functionalities into a unified and structured framework. Some of the significant classes it encapsulates are as follows:GptManager: This is the central class that administers the interactions and data flow amongst distinct components.ApiManager: This manages the OpenAI API keys and headers.ModelManager: This takes care of model selection and querying.PromptManager: This takes care of the generation and management of prompts.InstructionManager: This encapsulates the instructions for the GPT model.ResponseManager: This administers the responses received from the model.These classes work collectively to simplify the task of sending queries, interpreting responses, and managing the interactions with the GPT-3 model. The module heavily depends on packages likeabstract_webtools,abstract_gui,abstract_utilities, andabstract_ai_gui_layout.Dependenciesabstract_webtools: Provides web-centric tools.abstract_gui: Houses GUI-related tools and components.abstract_utilities: Contains general-purpose utility functions and classes.abstract_ai_gui_layout: Lays out the AI GUI.#abstract_ai_gui_backend.pyOverviewTo use theabstract_ai_gui_backend.py, first, initialize the GptManager class. Following this step, use the update methods to set or change configurations. Finally, use theget_query()method to query the GPT model and retrieve a response. This chunk of code contains several methods for the abstract_ai_gui_backend module of the Abstract AI system:Class: GptManagerupdate_response_mgr: This method updates the ResponseManager instance used by the module, linking it with the existing instances of PromptManager and ApiManager. The ResponseManager generates AI response to prompts that are sent.get_query: The method is used for making respective calls to get responses. 
If a response is already computed, it stops the existing request (usingThreadManager) and starts a new thread for response retrieval.update_all: This method is used to update all the managers used by the module, to keep their data in sync.get_new_api_call_name: This method generates a new unique name for API call and appends it in the API call list.get_remainder: It returns what's left when 100 is subtracted from the value from a specific key.check_test_bool: This method checks if a test run is initiated. According to the result, it sets different statuses and updates the GUI accordingly.get_new_line: The method simply returns a newline character(s).update_feedback: This method fetches a value from the last response, based on the given key, and updates the GUI accordingly.update_text_with_responses: This method loads the last response and if it contains API response, it updates the GUI with the Title, Request text, and Response. For other feedback, it appends the information to the output.get_bool_and_text_keys: This method simply returns a list of keys formed by the given key_list and sections_list.text_to_key: The static method uses the function from utilities to generate a key name from text.get_dots: This method is used for decorating progress status.update_progress_chunks: This method provides a visual representation of the overall progress with respect to total chunks.check_response_mgr_status: This method checks whether the response manager's query process is finished by checking thequery_doneattribute.submit_query: This method controls the sending and receiving of queries and updating the GUI with the AI response.update_chunk_info: This method updates information related to the chunk based on the progress.adjust_chunk_display: This method modifies the GUI value responsible for displaying the chunk number in GUI based on the navigation number provided.The above chunk contains a sequence of function definitions that manage different parts of the AI interaction process within the abstract_ai module. Here's a concise guide about each method:get_chunk_display_numbers: Retrieves the display number for the current data chunk.determine_chunk_display: Determines if a data chunk should be displayed based on the input event.append_output: Appends new content to a particular key in the output.add_to_chunk: Appends new content to the current data chunk.clear_chunks: Resets the current chunk data.get_url: Retrieves the URL for a script.get_url_manager: Checks if a URL manager exists for a particular URL.test_files: Performs tests on the AI model with a hard-coded query.update_last_response_file: Stores the path of the most recent response file, and updates the related GUI elements.update_response_list_box: Updates the list box displaying response files.aggregate_conversations: Aggregates all conversations from JSON files in a specified directory.initialize_output_display: Initializes the display for the output and sets the data to the first item of the latest output. This code chunk contains four methods for managing output display in the application.get_output_display_numbers: This functions fetches fromwindow_mgrthe value associated with the-RESPONSE_TEXT_NUMBER-key and stores it inresponse_text_number_display. Additionally, it calculatesresponse_text_number_actualby subtracting 1 fromresponse_text_number_display.determine_output_display: This function checks the currenteventand decides whether the output display needs to be adjusted and in which direction (back or forward). 
It checks the conditions and, if valid, calls theadjust_output_displaymethod.adjust_output_display: This function accepts one argumentnumwhich is used to adjustresponse_text_number_actualandresponse_text_number_display. It updates the output display and updates the-RESPONSE_TEXT_NUMBER-key in thewindow_mgrwith the new value ofresponse_text_number_display.update_output: This function acceptsoutput_iterationas an argument and checks if it lies within the valid range. If valid, it assigns the designated output toself.latest_output[output_iteration]and then callsupdate_text_with_responses.This code manages the logic for traversing through the returned responses. It correctly fetches the number of responses, decides whether to go back or forward based on the event, adjusts the display accordingly and updates it. It is important to update the documentation with these details to allow the users to understand the flow of control in the script.ModelManagerThe ModelManager class, as part of theabstract_aimodule, provides functionalities for selecting, querying, and managing information about the available GPT models. The class initializes a list of models and their related information like endpoint and token limits. It also has methods to get model-specific details like endpoints, names, and token limits.Below are the method descriptions:__init__: Initialises the ModelManager instance with information about all available models. It also sets up the default model, endpoints, and maximum tokens.get_all_values: Takes a key as an input parameter and returns all unique values associated with this key in theall_modelslist._get_all_values: An alternate private method toget_all_valuesthat performs the same functionality with a reduced number of lines by using a list comprehension._get_endpoint_by_model: Returns the endpoint associated with the input model name._get_models_by_endpoint: Returns a list of models associated with a given endpoint._get_max_tokens_by_model: Returns the maximum tokens that can be processed by a given model.Note: In the__init__function, depending on the given inputs, the function prioritizes model_name over endpoint in setting the selected model, endpoint, and maximum tokens.PromptBuilder.pyPromptBuilder is a sophisticated module within the Abstract AI's ApiConsole, designed to handle the intricacies of token distribution, chunking of data, and prompt construction necessary for interfacing with language model APIs.OverviewPromptBuilder.py specializes in calculating and managing token allocations for prompt and completion outputs, considering the user's specifications and the constraints of the API's token limits. It ensures that queries are not only well-formed but also optimally structured for the language model to understand and respond effectively. 
The module's responsibilities extend to sizing the current query, evaluating the total prompt data, and segmenting it into processable chunks before final prompt assembly.PromptManagerKey FeaturesToken Distribution:Allocates tokens between prompts and completions based on instruction weight, verbosity, and available token quota.Data Chunking:Separates prompt data into manageable chunks, conforming to the calculated token budget.Prompt Construction:Assembles the full prompt incorporating user instructions, chunk data, and module notation, ready for API interaction.Integration:Serves as a foundational tool for the system, called upon in nearly every significant interaction with Abstract AI.DependenciesPromptBuilder.py relies on thenltkandtiktokenlibraries for accurate tokenization and text encoding, ensuring precise calculations and data handling.UsageTo utilize the PromptBuilder.py, import the module and instantiate the required class. Utilize its methods by adjusting parameters to fit the needs of your query. Here's an example usingcalculate_token_distribution:fromPromptBuilderimportPromptBuilder# Create an instance of the PromptBuilder classprompt_builder=PromptBuilder()# Example usage of calculate_token_distributiontoken_distribution=prompt_builder.calculate_token_distribution(user_query)Integration with Abstract AIPromptBuilder.py is deeply integrated with the Abstract AI suite, often collaborating withApiBuilder.py,ModelBuilder.py, andabstract_ai_gui_backend.pyfor a cohesive and efficient API interaction experience.Methods Overviewget_token_calcs: Evaluates token distribution for each chunk, ensuring balance between prompts and completions.get_token_distributions: Distributes tokens optimally across chunks based on the prompt and completion needs.Helpful Methods in PromptBuilder.pyAmong the functions in this module, the token calculation functionsget_token_calcsandget_token_distributionsare quite crucial. They carefully calculate the tokens used and available for prompts and completion. If, at any point, available prompt tokens fall below zero, they get added to the available completion tokens leading to a balance distribution.Here's an overview of two prominent methods:get_token_calcs: Evaluates individual token calculations for prompt and completion data. On detecting a shortage in available tokens, it redistributes tokens from the completion pool.get_token_distributions: It distributes tokens between the prompt and completion parts of the GPT-3 model query, ensuring a smooth and balanced query and keeping within the maximum token limit for the task.#api_response#{"title":"Chunking Strategies in PromptBuilder.py","prompt_type":"Python Code","prompt":"The code snippet is a part of `PromptBuilder.py` from the `abstract_ai` module. This specific part shows the crucial role of chunking strategies in the functioning of `abstract_ai`.\n\nThe `chunk_data_by_type` function takes in data, a maximum token limit, and a type of chunk (with possible values like 'URL', 'SOUP', 'DOCUMENT', 'CODE', 'TEXT'). Depending on the specified type, it applies different strategies to split the data into chunks. If a chunk type is not detected, the data is split based on line breaks.\n\nThe function `chunk_text_by_tokens` is specifically used when the chunk type is 'TEXT'. It chunks the input data based on the specified maximum tokens, ensuring each chunk does not exceed this limit.\n\nWith the `chunk_source_code` function, you can chunk source code based on individual functions and classes. 
This is crucial to maintain the context and readability within a code snippet.\n\n`extract_functions_and_classes` is a helper function used within `chunk_source_code`, it extracts all the functions and classes from the given source code. The extracted functions and classes are then used to chunk source code accordingly.\n\nThese functions are called numerous times in the abstract_ai platform, emphasizing their key role in the system.","suggested_formatting":[{"code":{"language":"python","content":["def chunk_data_by_type(data, max_tokens, chunk_type=None):","...","def chunk_text_by_tokens(prompt_data, max_tokens):","...","def extract_functions_and_classes(source_code):","...","def chunk_source_code(source_code, max_tokens):"]}}]}#InstructonBuilder.py##OverViewThis module is a segment of the Abstract AI system that manages the creation and modifications of instructions used by the GPT-3 model. It composes a significant component ofabstract_ai_gui_backend.py, working closely with classes like GptManager, ApiManager, ModelManager, PromptManager, and ResponseManager to ensure efficient and structured interactions with the model.Class: InstructionManagerMain Methods:update_response_mgr: Links the existing instances of PromptManager and ApiManager, updating the ResponseManager used by the module.get_query: Controls interrupting ongoing requests and starting a new thread for response retrieval through theThreadManager.update_all: Keeps the model, API, and other managers up to date and synchronized.get_new_api_call_name: Generates unique ID for each API call and maintains them in a list.get_remainder: Works on value subtraction for a particular key.get_new_line: A simple method for returning newline characters.update_feedback: Fetches a key-value pair from the prior response and updates the GUI.update_text_with_responses: Loads the latest response, extracts necessary information, and updates the GUI accordingly.update_chunk_info: Manages updates relating to the chunk based on the progression of the query.adjust_chunk_display: Alters the GUI value responsible for showing the chunk number based on the provided navigation number.Each of these methods, with their signature features, enhances the usability and functionality of the Abstract_AI system, ensuring optimized interactions, easy navigation through data chunks, and adept handling of responses.ApiManagerOverview The ApiBuilder.py is a component of the abstract_ai GPT API console that streamlines the usage of the OpenAI API. It serves as a utility module for managing API keys and constructing request headers required for API interactions.Features API Key Retrieval: Securely fetches the OpenAI API key from environmental storage, ensuring that sensitive data is not hardcoded into the application. Header Construction: Automates the creation of the necessary authorization headers for making requests to the OpenAI API. Prerequisites To use ApiBuilder.py, ensure that you have an OpenAI API key stored in your environment variables under the name OPENAI_API_KEY.Getting Started To begin with ApiBuilder.py, you can create an instance of ApiManager and then use it to perform operations requiring API access. The ApiManager class encapsulates all you need to manage API keys and headers for the OpenAI GPT API requests.Usage python Copy code from ApiBuilder import ApiManagerResponseBuilder.pyOverviewThe Purpose of theResponseBuilder.pyis to handle interactions and communications with AI models primarily through API endpoints. 
It ensures that responses are correctly interpreted, any errors are handled gracefully, and the responses are saved in a structured manner to facilitate easy retrieval and analysis at a later stage. It’s a core part of theabstract_ai, module we are currently communicating through.Class: ResponseManagerTheResponseManagerclass is a single handler of all in-query events and module interactions. It leverages various utilities from theabstract_utilitiesmodule to process and organise the data, interacting closely with theSaveManagerto persist responses.This class can be initialised with an instance ofprompt manager' andAPI manager, optionaltitlefor the session or the file saved, anddirectorypath where responses are to be saved.Below is a brief description of the methods of this class:re_initialize_query: Resets query-related attributes to their default state for a new query cycle.post_request: Sends a POST request with the current prompt and headers to the AI model and handles the response.get_response: Extracts and formats the response from the API call.try_load_response: Attempts to load the response content into a structured format.extract_response: Processes the response and manages the creation of a save point throughSaveManager.get_last_response: Retrieves the last response from the save location.get_response_bools: Checks and sets boolean flags based on the content of the latest response.send_query: Prepares and sends a new query to the AI model, then processes the response.test_query: Simulates sending a query for testing purposes.prepare_response: Handles the response after a query has been sent.initial_query: Manages the initial sequence of sending queries and processing responses.Benefits and FeaturesTheResponseBuilderautomatically chunks large data sets into manageable segments based on the percentage delegated for expected completion per query relative to the max tokens limit. This prevents wasting compute cycles on unnecessarily large queries and helps ensure more efficient responses.It handles the communication with the AI model, sending queries and storing/interpreting responses, significantly simplifying the interaction process for the end-users.It also gives the chat modules sufficient autonomy to efficiently handle the requests, preserving context and continuity in the responses. This helps avoid the need for users to stitch together the responses manually, leading to a more seamless interaction experience.ApiManagerapi_manager = ApiManager()API key and headers are set up and ready to be used for requestsprint(api_manager.api_key) # Displays the loaded API key print(api_manager.header) # Displays the generated headers Class ApiManager ApiManager is responsible for handling API keys and headers. It comes with a default configuration but can be customized during instantiation.Attributes: content_type (str): The MIME type of the request content. Defaults to 'application/json'. api_env (str): The environment variable name where the API key is stored. Defaults to 'OPENAI_API_KEY'. api_key (str): The actual API key used for authentication with the OpenAI API. header (dict): The authorization headers used in API requests. Methods: get_openai_key(): Retrieves the API key from the environment variable. load_openai_key(): Loads the API key into the OpenAI library for authenticating requests. get_header(): Constructs the headers required for making API requests. Security The ApiManager leverages environment variables to manage the API key, which is a secure practice. 
Ensure that you do not expose your API key in the codebase or in any version control system.

Additional Information
Author: putkoff
Date: 10/29/2023
Version: 1.0.0

Contact
For issues, suggestions, or contributions, open a new issue on our Github repository.

License
abstract_ai is distributed under the MIT License.
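For illustration, the 'desired' figures in the token_dist output shown in the Usage section above follow from splitting the model's maximum context (8192 tokens for gpt-4) by the chosen completion_percentage. The sketch below reproduces only that arithmetic and makes an assumption about the rounding; the real get_token_calcs / get_token_distributions additionally track the 'available' and 'used' tokens per chunk.

def desired_token_split(max_tokens, completion_percentage):
    # Split of the context window between completion and prompt.
    # Reproduces the 'desired' figures shown in the token_dist output above
    # (8192 tokens at 40% -> 3276 completion / 4915 prompt); the real
    # PromptManager also tracks 'available' and 'used' tokens per chunk.
    desired_completion = int(max_tokens * completion_percentage / 100)
    desired_prompt = int(max_tokens * (100 - completion_percentage) / 100)
    return {'completion': desired_completion, 'prompt': desired_prompt}

print(desired_token_split(8192, 40))   # {'completion': 3276, 'prompt': 4915}
print(desired_token_split(8192, 90))   # {'completion': 7372, 'prompt': 819}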
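Similarly, the chunking helpers referenced throughout this entry (chunk_text_by_tokens, extract_functions_and_classes, chunk_source_code) can be approximated with standard-library tools. The sketch below is illustrative only: the real module counts model tokens with tiktoken/nltk rather than splitting on whitespace, and its function signatures may differ.

import ast

def chunk_text_by_tokens_sketch(text, max_tokens):
    # Approximation: whitespace-separated words stand in for model tokens;
    # the real implementation counts tokens with tiktoken.
    words = text.split()
    chunks, current = [], []
    for word in words:
        current.append(word)
        if len(current) >= max_tokens:
            chunks.append(' '.join(current))
            current = []
    if current:
        chunks.append(' '.join(current))
    return chunks

def extract_functions_and_classes_sketch(source_code):
    # Collect top-level function and class definitions so source code can be
    # chunked on natural boundaries instead of arbitrary line counts.
    tree = ast.parse(source_code)
    return [ast.get_source_segment(source_code, node)
            for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))]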
abstract-algebra
A Python module for manipulating groups with common operations. You can check whether a group is a subgroup of another group or whether a group is cyclic, and you can compute the external direct product of groups and cosets of groups. Specialized groups such as S_n and A_n are coming soon!
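The package's actual API is not documented here, so the following is a purely illustrative, self-contained sketch of the kind of check described (a brute-force subgroup test for a finite group given as elements plus a binary operation); it does not use abstract-algebra's real class or function names.

from itertools import product

def is_subgroup(subset, group, op):
    # Naive subgroup test for a finite group: a non-empty subset that is
    # closed under the operation is automatically a subgroup, so closure
    # (plus an explicit identity check for clarity) is all we verify here.
    group = list(group)
    subset = set(subset)
    if not subset or not subset.issubset(set(group)):
        return False
    if any(op(a, b) not in subset for a, b in product(subset, repeat=2)):
        return False
    identity = next(e for e in group if all(op(e, g) == g for g in group))
    return identity in subset

# Example: the even residues {0, 2} form a subgroup of Z_4 under addition mod 4.
print(is_subgroup({0, 2}, range(4), lambda a, b: (a + b) % 4))   # True
print(is_subgroup({0, 1}, range(4), lambda a, b: (a + b) % 4))   # False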
abstractalgorithms
No description available on PyPI.