acc
# acc [![Build Status](https://travis-ci.org/autti/acc.svg?branch=master)](https://travis-ci.org/autti/acc) [![Coverage Status](https://coveralls.io/repos/github/autti/acc/badge.svg?branch=master)](https://coveralls.io/github/autti/acc?branch=master)

Adaptive Cruise Control. Udacity micro challenge.

# WHAT SHOULD I DO?

Look for `cruise.py` and implement the `control` function.

# More information

Join the #acc-challenge channel on the ND013 Slack and ask away.

Here are some reference links shared by Mac:

- https://www.codeproject.com/articles/36459/pid-process-control-a-cruise-control-example
- http://itech.fgcu.edu/faculty/zalewski/cda4170/files/pidcontrol.pdf
- https://github.com/slater1/AdaptiveCruiseControl
- https://github.com/commaai/openpilot/blob/master/selfdrive/controls/lib/adaptivecruise.py

# TESTING

`python setup.py test`

# TODO

- [ ] Create assertions for reasonable behavior when implementing the maneuver and fail the tests when those do not pass. For example, distance to the car in front is 0, or target speed differs from actual speed.
- [ ] Implement plotting of PID curves to compare solutions.
- [ ] Replace `gas=0` and `brake=0` with a simple solution that passes the tests.
- [ ] Decide if we need to run the tests in real time or do something different.
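For orientation, a minimal PID-style sketch of what a `control` implementation could look like is shown below. The actual signature expected by `cruise.py` may differ; the gains, the `(target_speed, current_speed, dt)` interface, and the gas/brake mapping are assumptions, not the project's API.

```python
# Hypothetical PID speed controller sketch; gains and interface are illustrative.
class PIDController:
    def __init__(self, kp=0.3, ki=0.02, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, target_speed, current_speed, dt=0.1):
        error = target_speed - current_speed
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Map the signed command onto separate gas/brake actuators in [0, 1].
        gas = max(0.0, min(1.0, output))
        brake = max(0.0, min(1.0, -output))
        return gas, brake
```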
acc2psql
acc2psql

Convert a Microsoft Access *.accdb or *.mdb database into *.sql PostgreSQL format.

MS Access is hard to beat for fast prototyping of a relational database. You probably won't use it in a production environment because of its file-based, single-user restrictions, but in a prototyping phase it is one tool that is hard to beat!

Prototype your database quickly and easily in MS Access and, once satisfied, use this tool to convert that database into PostgreSQL DDL. You can save the result to a *.sql file, dump it to the console, or execute it directly against a PostgreSQL instance.

If you use Django, you can then run `python manage.py inspectdb` to turn the generated tables into Django models and .. voilà, you can skip manually coding tedious Django models yourself!

```
pip install acc2sql
python -m acc2sql --src someawesomedatabase.accdb --dump
python -m acc2sql --src someawesomedatabase.accdb --out saveto.sql
python -m acc2sql --src someawesomedatabase.accdb --host localhost --username user --password password --db db
# if you use django
python manage.py inspectdb
```

This package is still in early development; I may add more database sources/targets along the way. You can also use this with any tool necessary: for example, I am thinking of using this as a PyCharm extension .. that will be awesome!

Eko - Founder and CEO, Remote Worker Indonesia / LinkedIn / Facebook / Instagram / Twitter
Youtube: Everybody can code!
Spotify: Everybody can [email protected]
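To illustrate the Django step, `inspectdb` introspects the converted PostgreSQL tables and prints unmanaged model stubs roughly like the following. The table and field names here are made up for illustration and are not produced by acc2psql itself.

```python
# Hypothetical output of `python manage.py inspectdb` for one converted table.
from django.db import models


class Customer(models.Model):
    customer_id = models.AutoField(primary_key=True)
    full_name = models.CharField(max_length=255, blank=True, null=True)
    joined_on = models.DateField(blank=True, null=True)

    class Meta:
        managed = False          # inspectdb marks introspected tables as unmanaged
        db_table = 'customer'
```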
accapi
Avigilon Control Center API for Python

About

This package allows you to communicate with the Avigilon Control Center (ACC) API. Before you can start using it, you need to send an email to [email protected] and ask for your unique pair of user nonce and user key values. Only with those will you be able to communicate with an ACC server instance.

Features

- Login and get session
- Get camera list

Currently limited functionality is available, but it's easy to extend (contributions welcome!)

Usage

```python
from accapi.client import AccClientFactory

factory = AccClientFactory("user_nonce", "user_key")
client = factory.login("http://acc_address", "username", "password")
cameras = client.get_cameras()
```
accarbon
Tweet images of source code submitted to AtCoder.

Install

$ pip3 install accarbon

Usage

$ accarbon https://atcoder.jp/contests/abc170/submissions/14465204?lang=ja

Supported Languages

We ensure that syntax highlighting is correct for submissions in the following languages:

- C (GCC 9.2.1)
- C (Clang 10.0.0)
- C++ (GCC 9.2.1)
- C++ (Clang 10.0.0)
- Java (OpenJDK 11.0.6)
- Python (3.8.2)
- Ruby (2.7.1)
- C# (.NET Core 3.1.201)
- PyPy3 (7.3.0)
- Haskell (GHC 8.8.3)
- Rust (1.42.0)

Tools for Development

- flake8
- autopep8
- pandoc

License

This software is released under the MIT License, see LICENSE.
accasim
AccaSim: a Workload Management Simulator for Job Dispatching Research
======================================================================

*current version:* |version|

AccaSim is a Workload management [sim]ulator for [H]PC systems, useful for developing dispatchers and conducting controlled experiments in HPC dispatching research. It is scalable and highly customizable, allowing users to carry out large experiments across different workload sources, resource settings, and dispatching methods.

AccaSim enables users to design novel advanced dispatchers by exploiting information regarding the current system status, which can be extended to include custom behaviors such as power consumption and failures of the resources. Researchers can use AccaSim, for instance, to mimic any real system by setting up the synthetic resources suitably, develop advanced power-aware, fault-resilient dispatching methods, and test them over a wide range of workloads by generating them synthetically or by using real workload traces from HPC users.

For more information please visit the `webpage of AccaSim <http://accasim.readthedocs.io/en/latest/>`_

***********
What's new?
***********

- 23-06-2018 Major improvements. Additional data is executed during Submission, Dispatching and Completion events (before and after).
- 15-06-2018 Version 1.0 released.
- 01-06-2018 New version of the dispatchers. The job requests are verified during the scheduling process.
- 27-05-2018 New version of the resources and resource manager. All simulation methods were improved. A logger is used instead of printing messages.
- 21-05-2018 Asynchronous file writing to reduce the IO overhead of the simulations.
- 06-12-2017 A workload generator is available for generating new workloads from existing workloads.
- 13-11-2017 Simulating distinct dispatchers under the same system and simulator configuration can be managed under the Experimentation class. It also includes automatic plot generation.
- 21-08-2017 Automatic plot generation for comparison of multiple workload schedules and benchmark files: Slowdown, Queue sizes, Load ratio, Efficiency, Scalability, Simulation time, Memory Usage.
- 19-07-2017 Documentation moved to `http://accasim.readthedocs.io <http://accasim.readthedocs.io/en/latest/>`_
- 12-07-2017 First version of the package.
accbib
Bibliography info verification and format conversion package

The "accbib" package aims to accelerate, and improve the accuracy of, bibliography data preparation. It collects complete and accurate bibliography data based on DOIs. It does the following jobs:

- generate accurate bibliography data from a DOI list, a bib file, or an xml (used for Microsoft Word) database;
- check and correct the data of bib and xml files by looking up each entry's DOI;
- export the bibliography database in either bib or xml format;
- so it can do format conversion between bib and xml files.

The package fetches information from the http://dx.doi.org/ website with the DOI number, using the application/vnd.citationstyles.csl+json header for content negotiation. However, lots of materials do not have DOIs, such as PhD theses, websites, very old publications, etc., so this package allows users to provide an additional bib file in which user-defined DOIs and their corresponding contents are included. The fetchadoi function will look into this bib database if a DOI is not found on the internet.

Installation

Dependencies:

- pybtex
- lxml

This module can be installed via pip:

pip install accbib

Examples

retrieve info from a DOI

```python
import accbib
# fetch a doi, the returned data is an Entry object defined by pybtex
data = accbib.fetchadoi('10.1103/PhysRevA.103.063112')
print(data)
```

The output would be:

```python
Entry('article',
  fields=[('doi', '10.1103/physreva.103.063112'), ('journal', 'Physical Review A'), ('number', '6'),
          ('publisher', 'American Physical Society (APS)'),
          ('title', 'Controlling quantum numbers and light emission of Rydberg states via the laser pulse duration'),
          ('url', 'http://dx.doi.org/10.1103/PhysRevA.103.063112'), ('volume', '103'),
          ('pages', '063112'), ('year', '2021')],
  persons=OrderedCaseInsensitiveDict([('author', [Person('Ortmann, L.'), Person('Hofmann, C.'),
                                                  Person('Ivanov, I. A.'), Person('Landsman, A. S.')])]))
```

retrieve bibliography data from a DOI list, bib or xml file

```python
# generate bibliography database from a file containing a DOI list
# the .dat file contains one DOI in each row
# userlib is an optional parameter. For some special materials which do not
# have DOIs, you can make up a fake DOI and put all the info (must include
# "DOI": <fake DOI>) in userlib.bib in bib format. The accbib.loadois
# function will look into that bib file if not found on the internet.
bibdata = accbib.loadois('dois.dat', userlib='userlib.bib')

# load bib and correct the reference info with doi if possible.
bibdata_1 = accbib.loaddb('test.bib', checkdoi=True)

# load xml and correct the reference info with doi if possible.
bibdata_2 = accbib.loaddb('test.xml', checkdoi=True)
```

save bibliography data as different formats

```python
# save the database as an xml file, which can be imported in Microsoft Office
# jnStyle specifies whether to output the full ('full') or abbreviated
# ('abbr') journal name
# there is a small journal name database in the translation.py file. Everyone
# can add journal names and their abbreviations for his/her research area.
accbib.export('example.xml', bibdata, jnStyle='abbr')

# save the database as a bib file
# for .bib files, the full journal name is always output.
accbib.export('example.bib', bibdata)
```

a sample output bib file

```bibtex
@article{Ortmann2021,
  author = "Ortmann, L. and Hofmann, C. and Ivanov, I. A. and Landsman, A. S.",
  doi = "10.1103/physreva.103.063112",
  journal = "Physical Review A",
  number = "6",
  publisher = "American Physical Society (APS)",
  title = "Controlling quantum numbers and light emission of Rydberg states via the laser pulse duration",
  url = "http://dx.doi.org/10.1103/PhysRevA.103.063112",
  volume = "103",
  pages = "063112",
  year = "2021"
}

@article{Facon2016,
  author = "Facon, Adrien and Dietsche, Eva-Katharina and Grosso, Dorian and Haroche, Serge and Raimond, Jean-Michel and Brune, Michel and Gleyzes, Sébastien",
  doi = "10.1038/nature18327",
  journal = "Nature",
  number = "7611",
  pages = "262-265",
  publisher = "Springer Science and Business Media LLC",
  title = "A sensitive electrometer based on a Rydberg atom in a Schrödinger-cat state",
  url = "http://dx.doi.org/10.1038/nature18327",
  volume = "535",
  year = "2016"
}
```

a sample output xml file

```xml
<?xml version='1.0' encoding='UTF-8'?>
<Sources xmlns="http://schemas.openxmlformats.org/officeDocument/2006/bibliography">
  <Source>
    <SourceType>JournalArticle</SourceType>
    <Tag>Ortmann2021</Tag>
    <Author><Author><NameList>
      <Person><Last>Ortmann</Last><First>L.</First><Middle></Middle></Person>
      <Person><Last>Hofmann</Last><First>C.</First><Middle></Middle></Person>
      <Person><Last>Ivanov</Last><First>I.</First><Middle>A.</Middle></Person>
      <Person><Last>Landsman</Last><First>A.</First><Middle>S.</Middle></Person>
    </NameList></Author></Author>
    <Title>Controlling quantum numbers and light emission of Rydberg states via the laser pulse duration</Title>
    <JournalName>Phys. Rev. A</JournalName>
    <DOI>10.1103/physreva.103.063112</DOI>
    <Issue>6</Issue>
    <Publisher>American Physical Society (APS)</Publisher>
    <URL>http://dx.doi.org/10.1103/PhysRevA.103.063112</URL>
    <Volume>103</Volume>
    <Pages>063112</Pages>
    <Year>2021</Year>
  </Source>
  <Source>
    <SourceType>JournalArticle</SourceType>
    <Tag>Facon2016</Tag>
    <Author><Author><NameList>
      <Person><Last>Facon</Last><First>Adrien</First><Middle></Middle></Person>
      <Person><Last>Dietsche</Last><First>Eva-Katharina</First><Middle></Middle></Person>
      <Person><Last>Grosso</Last><First>Dorian</First><Middle></Middle></Person>
      <Person><Last>Haroche</Last><First>Serge</First><Middle></Middle></Person>
      <Person><Last>Raimond</Last><First>Jean-Michel</First><Middle></Middle></Person>
      <Person><Last>Brune</Last><First>Michel</First><Middle></Middle></Person>
      <Person><Last>Gleyzes</Last><First>Sébastien</First><Middle></Middle></Person>
    </NameList></Author></Author>
    <Title>A sensitive electrometer based on a Rydberg atom in a Schrödinger-cat state</Title>
    <JournalName>Nature</JournalName>
    <DOI>10.1038/nature18327</DOI>
    <Issue>7611</Issue>
    <Pages>262-265</Pages>
    <Publisher>Springer Science and Business Media LLC</Publisher>
    <URL>http://dx.doi.org/10.1038/nature18327</URL>
    <Volume>535</Volume>
    <Year>2016</Year>
  </Source>
</Sources>
```
accbpg
Accelerated Bregman Proximal Gradient Methods

A Python package of accelerated first-order algorithms for solving relatively-smooth convex optimization problems

    minimize { f(x) + P(x) | x in C }

with a reference function h(x), where C is a closed convex set and

- h(x) is convex and essentially smooth on C;
- f(x) is convex and differentiable, and L-smooth relative to h(x), that is, f(x) - L*h(x) is convex;
- P(x) is convex and closed (lower semi-continuous).

Implemented algorithms in HRX2018:

- BPG (Bregman proximal gradient) method with line search option
- ABPG (Accelerated BPG) method
- ABPG-expo (ABPG with exponent adaption)
- ABPG-gain (ABPG with gain adaption)
- ABDA (Accelerated Bregman dual averaging) method

Additional algorithms for solving D-Optimal Experiment Design problems:

- D_opt_FW (basic Frank-Wolfe method)
- D_opt_FW_away (Frank-Wolfe method with away steps)

Install

Clone or fork from GitHub. Or install from PyPI:

pip install accbpg

Usage

Example: generate a random instance of a D-optimal design problem and solve it using different methods.

```python
import accbpg

# generate a random instance of D-optimal design problem of size 80 by 200
f, h, L, x0 = accbpg.D_opt_design(80, 200)

# solve the problem instance using BPG with line search
x1, F1, G1, T1 = accbpg.BPG(f, h, L, x0, maxitrs=1000, verbskip=100)

# solve it again using ABPG with gamma=2
x2, F2, G2, T2 = accbpg.ABPG(f, h, L, x0, gamma=2, maxitrs=1000, verbskip=100)

# solve it again using adaptive variant of ABPG with gamma=2
x3, F3, G3, _, _, T3 = accbpg.ABPG_gain(f, h, L, x0, gamma=2, maxitrs=1000, verbskip=100)
```

D-optimal experiment design problems can be constructed from files (LIBSVM format) directly using

```python
f, h, L, X0 = accbpg.D_opt_libsvm(filename)
```

All algorithms can work with customized functions f(x) and h(x); an example is given in this Python file.

Additional examples

A complete example with visualization is given in this Jupyter Notebook. All examples in HRX2018 can be found in the ipynb directory. Comparisons with the Frank-Wolfe method can be found in ipynb/ABPGvsFW.
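As a reminder, the relative smoothness condition above has a standard equivalent form from the relative-smoothness literature (not something specific to this package): f is L-smooth relative to h on C exactly when

$$
f(y) \le f(x) + \langle \nabla f(x),\, y - x \rangle + L\, D_h(y, x) \quad \text{for all } x, y \in C,
$$

where $D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle$ is the Bregman divergence generated by h. The BPG step then minimizes a model of f built with $D_h$ in place of the usual squared Euclidean distance.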
acccmip5
The acccmip5 package can access the CMIP5 database.

GitHub repo: https://github.com/TaufiqHassan/acccmip5

Usage

Follow: https://github.com/TaufiqHassan/acccmip6

License

This code is licensed under the MIT License.
acccmip6
The acccmip6 package can access the CMIP6 database in real time.

GitHub repo: https://github.com/TaufiqHassan/acccmip6
Documentation: https://acccmip6.readthedocs.org.

Features

- Real-time search and download from the continuously updating CMIP6 database
- Find data for any specific items (e.g. model, experiment, variable, frequency, realm)
- Search and download any combination of the above items
- Find the total number of available files and realizations
- Validate your search items
- Get suggestions if necessary
- Access definitions of the experiments
- Skips already existing files

Installation

Install is as simple as typing -

pip install acccmip6

Requires python v3.5 or up and pip. Mac users can use `brew install python3` and `python get-pip.py` from the terminal. Windows users can use Windows Subsystem.

Installation demo

You may also install the package via conda. Make sure you have added the conda-forge channel in your environment. You can add any channel by -

conda config --env --add channels conda-forge

Then install acccmip6 from the thassan channel:

conda install -c thassan acccmip6

Usage

acccmip6 searches the live CMIP6 database and spits out currently available models, experiments and variables that satisfy your search criteria. It will also output the number of available files.

acccmip6 also tries to be a good command-line interface (CLI). Run `acccmip6 -h` to see a help message with all the arguments you can pass.

Required Arguments

- `-o`: Takes output type. 'S' for searching the database or 'D' for downloading from the database.

Optional Arguments

- `-m`: Model names (multiple comma separated names are allowed)
- `-e`: Experiment names
- `-f`: CMIP6 output frequency (e.g. mon, day etc.)
- `-v`: Variable names
- `-r`: Realm name (e.g. atmos, ocean etc.)
- `-rlzn`: Select a specified realization
- `-c`: 'yes' to use the checker when searching or downloading. This helps to find out whether the search items are currently available. If not, it will produce suggestions that match your search closely.
- `-desc`: 'yes' to get the description of the experiments searched for
- `-dir`: Download directory
- `-skip`: Skip any item (model/experiment/realizations) from your download
- `-time`: 'yes' to print out all available time periods
- `-yr`: Select data for a time period (number of years)
- `-n`: Select specific data nodes (multiple node selection allowed)
- `-serv`: Set a user-defined server
- `-cr`: Select common realizations among selected experiments

Demo

Search the CMIP6 database with `acccmip6 -o S`

Download CMIP6 data with `acccmip6 -o D`

License

This code is licensed under the MIT License.
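Putting a few of the arguments above together, a download invocation might look like the following. The model, experiment, variable, frequency and realm values are only illustrative search items, so check availability (e.g. with `-c yes`) before relying on this exact combination:

```
acccmip6 -o D -v tas -f mon -e historical -m CESM2 -r atmos -dir ./cmip6_data -c yes
```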
accdbtools
No description available on PyPI.
accel
No description available on PyPI.
accelarobotkeywords
Failed to fetch description. HTTP Status Code: 404
accelasc
accelasc

Implementation of the accel_asc algorithm for integer partitions. See this stackoverflow post and Jerome Kelleher's website. See also the paper by Kelleher and O'Sullivan: Generating All Partitions: A Comparison Of Two Encodings.

Installation

pip3 install accelasc

or

pip3 install --user accelasc

Usage

```python
from accelasc import accel_asc
tuple(accel_asc(5))
# ([1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 3], [1, 2, 2], [1, 4], [2, 3], [5])
```
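For reference, the accel_asc algorithm itself (as published by Kelleher) can be written as the short generator below. This is a generic sketch of the algorithm; the package's own implementation may differ in details.

```python
def accel_asc(n):
    """Yield all partitions of the positive integer n in ascending order."""
    a = [0] * (n + 1)
    k = 1
    y = n - 1
    while k != 0:
        x = a[k - 1] + 1
        k -= 1
        # Spread the smallest part x over the remainder while it fits at least twice.
        while 2 * x <= y:
            a[k] = x
            y -= x
            k += 1
        l = k + 1
        # Emit partitions ending in two parts, then the single-part tail.
        while x <= y:
            a[k] = x
            a[l] = y
            yield a[:k + 2]
            x += 1
            y -= 1
        a[k] = x + y
        y = x + y - 1
        yield a[:k + 1]
```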
accel-brain-base
Deep Learning Library: accel-brain-base.

accel-brain-base is a basic Deep Learning library for rapid development at low cost. It makes it possible to design and implement deep learning that must be configured as a complex system, or a System of Systems, by combining a plurality of functionally differentiated modules such as a Restricted Boltzmann Machine (RBM), Deep Boltzmann Machines (DBMs), a Stacked Auto-Encoder, an Encoder/Decoder based on Long Short-Term Memory (LSTM), and a Convolutional Auto-Encoder (CAE).

From the viewpoint of functional equivalents and structural expansions, this library also prototypes many variants such as energy-based models and Generative models. Typical examples are Generative Adversarial Networks (GANs) and Adversarial Auto-Encoders (AAEs).

It also supports the implementation of Semi-Supervised Learning and Self-Supervised Learning, which consist of a combination of supervised and unsupervised learning systems. This library dryly considers the various Transformer variants such as BERT, XLNet, RoBERTa, ALBERT, etc., to be merely applications of Self-Supervised Learning or Self-Supervised Domain Adaptation (SSDA). From this point of view, this library builds the Transformer variants as SSDA models.

In addition, this library provides Deep Reinforcement Learning, such as Deep Q-Networks, that applies the neural networks described above as function approximators. In principle, any neural network described above can be implemented as a function approximator.

See also ...

- Automatic Summarization Library: pysummarization. If you want to implement the Sequence-to-Sequence (Seq2Seq) model for automatic summarization by using accel-brain-base to build the Encoder/Decoder controllers, Attention models, or Transformer models.
- Reinforcement Learning Library: pyqlearning. If you want to implement Deep Reinforcement Learning, especially Deep Q-Network and Multi-agent Deep Q-Network, by using accel-brain-base as a Function Approximator.
- Generative Adversarial Networks Library: pygan. If you want to implement Generative Adversarial Networks (GANs) and Adversarial Auto-Encoders (AAEs) by using accel-brain-base as components for Generative models based on statistical machine learning problems.
- Algorithmic Composition or Automatic Composition Library: pycomposer. If you want to implement an Algorithmic Composer based on Generative Adversarial Networks (GANs) by using accel-brain-base as components for Generative models based on statistical machine learning problems.

Installation

Install using pip:

pip install accel-brain-base

or,

python setup.py bdist_wheel
pip install dist/accel_brain_base-*.*.*-py3-*-*.whl

Source code

The source code is currently hosted on GitHub: accel-brain-code/Accel-Brain-Base

Python package index (PyPI)

Installers for the latest released version are available at the Python package index: accel-brain-base : Python Package Index

Dependencies

- numpy: v1.13.3 or higher.
- pandas: v0.22.0 or higher.
- mxnet or mxnet-cu*: latest. Only when building a model of this library using Apache MXNet.
- torch. Only when building a model of this library using PyTorch.

For ML Ops.

In the Apache MXNet version of this library, almost all models inherit HybridBlock from mxnet.gluon. Functions for common ML Ops such as saving and loading parameters are provided by HybridBlock. On the other hand, in the PyTorch version of this library, almost all models inherit Module from torch.nn. Check the official documentation for the information you need.

Documentation

Full documentation is available on https://code.accel-brain.com/Accel-Brain-Base/README.html.
This document contains information on functionally reusability, functional scalability and functional extensibility.Problem Setting: Deep Learning after the era of "Democratization of Artificial Intelligence(AI)".How the Research and Development(R&D) on the subject of machine learning including deep learning, after the era of "Democratization of Artificial Intelligence(AI)", can become possible? Simply implementing the models and algorithms provided by standard machine learning libraries and applications like AutoML would reinvent the wheel. If you just copy and paste the demo code from the library and use it, your R&D would fall into dogmatically authoritarian development, or so-called the Hype driven development.If you fall in love with the concept of "Democratization of AI," you may forget the reality that the R&D is under the influence of not only democracy but also capitalism. The R&D provides economic value when its R&D artifacts are distinguished from the models and algorithms realized by standard machine learning libraries and applications such as AutoML. In general terms, R&D must provide a differentiator to maximize the scarcity of its implementation artifacts.On the other hand, it must be remembered that any R&D builds on the history of the social structure and the semantics of the concepts envisioned by previous studies. Many models and algorithms are variants derived not only from research but also from the relationship with business domains. It is impossible to assume differentiating factors without taking commonality and identity between society and its history.Considering many variable parts, structural unions, andfunctional equivalentsin the deep learning paradigm, which are variants derived not only from research but also from the relationship with business domains, from perspective ofcommonality/variability analysisin order to practice object-oriented design, this library provides abstract classes that define the skeleton of the deep Learning algorithm in an operation, deferring some steps in concrete variant algorithms such as theDeep Boltzmann Machines,Stacked Auto-Encoder,Encoder/Decoder based on LSTM, andConvolutional Auto-Encoderto client subclasses. The abstract classes and the interfaces in this library let subclasses redefine certain steps of the deep Learning algorithm without changing the algorithm's structure.These abstract classes can also provide new original models and algorithms such asGenerative Adversarial Networks(GANs),Deep Reinforcement Learning, orNeural network language modelby implementing the variable parts of the fluid elements of objects.Problem Solution: Distinguishing between this library and other libraries.The functions of Deep Learning are already available in many machine learning libraries. Broadly speaking, the deep learning functions provided by each machine learning library can be divided into the following two.A component that works just by running a batch program or API.A component that allows you to freely design its functions and algorithms.Many of the former have somewhat fixed error functions, initialization strategies, samplers, activation functions, regularization, and deep architecture configurations. That is, the functional extensibility is low. 
If you just want to run a distributed batch program or demo code, or just run it on a pre-designed interface, you should be fine with these components.But, in-house R&D aimed at extending to more accurate and faster algorithms, it is a risk to rely on distributions that are less functionally reusable and less functionally scalable. In addition, many of the "A component that works just by running a batch program or API" tend to be specified with the condition that "if you prepare an appropriate dataset, it will work". The "appropriateness" of the "appropriate dataset" is often undecidable without knowing the inside of the black box obscured by its components.On the other hand, in the existing machine learning library, the latter "A component that allows you to freely design its functions and algorithms" is certainly distributed. For example, MxNet/Gluon'sHybridBlockgives you the freedom to design functions. Similar things can be done with PyTorch'storch.nn.However, it cannot be said that a single algorithm can be produced in-house simply by making full use of these components. In-house R&D is also an organizational decision-making process. In this procedure, problems that can be solved by machine learning and statistics are first set in order to meet the demands from the business domain. Then, we will design an algorithm that functions as a problem-solving solution to this problem setting.However, it cannot be said that there is only one function as a problem solution. Unless we compare several functionally equivalent algorithms that help solve the problem, it remains unclear which algorithm should be the final choice.Object-Oriented Analysis and Design.From perspective ofcommonality/variability analysisin order to practice object-oriented design, the concepts and interfaces of Deep Learning paradigms can be organized as follows:ExtractableDatais an interface responsible for extracting data from local files.NoiseableDatais an interface that has the function of enhancing robustness by adding noise to the extracted sample.IteratableDatais an interface that has the function of repeatedly outputting labeled and unlabeled samples by calling a class that implementsExtractableData. This function is the heart of iterative learning algorithms such as stochastic gradient descent. Whether or not to put the data in GPU memory can be selected in this subclass.SamplableDatais a useful interface for introducing neural network learning algorithms into data sampling processing, especially for Generative Adversarial Networks(GANs).ObservableDatais an interface for implementing neural network models in general. All of these subclasses are designed with the assumption thatIteratableDatawill be delegated. You can also observe the mini-batch data extracted bySamplableData. Of course, the input / output function is variable depending on the problem setting, but basically it executes inference for the observed data points. The presence or absence of hyperparameters and GPUs can be adjusted in this subclass.ComputableLossis nothing but an interface for error/loss functions. Each subclass is premised on the automatic differentiation function implemented in the machine learning library.RegularizatableDatais the interface for performing regularization. The amount that the regular term is given as a penalty term can be realized inside the forward propagation method or inside the error/loss function. 
This interface works when other special regularizations are performed immediately after parameter updates.ControllableModelfunctions as a controller when implementing complexObservableDatain combination as a complex system(or System of Systems) such as deep reinforcement learning and hostile generation network.Broadly speaking, each subclass that implements these interfaces is considered functionally equivalent and mutually substitutable. For example, the following classes are implemented as subclasses ofObservableData.For example,Neural Networksmakes it possible to build multi-layer neural networks. And, after inheriting this class,AutoEncoderconstructed by joining twoNeuralNetworkss in a stack is placed. There is a similar relationship betweenLSTMNetworksfor building LSTM(Long short-term memory) Networks, which is a type of Recurrent Neural Networks, and its subclassEncoderDecoder.In addition, various subclasses are arranged forConvolutionalNeuralNetworks, which builds convolutional neural networks. Many relatively new learning algorithms, such assemi-supervised learningandself-supervised learning, have been proposed as extensions of convolutional neural networks. Therefore, when these algorithms are implemented, they are often implemented as a subclass ofConvolutionalNeuralNetworks.When adopting a functionalism, such a series of interface designs assists in the search for functional equivalents. Since each class that realizes the same interface is designed on the premise of the same interface specifications, it is easy to consider it as a candidate for functional equivalent.Furthermore, developers using this library can reduce the burden of functional expansion as well as searching for functional equivalents. In fact, there are functional extensions to this library.For example,Automatic Summarization Library: pysummarization, which realizes automatic document summarization by natural language processing and text mining, reuses the Sequence-to-Sequence (Seq2Seq) model realized byaccel-brain-base. This sub-library makes it possilbe to do automatically document summarization and clustering based on the manifold hypothesis.On the other hand, inReinforcement Learning Library: pyqlearningandGenerative Adversarial Networks Library: pygan, Reinforcement Learning and various variants of GANs are implemented.Problem Re-Setting: What concept does this library design?Let's exemplify the basic deep architecture that this library has already designed. Users can functionally reuse or functionally extend the already implemented architecture.Problem Solution: Deep Boltzmann Machines.The function of this library is building and modeling Restricted Boltzmann Machine(RBM) and Deep Boltzmann Machine(DBM). The models are functionally equivalent to stacked auto-encoder. The basic function is the same as dimensions reduction or pre-learning for so-called transfer learning.The structure of RBM.Boltzmann MachineRestricted Boltzmann MachineAccording to graph theory, the structure of RBM corresponds to a complete bipartite graph which is a special kind of bipartite graph where every node in the visible layer is connected to every node in the hidden layer. Based on statistical mechanics and thermodynamics(Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. 
1985), the state of this structure can be reflected by the energy function

$$E(v, h) = - \sum_i b_i v_i - \sum_j c_j h_j - \sum_{i,j} v_i W_{ij} h_j,$$

where $b_i$ is a bias in the visible layer, $c_j$ is a bias in the hidden layer, $v_i$ is an activity or a state in the visible layer, $h_j$ is an activity or a state in the hidden layer, and $W_{ij}$ is a weight matrix between the visible and hidden layers. The activities can be calculated as the product below, since the activations of the visible layer and the hidden layer are conditionally independent:

$$p(h \mid v) = \prod_j p(h_j \mid v), \qquad p(v \mid h) = \prod_i p(v_i \mid h).$$

The learning equations of RBM.

Because of the rules of conditional independence, the learning equations of the RBM can be introduced in a simple form. The distribution of the visible state $v$, which is marginalized over the hidden state $h$, is as follows:

$$p(v; \theta) = \frac{1}{Z(\theta)} \sum_h \exp\big(-E(v, h)\big),$$

where $Z(\theta) = \sum_{v, h} \exp(-E(v, h))$ is a partition function in statistical mechanics or thermodynamics. Let $D$ be the set of observed data points, then $\log p(D; \theta) = \sum_{v \in D} \log p(v; \theta)$. Therefore the gradients on the parameters $\theta = (W, b, c)$ of the log-likelihood function are

$$\frac{\partial \log p(D;\theta)}{\partial W_{ij}} = \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}}, \quad
\frac{\partial \log p(D;\theta)}{\partial b_i} = \langle v_i \rangle_{\text{data}} - \langle v_i \rangle_{\text{model}}, \quad
\frac{\partial \log p(D;\theta)}{\partial c_j} = \langle h_j \rangle_{\text{data}} - \langle h_j \rangle_{\text{model}},$$

where $\langle \cdot \rangle$ is an expected value under the corresponding distribution, and the data-side hidden activations are computed with $p(h_j = 1 \mid v) = \sigma\big(c_j + \sum_i W_{ij} v_i\big)$, where $\sigma$ is a sigmoid function. The learning equations of the RBM are introduced by performing control so that those gradients can become zero.

Contrastive Divergence as an approximation method.

In relation to the RBM, Contrastive Divergence (CD) is a method for approximation of the gradients of the log-likelihood (Hinton, G. E. 2002). The procedure of this method is similar to the Markov Chain Monte Carlo method (MCMC). However, unlike MCMC, the visible variables to be set first in the visible layer are not randomly initialized; instead, the observed data points in the training dataset are set as the first visible variables. And, like a Gibbs sampler, drawing samples from hidden variables and visible variables is repeated k times. Empirically (and surprisingly), k is considered to be 1.

The structure of DBM.

As is well known, DBM is composed of layers of RBMs stacked on top of each other (Salakhutdinov, R., & Hinton, G. E. 2009). This model is a structural expansion of Deep Belief Networks (DBN), which is known as one of the earliest models of Deep Learning (Le Roux, N., & Bengio, Y. 2008). Like the RBM, the DBN places nodes in layers. However, only the uppermost layer is composed of undirected edges, and the others consist of directed edges. A DBN with R hidden layers is the probabilistic model below:

$$p(v, h^{(1)}, \dots, h^{(R)}) = p(v \mid h^{(1)})\, p(h^{(1)} \mid h^{(2)}) \cdots p(h^{(R-2)} \mid h^{(R-1)})\, p(h^{(R-1)}, h^{(R)}),$$

where $r = 0$ points to the visible layer. Considering the simultaneous distribution in the top two layers,

$$p(h^{(R-1)}, h^{(R)}) \propto \exp\big(-E(h^{(R-1)}, h^{(R)})\big),$$

the conditional distributions in the other layers are as follows:

$$p(h^{(r)}_j = 1 \mid h^{(r+1)}) = \sigma\Big(c^{(r)}_j + \sum_k W^{(r+1)}_{jk} h^{(r+1)}_k\Big).$$

The pre-learning of the DBN engages in a procedure of recursive learning layer by layer. However, as you can see from the difference in graph structure, DBM is slightly different from DBN in the form of pre-learning. For instance, if $r = 1$, the conditional distribution of the visible layer is

$$p(v_i = 1 \mid h^{(1)}) = \sigma\Big(b_i + \sum_j W^{(1)}_{ij} h^{(1)}_j\Big).$$

On the other hand, the conditional distribution in the intermediate layer is

$$p(h^{(r)}_j = 1 \mid h^{(r-1)}) = \sigma\Big(c^{(r)}_j + 2 \sum_i W^{(r)}_{ij} h^{(r-1)}_i\Big),$$

where the factor 2 has been introduced considering that the intermediate layer $r$ receives input data from the shallower layer $r-1$ and the deeper layer $r+1$. DBM sets these parameters as initial states.

DBM as a Stacked Auto-Encoder.

DBM is functionally equivalent to a Stacked Auto-Encoder, which is-a neural network that tries to reconstruct its input. To encode the observed data points, the function of DBM is the linear transformation of the feature map below:

$$h = W^{\top} v + c.$$

On the other hand, to decode these feature points, the function of DBM is the linear transformation of the feature map below:

$$v' = W h + b.$$

The reconstruction error should be calculated in relation to the problem setting, such as Representation Learning. By default, this library computes the Mean Squared Error (MSE) or L2 norm.
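As a library-independent illustration of the CD-1 update described above, the following NumPy sketch performs one Contrastive Divergence step for a Bernoulli RBM. It is schematic and does not reflect accel-brain-base's actual classes or method names.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One Contrastive Divergence (k=1) update for a Bernoulli RBM.

    v0 : (batch, n_visible) observed data points
    W  : (n_visible, n_hidden) weight matrix
    b  : (n_visible,) visible bias, c : (n_hidden,) hidden bias
    """
    # Positive phase: hidden probabilities conditioned on the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and back up.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Approximate gradient <v h>_data - <v h>_model and ascend it.
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Example: random binary data, 6 visible and 4 hidden units.
v = rng.integers(0, 2, size=(8, 6)).astype(float)
W = 0.01 * rng.standard_normal((6, 4))
b, c = np.zeros(6), np.zeros(4)
for _ in range(100):
    W, b, c = cd1_update(v, W, b, c)
```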
For instance, my jupyter notebook demo/Deep-Boltzmann-Machines-for-Representation-Learning.ipynb demonstrates that the DBM, which is a Stacked Auto-Encoder, can minimize the reconstruction errors based on Representation Learning.

Functionally equivalent: Encoder/Decoder based on LSTM.

The methodology of equivalent-functionalism enables us to introduce more functional equivalents and compare problem solutions structured with different algorithms and models in a common problem setting. For example, in the dimension reduction problem, the function of the Encoder/Decoder schema is equivalent to the DBM as a Stacked Auto-Encoder.

According to neural network theory, and in relation to the manifold hypothesis, it is well known that multilayer neural networks can learn features of observed data points and keep the feature points in the hidden layer. High-dimensional data can be converted to low-dimensional codes by training a model such as a Stacked Auto-Encoder or an Encoder/Decoder with a small central layer to reconstruct high-dimensional input vectors. This function of dimensionality reduction facilitates feature expressions to calculate the similarity of each data point.

This library provides an Encoder/Decoder based on LSTM, which is a reconstruction model and makes it possible to extract series features embedded in deeper layers. The LSTM encoder learns a fixed-length vector of time-series observed data points and the LSTM decoder uses this representation to reconstruct the time-series using the current hidden state and the value inferred at the previous time-step.

Encoder/Decoder for Anomaly Detection (EncDec-AD)

One interesting application example is the Encoder/Decoder for Anomaly Detection (EncDec-AD) paradigm (Malhotra, P., et al. 2016). This reconstruction model learns to reconstruct normal time-series behavior, and thereafter uses the reconstruction error to detect anomalies. Malhotra, P., et al. (2016) showed that the EncDec-AD paradigm is robust and can detect anomalies from predictable, unpredictable, periodic, aperiodic, and quasi-periodic time-series. Further, they showed that the paradigm is able to detect anomalies from short time-series (length as small as 30) as well as long time-series (length as large as 500).

As the prototype is exemplified in demo/Encoder-Decoder-based-on-LSTM-for-Anomaly-Detection.ipynb, this library provides the Encoder/Decoder based on LSTM as an EncDec-AD scheme.

Functionally equivalent: Convolutional Auto-Encoder.

A Convolutional Auto-Encoder (Masci, J., et al., 2011) is a functional equivalent of the Stacked Auto-Encoder in relation to problem settings such as image segmentation, object detection, inpainting and graphics. A stack of Convolutional Auto-Encoders forms a convolutional neural network (CNN), which is among the most successful models for supervised image classification. Each Convolutional Auto-Encoder is trained using conventional on-line gradient descent without additional regularization terms.

(Figures: an image from the Weizmann horse dataset and its reconstruction by the Convolutional Auto-Encoder.)

This library can draw a distinction between the Stacked Auto-Encoder and the Convolutional Auto-Encoder, and is able to design and implement the respective models. The Stacked Auto-Encoder ignores the 2-dimensional image structure. In many cases, the rank of observed tensors extracted from image datasets is more than 3. This is not only a problem when dealing with realistically sized inputs, but also introduces redundancy in the parameters, forcing each feature to be global.

Like the Shape-BM, the Convolutional Auto-Encoder differs from the Stacked Auto-Encoder in that its weights are shared among all locations in the input, preserving spatial locality. Hence, the reconstructed image data is due to a linear combination of basic image patches based on the latent code.

In this library, the Convolutional Auto-Encoder is also based on the Encoder/Decoder scheme. The encoder is to the decoder what the Convolution is to the Deconvolution. The Deconvolution, also called transposed convolution, "works by swapping the forward and backward passes of a convolution." (Dumoulin, V., & Visin, F. 2016, p20.)

In relation to Representation Learning, like the DBM, this model also can minimize the reconstruction errors. An example can be found in demo/Convolutional-Auto-Encoder-for-Representation-Learning.ipynb.

Functionally equivalent: Convolutional Contractive Auto-Encoder.

This library also provides some functional equivalents of the Convolutional Auto-Encoder. For instance, the Convolutional Contractive Auto-Encoder (Contractive CAE) is a Convolutional Auto-Encoder based on the First-Order Contractive Auto-Encoder (Rifai, S., et al., 2011), which executes representation learning by adding a penalty term to the classical reconstruction cost function. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input and results in a localized space contraction which in turn yields robust features on the activation layer.

Analogically, the Contractive Convolutional Auto-Encoder calculates the penalty term. But it differs in that the operation of the deconvolution intervenes instead of the inner product. The prototype is exemplified in demo/Contractive-Convolutional-Auto-Encoder-for-Representation-Learning.ipynb.

Issue: Structural extension from Auto-Encoders and Encoder/Decoders to energy-based models and Generative models.

Auto-Encoders, such as the Encoder/Decoder, the Convolutional Auto-Encoder, and the DBM, have in common that these models are Stacked Auto-Encoders or reconstruction models. On the other hand, the Auto-Encoders and the Encoder/Decoders are not statistical mechanical energy-based models, unlike the RBM or DBM.

However, Auto-Encoders have traditionally been used to represent energy-based models. According to the statistical mechanical theory for energy-based models, Auto-Encoders constructed by neural networks can be associated with an energy landscape, akin to negative log-probability in a probabilistic model, which measures how well the Auto-Encoder can represent regions in the input space. The energy landscape has been commonly inferred heuristically, by using a training criterion that relates the Auto-Encoder to a probabilistic model such as an RBM. The energy function is identical to the free energy of the corresponding RBM, showing that Auto-Encoders and RBMs may be viewed as two different ways to derive training criteria for forming the same type of analytically defined energy landscape.

The view of the Auto-Encoder as a dynamical system allows us to understand how an energy function may be derived for the Auto-Encoder. This makes it possible to assign energies to Auto-Encoders with many different types of activation functions and outputs, and to consider the minimization of reconstruction errors as energy minimization (Kamyshanska, H., & Memisevic, R., 2014).

When trained with some regularization terms, the Auto-Encoders have the ability to learn an energy manifold without supervision or negative examples (Zhao, J., et al., 2016).
This means that even when an energy-based Auto-Encoding model is trained to reconstruct a real sample, the model contributes to discovering the data manifold by itself.

This library provides energy-based Auto-Encoders such as the Contractive Convolutional Auto-Encoder (Rifai, S., et al., 2011), the Repelling Convolutional Auto-Encoder (Zhao, J., et al., 2016), Denoising Auto-Encoders (Bengio, Y., et al., 2013), and Ladder Networks (Valpola, H., 2015). But it is more useful to redescribe the Auto-Encoders in the framework of Generative Adversarial Networks (GANs) (Goodfellow, I., et al., 2014) to make those models function not only as energy-based models but also as Generative models. For instance, the theory of Adversarial Auto-Encoders (AAEs) (Makhzani, A., et al., 2015) and energy-based GANs (EBGANs) (Zhao, J., et al., 2016) enables us to turn Auto-Encoders into Generative models which refer to energy functions.

Problem Solution: Generative Adversarial Networks (GANs).

The Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) framework establishes a min-max adversarial game between two neural networks: a generative model, G, and a discriminative model, D. The discriminator model, D(x), is a neural network that computes the probability that an observed data point x in data space is a sample from the data distribution (positive samples) that we are trying to model, rather than a sample from our generative model (negative samples). Concurrently, the generator uses a function G(z) that maps samples z from the prior p(z) to the data space. G(z) is trained to maximally confuse the discriminator into believing that the samples it generates come from the data distribution. The generator is trained by leveraging the gradient of D(x) w.r.t. x, and using that to modify its parameters.

Problem Solution: Conditional GANs (or cGANs).

The Conditional GANs (or cGANs) are a simple extension of the basic GAN model which allows the model to condition on external information. This makes it possible to engage the learned generative model in different "modes" by providing it with different contextual information (Gauthier, J. 2014).

This model can be constructed by simply feeding the data, y, to condition on to both the generator and discriminator. In an unconditioned generative model, because the samples z are drawn from a uniform or normal prior p(z), there is no control over the modes of the data being generated. On the other hand, it is possible to direct the data generation process by conditioning the model on additional information (Mirza, M., & Osindero, S. 2014).

Problem Solution: Adversarial Auto-Encoders (AAEs).

This library also provides Adversarial Auto-Encoders (AAEs), a probabilistic Auto-Encoder that uses GANs to perform variational inference by matching the aggregated posterior of the feature points in the hidden layer of the Auto-Encoder with an arbitrary prior distribution (Makhzani, A., et al., 2015). Matching the aggregated posterior to the prior ensures that generating from any part of the prior space results in meaningful samples. As a result, the decoder of the Adversarial Auto-Encoder learns a deep generative model that maps the imposed prior to the data distribution.

Problem Solution: Energy-based Generative Adversarial Network (EBGAN).

Reusing the Auto-Encoders, this library introduces the Energy-based Generative Adversarial Network (EBGAN) model (Zhao, J., et al., 2016), which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. The Auto-Encoders have traditionally been used to represent energy-based models. When trained with some regularization terms, the Auto-Encoders have the ability to learn an energy manifold without supervision or negative examples. This means that even when an energy-based Auto-Encoding model is trained to reconstruct a real sample, the model contributes to discovering the data manifold by itself.

Functionally equivalent: Energy-based Adversarial Auto-Encoders (EBAAEs).

This library models the Energy-based Adversarial Auto-Encoder (EBAAE) by structural coupling between AAEs and EBGAN. As the prototype is exemplified in demo/Energy-based-Adversarial-Auto-Encoder-for-Representation-Learning.ipynb, the learning algorithm is equivalent to adversarial training with the AAE as a generator and the EBGAN as a discriminator.

Issue: How can unsupervised learning like the Auto-Encoder, the Energy-based Model, and the Generative Model function in a classification problem?

In most classification problems, finding and producing labels for the samples is hard. In many cases plenty of unlabeled data exists, and it seems obvious that using it should improve the results. For instance, there are plenty of unlabeled images available, and in most image classification tasks there are vastly more bits of information in the statistical structure of the input images than in their labels.

It is argued here that the reason why unsupervised learning has not been able to improve results is that most current versions are incompatible with supervised learning. The problem is that many unsupervised learning methods try to represent as much information about the original data as possible, whereas supervised learning tries to filter out all the information which is irrelevant for the task at hand.

Problem Solution: Ladder Networks.

A Ladder network is an Auto-Encoder which can discard information. Unsupervised learning needs to tolerate discarding information in order to work well with supervised learning. Many unsupervised learning methods are not good at this, but one class of models stands out as an exception: hierarchical latent variable models. Unfortunately their derivation can be quite complicated and often involves approximations which compromise their performance.

(Figures: Hierarchical Latent Variable Models, Auto-Encoder, Ladder Networks.)

A simpler alternative is offered by Auto-Encoders, which also have the benefit of being compatible with standard supervised feedforward networks. They would be a promising candidate for combining supervised and unsupervised learning, but unfortunately Auto-Encoders normally correspond to latent variable models with a single layer of stochastic variables; that is, they do not tolerate discarding information.

The Ladder network makes it possible to solve that problem by setting a recursive derivation of the learning rule with a distributed cost function, building denoising Auto-Encoders recursively.
Normally denoising Auto-Encoders have a fixed input, but the cost functions on the higher layers can influence their input mappings, and this creates a bias towards PCA-type solutions.

In relation to problem settings such as Representation Learning, the Ladder Network is also a functional equivalent of the standard CAE, as the prototype is exemplified in demo/Convolutional-Ladder-Networks-for-Representation-Learning.ipynb.

Problem Solution: Deep Reconstruction-Classification Networks (DRCN or DRCNetworks).

The Deep Reconstruction-Classification Network (DRCN or DRCNetworks) is a convolutional network that jointly learns two tasks:

- supervised source label prediction.
- unsupervised target data reconstruction.

Ideally, a discriminative representation should model both the label and the structure of the data. Based on that intuition, Ghifary, M., et al. (2016) hypothesize that a domain-adaptive representation should satisfy two criteria:

- classify well the source domain labeled data.
- reconstruct well the target domain unlabeled data, which can be viewed as an approximation of the ideal discriminative representation.

The encoding parameters of the DRCN are shared across both tasks, while the decoding parameters are separated. The aim is that the learned label prediction function can perform well on classifying images in the target domain; the data reconstruction can thus be viewed as an auxiliary task to support the adaptation of the label prediction.

Using this library, for instance, we can extend the Convolutional Auto-Encoder in DRCNetworks to the Convolutional Ladder Networks as mentioned in demo/DRCNetworks-for-Dataset-Bias-Problem.ipynb.

Functional equivalent: Self-Supervised Domain Adaptation.

Xu, J., Xiao, L., & López, A. M. (2019) proposed the Self-Supervised Domain Adaptation framework. This model learns a domain-invariant feature representation by incorporating a pretext learning task which can automatically create labels from target domain images. The pretext task and the main task, such as a classification problem, object detection problem, or semantic segmentation problem, are learned jointly via multi-task learning.

While DRCNetworks jointly learn supervised source label prediction and unsupervised target data reconstruction, Self-Supervised Domain Adaptation learns supervised label prediction in the source domain and an unsupervised pretext task in the target domain. DRCNetworks and Self-Supervised Domain Adaptation are alike not only in network structure but also in learning algorithm. Neither is mere supervised learning, nor is it mere unsupervised learning. The learning algorithm of DRCNetworks is semi-supervised learning, but Self-Supervised Domain Adaptation literally does self-supervised learning.

Using this library, for instance, we can extend the Self-Supervised Domain Adaptation as mentioned in demo/Self-Supervised-Domain-Adaptation-for-Classfication-Problem.ipynb and demo/Self-Supervised-Domain-Adaptation-with-Adversarial-training-for-Classfication-Problem.ipynb.

Issue: Structural extension for Deep Reinforcement Learning.

The Reinforcement Learning theory presents several issues from the perspective of deep learning theory (Mnih, V., et al. 2013). Firstly, deep learning applications have required large amounts of hand-labelled training data. Reinforcement learning algorithms, on the other hand, must be able to learn from a scalar reward signal that is frequently sparse, noisy and delayed.

The difference between the two theories is not only the type of data but also the timing at which it is observed.
The delay between taking actions and receiving rewards, which can be thousands of timesteps long, seems particularly daunting when compared to the direct association between inputs and targets found in supervised learning.

Another issue is that deep learning algorithms assume the data samples to be independent, while in reinforcement learning one typically encounters sequences of highly correlated states. Furthermore, in Reinforcement Learning, the data distribution changes as the algorithm learns new behaviours, presenting aspects of recursive learning, which can be problematic for deep learning methods that assume a fixed underlying distribution.

Problem Re-setting: Generalisation, or a function approximation.

This library considers a problem setting in which an agent interacts with an environment $\mathcal{E}$, in a sequence of actions, observations and rewards. At each time-step the agent selects an action $a_t$ from the set of possible actions $\mathcal{A}$. The state/action-value function is $Q(s, a)$. The goal of the agent is to interact with $\mathcal{E}$ by selecting actions in a way that maximises future rewards. We can make the standard assumption that future rewards are discounted by a factor of $\gamma$ per time-step, and define the future discounted return at time $t$ as

$$R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'},$$

where $T$ is the time-step at which the agent will reach the goal. This library defines the optimal state/action-value function $Q^*(s, a)$ as the maximum expected return achievable by following any strategy, after seeing some state $s$ and then taking some action $a$,

$$Q^*(s, a) = \max_{\pi} \mathbb{E}\big[R_t \mid s_t = s,\, a_t = a,\, \pi\big],$$

where $\pi$ is a policy mapping sequences to actions (or distributions over actions).

The optimal state/action-value function obeys an important identity known as the Bellman equation. This is based on the following intuition: if the optimal value $Q^*(s', a')$ of the sequence $s'$ at the next time-step was known for all possible actions $a'$, then the optimal strategy is to select the action $a'$ maximising the expected value of $r + \gamma Q^*(s', a')$,

$$Q^*(s, a) = \mathbb{E}_{s' \sim \mathcal{E}}\Big[\, r + \gamma \max_{a'} Q^*(s', a') \,\Big|\, s, a \Big].$$

The basic idea behind many reinforcement learning algorithms is to estimate the state/action-value function by using the Bellman equation as an iterative update,

$$Q_{i+1}(s, a) = \mathbb{E}\Big[\, r + \gamma \max_{a'} Q_i(s', a') \,\Big|\, s, a \Big].$$

Such value iteration algorithms converge to the optimal state/action-value function, $Q_i \rightarrow Q^*$ as $i \rightarrow \infty$.

But increasing the complexity of states/actions is equivalent to increasing the number of combinations of states/actions. If the value function is continuous and the granularity of states/actions is extremely fine, a combinatorial explosion will be encountered. In other words, this basic approach is totally impractical, because the state/action-value function is estimated separately for each sequence, without any generalisation. Instead, it is common to use a function approximator to estimate the state/action-value function, $Q(s, a; \theta) \approx Q^*(s, a)$. So a reduction of complexity is required.

Problem Solution: Deep Q-Network

In this problem setting, the function of the neural network or deep learning is function approximation with weights $\theta$ as a Q-Network. A Q-Network can be trained by minimising a loss function $L_i(\theta_i)$ that changes at each iteration $i$,

$$L_i(\theta_i) = \mathbb{E}_{s, a \sim \rho(\cdot)}\Big[\big(y_i - Q(s, a; \theta_i)\big)^2\Big],$$

where $y_i = \mathbb{E}_{s' \sim \mathcal{E}}\big[\, r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}) \mid s, a \,\big]$ is the target for iteration $i$ and $\rho(s, a)$ is a so-called behaviour distribution. This is a probability distribution over states and actions. The parameters from the previous iteration $\theta_{i-1}$ are held fixed when optimising the loss function $L_i(\theta_i)$. Differentiating the loss function with respect to the weights, we arrive at the following gradient,

$$\nabla_{\theta_i} L_i(\theta_i) = \mathbb{E}_{s, a \sim \rho(\cdot);\, s' \sim \mathcal{E}}\Big[\big(r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}) - Q(s, a; \theta_i)\big)\, \nabla_{\theta_i} Q(s, a; \theta_i)\Big].$$

Functional equivalent: MobileNet.

If you pay attention to the calculation speed, it is better to extend the CNN part, which is the function approximator, to MobileNet.
As mentioned indemo/MobileNet-v2-for-Image-Classification.ipynb, this library provides the MobileNet V2(Sandler, M., et al., 2018).Functional equivalent: LSTM.It is not inevitable to functionally reuse CNN as a function approximator. In the above problem setting of generalisation and Combination explosion, for instance, Long Short-Term Memory(LSTM) networks, which is-a special Reccurent Neural Network(RNN) structure, and CNN as a function approximator are functionally equivalent. In the same problem setting, functional equivalents can be functionally replaced. Considering that the feature space of the rewards has the time-series nature, LSTM will be more useful.ReferencesThe basic concepts, theories, and methods behind this library are described in the following books.ใ€Žใ€ŒAIใฎๆฐ‘ไธปๅŒ–ใ€ๆ™‚ไปฃใฎไผๆฅญๅ†…็ ”็ฉถ้–‹็™บ: ๆทฑๅฑคๅญฆ็ฟ’ใฎใ€ŒๅฎŸๅญฆใ€ใจใ—ใฆใฎๆฉŸ่ƒฝๅˆ†ๆžใ€(Japanese)ใ€ŽAI vs. ใƒŽใ‚คใ‚บใƒˆใƒฌใƒผใƒ€ใƒผใจใ—ใฆใฎๆŠ•่ณ‡ๅฎถใŸใก: ใ€Œใ‚ขใƒซใ‚ดใƒชใ‚บใƒ ๆˆฆไบ‰ใ€ๆ™‚ไปฃใฎ่จผๅˆธๆŠ•่ณ‡ๆˆฆ็•ฅใ€(Japanese)ใ€Ž่‡ช็„ถ่จ€่ชžๅ‡ฆ็†ใฎใƒใƒ™ใƒซ: ๆ–‡ๆ›ธ่‡ชๅ‹•่ฆ็ด„ใ€ๆ–‡็ซ ็”ŸๆˆAIใ€ใƒใƒฃใƒƒใƒˆใƒœใƒƒใƒˆใฎๆ„ๅ‘ณ่ซ–ใ€(Japanese)ใ€Ž็ตฑ่จˆ็š„ๆฉŸๆขฐๅญฆ็ฟ’ใฎๆ นๆบ: ็†ฑๅŠ›ๅญฆใ€้‡ๅญๅŠ›ๅญฆใ€็ตฑ่จˆๅŠ›ๅญฆใซใŠใ‘ใ‚‹ๅคฉๆ‰็‰ฉ็†ๅญฆ่€…ใŸใกใฎ็ฅžๅญฆ็š„ใช็†ๅฟตใ€(Japanese)Specific references are the following papers and books.Deep Boltzmann machines.Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive science, 9(1), 147-169.Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392.Eslami, S. A., Heess, N., Williams, C. K., & Winn, J. (2014). The shape boltzmann machine: a strong model of object shape. International Journal of Computer Vision, 107(2), 155-176.Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural computation, 14(8), 1771-1800.Le Roux, N., & Bengio, Y. (2008). Representational power of restricted Boltzmann machines and deep belief networks. Neural computation, 20(6), 1631-1649.Lyu, Q., Wu, Z., Zhu, J., & Meng, H. (2015, June). Modelling High-Dimensional Sequences with LSTM-RTRBM: Application to Polyphonic Music Generation. In IJCAI (pp. 4138-4139).Lyu, Q., Wu, Z., & Zhu, J. (2015, October). Polyphonic music modelling with LSTM-RTRBM. In Proceedings of the 23rd ACM international conference on Multimedia (pp. 991-994). ACM.Salakhutdinov, R., & Hinton, G. E. (2009). Deep boltzmann machines. InInternational conference on artificial intelligence and statistics (pp. 448-455).Sutskever, I., Hinton, G. E., & Taylor, G. W. (2009). The recurrent temporal restricted boltzmann machine. In Advances in Neural Information Processing Systems (pp. 1601-1608).Auto-Encoders.Baccouche, M., Mamalet, F., Wolf, C., Garcia, C., & Baskurt, A. (2012, September). Spatio-Temporal Convolutional Sparse Auto-Encoder for Sequence Classification. In BMVC (pp. 1-12).Bengio, Y., Yao, L., Alain, G., & Vincent, P. (2013). Generalized denoising auto-encoders as generative models. In Advances in neural information processing systems (pp. 899-907).Chong, Y. S., & Tay, Y. H. (2017, June). Abnormal event detection in videos using spatiotemporal autoencoder. In International Symposium on Neural Networks (pp. 189-196). Springer, Cham.Kingma, D. P., & Welling, M. (2014). Auto-encoding variational Bayes, May 2014. 
References

The basic concepts, theories, and methods behind this library are described in the following books.

『「AIの民主化」時代の企業内研究開発: 深層学習の「実学」としての機能分析』(Japanese)
『AI vs. ノイズトレーダーとしての投資家たち: 「アルゴリズム戦争」時代の証券投資戦略』(Japanese)
『自然言語処理のバベル: 文書自動要約、文章生成AI、チャットボットの意味論』(Japanese)
『統計的機械学習の根源: 熱力学、量子力学、統計力学における天才物理学者たちの神学的な理念』(Japanese)

Specific references are the following papers and books.

Deep Boltzmann machines.
Ackley, D. H., Hinton, G. E., & Sejnowski, T. J. (1985). A learning algorithm for Boltzmann machines. Cognitive science, 9(1), 147-169.
Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv preprint arXiv:1206.6392.
Eslami, S. A., Heess, N., Williams, C. K., & Winn, J. (2014). The shape boltzmann machine: a strong model of object shape. International Journal of Computer Vision, 107(2), 155-176.
Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural computation, 14(8), 1771-1800.
Le Roux, N., & Bengio, Y. (2008). Representational power of restricted Boltzmann machines and deep belief networks. Neural computation, 20(6), 1631-1649.
Lyu, Q., Wu, Z., Zhu, J., & Meng, H. (2015, June). Modelling High-Dimensional Sequences with LSTM-RTRBM: Application to Polyphonic Music Generation. In IJCAI (pp. 4138-4139).
Lyu, Q., Wu, Z., & Zhu, J. (2015, October). Polyphonic music modelling with LSTM-RTRBM. In Proceedings of the 23rd ACM international conference on Multimedia (pp. 991-994). ACM.
Salakhutdinov, R., & Hinton, G. E. (2009). Deep boltzmann machines. In International conference on artificial intelligence and statistics (pp. 448-455).
Sutskever, I., Hinton, G. E., & Taylor, G. W. (2009). The recurrent temporal restricted boltzmann machine. In Advances in Neural Information Processing Systems (pp. 1601-1608).

Auto-Encoders.
Baccouche, M., Mamalet, F., Wolf, C., Garcia, C., & Baskurt, A. (2012, September). Spatio-Temporal Convolutional Sparse Auto-Encoder for Sequence Classification. In BMVC (pp. 1-12).
Bengio, Y., Yao, L., Alain, G., & Vincent, P. (2013). Generalized denoising auto-encoders as generative models. In Advances in neural information processing systems (pp. 899-907).
Chong, Y. S., & Tay, Y. H. (2017, June). Abnormal event detection in videos using spatiotemporal autoencoder. In International Symposium on Neural Networks (pp. 189-196). Springer, Cham.
Kingma, D. P., & Welling, M. (2014). Auto-encoding variational Bayes, May 2014. arXiv preprint arXiv:1312.6114.
Masci, J., Meier, U., Cireşan, D., & Schmidhuber, J. (2011, June). Stacked convolutional auto-encoders for hierarchical feature extraction. In International Conference on Artificial Neural Networks (pp. 52-59). Springer, Berlin, Heidelberg.
Patraucean, V., Handa, A., & Cipolla, R. (2015). Spatio-temporal video autoencoder with differentiable memory. arXiv preprint arXiv:1511.06309.
Rifai, S., Vincent, P., Muller, X., Glorot, X., & Bengio, Y. (2011, June). Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on International Conference on Machine Learning (pp. 833-840). Omnipress.
Rifai, S., Mesnil, G., Vincent, P., Muller, X., Bengio, Y., Dauphin, Y., & Glorot, X. (2011, September). Higher order contractive auto-encoder. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 645-660). Springer, Berlin, Heidelberg.
Seung, H. S. (1998). Learning continuous attractors in recurrent networks. In Advances in neural information processing systems (pp. 654-660).
Zhao, J., Mathieu, M., & LeCun, Y. (2016). Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.

Encoder/Decoder schemes with an Attention mechanism.
Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
Xingjian, S. H. I., Chen, Z., Wang, H., Yeung, D. Y., Wong, W. K., & Woo, W. C. (2015). Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems (pp. 802-810).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).

Generative Adversarial Networks (GANs).
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).
Makhzani, A., Shlens, J., Jaitly, N., Goodfellow, I., & Frey, B. (2015). Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
Mirza, M., & Osindero, S. (2014). Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784.
Zhao, J., Mathieu, M., & LeCun, Y. (2016). Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126.

Unsupervised / Supervised pre-training.
Bengio, Y., Lamblin, P., Popovici, D., & Larochelle, H. (2007). Greedy layer-wise training of deep networks. In Advances in neural information processing systems (pp. 153-160).
Erhan, D., Bengio, Y., Courville, A., Manzagol, P. A., Vincent, P., & Bengio, S. (2010). Why does unsupervised pre-training help deep learning?. Journal of Machine Learning Research, 11(Feb), 625-660.

Semi-supervised learning.
Ghifary, M., Kleijn, W. B., Zhang, M., Balduzzi, D., & Li, W. (2016, October). Deep reconstruction-classification networks for unsupervised domain adaptation. In European Conference on Computer Vision (pp. 597-613). Springer, Cham.
Rasmus, A., Berglund, M., Honkala, M., Valpola, H., & Raiko, T. (2015). Semi-supervised learning with ladder networks. In Advances in neural information processing systems (pp. 3546-3554).
Valpola, H. (2015). From neural PCA to deep unsupervised learning. In Advances in Independent Component Analysis and Learning Machines (pp. 143-171). Academic Press.

Self-supervised learning.
Jing, L., & Tian, Y. (2020). Self-supervised visual feature learning with deep neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Xu, J., Xiao, L., & López, A. M. (2019). Self-supervised domain adaptation for computer vision tasks. IEEE Access, 7, 156694-156706.

Deep Reinforcement Learning.
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Egorov, M. (2016). Multi-agent deep reinforcement learning.
Gupta, J. K., Egorov, M., & Kochenderfer, M. (2017, May). Cooperative multi-agent control using deep reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems (pp. 66-83). Springer, Cham.
Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
Sainath, T. N., Vinyals, O., Senior, A., & Sak, H. (2015, April). Convolutional, long short-term memory, fully connected deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (pp. 4580-4584). IEEE.
Xingjian, S. H. I., Chen, Z., Wang, H., Yeung, D. Y., Wong, W. K., & Woo, W. C. (2015). Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems (pp. 802-810).
Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.

Attention model and Transformer model.
Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681-694.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., & Soricut, R. (2019). Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Miller, A., Fisch, A., Dodge, J., Karimi, A. H., Bordes, A., & Weston, J. (2016). Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126.
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving Language Understanding by Generative Pre-Training. OpenAI (URL: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.
Song, K., Tan, X., Qin, T., Lu, J., & Liu, T. Y. (2019). Mass: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., & Polosukhin, I. (2017). Attention is all you need. arXiv preprint arXiv:1706.03762.

Optimizations.
Bengio, Y., Boulanger-Lewandowski, N., & Pascanu, R. (2013, May). Advances in optimizing recurrent networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 8624-8628). IEEE.
Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul), 2121-2159.
Dozat, T. (2016). Incorporating nesterov momentum into adam. Workshop track - ICLR 2016.
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Algorithms, Arithmetic, Regularizations, and Representations learning.
Dumoulin, V., & Visin, F. (2016). A guide to convolution arithmetic for deep learning. arXiv preprint arXiv:1603.07285.
Erhan, D., Courville, A., & Bengio, Y. (2010). Understanding representations learned in deep architectures. Department d'Informatique et Recherche Operationnelle, University of Montreal, QC, Canada, Tech. Rep, 1355, 1.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning (adaptive computation and machine learning series). Adaptive Computation and Machine Learning series, 800.
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
Kamyshanska, H., & Memisevic, R. (2014). The potential energy of an autoencoder. IEEE transactions on pattern analysis and machine intelligence, 37(6), 1261-1273.
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4510-4520).
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 1929-1958.
Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.

Author: accel-brain
Author URI: https://accel-brain.co.jp/ https://accel-brain.com/
License: GNU General Public License v2.0
accelbyte-py-sdk
AccelByte Python SDKA software development kit (SDK) for interacting with AccelByte services written in Python.This SDK was generated from OpenAPI specification documents included in thespecdirectory.SetupThis SDK requires Python 3.9 to be installed.Install with PipInstall dependencies.pipinstallrequestshttpxwebsocketsPyYAMLPyJWT[crypto]mmh3bitarrayInstall from PyPIpipinstallaccelbyte-py-sdkor install from source.pipinstallgit+https://github.com/AccelByte/accelbyte-python-sdk.git@{VERSION}#egg=accelbyte_py_sdkReplace{VERSION}with a specific release version tag. When starting a new project, using the latest release version is recommended. For the list of available versions, seereleases.Special note only for Windows environmentIf you encounter errorpath too longwhen attempting to install the SDK. The steps to solve this are:Enable long paths in registry.Enable long paths in git.git config --global core.longpaths trueRestart the powershell window you used to take effect.Try installing SDK again.pip install git+https://github.com/AccelByte/accelbyte-python-sdk.git@{VERSION}#egg=accelbyte_py_sdkEnvironment VariablesThe following environment variables need to be set when usingEnvironmentConfigRepository(default).NameRequiredExampleAB_BASE_URLYeshttps://demo/accelbyte.ioAB_CLIENT_IDYesabcdef0123456789abcdef0123456789AB_CLIENT_SECRETYes, only if you use a privateAB_CLIENT_IDab#c,d)ef(ab#c,d)ef(ab#c,d)ef(abAB_NAMESPACEYes, the SDK will automatically fill up the{namespace}path parameter (overridable)accelbyteAB_APP_NAMENo, the SDK will automatically fill up theUser-Agentheader (overridable)MyAppAB_APP_VERSIONNo, the SDK will automatically fill up theUser-Agentheader (overridable)1.0.0UsageInitializingYou'll have to initialize the SDK using theinitialize()function.importaccelbyte_py_sdkif__name__=="__main__":accelbyte_py_sdk.initialize()# uses EnvironmentConfigRepository by default# which in turn uses '$AB_BASE_URL', '$AB_CLIENT_ID', '$AB_CLIENT_SECRET', '$AB_NAMESPACE'You could also pass in options like so:fromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.coreimportMyConfigRepositoryif__name__=="__main__":base_url=environ["MY_BASE_URL"]client_id=environ["MY_CLIENT_ID"]client_secret=environ["MY_CLIENT_SECRET"]namespace=environ["MY_NAMESPACE"]app_name=environ["MY_APP_NAME"]app_version=environ["MY_APP_VERSION"]my_config_repository=MyConfigRepository(base_url=base_url,client_id=client_id,client_secret=client_secret,namespace=namespace,app_name=app_name,app_version=app_version)options={"config":my_config_repository}accelbyte_py_sdk.initialize(options)# you could still set some of these options after initializing.# ex. accelbyte_py_sdk.core.set_config_repository(my_config_repository)Logging In and Logging OutLogin using Username and Passwordfromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_user,logoutif__name__=="__main__":accelbyte_py_sdk.initialize()username=environ["AB_USERNAME"]password=environ["AB_PASSWORD"]token,error=login_user(username,password)asserterrorisNone_,error=logout()asserterrorisNoneHerelogin_user(username, password)andlogout()are wrapper functions.You can also specify the scope you want forlogin_user(...). With thescopeparameter typed asOptional[Union[str, List[str]]]. 
By default the scope used iscommerce account social publishing analytics(a space seprated value string).login_user(username,password,scope="scopeA")# login_user(username, password, scope="scopeA scopeB") # space separated values# login_user(username, password, scope=["scopeA", "scopeB"])Login using OAuth Client (Public or Confidential)fromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_clientif__name__=="__main__":accelbyte_py_sdk.initialize()client_id=environ["AB_CLIENT_ID"]client_secret=environ["AB_CLIENT_SECRET"]token,error=login_client(client_id,client_secret)# passing '$AB_CLIENT_ID' and '$AB_CLIENT_SECRET' is same as:# token, error = login_client()asserterrorisnotNone:bulb: The use of a Public OAuth Client is highly discouraged for this use case. Please ensure that you both set the Client ID and Client Secret.Refreshing Tokens:bulb: Usinglogin_x(..., auto_refresh=True)automatically refreshes the token once the expiration draws near.# set the refresh rate for 'login_client'# 0.5 means refresh when we 50% of the expiration duration has passedres,err=login_client(client_id,client_secret,auto_refresh=True,refresh_rate=0.5)# set the refresh rate for 'login_user'# 0.5 means refresh when we 50% of the expiration duration has passedres,err=login_user(username,password,auto_refresh=True,refresh_rate=0.5)The auto refresh is only triggered when another request is fired. If you want to the refresh run automatically in the background. Use any of theLoginXTimerclasses.fromaccelbyte_py_sdk.services.authimportLoginClientTimer,LoginPlatformTimer,LoginUserTimer,RefreshLoginTimerres,err=login_user(username,password)iferrisnotNone:exit(1)# creates a threading.Timer-like object that calls login_user every 1800 seconds repeatedly 'inf' timesinterval=1800timer=LoginUserTimer(interval,username=username,password=password,repeats=-1,# <0: repeat 'inf' times | 0 or None: repeat 0 times | >0: repeat n timesautostart=True,)To manually refresh the token:fromaccelbyte_py_sdk.coreimportget_token_repositoryfromaccelbyte_py_sdk.services.authimportrefresh_logintoken_repo=get_token_repository()refresh_token=token_repo.get_refresh_token()token,error=refresh_login(refresh_token)asserterrorisNoneTo use on-demand token refresh, enable therefresh_if_possibleoption by setting it to True. This configuration involves checking for the presence of an existing token, verifying its expiration status. If the token has expired, the SDK will then examine whether a refresh token exists. 
If a refresh token is available, it will be utilized to obtain a new token.res,error=login_user(username,password,refresh_if_possible=True)Using multiple SDK instancesThe examples above demonstrates using just one instance of the Python SDK (the default which is also global), but you could also instantiate multiple instances of the SDK and use them at the same time.importaccelbyte_py_sdk.services.authasauth_serviceimportaccelbyte_py_sdk.api.iamasiam_serviceimportaccelbyte_py_sdk.api.iam.modelsasiam_modelsfromaccelbyte_py_sdkimportAccelByteSDKfromaccelbyte_py_sdk.coreimportEnvironmentConfigRepositoryfromaccelbyte_py_sdk.coreimportInMemoryTokenRepository# Create 3 instances of the SDKclient_sdk=AccelByteSDK()user_sdk1=AccelByteSDK()user_sdk2=AccelByteSDK()# Initialize the SDKsclient_sdk.initialize(options={"config":EnvironmentConfigRepository(),"token":InMemoryTokenRepository(),})user_sdk1.initialize(options={"config":EnvironmentConfigRepository(),"token":InMemoryTokenRepository(),})user_sdk2.initialize(options={"config":user_sdk1.get_config_repository(),# you could also share the config repo with User 1 SDK's"token":InMemoryTokenRepository(),# you could also do the same thing with token repos but that is not advisable.})# Login the SDKs_,error=auth_service.login_client(sdk=client_sdk)username1,password1=..._,error=auth_service.login_user(username1,password1,sdk=user_sdk1)username2,password2=..._,error=auth_service.login_user(username2,password2,sdk=user_sdk2)# Call an endpoint as User 1result1,error=iam_service.public_create_user_v4(body=iam_models.AccountCreateUserRequestV4.create_from_dict({...}),sdk=user_sdk1,)# Call an endpoint as User 2result2,error=iam_service.public_create_user_v4(body=iam_models.AccountCreateUserRequestV4.create_from_dict({...}),sdk=user_sdk2,)# Call an endpoint as the Admin IAM Clientresult,error=admin_update_user_v4(body=iam_models.ModelUserUpdateRequestV3.create_from_dict({...}),user_id=result1.user_id,sdk=client_sdk,)# Reset/Deintialize the SDKs after usingclient_sdk1.deintialize()client_sdk1.deintialize()client_sdk1.deintialize()Interacting with a Service EndpointExample AIn this example we will create a new user using thePOSTendpoint/iam/v3/public/namespaces/{namespace}/usersimportjsonimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_client# Import the wrapper 'public_create_user_v3'# to know which wrapper to use open the docs/<service-name>-index.md and# use the search function to find the wrapper namefromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3# This POST endpoint also requires a body of 'ModelUserCreateRequestV3'# so you will need to import that too, import it using this scheme:# from accelbyte_py_sdk.api.<service-name>.models import <model-name>fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateRequestV3fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateResponseV3defmain():# 1 Initialize the SDKaccelbyte_py_sdk.initialize()# 2 Login as a client (uses $AB_CLIENT_ID and $AB_CLIENT_SECRET)_,error=login_client()# 3 Create a user using the POST endpoint: /iam/v3/public/namespaces/{namespace}/users# * this endpoint requires:# - a 'body' (ModelUserCreateRequestV3)# - a 'namespace' (string)# 'namespace' here is unique because it can be omitted, omitting it will result in# the SDK to automatically fill it out with the value of '$AB_NAMESPACE'# * more details on this endpoint can be found in:# 
accelbyte_py_sdk/api/iam/operations/users/public_create_user_v3.pyresult,error=public_create_user_v3(body=ModelUserCreateRequestV3.create(auth_type="EMAILPASSWD",country="US",date_of_birth="2001-01-01",display_name="************",email_address="******@fakemail.com",password="******************",))# 4 Check for errorsiferror:exit(1)# 5 Do something with the resultprint(json.dumps(result.to_dict(),indent=2))# {# "authType": "EMAILPASSWD",# "country": "US",# "dateOfBirth": "2001-01-01T00:00:00Z",# "displayName": "************",# "emailAddress": "******@fakemail.com",# "namespace": "******",# "userId": "********************************"# }if__name__=="__main__":main():bulb: All wrapper functions follow the return value format ofresult, error.:bulb: You could also write your own wrapper functions by using the models and operations inaccelbyte_py_sdk.api.<service-name>andaccelbyte_py_sdk.api.<service-name>.modelsrespectively.:bulb: All wrapper functions have an asynchronous counterpart that ends with_async.Example A (async)To convertExample Aasynchronously the following steps are needed.Import the asyncio package.importasyncioConvert the main method intoasync.# def main():asyncdefmain():Change how themainfunction is invoked.if__name__=="__main__":# main()loop=asyncio.get_event_loop()loop.run_until_complete(main())UseHttpxHttpClient.# accelbyte_py_sdk.initialize()accelbyte_py_sdk.initialize(options={"http":"HttpxHttpClient"})Use theasyncversion of the wrapper by appending_async.# from accelbyte_py_sdk.api.iam import public_create_user_v3fromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3_asyncUse theasyncwrapper with theawaitkeyword.# result, error = public_create_user_v3(result,error=awaitpublic_create_user_v3_async(Here is the full code:importasyncioimportjsonimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_client# Import the wrapper 'public_create_user_v3_async'# to know which wrapper to use open the docs/<service-name>-index.md and# use the search function to find the wrapper namefromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3_async# This POST endpoint also requires a body of 'ModelUserCreateRequestV3'# so you will need to import that too, import it using this scheme:# from accelbyte_py_sdk.api.<service-name>.models import <model-name>fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateRequestV3fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateResponseV3asyncdefmain():# 1 Initialize the SDKaccelbyte_py_sdk.initialize(options={"http":"HttpxHttpClient"})# 2 Login as a client (uses $AB_CLIENT_ID and $AB_CLIENT_SECRET)_,error=login_client()# 3 Create a user using the POST endpoint: /iam/v3/public/namespaces/{namespace}/users# * this endpoint requires:# - a 'body' (ModelUserCreateRequestV3)# - a 'namespace' (string)# 'namespace' here is unique because it can be omitted, omitting it will result in# the SDK to automatically fill it out with the value of '$AB_NAMESPACE'# * more details on this endpoint can be found in:# accelbyte_py_sdk/api/iam/operations/users/public_create_user_v3.pyresult,error=awaitpublic_create_user_v3_async(body=ModelUserCreateRequestV3.create(auth_type="EMAILPASSWD",country="US",date_of_birth="2001-01-01",display_name="************",email_address="******@fakemail.com",password="******************",))# 4 Check for errorsiferror:exit(1)# 5 Do something with the resultprint(json.dumps(result.to_dict(),indent=2))# {# "authType": "EMAILPASSWD",# "country": "US",# "dateOfBirth": "2001-01-01T00:00:00Z",# "displayName": "************",# 
"emailAddress": "******@fakemail.com",# "namespace": "******",# "userId": "********************************"# }if__name__=="__main__":loop=asyncio.get_event_loop()loop.run_until_complete(main())Configuring HTTP RetryTo use theHTTP Retryfeature, set theHttpClient'sretry_policyandbackoff_policy.importaccelbyte_py_sdkfromaccelbyte_py_sdk.coreimportget_http_client# 1 Initialize the SDKaccelbyte_py_sdk.initialize()# 2 Get the HTTP Clienthttp_client=get_http_client()# 3 Configure the `retry_policy` and `backoff_policy`# 3a. Retry 3 times with 0.5 seconds delay in betweenfromaccelbyte_py_sdk.coreimportConstantHttpBackoffPolicyfromaccelbyte_py_sdk.coreimportMaxRetriesHttpRetryPolicyhttp_client.retry_policy=MaxRetriesHttpRetryPolicy(3)# 3b. Retry when total elapsed time is less than 15 seconds, with an exponential backoff duration.fromaccelbyte_py_sdk.coreimportExponentialHttpBackoffPolicyfromaccelbyte_py_sdk.coreimportMaxElapsedHttpRetryPolicyhttp_client.backoff_policy=ExponentialHttpBackoffPolicy(initial=1.0,multiplier=2.0)http_client.retry_policy=MaxElapsedHttpRetryPolicy(15)# 3c. Use custom retry and backoff policies.fromdatetimeimporttimedeltafromtypingimportOptionaldefmy_custom_retry_policy(request,response,/,*,retries:int=0,elapsed:Optional[timedelta]=None,**kwargs)->float:return"Retry-After"inresponse.headersandretries==1# Retry if the 'Retry-After' header exists and we are on the 1st retry (2nd attempt).defmy_custom_backoff_policy(request,response,/,*,retries:int=0,elapsed:Optional[timedelta]=None,**kwargs)->float:returnresponse.headers.get("Retry-After",1)# Use the value of the 'Retry-After' response header, default to 1.0s.http_client.backoff_policy=my_custom_backoff_policyhttp_client.retry_policy=my_custom_retry_policy# 3d. Combining multiple retry policies.fromaccelbyte_py_sdk.coreimportCompositeHttpRetryPolicyfromaccelbyte_py_sdk.coreimportMaxRetriesHttpRetryPolicyfromaccelbyte_py_sdk.coreimportStatusCodesHttpRetryPolicyhttp_client.retry_policy=CompositeHttpRetryPolicy(StatusCodesHttpRetryPolicy(401,(501,503)),# Retry when response status code is 401, 501 to 503 (501, 502, or 503) -- ANDMaxRetriesHttpRetryPolicy(3)# when number of retries is less than or equal to 3.)Validating TokensYou can useaccelbyte_py_sdk.token_validation.caching.CachingTokenValidatororaccelbyte_py_sdk.token_validation.iam.IAMTokenValidator.token_validator=CachingTokenValidator(sdk)# or IAMTokenValidator(sdk)# access_token = ...error=token_validator.validate_token(access_token)iferror:raiseerrorParsing TokensYou can useparse_access_tokenfromaccelbyte_py_sdk.services.auth.fromaccelbyte_py_sdk.services.authimportparse_access_token# access_token = ...claims,error=parse_access_token(access_token)iferror:exit(1)You can also do validation in the same call. 
By default, it usesCachingTokenValidator.fromaccelbyte_py_sdk.services.authimportparse_access_token# access_token = ...claims,error=parse_access_token(access_token,validate=True)iferror:exit(1)You can specify what kind (or which) of validator you want to use.fromaccelbyte_py_sdk.services.authimportparse_access_token# access_token = ...claims,error=parse_access_token(access_token,validator="iam")# or validator="caching"iferror:exit(1)fromaccelbyte_py_sdk.services.authimportparse_access_tokentoken_validator=CachingTokenValidator(sdk)# or IAMTokenValidator(sdk)# access_token = ...claims,error=parse_access_token(access_token,validator=token_validator)iferror:exit(1)Seetestsfor more usage.SamplesSample apps are available in thesamplesdirectoryDocumentationFor documentation about AccelByte services and SDK, seedocs.accelbyte.io:bulb: Check out the index files in thedocsdirectory if you are looking for a specific endpoint.MiscUtility FunctionsCheck if the SDK is initialized.importaccelbyte_py_sdkis_initialized=accelbyte_py_sdk.is_initialized()print(is_initialized)# FalseCreate a Basic Auth from a string tuple.importaccelbyte_py_sdkbasic_auth=accelbyte_py_sdk.core.create_basic_authentication("foo","bar")print(basic_auth)# Basic Zm9vOmJhcg==Gets the stored access token.importaccelbyte_py_sdkaccess_token,error=accelbyte_py_sdk.core.get_access_token()print(access_token)# ************************************GetAB_*environment configuration values.importaccelbyte_py_sdkbase_url,client_id,client_secret,namespace=accelbyte_py_sdk.core.get_env_config()print(f"{base_url},{client_id},{client_secret},{namespace}")# <$AB_BASE_URL>, <$AB_CLIENT_ID>, <$AB_CLIENT_SECRET>, <$AB_NAMESPACE>GetAB_*environment user credential values.importaccelbyte_py_sdkusername,password=accelbyte_py_sdk.core.get_env_user_credentials()print(f"{base_url}:{client_id}")# <$AB_USERNAME>: <$AB_PASSWORD>Set logger level and add logger handlers.importlogging# 1. The SDK has helper functions for logging.accelbyte_py_sdk.core.set_logger_level(logging.INFO)# 'accelbyte_py_sdk'accelbyte_py_sdk.core.set_logger_level(logging.INFO,"http")# 'accelbyte_py_sdk.http'accelbyte_py_sdk.core.set_logger_level(logging.INFO,"ws")# 'accelbyte_py_sdk.ws'# 2. You could also use this helper function for debugging.accelbyte_py_sdk.core.add_stream_handler_to_logger()# sends content of the 'accelbyte_py_sdk' logger to 'sys.stderr'.# 3. There is a helper function that helps you get started with log files.accelbyte_py_sdk.core.add_buffered_file_handler_to_logger(# flushes content of the 'accelbyte_py_sdk' logger to a file named 'sdk.log' every 10 logs.filename="/path/to/sdk.log",capacity=10,level=logging.INFO)accelbyte_py_sdk.core.add_buffered_file_handler_to_logger(# flushes content of the 'accelbyte_py_sdk.http' logger to a file named 'http.log' every 3 logs.filename="/path/to/http.log",capacity=3,level=logging.INFO,additional_scope="http")# 3.a. Or you could the same thing when initializing the SDK.accelbyte_py_sdk.initialize(options={"log_files":{"":"/path/to/sdk.log",# flushes content of the 'accelbyte_py_sdk' logger to a file named 'sdk.log' every 10 logs."http":{# flushes content of the 'accelbyte_py_sdk.http' logger to a file named 'http.log' every 3 logs."filename":"/path/to/http.log","capacity":3,"level":logging.INFO}}})# 4. 
By default logs from 'accelbyte_py_sdk.http' are stringified dictionaries, you can set your own formatter like so.defformat_request_response_as_yaml(data:dict)->str:returnf"---\n{yaml.safe_dump(data,sort_keys=False).rstrip()}\n..."http_client=accelbyte_py_sdk.core.get_http_client()http_client.request_log_formatter=format_request_response_as_yamlhttp_client.response_log_formatter=format_request_response_as_yamlIn-depth TopicsGenerated codeModelsEach definition in#/definitions/is turned into a Model.Example:# UserProfileInfoproperties:avatarLargeUrl:type:stringavatarSmallUrl:type:stringavatarUrl:type:stringcustomAttributes:additionalProperties:type:objecttype:objectdateOfBirth:format:datetype:stringx-nullable:truefirstName:type:stringlanguage:type:stringlastName:type:stringnamespace:type:stringstatus:enum:-ACTIVE-INACTIVEtype:stringtimeZone:type:stringuserId:type:stringzipCode:type:stringtype:object# accelbyte_py_sdk/api/basic/models/user_profile_info.pyclassUserProfileInfo(Model):"""User profile info (UserProfileInfo)Properties:avatar_large_url: (avatarLargeUrl) OPTIONAL stravatar_small_url: (avatarSmallUrl) OPTIONAL stravatar_url: (avatarUrl) OPTIONAL strcustom_attributes: (customAttributes) OPTIONAL Dict[str, Any]date_of_birth: (dateOfBirth) OPTIONAL strfirst_name: (firstName) OPTIONAL strlanguage: (language) OPTIONAL strlast_name: (lastName) OPTIONAL strnamespace: (namespace) OPTIONAL strstatus: (status) OPTIONAL Union[str, StatusEnum]time_zone: (timeZone) OPTIONAL struser_id: (userId) OPTIONAL strzip_code: (zipCode) OPTIONAL str"""# region fieldsavatar_large_url:str# OPTIONALavatar_small_url:str# OPTIONALavatar_url:str# OPTIONALcustom_attributes:Dict[str,Any]# OPTIONALdate_of_birth:str# OPTIONALfirst_name:str# OPTIONALlanguage:str# OPTIONALlast_name:str# OPTIONALnamespace:str# OPTIONALstatus:Union[str,StatusEnum]# OPTIONALtime_zone:str# OPTIONALuser_id:str# OPTIONALzip_code:str# OPTIONAL# endregion fieldsthere are also a number of utility functions generated with each model that should help in the ease of use.# accelbyte_py_sdk/api/basic/models/user_profile_info.py...defwith_user_id(self,value:str)->UserProfileInfo:self.user_id=valuereturnself# other with_x() methods toodefto_dict(self,include_empty:bool=False)->dict:result:dict={}...returnresult@classmethoddefcreate(cls,avatar_large_url:Optional[str]=None,avatar_small_url:Optional[str]=None,avatar_url:Optional[str]=None,custom_attributes:Optional[Dict[str,Any]]=None,date_of_birth:Optional[str]=None,first_name:Optional[str]=None,language:Optional[str]=None,last_name:Optional[str]=None,namespace:Optional[str]=None,status:Optional[Union[str,StatusEnum]]=None,time_zone:Optional[str]=None,user_id:Optional[str]=None,zip_code:Optional[str]=None,)->UserProfileInfo:instance=cls()...returninstance@classmethoddefcreate_from_dict(cls,dict_:dict,include_empty:bool=False)->UserProfileInfo:instance=cls()...returninstance@staticmethoddefget_field_info()->Dict[str,str]:return{"avatarLargeUrl":"avatar_large_url","avatarSmallUrl":"avatar_small_url","avatarUrl":"avatar_url","customAttributes":"custom_attributes","dateOfBirth":"date_of_birth","firstName":"first_name","language":"language","lastName":"last_name","namespace":"namespace","status":"status","timeZone":"time_zone","userId":"user_id","zipCode":"zip_code",}...OperationsEach path item in#/pathsis turned into an Operation.Example:# GET 
/basic/v1/public/namespaces/{namespace}/users/{userId}/profilesdescription:'Getuserprofile.&lt;br&gt;Otherdetailinfo:&lt;ul&gt;&lt;li&gt;&lt;i&gt;Requiredpermission&lt;/i&gt;:resource=&lt;b&gt;&#34;NAMESPACE:{namespace}:USER:{userId}:PROFILE&#34;&lt;/b&gt;,action=2&lt;b&gt;(READ)&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;i&gt;Actioncode&lt;/i&gt;:11403&lt;/li&gt;&lt;li&gt;&lt;i&gt;Returns&lt;/i&gt;:userprofile&lt;/li&gt;&lt;/ul&gt;'operationId:publicGetUserProfileInfoparameters:-description:namespace, only accept alphabet and numericin:pathname:namespacerequired:truetype:string-description:user's id, should follow UUID version 4 without hyphenin:pathname:userIdrequired:truetype:stringproduces:-application/jsonresponses:'200':description:Successful operationschema:$ref:'#/definitions/UserProfileInfo''400':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20002</td><td>validationerror</td></tr></table>schema:$ref:'#/definitions/ValidationErrorEntity''401':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20001</td><td>unauthorized</td></tr></table>schema:$ref:'#/definitions/ErrorEntity''403':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20013</td><td>insufficientpermission</td></tr></table>schema:$ref:'#/definitions/ErrorEntity''404':description:'<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>11440</td><td>Unableto{action}:Userprofilenotfoundinnamespace[{namespace}]</td></tr></table>'schema:$ref:'#/definitions/ErrorEntity'security:-authorization:[]-HasPermission:-NAMESPACE:{namespace}:USER:{userId}:PROFILE [READ]authorization:[]summary:Get user profiletags:-UserProfilex-authorization:action:'2'resource:NAMESPACE:{namespace}:USER:{userId}:PROFILEsame with the models there are also a number of utility functions generated with each operation that should help in the ease of use.# accelbyte_py_sdk/api/basic/operations/user_profile/get_user_profile_info.py# Copyright (c) 2021 AccelByte Inc. All Rights Reserved.# This is licensed software from AccelByte Inc, for limitations# and restrictions contact your company contract manager.## Code generated. 
DO NOT EDIT!# template file: accelbyte_gaming services_py_codegen# pylint: disable=duplicate-code# pylint: disable=line-too-long# pylint: disable=missing-function-docstring# pylint: disable=missing-module-docstring# pylint: disable=too-many-arguments# pylint: disable=too-many-branches# pylint: disable=too-many-instance-attributes# pylint: disable=too-many-lines# pylint: disable=too-many-locals# pylint: disable=too-many-public-methods# pylint: disable=too-many-return-statements# pylint: disable=too-many-statements# pylint: disable=unused-import# AccelByte Gaming Services Basic Service (1.36.1)from__future__importannotationsfromtypingimportAny,Dict,List,Optional,Tuple,Unionfrom.....coreimportOperationfrom.....coreimportHeaderStrfrom.....coreimportHttpResponsefrom...modelsimportErrorEntityfrom...modelsimportUserProfilePrivateInfofrom...modelsimportValidationErrorEntityclassGetUserProfileInfo(Operation):"""Get user profile (getUserProfileInfo)Get user profile.Other detail info:* Required permission : resource= "ADMIN:NAMESPACE:{namespace}:USER:{userId}:PROFILE" , action=2 (READ)* Returns : user profile* Action code : 11403Required Permission(s):- ADMIN:NAMESPACE:{namespace}:USER:{userId}:PROFILE [READ]Properties:url: /basic/v1/admin/namespaces/{namespace}/users/{userId}/profilesmethod: GETtags: ["UserProfile"]consumes: []produces: ["application/json"]securities: [BEARER_AUTH] or [BEARER_AUTH]namespace: (namespace) REQUIRED str in pathuser_id: (userId) REQUIRED str in pathResponses:200: OK - UserProfilePrivateInfo (successful operation)400: Bad Request - ValidationErrorEntity (20002: validation error)401: Unauthorized - ErrorEntity (20001: unauthorized)403: Forbidden - ErrorEntity (20013: insufficient permission)404: Not Found - ErrorEntity (11440: Unable to {action}: User profile not found in namespace [{namespace}])"""# region fields_url:str="/basic/v1/admin/namespaces/{namespace}/users/{userId}/profiles"_method:str="GET"_consumes:List[str]=[]_produces:List[str]=["application/json"]_securities:List[List[str]]=[["BEARER_AUTH"],["BEARER_AUTH"]]_location_query:str=Nonenamespace:str# REQUIRED in [path]user_id:str# REQUIRED in [path]# endregion fields# region properties@propertydefurl(self)->str:returnself._url@propertydefmethod(self)->str:returnself._method@propertydefconsumes(self)->List[str]:returnself._consumes@propertydefproduces(self)->List[str]:returnself._produces@propertydefsecurities(self)->List[List[str]]:returnself._securities@propertydeflocation_query(self)->str:returnself._location_query# endregion properties# region get methods# endregion get methods# region get_x_params methodsdefget_all_params(self)->dict:return{"path":self.get_path_params(),}defget_path_params(self)->dict:result={}ifhasattr(self,"namespace"):result["namespace"]=self.namespaceifhasattr(self,"user_id"):result["userId"]=self.user_idreturnresult# endregion get_x_params methods# region is/has methods# endregion is/has methods# region with_x methodsdefwith_namespace(self,value:str)->GetUserProfileInfo:self.namespace=valuereturnselfdefwith_user_id(self,value:str)->GetUserProfileInfo:self.user_id=valuereturnself# endregion with_x methods# region to methodsdefto_dict(self,include_empty:bool=False)->dict:result:dict={}ifhasattr(self,"namespace")andself.namespace:result["namespace"]=str(self.namespace)elifinclude_empty:result["namespace"]=""ifhasattr(self,"user_id")andself.user_id:result["userId"]=str(self.user_id)elifinclude_empty:result["userId"]=""returnresult# endregion to methods# region response methods# noinspection 
PyMethodMayBeStaticdefparse_response(self,code:int,content_type:str,content:Any)->Tuple[Union[None,UserProfilePrivateInfo],Union[None,ErrorEntity,HttpResponse,ValidationErrorEntity]]:"""Parse the given response.200: OK - UserProfilePrivateInfo (successful operation)400: Bad Request - ValidationErrorEntity (20002: validation error)401: Unauthorized - ErrorEntity (20001: unauthorized)403: Forbidden - ErrorEntity (20013: insufficient permission)404: Not Found - ErrorEntity (11440: Unable to {action}: User profile not found in namespace [{namespace}])---: HttpResponse (Undocumented Response)---: HttpResponse (Unexpected Content-Type Error)---: HttpResponse (Unhandled Error)"""pre_processed_response,error=self.pre_process_response(code=code,content_type=content_type,content=content)iferrorisnotNone:returnNone,Noneiferror.is_no_content()elseerrorcode,content_type,content=pre_processed_responseifcode==200:returnUserProfilePrivateInfo.create_from_dict(content),Noneifcode==400:returnNone,ValidationErrorEntity.create_from_dict(content)ifcode==401:returnNone,ErrorEntity.create_from_dict(content)ifcode==403:returnNone,ErrorEntity.create_from_dict(content)ifcode==404:returnNone,ErrorEntity.create_from_dict(content)returnself.handle_undocumented_response(code=code,content_type=content_type,content=content)# endregion response methods# region static methods@classmethoddefcreate(cls,namespace:str,user_id:str,)->GetUserProfileInfo:instance=cls()instance.namespace=namespaceinstance.user_id=user_idreturninstance@classmethoddefcreate_from_dict(cls,dict_:dict,include_empty:bool=False)->GetUserProfileInfo:instance=cls()if"namespace"indict_anddict_["namespace"]isnotNone:instance.namespace=str(dict_["namespace"])elifinclude_empty:instance.namespace=""if"userId"indict_anddict_["userId"]isnotNone:instance.user_id=str(dict_["userId"])elifinclude_empty:instance.user_id=""returninstance@staticmethoddefget_field_info()->Dict[str,str]:return{"namespace":"namespace","userId":"user_id",}@staticmethoddefget_required_map()->Dict[str,bool]:return{"namespace":True,"userId":True,}# endregion static methodsCreating:bulb: there are 4 ways to create an instance of these models and operations.# 1. using the python __init__() function then setting the parameters manually:model=ModelName()model.param_a="foo"model.param_b="bar"# 2. using the python __init__() function together with the 'with_x' methods:# # the 'with_x' functions are type annotated and will show warnings if a wrong type is passed.model=ModelName()\.with_param_a("foo")\.with_param_b("bar")# 3. using the ModelName.create(..) class method:# # parameters here are also type annotated and will throw a TypeError if a required field was not filled out.model=ModelName.create(param_a="foo",param_b="bar",)# 4. using the ModelName.create_from_dict(..) class method:# # this method also has a 'include_empty' option that would get ignore values that evaluate to False, None, or len() == 0.model_params={"param_a":"foo","param_b":"bar","param_c":False,"param_d":None,"param_e":[],"param_f":{},}model=ModelName.create_from_dict(model_params)# all of these apply to all operations too.WrappersTo improve ergonomics the code generator also generates wrappers around the operations. The purpose of these wrappers is to automatically fill up parameters that the SDK already knows. (e.g. 
namespace, client_id, access_token, etc.)They are located ataccelbyte_py_sdk.api.<service-name>.wrappersbut can be accessed like so:from accelbyte_py_sdk.api.<service-name> import <wrapper-name>importaccelbyte_py_sdkfromaccelbyte_py_sdk.api.iamimporttoken_grant_v3if__name__=="__main__":accelbyte_py_sdk.initialize()token,error=token_grant_v3(grant_type="client_credentials")asserterrorisnotNoneThe wrapper functiontoken_grant_v3is a wrapper for theTokenGrantV3operation. It automatically passes in the information needed like the Basic Auth Headers. The values are gotten from the currentConfigRepository.continuing from the previous examples (GetUserProfileInfo), its wrapper would be:# accelbyte_py_sdk/api/basic/wrappers/_user_profile.pyfromtypingimportAny,Dict,List,Optional,Tuple,Unionfrom....coreimportget_namespaceasget_services_namespacefrom....coreimportrun_requestfrom....coreimportsame_doc_asfrom..operations.user_profileimportGetUserProfileInfo@same_doc_as(GetUserProfileInfo)defget_user_profile_info(user_id:str,namespace:Optional[str]=None):ifnamespaceisNone:namespace,error=get_services_namespace()iferror:returnNone,errorrequest=GetUserProfileInfo.create(user_id=user_id,namespace=namespace,)returnrun_request(request)this wrapper function automatically fills up the required path parameternamespace.now to use it only theuser_idis now required.importaccelbyte_py_sdkfromaccelbyte_py_sdk.api.basicimportget_user_profile_infoif__name__=="__main__":accelbyte_py_sdk.initialize()user_profile_info,error=get_user_profile_info(user_id="lorem")asserterrorisnotNoneprint(f"Hello there{user_profile_info.first_name}!")
accelbyte-py-sdk-all
This project is still under development.AccelByte Modular Python SDKA software development kit (SDK) for interacting with AccelByte services written in Python.This SDK was generated from OpenAPI specification documents included in thespecdirectory.SetupThis SDK requires Python 3.9 to be installed.Install with PipInstall the core package from PyPIpipinstallaccelbyte-py-sdk-coreand then install the service you needpipinstallaccelbyte-py-sdk-service-achievement pipinstallaccelbyte-py-sdk-service-ams pipinstallaccelbyte-py-sdk-service-basic pipinstallaccelbyte-py-sdk-service-cloudsave pipinstallaccelbyte-py-sdk-service-dslogmanager pipinstallaccelbyte-py-sdk-service-dsmc pipinstallaccelbyte-py-sdk-service-eventlog pipinstallaccelbyte-py-sdk-service-gametelemetry pipinstallaccelbyte-py-sdk-service-gdpr pipinstallaccelbyte-py-sdk-service-iam pipinstallaccelbyte-py-sdk-service-leaderboard pipinstallaccelbyte-py-sdk-service-legal pipinstallaccelbyte-py-sdk-service-lobby pipinstallaccelbyte-py-sdk-service-match2 pipinstallaccelbyte-py-sdk-service-matchmaking pipinstallaccelbyte-py-sdk-service-platform pipinstallaccelbyte-py-sdk-service-qosm pipinstallaccelbyte-py-sdk-service-reporting pipinstallaccelbyte-py-sdk-service-seasonpass pipinstallaccelbyte-py-sdk-service-session pipinstallaccelbyte-py-sdk-service-sessionbrowser pipinstallaccelbyte-py-sdk-service-social pipinstallaccelbyte-py-sdk-service-ugcand then install any feature you wantpipinstallaccelbyte-py-sdk-feat-auth pipinstallaccelbyte-py-sdk-feat-token-validationor install everythingpipinstallaccelbyte-py-sdk-allEnvironment VariablesThe following environment variables need to be set when usingEnvironmentConfigRepository(default).NameRequiredExampleAB_BASE_URLYeshttps://demo/accelbyte.ioAB_CLIENT_IDYesabcdef0123456789abcdef0123456789AB_CLIENT_SECRETYes, only if you use a privateAB_CLIENT_IDab#c,d)ef(ab#c,d)ef(ab#c,d)ef(abAB_NAMESPACEYes, the SDK will automatically fill up the{namespace}path parameter (overridable)accelbyteAB_APP_NAMENo, the SDK will automatically fill up theUser-Agentheader (overridable)MyAppAB_APP_VERSIONNo, the SDK will automatically fill up theUser-Agentheader (overridable)1.0.0UsageInitializingYou'll have to initialize the SDK using theinitialize()function.importaccelbyte_py_sdkif__name__=="__main__":accelbyte_py_sdk.initialize()# uses EnvironmentConfigRepository by default# which in turn uses '$AB_BASE_URL', '$AB_CLIENT_ID', '$AB_CLIENT_SECRET', '$AB_NAMESPACE'You could also pass in options like so:fromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.coreimportMyConfigRepositoryif__name__=="__main__":base_url=environ["MY_BASE_URL"]client_id=environ["MY_CLIENT_ID"]client_secret=environ["MY_CLIENT_SECRET"]namespace=environ["MY_NAMESPACE"]app_name=environ["MY_APP_NAME"]app_version=environ["MY_APP_VERSION"]my_config_repository=MyConfigRepository(base_url=base_url,client_id=client_id,client_secret=client_secret,namespace=namespace,app_name=app_name,app_version=app_version)options={"config":my_config_repository}accelbyte_py_sdk.initialize(options)# you could still set some of these options after initializing.# ex. 
accelbyte_py_sdk.core.set_config_repository(my_config_repository)Logging In and Logging OutLogin using Username and Passwordfromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_user,logoutif__name__=="__main__":accelbyte_py_sdk.initialize()username=environ["AB_USERNAME"]password=environ["AB_PASSWORD"]token,error=login_user(username,password)asserterrorisNone_,error=logout()asserterrorisNoneHerelogin_user(username, password)andlogout()are wrapper functions.You can also specify the scope you want forlogin_user(...). With thescopeparameter typed asOptional[Union[str, List[str]]]. By default the scope used iscommerce account social publishing analytics(a space seprated value string).login_user(username,password,scope="scopeA")# login_user(username, password, scope="scopeA scopeB") # space separated values# login_user(username, password, scope=["scopeA", "scopeB"])Login using OAuth Client (Public or Confidential)fromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_clientif__name__=="__main__":accelbyte_py_sdk.initialize()client_id=environ["AB_CLIENT_ID"]client_secret=environ["AB_CLIENT_SECRET"]token,error=login_client(client_id,client_secret)# passing '$AB_CLIENT_ID' and '$AB_CLIENT_SECRET' is same as:# token, error = login_client()asserterrorisnotNone:bulb: The use of a Public OAuth Client is highly discouraged for this use case. Please ensure that you both set the Client ID and Client Secret.Refreshing Tokens:bulb: Usinglogin_x(..., auto_refresh=True)automatically refreshes the token once the expiration draws near.# set the refresh rate for 'login_client'# 0.5 means refresh when we 50% of the expiration duration has passedres,err=login_client(client_id,client_secret,auto_refresh=True,refresh_rate=0.5)# set the refresh rate for 'login_user'# 0.5 means refresh when we 50% of the expiration duration has passedres,err=login_user(username,password,auto_refresh=True,refresh_rate=0.5)The auto refresh is only triggered when another request is fired. If you want to the refresh run automatically in the background. Use any of theLoginXTimerclasses.fromaccelbyte_py_sdk.services.authimportLoginClientTimer,LoginPlatformTimer,LoginUserTimer,RefreshLoginTimerres,err=login_user(username,password)iferrisnotNone:exit(1)# creates a threading.Timer-like object that calls login_user every 1800 seconds repeatedly 'inf' timesinterval=1800timer=LoginUserTimer(interval,username=username,password=password,repeats=-1,# <0: repeat 'inf' times | 0 or None: repeat 0 times | >0: repeat n timesautostart=True,)To manually refresh the token:fromaccelbyte_py_sdk.coreimportget_token_repositoryfromaccelbyte_py_sdk.services.authimportrefresh_logintoken_repo=get_token_repository()refresh_token=token_repo.get_refresh_token()token,error=refresh_login(refresh_token)asserterrorisNoneTo use on-demand token refresh, enable therefresh_if_possibleoption by setting it to True. This configuration involves checking for the presence of an existing token, verifying its expiration status. If the token has expired, the SDK will then examine whether a refresh token exists. 
If a refresh token is available, it will be utilized to obtain a new token.res,error=login_user(username,password,refresh_if_possible=True)Using multiple SDK instancesThe examples above demonstrates using just one instance of the Modular Python SDK (the default which is also global), but you could also instantiate multiple instances of the SDK and use them at the same time.importaccelbyte_py_sdk.services.authasauth_serviceimportaccelbyte_py_sdk.api.iamasiam_serviceimportaccelbyte_py_sdk.api.iam.modelsasiam_modelsfromaccelbyte_py_sdkimportAccelByteSDKfromaccelbyte_py_sdk.coreimportEnvironmentConfigRepositoryfromaccelbyte_py_sdk.coreimportInMemoryTokenRepository# Create 3 instances of the SDKclient_sdk=AccelByteSDK()user_sdk1=AccelByteSDK()user_sdk2=AccelByteSDK()# Initialize the SDKsclient_sdk.initialize(options={"config":EnvironmentConfigRepository(),"token":InMemoryTokenRepository(),})user_sdk1.initialize(options={"config":EnvironmentConfigRepository(),"token":InMemoryTokenRepository(),})user_sdk2.initialize(options={"config":user_sdk1.get_config_repository(),# you could also share the config repo with User 1 SDK's"token":InMemoryTokenRepository(),# you could also do the same thing with token repos but that is not advisable.})# Login the SDKs_,error=auth_service.login_client(sdk=client_sdk)username1,password1=..._,error=auth_service.login_user(username1,password1,sdk=user_sdk1)username2,password2=..._,error=auth_service.login_user(username2,password2,sdk=user_sdk2)# Call an endpoint as User 1result1,error=iam_service.public_create_user_v4(body=iam_models.AccountCreateUserRequestV4.create_from_dict({...}),sdk=user_sdk1,)# Call an endpoint as User 2result2,error=iam_service.public_create_user_v4(body=iam_models.AccountCreateUserRequestV4.create_from_dict({...}),sdk=user_sdk2,)# Call an endpoint as the Admin IAM Clientresult,error=admin_update_user_v4(body=iam_models.ModelUserUpdateRequestV3.create_from_dict({...}),user_id=result1.user_id,sdk=client_sdk,)# Reset/Deintialize the SDKs after usingclient_sdk1.deintialize()client_sdk1.deintialize()client_sdk1.deintialize()Interacting with a Service EndpointExample AIn this example we will create a new user using thePOSTendpoint/iam/v3/public/namespaces/{namespace}/usersimportjsonimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_client# Import the wrapper 'public_create_user_v3'# to know which wrapper to use open the docs/<service-name>-index.md and# use the search function to find the wrapper namefromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3# This POST endpoint also requires a body of 'ModelUserCreateRequestV3'# so you will need to import that too, import it using this scheme:# from accelbyte_py_sdk.api.<service-name>.models import <model-name>fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateRequestV3fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateResponseV3defmain():# 1 Initialize the SDKaccelbyte_py_sdk.initialize()# 2 Login as a client (uses $AB_CLIENT_ID and $AB_CLIENT_SECRET)_,error=login_client()# 3 Create a user using the POST endpoint: /iam/v3/public/namespaces/{namespace}/users# * this endpoint requires:# - a 'body' (ModelUserCreateRequestV3)# - a 'namespace' (string)# 'namespace' here is unique because it can be omitted, omitting it will result in# the SDK to automatically fill it out with the value of '$AB_NAMESPACE'# * more details on this endpoint can be found in:# 
accelbyte_py_sdk/api/iam/operations/users/public_create_user_v3.pyresult,error=public_create_user_v3(body=ModelUserCreateRequestV3.create(auth_type="EMAILPASSWD",country="US",date_of_birth="2001-01-01",display_name="************",email_address="******@fakemail.com",password="******************",))# 4 Check for errorsiferror:exit(1)# 5 Do something with the resultprint(json.dumps(result.to_dict(),indent=2))# {# "authType": "EMAILPASSWD",# "country": "US",# "dateOfBirth": "2001-01-01T00:00:00Z",# "displayName": "************",# "emailAddress": "******@fakemail.com",# "namespace": "******",# "userId": "********************************"# }if__name__=="__main__":main():bulb: All wrapper functions follow the return value format ofresult, error.:bulb: You could also write your own wrapper functions by using the models and operations inaccelbyte_py_sdk.api.<service-name>andaccelbyte_py_sdk.api.<service-name>.modelsrespectively.:bulb: All wrapper functions have an asynchronous counterpart that ends with_async.Example A (async)To convertExample Aasynchronously the following steps are needed.Import the asyncio package.importasyncioConvert the main method intoasync.# def main():asyncdefmain():Change how themainfunction is invoked.if__name__=="__main__":# main()loop=asyncio.get_event_loop()loop.run_until_complete(main())UseHttpxHttpClient.# accelbyte_py_sdk.initialize()accelbyte_py_sdk.initialize(options={"http":"HttpxHttpClient"})Use theasyncversion of the wrapper by appending_async.# from accelbyte_py_sdk.api.iam import public_create_user_v3fromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3_asyncUse theasyncwrapper with theawaitkeyword.# result, error = public_create_user_v3(result,error=awaitpublic_create_user_v3_async(Here is the full code:importasyncioimportjsonimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_client# Import the wrapper 'public_create_user_v3_async'# to know which wrapper to use open the docs/<service-name>-index.md and# use the search function to find the wrapper namefromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3_async# This POST endpoint also requires a body of 'ModelUserCreateRequestV3'# so you will need to import that too, import it using this scheme:# from accelbyte_py_sdk.api.<service-name>.models import <model-name>fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateRequestV3fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateResponseV3asyncdefmain():# 1 Initialize the SDKaccelbyte_py_sdk.initialize(options={"http":"HttpxHttpClient"})# 2 Login as a client (uses $AB_CLIENT_ID and $AB_CLIENT_SECRET)_,error=login_client()# 3 Create a user using the POST endpoint: /iam/v3/public/namespaces/{namespace}/users# * this endpoint requires:# - a 'body' (ModelUserCreateRequestV3)# - a 'namespace' (string)# 'namespace' here is unique because it can be omitted, omitting it will result in# the SDK to automatically fill it out with the value of '$AB_NAMESPACE'# * more details on this endpoint can be found in:# accelbyte_py_sdk/api/iam/operations/users/public_create_user_v3.pyresult,error=awaitpublic_create_user_v3_async(body=ModelUserCreateRequestV3.create(auth_type="EMAILPASSWD",country="US",date_of_birth="2001-01-01",display_name="************",email_address="******@fakemail.com",password="******************",))# 4 Check for errorsiferror:exit(1)# 5 Do something with the resultprint(json.dumps(result.to_dict(),indent=2))# {# "authType": "EMAILPASSWD",# "country": "US",# "dateOfBirth": "2001-01-01T00:00:00Z",# "displayName": "************",# 
"emailAddress": "******@fakemail.com",# "namespace": "******",# "userId": "********************************"# }if__name__=="__main__":loop=asyncio.get_event_loop()loop.run_until_complete(main())Configuring HTTP RetryTo use theHTTP Retryfeature, set theHttpClient'sretry_policyandbackoff_policy.importaccelbyte_py_sdkfromaccelbyte_py_sdk.coreimportget_http_client# 1 Initialize the SDKaccelbyte_py_sdk.initialize()# 2 Get the HTTP Clienthttp_client=get_http_client()# 3 Configure the `retry_policy` and `backoff_policy`# 3a. Retry 3 times with 0.5 seconds delay in betweenfromaccelbyte_py_sdk.coreimportConstantHttpBackoffPolicyfromaccelbyte_py_sdk.coreimportMaxRetriesHttpRetryPolicyhttp_client.retry_policy=MaxRetriesHttpRetryPolicy(3)# 3b. Retry when total elapsed time is less than 15 seconds, with an exponential backoff duration.fromaccelbyte_py_sdk.coreimportExponentialHttpBackoffPolicyfromaccelbyte_py_sdk.coreimportMaxElapsedHttpRetryPolicyhttp_client.backoff_policy=ExponentialHttpBackoffPolicy(initial=1.0,multiplier=2.0)http_client.retry_policy=MaxElapsedHttpRetryPolicy(15)# 3c. Use custom retry and backoff policies.fromdatetimeimporttimedeltafromtypingimportOptionaldefmy_custom_retry_policy(request,response,/,*,retries:int=0,elapsed:Optional[timedelta]=None,**kwargs)->float:return"Retry-After"inresponse.headersandretries==1# Retry if the 'Retry-After' header exists and we are on the 1st retry (2nd attempt).defmy_custom_backoff_policy(request,response,/,*,retries:int=0,elapsed:Optional[timedelta]=None,**kwargs)->float:returnresponse.headers.get("Retry-After",1)# Use the value of the 'Retry-After' response header, default to 1.0s.http_client.backoff_policy=my_custom_backoff_policyhttp_client.retry_policy=my_custom_retry_policy# 3d. Combining multiple retry policies.fromaccelbyte_py_sdk.coreimportCompositeHttpRetryPolicyfromaccelbyte_py_sdk.coreimportMaxRetriesHttpRetryPolicyfromaccelbyte_py_sdk.coreimportStatusCodesHttpRetryPolicyhttp_client.retry_policy=CompositeHttpRetryPolicy(StatusCodesHttpRetryPolicy(401,(501,503)),# Retry when response status code is 401, 501 to 503 (501, 502, or 503) -- ANDMaxRetriesHttpRetryPolicy(3)# when number of retries is less than or equal to 3.)Validating TokensYou can useaccelbyte_py_sdk.token_validation.caching.CachingTokenValidatororaccelbyte_py_sdk.token_validation.iam.IAMTokenValidator.token_validator=CachingTokenValidator(sdk)# or IAMTokenValidator(sdk)# access_token = ...error=token_validator.validate_token(access_token)iferror:raiseerrorSeetestsfor more usage.SamplesSample apps are available in thesamplesdirectoryDocumentationFor documentation about AccelByte services and SDK, seedocs.accelbyte.io:bulb: Check out the index files in thedocsdirectory if you are looking for a specific endpoint.MiscUtility FunctionsCheck if the SDK is initialized.importaccelbyte_py_sdkis_initialized=accelbyte_py_sdk.is_initialized()print(is_initialized)# FalseCreate a Basic Auth from a string tuple.importaccelbyte_py_sdkbasic_auth=accelbyte_py_sdk.core.create_basic_authentication("foo","bar")print(basic_auth)# Basic Zm9vOmJhcg==Gets the stored access token.importaccelbyte_py_sdkaccess_token,error=accelbyte_py_sdk.core.get_access_token()print(access_token)# ************************************GetAB_*environment configuration values.importaccelbyte_py_sdkbase_url,client_id,client_secret,namespace=accelbyte_py_sdk.core.get_env_config()print(f"{base_url},{client_id},{client_secret},{namespace}")# <$AB_BASE_URL>, <$AB_CLIENT_ID>, <$AB_CLIENT_SECRET>, 
<$AB_NAMESPACE>GetAB_*environment user credential values.importaccelbyte_py_sdkusername,password=accelbyte_py_sdk.core.get_env_user_credentials()print(f"{base_url}:{client_id}")# <$AB_USERNAME>: <$AB_PASSWORD>Set logger level and add logger handlers.importlogging# 1. The SDK has helper functions for logging.accelbyte_py_sdk.core.set_logger_level(logging.INFO)# 'accelbyte_py_sdk'accelbyte_py_sdk.core.set_logger_level(logging.INFO,"http")# 'accelbyte_py_sdk.http'accelbyte_py_sdk.core.set_logger_level(logging.INFO,"ws")# 'accelbyte_py_sdk.ws'# 2. You could also use this helper function for debugging.accelbyte_py_sdk.core.add_stream_handler_to_logger()# sends content of the 'accelbyte_py_sdk' logger to 'sys.stderr'.# 3. There is a helper function that helps you get started with log files.accelbyte_py_sdk.core.add_buffered_file_handler_to_logger(# flushes content of the 'accelbyte_py_sdk' logger to a file named 'sdk.log' every 10 logs.filename="/path/to/sdk.log",capacity=10,level=logging.INFO)accelbyte_py_sdk.core.add_buffered_file_handler_to_logger(# flushes content of the 'accelbyte_py_sdk.http' logger to a file named 'http.log' every 3 logs.filename="/path/to/http.log",capacity=3,level=logging.INFO,additional_scope="http")# 3.a. Or you could the same thing when initializing the SDK.accelbyte_py_sdk.initialize(options={"log_files":{"":"/path/to/sdk.log",# flushes content of the 'accelbyte_py_sdk' logger to a file named 'sdk.log' every 10 logs."http":{# flushes content of the 'accelbyte_py_sdk.http' logger to a file named 'http.log' every 3 logs."filename":"/path/to/http.log","capacity":3,"level":logging.INFO}}})# 4. By default logs from 'accelbyte_py_sdk.http' are stringified dictionaries, you can set your own formatter like so.defformat_request_response_as_yaml(data:dict)->str:returnf"---\n{yaml.safe_dump(data,sort_keys=False).rstrip()}\n..."http_client=accelbyte_py_sdk.core.get_http_client()http_client.request_log_formatter=format_request_response_as_yamlhttp_client.response_log_formatter=format_request_response_as_yamlIn-depth TopicsGenerated codeModelsEach definition in#/definitions/is turned into a Model.Example:# UserProfileInfoproperties:avatarLargeUrl:type:stringavatarSmallUrl:type:stringavatarUrl:type:stringcustomAttributes:additionalProperties:type:objecttype:objectdateOfBirth:format:datetype:stringx-nullable:truefirstName:type:stringlanguage:type:stringlastName:type:stringnamespace:type:stringstatus:enum:-ACTIVE-INACTIVEtype:stringtimeZone:type:stringuserId:type:stringzipCode:type:stringtype:object# accelbyte_py_sdk/api/basic/models/user_profile_info.pyclassUserProfileInfo(Model):"""User profile info (UserProfileInfo)Properties:avatar_large_url: (avatarLargeUrl) OPTIONAL stravatar_small_url: (avatarSmallUrl) OPTIONAL stravatar_url: (avatarUrl) OPTIONAL strcustom_attributes: (customAttributes) OPTIONAL Dict[str, Any]date_of_birth: (dateOfBirth) OPTIONAL strfirst_name: (firstName) OPTIONAL strlanguage: (language) OPTIONAL strlast_name: (lastName) OPTIONAL strnamespace: (namespace) OPTIONAL strstatus: (status) OPTIONAL Union[str, StatusEnum]time_zone: (timeZone) OPTIONAL struser_id: (userId) OPTIONAL strzip_code: (zipCode) OPTIONAL str"""# region fieldsavatar_large_url:str# OPTIONALavatar_small_url:str# OPTIONALavatar_url:str# OPTIONALcustom_attributes:Dict[str,Any]# OPTIONALdate_of_birth:str# OPTIONALfirst_name:str# OPTIONALlanguage:str# OPTIONALlast_name:str# OPTIONALnamespace:str# OPTIONALstatus:Union[str,StatusEnum]# OPTIONALtime_zone:str# OPTIONALuser_id:str# 
OPTIONALzip_code:str# OPTIONAL# endregion fieldsthere are also a number of utility functions generated with each model that should help in the ease of use.# accelbyte_py_sdk/api/basic/models/user_profile_info.py...defwith_user_id(self,value:str)->UserProfileInfo:self.user_id=valuereturnself# other with_x() methods toodefto_dict(self,include_empty:bool=False)->dict:result:dict={}...returnresult@classmethoddefcreate(cls,avatar_large_url:Optional[str]=None,avatar_small_url:Optional[str]=None,avatar_url:Optional[str]=None,custom_attributes:Optional[Dict[str,Any]]=None,date_of_birth:Optional[str]=None,first_name:Optional[str]=None,language:Optional[str]=None,last_name:Optional[str]=None,namespace:Optional[str]=None,status:Optional[Union[str,StatusEnum]]=None,time_zone:Optional[str]=None,user_id:Optional[str]=None,zip_code:Optional[str]=None,)->UserProfileInfo:instance=cls()...returninstance@classmethoddefcreate_from_dict(cls,dict_:dict,include_empty:bool=False)->UserProfileInfo:instance=cls()...returninstance@staticmethoddefget_field_info()->Dict[str,str]:return{"avatarLargeUrl":"avatar_large_url","avatarSmallUrl":"avatar_small_url","avatarUrl":"avatar_url","customAttributes":"custom_attributes","dateOfBirth":"date_of_birth","firstName":"first_name","language":"language","lastName":"last_name","namespace":"namespace","status":"status","timeZone":"time_zone","userId":"user_id","zipCode":"zip_code",}...OperationsEach path item in#/pathsis turned into an Operation.Example:# GET /basic/v1/public/namespaces/{namespace}/users/{userId}/profilesdescription:'Getuserprofile.&lt;br&gt;Otherdetailinfo:&lt;ul&gt;&lt;li&gt;&lt;i&gt;Requiredpermission&lt;/i&gt;:resource=&lt;b&gt;&#34;NAMESPACE:{namespace}:USER:{userId}:PROFILE&#34;&lt;/b&gt;,action=2&lt;b&gt;(READ)&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;i&gt;Actioncode&lt;/i&gt;:11403&lt;/li&gt;&lt;li&gt;&lt;i&gt;Returns&lt;/i&gt;:userprofile&lt;/li&gt;&lt;/ul&gt;'operationId:publicGetUserProfileInfoparameters:-description:namespace, only accept alphabet and numericin:pathname:namespacerequired:truetype:string-description:user's id, should follow UUID version 4 without hyphenin:pathname:userIdrequired:truetype:stringproduces:-application/jsonresponses:'200':description:Successful operationschema:$ref:'#/definitions/UserProfileInfo''400':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20002</td><td>validationerror</td></tr></table>schema:$ref:'#/definitions/ValidationErrorEntity''401':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20001</td><td>unauthorized</td></tr></table>schema:$ref:'#/definitions/ErrorEntity''403':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20013</td><td>insufficientpermission</td></tr></table>schema:$ref:'#/definitions/ErrorEntity''404':description:'<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>11440</td><td>Unableto{action}:Userprofilenotfoundinnamespace[{namespace}]</td></tr></table>'schema:$ref:'#/definitions/ErrorEntity'security:-authorization:[]-HasPermission:-NAMESPACE:{namespace}:USER:{userId}:PROFILE [READ]authorization:[]summary:Get user profiletags:-UserProfilex-authorization:action:'2'resource:NAMESPACE:{namespace}:USER:{userId}:PROFILEsame with the models there are also a number of utility functions generated with each operation that should help in the ease of use.# Copyright (c) 2021 AccelByte Inc. 
All Rights Reserved.# This is licensed software from AccelByte Inc, for limitations# and restrictions contact your company contract manager.## Code generated. DO NOT EDIT!# template file: operation.j2# pylint: disable=duplicate-code# pylint: disable=line-too-long# pylint: disable=missing-function-docstring# pylint: disable=missing-module-docstring# pylint: disable=too-many-arguments# pylint: disable=too-many-branches# pylint: disable=too-many-instance-attributes# pylint: disable=too-many-lines# pylint: disable=too-many-locals# pylint: disable=too-many-public-methods# pylint: disable=too-many-return-statements# pylint: disable=too-many-statements# pylint: disable=unused-import# AccelByte Gaming Services Basic Service (2.13.1)from__future__importannotationsfromtypingimportAny,Dict,List,Optional,Tuple,Unionfromaccelbyte_py_sdk.coreimportOperationfromaccelbyte_py_sdk.coreimportHeaderStrfromaccelbyte_py_sdk.coreimportHttpResponsefrom...modelsimportErrorEntityfrom...modelsimportUserProfileInfofrom...modelsimportValidationErrorEntityclassPublicGetUserProfileInfo(Operation):"""Get user profile (publicGetUserProfileInfo)Get user profile.Other detail info:* Required permission : resource= "NAMESPACE:{namespace}:USER:{userId}:PROFILE" , action=2 (READ)* Action code : 11403* Returns : user profileRequired Permission(s):- NAMESPACE:{namespace}:USER:{userId}:PROFILE [READ]Properties:url: /basic/v1/public/namespaces/{namespace}/users/{userId}/profilesmethod: GETtags: ["UserProfile"]consumes: []produces: ["application/json"]securities: [BEARER_AUTH] or [BEARER_AUTH]namespace: (namespace) REQUIRED str in pathuser_id: (userId) REQUIRED str in pathResponses:200: OK - UserProfileInfo (Successful operation)400: Bad Request - ValidationErrorEntity (20002: validation error)401: Unauthorized - ErrorEntity (20001: unauthorized)403: Forbidden - ErrorEntity (20013: insufficient permission)404: Not Found - ErrorEntity (11440: Unable to {action}: User profile not found in namespace [{namespace}])"""# region fields_url:str="/basic/v1/public/namespaces/{namespace}/users/{userId}/profiles"_method:str="GET"_consumes:List[str]=[]_produces:List[str]=["application/json"]_securities:List[List[str]]=[["BEARER_AUTH"],["BEARER_AUTH"]]_location_query:str=Nonenamespace:str# REQUIRED in [path]user_id:str# REQUIRED in [path]# endregion fields# region properties@propertydefurl(self)->str:returnself._url@propertydefmethod(self)->str:returnself._method@propertydefconsumes(self)->List[str]:returnself._consumes@propertydefproduces(self)->List[str]:returnself._produces@propertydefsecurities(self)->List[List[str]]:returnself._securities@propertydeflocation_query(self)->str:returnself._location_query# endregion properties# region get methods# endregion get methods# region get_x_params methodsdefget_all_params(self)->dict:return{"path":self.get_path_params(),}defget_path_params(self)->dict:result={}ifhasattr(self,"namespace"):result["namespace"]=self.namespaceifhasattr(self,"user_id"):result["userId"]=self.user_idreturnresult# endregion get_x_params methods# region is/has methods# endregion is/has methods# region with_x methodsdefwith_namespace(self,value:str)->PublicGetUserProfileInfo:self.namespace=valuereturnselfdefwith_user_id(self,value:str)->PublicGetUserProfileInfo:self.user_id=valuereturnself# endregion with_x methods# region to 
methodsdefto_dict(self,include_empty:bool=False)->dict:result:dict={}ifhasattr(self,"namespace")andself.namespace:result["namespace"]=str(self.namespace)elifinclude_empty:result["namespace"]=""ifhasattr(self,"user_id")andself.user_id:result["userId"]=str(self.user_id)elifinclude_empty:result["userId"]=""returnresult# endregion to methods# region response methods# noinspection PyMethodMayBeStaticdefparse_response(self,code:int,content_type:str,content:Any)->Tuple[Union[None,UserProfileInfo],Union[None,ErrorEntity,HttpResponse,ValidationErrorEntity],]:"""Parse the given response.200: OK - UserProfileInfo (Successful operation)400: Bad Request - ValidationErrorEntity (20002: validation error)401: Unauthorized - ErrorEntity (20001: unauthorized)403: Forbidden - ErrorEntity (20013: insufficient permission)404: Not Found - ErrorEntity (11440: Unable to {action}: User profile not found in namespace [{namespace}])---: HttpResponse (Undocumented Response)---: HttpResponse (Unexpected Content-Type Error)---: HttpResponse (Unhandled Error)"""pre_processed_response,error=self.pre_process_response(code=code,content_type=content_type,content=content)iferrorisnotNone:returnNone,Noneiferror.is_no_content()elseerrorcode,content_type,content=pre_processed_responseifcode==200:returnUserProfileInfo.create_from_dict(content),Noneifcode==400:returnNone,ValidationErrorEntity.create_from_dict(content)ifcode==401:returnNone,ErrorEntity.create_from_dict(content)ifcode==403:returnNone,ErrorEntity.create_from_dict(content)ifcode==404:returnNone,ErrorEntity.create_from_dict(content)returnself.handle_undocumented_response(code=code,content_type=content_type,content=content)# endregion response methods# region static methods@classmethoddefcreate(cls,namespace:str,user_id:str,**kwargs)->PublicGetUserProfileInfo:instance=cls()instance.namespace=namespaceinstance.user_id=user_idreturninstance@classmethoddefcreate_from_dict(cls,dict_:dict,include_empty:bool=False)->PublicGetUserProfileInfo:instance=cls()if"namespace"indict_anddict_["namespace"]isnotNone:instance.namespace=str(dict_["namespace"])elifinclude_empty:instance.namespace=""if"userId"indict_anddict_["userId"]isnotNone:instance.user_id=str(dict_["userId"])elifinclude_empty:instance.user_id=""returninstance@staticmethoddefget_field_info()->Dict[str,str]:return{"namespace":"namespace","userId":"user_id",}@staticmethoddefget_required_map()->Dict[str,bool]:return{"namespace":True,"userId":True,}# endregion static methodsCreating:bulb: there are 4 ways to create an instance of these models and operations.# 1. using the python __init__() function then setting the parameters manually:model=ModelName()model.param_a="foo"model.param_b="bar"# 2. using the python __init__() function together with the 'with_x' methods:# # the 'with_x' functions are type annotated and will show warnings if a wrong type is passed.model=ModelName()\.with_param_a("foo")\.with_param_b("bar")# 3. using the ModelName.create(..) class method:# # parameters here are also type annotated and will throw a TypeError if a required field was not filled out.model=ModelName.create(param_a="foo",param_b="bar",)# 4. using the ModelName.create_from_dict(..) 
class method:# # this method also has a 'include_empty' option that would get ignore values that evaluate to False, None, or len() == 0.model_params={"param_a":"foo","param_b":"bar","param_c":False,"param_d":None,"param_e":[],"param_f":{},}model=ModelName.create_from_dict(model_params)# all of these apply to all operations too.WrappersTo improve ergonomics the code generator also generates wrappers around the operations. The purpose of these wrappers is to automatically fill up parameters that the SDK already knows. (e.g. namespace, client_id, access_token, etc.)They are located ataccelbyte_py_sdk.api.<service-name>.wrappersbut can be accessed like so:from accelbyte_py_sdk.api.<service-name> import <wrapper-name>importaccelbyte_py_sdkfromaccelbyte_py_sdk.api.iamimporttoken_grant_v3if__name__=="__main__":accelbyte_py_sdk.initialize()token,error=token_grant_v3(grant_type="client_credentials")asserterrorisnotNoneThe wrapper functiontoken_grant_v3is a wrapper for theTokenGrantV3operation. It automatically passes in the information needed like the Basic Auth Headers. The values are gotten from the currentConfigRepository.continuing from the previous examples (GetUserProfileInfo), its wrapper would be:# accelbyte_py_sdk/api/basic/wrappers/_user_profile.pyfromtypingimportAny,Dict,List,Optional,Tuple,Unionfromaccelbyte_py_sdk.coreimportget_namespaceasget_services_namespacefromaccelbyte_py_sdk.coreimportrun_requestfromaccelbyte_py_sdk.coreimportsame_doc_asfrom..operations.user_profileimportPublicGetUserProfileInfo@same_doc_as(PublicGetUserProfileInfo)defpublic_get_user_profile_info(user_id:str,namespace:Optional[str]=None,x_additional_headers:Optional[Dict[str,str]]=None,**kwargs):ifnamespaceisNone:namespace,error=get_services_namespace()iferror:returnNone,errorrequest=PublicGetUserProfileInfo.create(user_id=user_id,namespace=namespace,)returnrun_request(request,additional_headers=x_additional_headers,**kwargs)this wrapper function automatically fills up the required path parameternamespace.now to use it only theuser_idis now required.importaccelbyte_py_sdkfromaccelbyte_py_sdk.api.basicimportpublic_get_user_profile_infoif__name__=="__main__":accelbyte_py_sdk.initialize()user_profile_info,error=public_get_user_profile_info(user_id="lorem")asserterrorisnotNoneprint(f"Hello there{user_profile_info.first_name}!")
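Following the `_async` naming convention described above, the user-profile wrapper can also be called asynchronously. Below is a minimal sketch, assuming the asynchronous counterpart `public_get_user_profile_info_async` exists per that convention and that the `$AB_*` environment variables are set.

```python
import asyncio

import accelbyte_py_sdk
from accelbyte_py_sdk.services.auth import login_client
# assumption: the '_async' counterpart follows the naming convention described above
from accelbyte_py_sdk.api.basic import public_get_user_profile_info_async


async def main():
    # the async wrappers need an async-capable HTTP client (see 'Example A (async)')
    accelbyte_py_sdk.initialize(options={"http": "HttpxHttpClient"})

    _, error = login_client()
    assert error is None

    # async wrappers keep the same 'result, error' return format
    user_profile_info, error = await public_get_user_profile_info_async(user_id="lorem")
    if error:
        exit(1)

    print(f"Hello there {user_profile_info.first_name}!")


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
```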
accelbyte-py-sdk-core
AccelByte Modular Python SDK:warning:Thisaccelbyte-python-modular-sdkis not to be confused withaccelbyte-python-sdk:The former (modular SDK) is planned to be the sucessor for the latter (monolithic SDK).The modular SDK allows developers to include only the required modules in projects.If you are starting a new project, you may experiment with modular SDK.If you use monolithic SDK in an existing project, a migration path is available via compatibility layer in modular SDK.Both monolithic and modular SDK will be maintained for some time to give time for migration until monolithic SDK is deprecated in the future.A software development kit (SDK) for interacting with AccelByte Gaming Services (AGS) written in Python.This SDK was generated from AGS OpenAPI spec files included in thespecdirectory.SetupThis SDK requires Python 3.9 to be installed.Install with PipInstall the core package from PyPIpipinstallaccelbyte-py-sdk-coreand then install the service you needpipinstallaccelbyte-py-sdk-service-achievement pipinstallaccelbyte-py-sdk-service-ams pipinstallaccelbyte-py-sdk-service-basic pipinstallaccelbyte-py-sdk-service-cloudsave pipinstallaccelbyte-py-sdk-service-dslogmanager pipinstallaccelbyte-py-sdk-service-dsmc pipinstallaccelbyte-py-sdk-service-eventlog pipinstallaccelbyte-py-sdk-service-gametelemetry pipinstallaccelbyte-py-sdk-service-gdpr pipinstallaccelbyte-py-sdk-service-iam pipinstallaccelbyte-py-sdk-service-leaderboard pipinstallaccelbyte-py-sdk-service-legal pipinstallaccelbyte-py-sdk-service-lobby pipinstallaccelbyte-py-sdk-service-match2 pipinstallaccelbyte-py-sdk-service-matchmaking pipinstallaccelbyte-py-sdk-service-platform pipinstallaccelbyte-py-sdk-service-qosm pipinstallaccelbyte-py-sdk-service-reporting pipinstallaccelbyte-py-sdk-service-seasonpass pipinstallaccelbyte-py-sdk-service-session pipinstallaccelbyte-py-sdk-service-sessionbrowser pipinstallaccelbyte-py-sdk-service-social pipinstallaccelbyte-py-sdk-service-ugcand then install any feature you wantpipinstallaccelbyte-py-sdk-feat-auth pipinstallaccelbyte-py-sdk-feat-token-validationor install everythingpipinstallaccelbyte-py-sdk-allEnvironment VariablesThe following environment variables need to be set when usingEnvironmentConfigRepository(default).NameRequiredExampleAB_BASE_URLYeshttps://demo/accelbyte.ioAB_CLIENT_IDYesabcdef0123456789abcdef0123456789AB_CLIENT_SECRETYes, only if you use a privateAB_CLIENT_IDab#c,d)ef(ab#c,d)ef(ab#c,d)ef(abAB_NAMESPACEYes, the SDK will automatically fill up the{namespace}path parameter (overridable)accelbyteAB_APP_NAMENo, the SDK will automatically fill up theUser-Agentheader (overridable)MyAppAB_APP_VERSIONNo, the SDK will automatically fill up theUser-Agentheader (overridable)1.0.0UsageInitializingYou'll have to initialize the SDK using theinitialize()function.importaccelbyte_py_sdkif__name__=="__main__":accelbyte_py_sdk.initialize()# uses EnvironmentConfigRepository by default# which in turn uses '$AB_BASE_URL', '$AB_CLIENT_ID', '$AB_CLIENT_SECRET', '$AB_NAMESPACE'You could also pass in options like 
so:fromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.coreimportMyConfigRepositoryif__name__=="__main__":base_url=environ["MY_BASE_URL"]client_id=environ["MY_CLIENT_ID"]client_secret=environ["MY_CLIENT_SECRET"]namespace=environ["MY_NAMESPACE"]app_name=environ["MY_APP_NAME"]app_version=environ["MY_APP_VERSION"]my_config_repository=MyConfigRepository(base_url=base_url,client_id=client_id,client_secret=client_secret,namespace=namespace,app_name=app_name,app_version=app_version)options={"config":my_config_repository}accelbyte_py_sdk.initialize(options)# you could still set some of these options after initializing.# ex. accelbyte_py_sdk.core.set_config_repository(my_config_repository)Logging In and Logging OutLogin using Username and Passwordfromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_user,logoutif__name__=="__main__":accelbyte_py_sdk.initialize()username=environ["AB_USERNAME"]password=environ["AB_PASSWORD"]token,error=login_user(username,password)asserterrorisNone_,error=logout()asserterrorisNoneHerelogin_user(username, password)andlogout()are wrapper functions.You can also specify the scope you want forlogin_user(...). With thescopeparameter typed asOptional[Union[str, List[str]]]. By default the scope used iscommerce account social publishing analytics(a space seprated value string).login_user(username,password,scope="scopeA")# login_user(username, password, scope="scopeA scopeB") # space separated values# login_user(username, password, scope=["scopeA", "scopeB"])Login using OAuth Client (Public or Confidential)fromosimportenvironimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_clientif__name__=="__main__":accelbyte_py_sdk.initialize()client_id=environ["AB_CLIENT_ID"]client_secret=environ["AB_CLIENT_SECRET"]token,error=login_client(client_id,client_secret)# passing '$AB_CLIENT_ID' and '$AB_CLIENT_SECRET' is same as:# token, error = login_client()asserterrorisnotNone:bulb: The use of a Public OAuth Client is highly discouraged for this use case. Please ensure that you both set the Client ID and Client Secret.Refreshing Tokens:bulb: Usinglogin_x(..., auto_refresh=True)automatically refreshes the token once the expiration draws near.# set the refresh rate for 'login_client'# 0.5 means refresh when we 50% of the expiration duration has passedres,err=login_client(client_id,client_secret,auto_refresh=True,refresh_rate=0.5)# set the refresh rate for 'login_user'# 0.5 means refresh when we 50% of the expiration duration has passedres,err=login_user(username,password,auto_refresh=True,refresh_rate=0.5)The auto refresh is only triggered when another request is fired. If you want to the refresh run automatically in the background. 
Use any of theLoginXTimerclasses.fromaccelbyte_py_sdk.services.authimportLoginClientTimer,LoginPlatformTimer,LoginUserTimer,RefreshLoginTimerres,err=login_user(username,password)iferrisnotNone:exit(1)# creates a threading.Timer-like object that calls login_user every 1800 seconds repeatedly 'inf' timesinterval=1800timer=LoginUserTimer(interval,username=username,password=password,repeats=-1,# <0: repeat 'inf' times | 0 or None: repeat 0 times | >0: repeat n timesautostart=True,)To manually refresh the token:fromaccelbyte_py_sdk.coreimportget_token_repositoryfromaccelbyte_py_sdk.services.authimportrefresh_logintoken_repo=get_token_repository()refresh_token=token_repo.get_refresh_token()token,error=refresh_login(refresh_token)asserterrorisNoneTo use on-demand token refresh, enable therefresh_if_possibleoption by setting it to True. This configuration involves checking for the presence of an existing token, verifying its expiration status. If the token has expired, the SDK will then examine whether a refresh token exists. If a refresh token is available, it will be utilized to obtain a new token.res,error=login_user(username,password,refresh_if_possible=True)Using multiple SDK instancesThe examples above demonstrates using just one instance of the Modular Python SDK (the default which is also global), but you could also instantiate multiple instances of the SDK and use them at the same time.importaccelbyte_py_sdk.services.authasauth_serviceimportaccelbyte_py_sdk.api.iamasiam_serviceimportaccelbyte_py_sdk.api.iam.modelsasiam_modelsfromaccelbyte_py_sdkimportAccelByteSDKfromaccelbyte_py_sdk.coreimportEnvironmentConfigRepositoryfromaccelbyte_py_sdk.coreimportInMemoryTokenRepository# Create 3 instances of the SDKclient_sdk=AccelByteSDK()user_sdk1=AccelByteSDK()user_sdk2=AccelByteSDK()# Initialize the SDKsclient_sdk.initialize(options={"config":EnvironmentConfigRepository(),"token":InMemoryTokenRepository(),})user_sdk1.initialize(options={"config":EnvironmentConfigRepository(),"token":InMemoryTokenRepository(),})user_sdk2.initialize(options={"config":user_sdk1.get_config_repository(),# you could also share the config repo with User 1 SDK's"token":InMemoryTokenRepository(),# you could also do the same thing with token repos but that is not advisable.})# Login the SDKs_,error=auth_service.login_client(sdk=client_sdk)username1,password1=..._,error=auth_service.login_user(username1,password1,sdk=user_sdk1)username2,password2=..._,error=auth_service.login_user(username2,password2,sdk=user_sdk2)# Call an endpoint as User 1result1,error=iam_service.public_create_user_v4(body=iam_models.AccountCreateUserRequestV4.create_from_dict({...}),sdk=user_sdk1,)# Call an endpoint as User 2result2,error=iam_service.public_create_user_v4(body=iam_models.AccountCreateUserRequestV4.create_from_dict({...}),sdk=user_sdk2,)# Call an endpoint as the Admin IAM Clientresult,error=admin_update_user_v4(body=iam_models.ModelUserUpdateRequestV3.create_from_dict({...}),user_id=result1.user_id,sdk=client_sdk,)# Reset/Deintialize the SDKs after usingclient_sdk1.deintialize()client_sdk1.deintialize()client_sdk1.deintialize()Interacting with a Service EndpointExample AIn this example we will create a new user using thePOSTendpoint/iam/v3/public/namespaces/{namespace}/usersimportjsonimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_client# Import the wrapper 'public_create_user_v3'# to know which wrapper to use open the docs/<service-name>-index.md and# use the search function to find the wrapper 
namefromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3# This POST endpoint also requires a body of 'ModelUserCreateRequestV3'# so you will need to import that too, import it using this scheme:# from accelbyte_py_sdk.api.<service-name>.models import <model-name>fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateRequestV3fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateResponseV3defmain():# 1 Initialize the SDKaccelbyte_py_sdk.initialize()# 2 Login as a client (uses $AB_CLIENT_ID and $AB_CLIENT_SECRET)_,error=login_client()# 3 Create a user using the POST endpoint: /iam/v3/public/namespaces/{namespace}/users# * this endpoint requires:# - a 'body' (ModelUserCreateRequestV3)# - a 'namespace' (string)# 'namespace' here is unique because it can be omitted, omitting it will result in# the SDK to automatically fill it out with the value of '$AB_NAMESPACE'# * more details on this endpoint can be found in:# accelbyte_py_sdk/api/iam/operations/users/public_create_user_v3.pyresult,error=public_create_user_v3(body=ModelUserCreateRequestV3.create(auth_type="EMAILPASSWD",country="US",date_of_birth="2001-01-01",display_name="************",email_address="******@fakemail.com",password="******************",))# 4 Check for errorsiferror:exit(1)# 5 Do something with the resultprint(json.dumps(result.to_dict(),indent=2))# {# "authType": "EMAILPASSWD",# "country": "US",# "dateOfBirth": "2001-01-01T00:00:00Z",# "displayName": "************",# "emailAddress": "******@fakemail.com",# "namespace": "******",# "userId": "********************************"# }if__name__=="__main__":main():bulb: All wrapper functions follow the return value format ofresult, error.:bulb: You could also write your own wrapper functions by using the models and operations inaccelbyte_py_sdk.api.<service-name>andaccelbyte_py_sdk.api.<service-name>.modelsrespectively.:bulb: All wrapper functions have an asynchronous counterpart that ends with_async.Example A (async)To convertExample Aasynchronously the following steps are needed.Import the asyncio package.importasyncioConvert the main method intoasync.# def main():asyncdefmain():Change how themainfunction is invoked.if__name__=="__main__":# main()loop=asyncio.get_event_loop()loop.run_until_complete(main())UseHttpxHttpClient.# accelbyte_py_sdk.initialize()accelbyte_py_sdk.initialize(options={"http":"HttpxHttpClient"})Use theasyncversion of the wrapper by appending_async.# from accelbyte_py_sdk.api.iam import public_create_user_v3fromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3_asyncUse theasyncwrapper with theawaitkeyword.# result, error = public_create_user_v3(result,error=awaitpublic_create_user_v3_async(Here is the full code:importasyncioimportjsonimportaccelbyte_py_sdkfromaccelbyte_py_sdk.services.authimportlogin_client# Import the wrapper 'public_create_user_v3_async'# to know which wrapper to use open the docs/<service-name>-index.md and# use the search function to find the wrapper namefromaccelbyte_py_sdk.api.iamimportpublic_create_user_v3_async# This POST endpoint also requires a body of 'ModelUserCreateRequestV3'# so you will need to import that too, import it using this scheme:# from accelbyte_py_sdk.api.<service-name>.models import <model-name>fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateRequestV3fromaccelbyte_py_sdk.api.iam.modelsimportModelUserCreateResponseV3asyncdefmain():# 1 Initialize the SDKaccelbyte_py_sdk.initialize(options={"http":"HttpxHttpClient"})# 2 Login as a client (uses $AB_CLIENT_ID and $AB_CLIENT_SECRET)_,error=login_client()# 3 
Create a user using the POST endpoint: /iam/v3/public/namespaces/{namespace}/users# * this endpoint requires:# - a 'body' (ModelUserCreateRequestV3)# - a 'namespace' (string)# 'namespace' here is unique because it can be omitted, omitting it will result in# the SDK to automatically fill it out with the value of '$AB_NAMESPACE'# * more details on this endpoint can be found in:# accelbyte_py_sdk/api/iam/operations/users/public_create_user_v3.pyresult,error=awaitpublic_create_user_v3_async(body=ModelUserCreateRequestV3.create(auth_type="EMAILPASSWD",country="US",date_of_birth="2001-01-01",display_name="************",email_address="******@fakemail.com",password="******************",))# 4 Check for errorsiferror:exit(1)# 5 Do something with the resultprint(json.dumps(result.to_dict(),indent=2))# {# "authType": "EMAILPASSWD",# "country": "US",# "dateOfBirth": "2001-01-01T00:00:00Z",# "displayName": "************",# "emailAddress": "******@fakemail.com",# "namespace": "******",# "userId": "********************************"# }if__name__=="__main__":loop=asyncio.get_event_loop()loop.run_until_complete(main())Configuring HTTP RetryTo use theHTTP Retryfeature, set theHttpClient'sretry_policyandbackoff_policy.importaccelbyte_py_sdkfromaccelbyte_py_sdk.coreimportget_http_client# 1 Initialize the SDKaccelbyte_py_sdk.initialize()# 2 Get the HTTP Clienthttp_client=get_http_client()# 3 Configure the `retry_policy` and `backoff_policy`# 3a. Retry 3 times with 0.5 seconds delay in betweenfromaccelbyte_py_sdk.coreimportConstantHttpBackoffPolicyfromaccelbyte_py_sdk.coreimportMaxRetriesHttpRetryPolicyhttp_client.retry_policy=MaxRetriesHttpRetryPolicy(3)# 3b. Retry when total elapsed time is less than 15 seconds, with an exponential backoff duration.fromaccelbyte_py_sdk.coreimportExponentialHttpBackoffPolicyfromaccelbyte_py_sdk.coreimportMaxElapsedHttpRetryPolicyhttp_client.backoff_policy=ExponentialHttpBackoffPolicy(initial=1.0,multiplier=2.0)http_client.retry_policy=MaxElapsedHttpRetryPolicy(15)# 3c. Use custom retry and backoff policies.fromdatetimeimporttimedeltafromtypingimportOptionaldefmy_custom_retry_policy(request,response,/,*,retries:int=0,elapsed:Optional[timedelta]=None,**kwargs)->float:return"Retry-After"inresponse.headersandretries==1# Retry if the 'Retry-After' header exists and we are on the 1st retry (2nd attempt).defmy_custom_backoff_policy(request,response,/,*,retries:int=0,elapsed:Optional[timedelta]=None,**kwargs)->float:returnresponse.headers.get("Retry-After",1)# Use the value of the 'Retry-After' response header, default to 1.0s.http_client.backoff_policy=my_custom_backoff_policyhttp_client.retry_policy=my_custom_retry_policy# 3d. 
Combining multiple retry policies.fromaccelbyte_py_sdk.coreimportCompositeHttpRetryPolicyfromaccelbyte_py_sdk.coreimportMaxRetriesHttpRetryPolicyfromaccelbyte_py_sdk.coreimportStatusCodesHttpRetryPolicyhttp_client.retry_policy=CompositeHttpRetryPolicy(StatusCodesHttpRetryPolicy(401,(501,503)),# Retry when response status code is 401, 501 to 503 (501, 502, or 503) -- ANDMaxRetriesHttpRetryPolicy(3)# when number of retries is less than or equal to 3.)Validating TokensYou can useaccelbyte_py_sdk.token_validation.caching.CachingTokenValidatororaccelbyte_py_sdk.token_validation.iam.IAMTokenValidator.token_validator=CachingTokenValidator(sdk)# or IAMTokenValidator(sdk)# access_token = ...error=token_validator.validate_token(access_token)iferror:raiseerrorSeetestsfor more usage.SamplesSample apps are available in thesamplesdirectoryDocumentationFor documentation about AccelByte services and SDK, seedocs.accelbyte.io:bulb: Check out the index files in thedocsdirectory if you are looking for a specific endpoint.MiscUtility FunctionsCheck if the SDK is initialized.importaccelbyte_py_sdkis_initialized=accelbyte_py_sdk.is_initialized()print(is_initialized)# FalseCreate a Basic Auth from a string tuple.importaccelbyte_py_sdkbasic_auth=accelbyte_py_sdk.core.create_basic_authentication("foo","bar")print(basic_auth)# Basic Zm9vOmJhcg==Gets the stored access token.importaccelbyte_py_sdkaccess_token,error=accelbyte_py_sdk.core.get_access_token()print(access_token)# ************************************GetAB_*environment configuration values.importaccelbyte_py_sdkbase_url,client_id,client_secret,namespace=accelbyte_py_sdk.core.get_env_config()print(f"{base_url},{client_id},{client_secret},{namespace}")# <$AB_BASE_URL>, <$AB_CLIENT_ID>, <$AB_CLIENT_SECRET>, <$AB_NAMESPACE>GetAB_*environment user credential values.importaccelbyte_py_sdkusername,password=accelbyte_py_sdk.core.get_env_user_credentials()print(f"{base_url}:{client_id}")# <$AB_USERNAME>: <$AB_PASSWORD>Set logger level and add logger handlers.importlogging# 1. The SDK has helper functions for logging.accelbyte_py_sdk.core.set_logger_level(logging.INFO)# 'accelbyte_py_sdk'accelbyte_py_sdk.core.set_logger_level(logging.INFO,"http")# 'accelbyte_py_sdk.http'accelbyte_py_sdk.core.set_logger_level(logging.INFO,"ws")# 'accelbyte_py_sdk.ws'# 2. You could also use this helper function for debugging.accelbyte_py_sdk.core.add_stream_handler_to_logger()# sends content of the 'accelbyte_py_sdk' logger to 'sys.stderr'.# 3. There is a helper function that helps you get started with log files.accelbyte_py_sdk.core.add_buffered_file_handler_to_logger(# flushes content of the 'accelbyte_py_sdk' logger to a file named 'sdk.log' every 10 logs.filename="/path/to/sdk.log",capacity=10,level=logging.INFO)accelbyte_py_sdk.core.add_buffered_file_handler_to_logger(# flushes content of the 'accelbyte_py_sdk.http' logger to a file named 'http.log' every 3 logs.filename="/path/to/http.log",capacity=3,level=logging.INFO,additional_scope="http")# 3.a. Or you could the same thing when initializing the SDK.accelbyte_py_sdk.initialize(options={"log_files":{"":"/path/to/sdk.log",# flushes content of the 'accelbyte_py_sdk' logger to a file named 'sdk.log' every 10 logs."http":{# flushes content of the 'accelbyte_py_sdk.http' logger to a file named 'http.log' every 3 logs."filename":"/path/to/http.log","capacity":3,"level":logging.INFO}}})# 4. 
By default logs from 'accelbyte_py_sdk.http' are stringified dictionaries, you can set your own formatter like so.defformat_request_response_as_yaml(data:dict)->str:returnf"---\n{yaml.safe_dump(data,sort_keys=False).rstrip()}\n..."http_client=accelbyte_py_sdk.core.get_http_client()http_client.request_log_formatter=format_request_response_as_yamlhttp_client.response_log_formatter=format_request_response_as_yamlIn-depth TopicsGenerated codeModelsEach definition in#/definitions/is turned into a Model.Example:# UserProfileInfoproperties:avatarLargeUrl:type:stringavatarSmallUrl:type:stringavatarUrl:type:stringcustomAttributes:additionalProperties:type:objecttype:objectdateOfBirth:format:datetype:stringx-nullable:truefirstName:type:stringlanguage:type:stringlastName:type:stringnamespace:type:stringstatus:enum:-ACTIVE-INACTIVEtype:stringtimeZone:type:stringuserId:type:stringzipCode:type:stringtype:object# accelbyte_py_sdk/api/basic/models/user_profile_info.pyclassUserProfileInfo(Model):"""User profile info (UserProfileInfo)Properties:avatar_large_url: (avatarLargeUrl) OPTIONAL stravatar_small_url: (avatarSmallUrl) OPTIONAL stravatar_url: (avatarUrl) OPTIONAL strcustom_attributes: (customAttributes) OPTIONAL Dict[str, Any]date_of_birth: (dateOfBirth) OPTIONAL strfirst_name: (firstName) OPTIONAL strlanguage: (language) OPTIONAL strlast_name: (lastName) OPTIONAL strnamespace: (namespace) OPTIONAL strstatus: (status) OPTIONAL Union[str, StatusEnum]time_zone: (timeZone) OPTIONAL struser_id: (userId) OPTIONAL strzip_code: (zipCode) OPTIONAL str"""# region fieldsavatar_large_url:str# OPTIONALavatar_small_url:str# OPTIONALavatar_url:str# OPTIONALcustom_attributes:Dict[str,Any]# OPTIONALdate_of_birth:str# OPTIONALfirst_name:str# OPTIONALlanguage:str# OPTIONALlast_name:str# OPTIONALnamespace:str# OPTIONALstatus:Union[str,StatusEnum]# OPTIONALtime_zone:str# OPTIONALuser_id:str# OPTIONALzip_code:str# OPTIONAL# endregion fieldsthere are also a number of utility functions generated with each model that should help in the ease of use.# accelbyte_py_sdk/api/basic/models/user_profile_info.py...defwith_user_id(self,value:str)->UserProfileInfo:self.user_id=valuereturnself# other with_x() methods toodefto_dict(self,include_empty:bool=False)->dict:result:dict={}...returnresult@classmethoddefcreate(cls,avatar_large_url:Optional[str]=None,avatar_small_url:Optional[str]=None,avatar_url:Optional[str]=None,custom_attributes:Optional[Dict[str,Any]]=None,date_of_birth:Optional[str]=None,first_name:Optional[str]=None,language:Optional[str]=None,last_name:Optional[str]=None,namespace:Optional[str]=None,status:Optional[Union[str,StatusEnum]]=None,time_zone:Optional[str]=None,user_id:Optional[str]=None,zip_code:Optional[str]=None,)->UserProfileInfo:instance=cls()...returninstance@classmethoddefcreate_from_dict(cls,dict_:dict,include_empty:bool=False)->UserProfileInfo:instance=cls()...returninstance@staticmethoddefget_field_info()->Dict[str,str]:return{"avatarLargeUrl":"avatar_large_url","avatarSmallUrl":"avatar_small_url","avatarUrl":"avatar_url","customAttributes":"custom_attributes","dateOfBirth":"date_of_birth","firstName":"first_name","language":"language","lastName":"last_name","namespace":"namespace","status":"status","timeZone":"time_zone","userId":"user_id","zipCode":"zip_code",}...OperationsEach path item in#/pathsis turned into an Operation.Example:# GET 
/basic/v1/public/namespaces/{namespace}/users/{userId}/profilesdescription:'Getuserprofile.&lt;br&gt;Otherdetailinfo:&lt;ul&gt;&lt;li&gt;&lt;i&gt;Requiredpermission&lt;/i&gt;:resource=&lt;b&gt;&#34;NAMESPACE:{namespace}:USER:{userId}:PROFILE&#34;&lt;/b&gt;,action=2&lt;b&gt;(READ)&lt;/b&gt;&lt;/li&gt;&lt;li&gt;&lt;i&gt;Actioncode&lt;/i&gt;:11403&lt;/li&gt;&lt;li&gt;&lt;i&gt;Returns&lt;/i&gt;:userprofile&lt;/li&gt;&lt;/ul&gt;'operationId:publicGetUserProfileInfoparameters:-description:namespace, only accept alphabet and numericin:pathname:namespacerequired:truetype:string-description:user's id, should follow UUID version 4 without hyphenin:pathname:userIdrequired:truetype:stringproduces:-application/jsonresponses:'200':description:Successful operationschema:$ref:'#/definitions/UserProfileInfo''400':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20002</td><td>validationerror</td></tr></table>schema:$ref:'#/definitions/ValidationErrorEntity''401':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20001</td><td>unauthorized</td></tr></table>schema:$ref:'#/definitions/ErrorEntity''403':description:<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>20013</td><td>insufficientpermission</td></tr></table>schema:$ref:'#/definitions/ErrorEntity''404':description:'<table><tr><td>errorCode</td><td>errorMessage</td></tr><tr><td>11440</td><td>Unableto{action}:Userprofilenotfoundinnamespace[{namespace}]</td></tr></table>'schema:$ref:'#/definitions/ErrorEntity'security:-authorization:[]-HasPermission:-NAMESPACE:{namespace}:USER:{userId}:PROFILE [READ]authorization:[]summary:Get user profiletags:-UserProfilex-authorization:action:'2'resource:NAMESPACE:{namespace}:USER:{userId}:PROFILEsame with the models there are also a number of utility functions generated with each operation that should help in the ease of use.# Copyright (c) 2021 AccelByte Inc. All Rights Reserved.# This is licensed software from AccelByte Inc, for limitations# and restrictions contact your company contract manager.## Code generated. 
DO NOT EDIT!# template file: operation.j2# pylint: disable=duplicate-code# pylint: disable=line-too-long# pylint: disable=missing-function-docstring# pylint: disable=missing-module-docstring# pylint: disable=too-many-arguments# pylint: disable=too-many-branches# pylint: disable=too-many-instance-attributes# pylint: disable=too-many-lines# pylint: disable=too-many-locals# pylint: disable=too-many-public-methods# pylint: disable=too-many-return-statements# pylint: disable=too-many-statements# pylint: disable=unused-import# AccelByte Gaming Services Basic Service (2.13.1)from__future__importannotationsfromtypingimportAny,Dict,List,Optional,Tuple,Unionfromaccelbyte_py_sdk.coreimportOperationfromaccelbyte_py_sdk.coreimportHeaderStrfromaccelbyte_py_sdk.coreimportHttpResponsefrom...modelsimportErrorEntityfrom...modelsimportUserProfileInfofrom...modelsimportValidationErrorEntityclassPublicGetUserProfileInfo(Operation):"""Get user profile (publicGetUserProfileInfo)Get user profile.Other detail info:* Required permission : resource= "NAMESPACE:{namespace}:USER:{userId}:PROFILE" , action=2 (READ)* Action code : 11403* Returns : user profileRequired Permission(s):- NAMESPACE:{namespace}:USER:{userId}:PROFILE [READ]Properties:url: /basic/v1/public/namespaces/{namespace}/users/{userId}/profilesmethod: GETtags: ["UserProfile"]consumes: []produces: ["application/json"]securities: [BEARER_AUTH] or [BEARER_AUTH]namespace: (namespace) REQUIRED str in pathuser_id: (userId) REQUIRED str in pathResponses:200: OK - UserProfileInfo (Successful operation)400: Bad Request - ValidationErrorEntity (20002: validation error)401: Unauthorized - ErrorEntity (20001: unauthorized)403: Forbidden - ErrorEntity (20013: insufficient permission)404: Not Found - ErrorEntity (11440: Unable to {action}: User profile not found in namespace [{namespace}])"""# region fields_url:str="/basic/v1/public/namespaces/{namespace}/users/{userId}/profiles"_method:str="GET"_consumes:List[str]=[]_produces:List[str]=["application/json"]_securities:List[List[str]]=[["BEARER_AUTH"],["BEARER_AUTH"]]_location_query:str=Nonenamespace:str# REQUIRED in [path]user_id:str# REQUIRED in [path]# endregion fields# region properties@propertydefurl(self)->str:returnself._url@propertydefmethod(self)->str:returnself._method@propertydefconsumes(self)->List[str]:returnself._consumes@propertydefproduces(self)->List[str]:returnself._produces@propertydefsecurities(self)->List[List[str]]:returnself._securities@propertydeflocation_query(self)->str:returnself._location_query# endregion properties# region get methods# endregion get methods# region get_x_params methodsdefget_all_params(self)->dict:return{"path":self.get_path_params(),}defget_path_params(self)->dict:result={}ifhasattr(self,"namespace"):result["namespace"]=self.namespaceifhasattr(self,"user_id"):result["userId"]=self.user_idreturnresult# endregion get_x_params methods# region is/has methods# endregion is/has methods# region with_x methodsdefwith_namespace(self,value:str)->PublicGetUserProfileInfo:self.namespace=valuereturnselfdefwith_user_id(self,value:str)->PublicGetUserProfileInfo:self.user_id=valuereturnself# endregion with_x methods# region to methodsdefto_dict(self,include_empty:bool=False)->dict:result:dict={}ifhasattr(self,"namespace")andself.namespace:result["namespace"]=str(self.namespace)elifinclude_empty:result["namespace"]=""ifhasattr(self,"user_id")andself.user_id:result["userId"]=str(self.user_id)elifinclude_empty:result["userId"]=""returnresult# endregion to methods# region response methods# 
noinspection PyMethodMayBeStaticdefparse_response(self,code:int,content_type:str,content:Any)->Tuple[Union[None,UserProfileInfo],Union[None,ErrorEntity,HttpResponse,ValidationErrorEntity],]:"""Parse the given response.200: OK - UserProfileInfo (Successful operation)400: Bad Request - ValidationErrorEntity (20002: validation error)401: Unauthorized - ErrorEntity (20001: unauthorized)403: Forbidden - ErrorEntity (20013: insufficient permission)404: Not Found - ErrorEntity (11440: Unable to {action}: User profile not found in namespace [{namespace}])---: HttpResponse (Undocumented Response)---: HttpResponse (Unexpected Content-Type Error)---: HttpResponse (Unhandled Error)"""pre_processed_response,error=self.pre_process_response(code=code,content_type=content_type,content=content)iferrorisnotNone:returnNone,Noneiferror.is_no_content()elseerrorcode,content_type,content=pre_processed_responseifcode==200:returnUserProfileInfo.create_from_dict(content),Noneifcode==400:returnNone,ValidationErrorEntity.create_from_dict(content)ifcode==401:returnNone,ErrorEntity.create_from_dict(content)ifcode==403:returnNone,ErrorEntity.create_from_dict(content)ifcode==404:returnNone,ErrorEntity.create_from_dict(content)returnself.handle_undocumented_response(code=code,content_type=content_type,content=content)# endregion response methods# region static methods@classmethoddefcreate(cls,namespace:str,user_id:str,**kwargs)->PublicGetUserProfileInfo:instance=cls()instance.namespace=namespaceinstance.user_id=user_idreturninstance@classmethoddefcreate_from_dict(cls,dict_:dict,include_empty:bool=False)->PublicGetUserProfileInfo:instance=cls()if"namespace"indict_anddict_["namespace"]isnotNone:instance.namespace=str(dict_["namespace"])elifinclude_empty:instance.namespace=""if"userId"indict_anddict_["userId"]isnotNone:instance.user_id=str(dict_["userId"])elifinclude_empty:instance.user_id=""returninstance@staticmethoddefget_field_info()->Dict[str,str]:return{"namespace":"namespace","userId":"user_id",}@staticmethoddefget_required_map()->Dict[str,bool]:return{"namespace":True,"userId":True,}# endregion static methodsCreating:bulb: there are 4 ways to create an instance of these models and operations.# 1. using the python __init__() function then setting the parameters manually:model=ModelName()model.param_a="foo"model.param_b="bar"# 2. using the python __init__() function together with the 'with_x' methods:# # the 'with_x' functions are type annotated and will show warnings if a wrong type is passed.model=ModelName()\.with_param_a("foo")\.with_param_b("bar")# 3. using the ModelName.create(..) class method:# # parameters here are also type annotated and will throw a TypeError if a required field was not filled out.model=ModelName.create(param_a="foo",param_b="bar",)# 4. using the ModelName.create_from_dict(..) class method:# # this method also has a 'include_empty' option that would get ignore values that evaluate to False, None, or len() == 0.model_params={"param_a":"foo","param_b":"bar","param_c":False,"param_d":None,"param_e":[],"param_f":{},}model=ModelName.create_from_dict(model_params)# all of these apply to all operations too.WrappersTo improve ergonomics the code generator also generates wrappers around the operations. The purpose of these wrappers is to automatically fill up parameters that the SDK already knows. (e.g. 
namespace, client_id, access_token, etc.)They are located ataccelbyte_py_sdk.api.<service-name>.wrappersbut can be accessed like so:from accelbyte_py_sdk.api.<service-name> import <wrapper-name>importaccelbyte_py_sdkfromaccelbyte_py_sdk.api.iamimporttoken_grant_v3if__name__=="__main__":accelbyte_py_sdk.initialize()token,error=token_grant_v3(grant_type="client_credentials")asserterrorisnotNoneThe wrapper functiontoken_grant_v3is a wrapper for theTokenGrantV3operation. It automatically passes in the information needed like the Basic Auth Headers. The values are gotten from the currentConfigRepository.continuing from the previous examples (GetUserProfileInfo), its wrapper would be:# accelbyte_py_sdk/api/basic/wrappers/_user_profile.pyfromtypingimportAny,Dict,List,Optional,Tuple,Unionfromaccelbyte_py_sdk.coreimportget_namespaceasget_services_namespacefromaccelbyte_py_sdk.coreimportrun_requestfromaccelbyte_py_sdk.coreimportsame_doc_asfrom..operations.user_profileimportPublicGetUserProfileInfo@same_doc_as(PublicGetUserProfileInfo)defpublic_get_user_profile_info(user_id:str,namespace:Optional[str]=None,x_additional_headers:Optional[Dict[str,str]]=None,**kwargs):ifnamespaceisNone:namespace,error=get_services_namespace()iferror:returnNone,errorrequest=PublicGetUserProfileInfo.create(user_id=user_id,namespace=namespace,)returnrun_request(request,additional_headers=x_additional_headers,**kwargs)this wrapper function automatically fills up the required path parameternamespace.now to use it only theuser_idis now required.importaccelbyte_py_sdkfromaccelbyte_py_sdk.api.basicimportpublic_get_user_profile_infoif__name__=="__main__":accelbyte_py_sdk.initialize()user_profile_info,error=public_get_user_profile_info(user_id="lorem")asserterrorisnotNoneprint(f"Hello there{user_profile_info.first_name}!")
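To recap the pieces above, here is a minimal sketch that wires the logging helpers, the HTTP retry policies, and a client login together. It assumes the `$AB_*` environment variables are set and that `accelbyte-py-sdk-feat-auth` is installed so that `login_client` is available.

```python
import logging

import accelbyte_py_sdk
from accelbyte_py_sdk.core import (
    ExponentialHttpBackoffPolicy,
    MaxElapsedHttpRetryPolicy,
    get_http_client,
)
from accelbyte_py_sdk.services.auth import login_client  # provided by the auth feature module

if __name__ == "__main__":
    # uses EnvironmentConfigRepository by default ($AB_BASE_URL, $AB_CLIENT_ID, ...)
    accelbyte_py_sdk.initialize()

    # send SDK logs to stderr while experimenting
    accelbyte_py_sdk.core.set_logger_level(logging.INFO)
    accelbyte_py_sdk.core.add_stream_handler_to_logger()

    # retry for at most 15 seconds with an exponential backoff (see 3b above)
    http_client = get_http_client()
    http_client.backoff_policy = ExponentialHttpBackoffPolicy(initial=1.0, multiplier=2.0)
    http_client.retry_policy = MaxElapsedHttpRetryPolicy(15)

    # login as a client before calling any service wrapper
    _, error = login_client()
    assert error is None
```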
accelbyte-py-sdk-feat-auth
AccelByte Modular Python SDK - Feature Module
This is a feature module for the AccelByte Modular Python SDK package.
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this feature (`pip install accelbyte-py-sdk-feat-auth`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
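A minimal usage sketch is shown below. It assumes this package provides the `accelbyte_py_sdk.services.auth` helpers referenced in the core README above and that the `$AB_*` environment variables (plus `$AB_USERNAME` and `$AB_PASSWORD`) are set.

```python
from os import environ

import accelbyte_py_sdk
from accelbyte_py_sdk.services.auth import login_user, logout

if __name__ == "__main__":
    accelbyte_py_sdk.initialize()

    username = environ["AB_USERNAME"]
    password = environ["AB_PASSWORD"]

    # auto_refresh renews the token once 50% of its lifetime has passed
    token, error = login_user(username, password, auto_refresh=True, refresh_rate=0.5)
    assert error is None

    _, error = logout()
    assert error is None
```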
accelbyte-py-sdk-feat-token-validation
AccelByte Modular Python SDK - Feature Module
This is a feature module for the AccelByte Modular Python SDK package.
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this feature (`pip install accelbyte-py-sdk-feat-token-validation`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
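A minimal usage sketch is shown below. It assumes this package provides the `accelbyte_py_sdk.token_validation` validators referenced in the core README above, and that `accelbyte-py-sdk-feat-auth` is installed for `login_client`; the access token value is a placeholder.

```python
import accelbyte_py_sdk.services.auth as auth_service
from accelbyte_py_sdk import AccelByteSDK
from accelbyte_py_sdk.core import EnvironmentConfigRepository, InMemoryTokenRepository
from accelbyte_py_sdk.token_validation.caching import CachingTokenValidator

if __name__ == "__main__":
    # an explicit SDK instance, as in the 'Using multiple SDK instances' section above
    sdk = AccelByteSDK()
    sdk.initialize(
        options={
            "config": EnvironmentConfigRepository(),
            "token": InMemoryTokenRepository(),
        }
    )

    _, error = auth_service.login_client(sdk=sdk)
    assert error is None

    token_validator = CachingTokenValidator(sdk)

    # placeholder: replace with an access token presented by a caller of your service
    access_token = "..."
    error = token_validator.validate_token(access_token)
    if error:
        raise error
```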
accelbyte-py-sdk-service-achievement
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Achievement Service (version 2.21.12)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-achievement`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-ams
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
Fleet Commander (version 1.10.0)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-ams`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-basic
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Basic Service (version 2.18.0)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-basic`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
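A minimal usage sketch is shown below, using the `public_get_user_profile_info` wrapper from the core README's wrapper example. It assumes the `$AB_*` environment variables are set and that `accelbyte-py-sdk-feat-auth` is installed for `login_client`; the user id is a placeholder.

```python
import accelbyte_py_sdk
from accelbyte_py_sdk.services.auth import login_client
from accelbyte_py_sdk.api.basic import public_get_user_profile_info

if __name__ == "__main__":
    accelbyte_py_sdk.initialize()

    _, error = login_client()
    assert error is None

    # 'namespace' is omitted, so the wrapper fills it in from $AB_NAMESPACE
    user_profile_info, error = public_get_user_profile_info(user_id="lorem")
    if error:
        exit(1)

    print(f"Hello there {user_profile_info.first_name}!")
```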
accelbyte-py-sdk-service-chat
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Chat Service (version 0.4.21)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-chat`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-cloudsave
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Cloudsave Service (version 3.15.1)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-cloudsave`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-dsartifact
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Ds Artifact Manager (version 1.10.1)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-dsartifact`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-dslogmanager
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Ds Log Manager Service (version 3.4.1)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-dslogmanager`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-dsmc
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Dsm Controller Service (version 6.4.7)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-dsmc`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-eventlog
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Event Log Service (version 2.2.3)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-eventlog`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-gametelemetry
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
Analytics Game Telemetry (version 1.23.0)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-gametelemetry`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-gdpr
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Gdpr Service (version 2.7.0)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-gdpr`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-group
AccelByte Modular Python SDK - Service Module
This is a service module for the AccelByte Modular Python SDK package.
AccelByte Gaming Services Group Service (version 2.19.1)
Setup: this SDK requires Python 3.9 to be installed.
Install with Pip: install the core package from PyPI (`pip install accelbyte-py-sdk-core`), install this service (`pip install accelbyte-py-sdk-service-group`), and then install any service or feature you want (`pip install accelbyte-py-sdk-service-{SERVICE}`, `pip install accelbyte-py-sdk-feat-{FEATURE}`), or install everything (`pip install accelbyte-py-sdk-all`).
Usage: see README.md.
accelbyte-py-sdk-service-iam
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Iam Service * Version: 7.11.0. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-iam`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-inventory
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Inventory Service * Version: 0.1.0. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-inventory`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-leaderboard
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Leaderboard Service * Version: 2.27.2. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-leaderboard`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-legal
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Legal Service * Version: 1.37.1. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-legal`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-lobby
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Lobby Server * Version: 3.35.0. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-lobby`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-match2
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Match Service V2 * Version: 2.16.0. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-match2`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-matchmaking
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Matchmaking Service * Version: 2.30.1. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-matchmaking`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-platform
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Platform Service * Version: 4.47.0. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-platform`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-qosm
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Qos Manager Service * Version: 1.18.5. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-qosm`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-reporting
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Reporting Service * Version: 0.1.32. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-reporting`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-seasonpass
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Seasonpass Service * Version: 1.21.2. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-seasonpass`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-session
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Session Service * Version: 3.13.14. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-session`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-sessionbrowser
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Session Browser Service * Version: 1.18.4. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-sessionbrowser`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-social
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Social Service * Version: 2.12.0. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-social`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
accelbyte-py-sdk-service-ugc
AccelByte Modular Python SDK - Service Module. This is a service module for the AccelByte Modular Python SDK package. AccelByte Gaming Services Ugc Service * Version: 2.19.6. Setup: this SDK requires Python 3.9 to be installed. Install with Pip: install the core package from PyPI with `pip install accelbyte-py-sdk-core`, install this service with `pip install accelbyte-py-sdk-service-ugc`, then install any service or feature you want with `pip install accelbyte-py-sdk-service-{SERVICE}` or `pip install accelbyte-py-sdk-feat-{FEATURE}`, or install everything with `pip install accelbyte-py-sdk-all`. Usage: see README.md.
acceldata-airflow-sdk
ACCELDATA-AIRFLOW-SDKAcceldata airflow sdk provides support for observability of airflow dags in torch catalog. With the use of acceldata airflow SDK, user can e2e observability on airflow dag run in torch UI. Every dag is associated with pipeline in torch.Make sure while configuring airflow, 4 environment variable are set up in airflow environment docker container.TORCH_CATALOG_URL - URL of the torch catalogTORCH_ACCESS_KEY - API access key generated from torch UITORCH_SECRET_KEY - API secret key generated from torch UIENABLE_VERSION_CHECK - This is used to enable or disable version compatibility check between Torch and SDK. Default Value is 'True'. To disable version check please set it to 'False'.Creating Airflow connectionIf you want to avoid using environment variables then you can create a connection in Airflow UI as described below and provide the connection id of that connection in TorchInitializer. Set the following in connection:Conn id: Create a unique ID for the connectionConn Type: HTTPHost - URL of the torch catalogLogin - API access key generated from torch UIPassword - API secret key generated from torch UIExtra - {"ENABLE_VERSION_CHECK": "{{value}}"}. This value will be used to enable or disable version compatibility check between Torch and SDK. Default Value is 'True'. To disable version check please set it to 'False'.First of all, install below mentioned 2 pypi package to expose ETL in torch.pipinstallacceldata-sdkRead more about acceldata-sdk fromherepipinstallacceldata-airflow-sdkRead more about acceldata-airflow-sdk fromhereCreate TorchClientWhile creating a TorchClient connection to torch by default version compatibility checks between torch and sdk is enabled. If we want we can disable that check by passingdo_version_checkas `Falsefromacceldata-sdk.torch_clientimportTorchClienttorchClient=TorchClient(url="https://torch.acceldata.local:5443",access_key="OY2VVIN2N6LJ",secret_key="da6bDBimQfXSMsyyhlPVJJfk7Zc2gs",do_version_check=False)Create DAGIn airflow DAG code, import torch dag instead of airflow dag. All the parameters will be the same as standard apache airflow dag. But there will be 2 additional parametersoverride_success_callback,override_failure_callback.Params:override_success_callback: A Boolean parameter to allow the user to override the success callback provided by the SDK. The success callback end the pipeline run when DAG ends successfully. Default value is False. It should be set to True if we do not want the pipeline run to be ended at the end of the successful run of the DAG.override_failure_callback: A Boolean parameter to allow the user to override the failure callback provided by the SDK. The failure callback ends the pipeline run with error when DAG ends in failure. Default value is False. 
It should be set to True if we do not want the pipeline run to be ended at the end of the unsuccessful run of the DAG.These can be useful if few steps of the pipeline are being executed outside of Airflow DAG.fromacceldata_airflow_sdk.dagimportDAGdag=DAG(dag_id='pipeline_demo_final',schedule_interval='@daily',default_args=default_args,start_date=datetime(2020,2,2),catchup=False,on_failure_callback=failure_callback,on_success_callback=success_callback,override_success_callback=False,override_failure_callback=False,)Create Job and Span using decoratorThis was added in version 0.0.36To create a job and span in the pipeline, the user needs to decorate the python function with a job decorator as shown in the below example.Object ofNodeshould have either asset_uid ( {data source}.{asset path from its root}) or job_uid (Uid of next Job) as parameters.Params:span_uid: A String parameter to specify the UID of span to be created. Default value is None. Ifspan_uidis not provided a span corresponding to the job will be created with value of job_uid.job_uid: A String parameter to specify the job UID of the pipeline. Default value is None. Ifjob_uidis not provided, uid is constructed using the task id and function name.inputs: An Array parameter of Node type objects being used by job as input. Default value is empty array.outputs: An Array parameter of Node type objects being returned by job as output. Default value is empty array.metadata: Parameter of type JobMetadata specifying the metadata of the job. Default value is None.xcom_to_event_mapper_ids: A list Parameter having list of xcom keys used to send xcom variables in span event. Default value is empty list.bounded_by_span: A boolean parameter deciding whether to create a span along with the Job. Default value is True. If its is set to True to create span make sure, it has**contextparameter inside the function argument. That gives access to the context of the task. Using the context, various span events can be sent inside the function. Use span_context = context['span_context_parent'] to get you the span context.NOTE: Passing context is mandatory to the function being decorated as our decorators use context to share information through xcom.NOTE: The job_id for a task should be unique in a pipeline.If there are multiple tasks being created from a job decorated function then do not pass job_uid as that will end up using same job_uid for multiple tasks. In this scenario if we do not pass job_uid the autogenerated job_uid will be unique.fromacceldata_airflow_sdk.decorators.jobimportjobfromacceldata_sdk.models.jobimportJobMetadata,Node@job(job_uid='monthly.order.aggregate.job',inputs=[Node(asset_uid='POSTGRES_LOCAL_DS.pipeline.pipeline.customer_orders')],outputs=[Node(job_uid='Job2_uid')],metadata=JobMetadata(name='Vaishvik_brahmbhatt',team='backend',code_location='https://github.com/acme/reporting/report.scala'),span_uid='customer.orders.datagen.span',xcom_to_event_mapper_ids=['run_id','event_id'],bounded_by_span=True)defmonthly_order_aggregate(**context):passCreate Span Using DecoratorTo create a span for a python function, the user can decorate a python function with a span decorator that contains span uid as parameters. To decorate function with span make sure, it has**contextparameter inside the function argument. That gives access to the context of the task. Using the context, various span events can be sent inside the function. To get the parent span context, use the key namespan_context_parentin xcom pull of the task instance. 
Itโ€™s value will be span context instance which can be used to create child spans and send custom events (As shown in below example.)Params:span_uid: A String parameter to specify the UID of span to be created. Default value is None. Ifspan_uidis not provided, uid is constructed using the task id and function name.xcom_to_event_mapper_ids: A Parameter having list of xcom keys used to send xcom variables in span event. Default value is empty list.NOTE: Passing context is mandatory to the function being decorated as our decorators use context to share information through xcom.fromacceldata_airflow_sdk.decorators.spanimportspanfromacceldata_sdk.events.generic_eventimportGenericEvent@span(span_uid='customer.orders.datagen.span',associated_job_uids=['monthly.order.aggregate.transfer'],xcom_to_event_mapper_ids=['run_id','event_id'])defdata_gen(**context):datagen_span_context=context['span_context_parent']# Send event for current spandatagen_span_context.send_event(GenericEvent(context_data={'client_time':str(datetime.now()),'detail':'Generating data'},event_uid="order.customer.join.result"))customer_datagen_span=datagen_span_context.create_child_span(uid="customer.data.gen",context_data={'client_time':str(datetime.now())})# Send event for child spancustomer_datagen_span.send_event(GenericEvent(context_data={'client_time':str(datetime.now()),'row_count':len(rows)},event_uid="order.customer.join.result"))customer_datagen_span.end(context_data={'client_time':str(datetime.now()),'customers_count':len(customer_ids)})Custom OperatorsAcceldata airflow sdk contains 4 custom operators.TorchInitializer Operator:The user needs to add a task with a given operator at the root of your dag. This operator will create a new pipeline. Additionally, this will create new pipeline run and root span for that dag run of the airflow dag.Params:create_pipeline: A Boolean parameter deciding whether to create a pipeline(if not exists) and pipeline run. Default Value is True. This can be useful if pipeline/pipeline run has been created outside of Airflow DAG.span_name: A string parameter specifying name of the Root Span. Default value is None. If not provided we will use the pipeline_uid.span as span name.meta: A parameter specifying the metadata for pipeline (PipelineMetadata). Default value is None. If not provided PipelineMetadata(owner='sdk/pipeline-user', team='TORCH', codeLocation='...') is set as meta.pipeline_uid: A string parameter specifying the UID of the pipeline. It is a mandatory parameter.pipeline_name: A string parameter specifying the Name of the pipeline. Default value is None. If not provided pipeline_uid will be used as name.continuation_id: A string parameter that uniquely identifies a pipeline run. This parameter can accept jinja templates as well. Default value is None. This parameter is useful when we want to have a pipeline run span over multiple DAG's. To use it we need to provide a continuation id while creating pipeline in first DAG with create_pipeline=True and then provide the same continuation id in the second DAG where we want to continue the same pipeline run with create_pipeline=False.connection_id: A string parameter that uniquely identifies a connection storing Torch credentials. Default value is None. This parameter is useful when we want to use Torch credentials from Airflow connection instead of environment variables. 
To get details about creating a connection refer 'Creating Airflow connection' section above.fromacceldata_airflow_sdk.operators.torch_initialiser_operatorimportTorchInitializerfromacceldata_sdk.models.pipelineimportPipelineMetadata# example of jinja templates being used in continuation_id# jinja template to pull value from config json# continuation_id=f"{{{{ dag_run.conf['continuation_id'] }}}}"# jinja template to pull value from xcom# continuation_id=f"{{{{ task_instance.xcom_pull(key='continuation_id') }}}}"torch_initializer_task=TorchInitializer(task_id='torch_pipeline_initializer',pipeline_uid='customer.orders.monthly.agg.demo',pipeline_name='CUSTOMERS ORDERS MONTHLY AGG',continuation_id='heterogeneous_test',create_pipeline=True,span_name='customer.orders.monthly.agg.demo.span',meta=PipelineMetadata(owner='test',team='testing',codeLocation='...'),dag=dag)SpanOperator Operator :SpanOperator Operator will execute any std operator being passed asoperatorparameter and send span start and end event it. Just wrap the std operator with a span operator. Make sure that the wrapped operator is not added in the DAG. If the operator is wrapped with a span operator, the span operator will take care of that operator task inside its execution.Params:span_uid: A string parameter specifying the UID of span to be created. Ifjob_uidis not provided, uid is constructed using the task id and operator name.xcom_to_event_mapper_ids: A list parameter having list of xcom keys used to send xcom variables in span event. Default value is empty list.operator: A parameter specifying the Standard airflow operator. It is a mandatory parameter.Other parameters will be the same as the airflow standard base operator.WARNING: Do not specify thedagparameter in std airflow operator being passed as an argument to SpanOperator as the execution of operator task is taken care of by SpanOperator.fromacceldata_airflow_sdk.operators.span_operatorimportSpanOperatorget_order_agg_for_q4=PostgresOperator(task_id="get_monthly_order_aggregate_last_quarter",postgres_conn_id='example_db',sql="select * from information_schema.attributess",)get_order_agg_for_q4=SpanOperator(task_id="get_monthly_order_aggregate_last_quarter",span_uid='monthly.order.agg.q4.span',operator=get_order_agg_for_q4,associated_job_uids=['monthly.order.aggregate.transfer'],xcom_to_event_mapper_ids=['run_id','event_id'],dag=dag)JobOperator Operator :JobOperator Operator will execute any std operator being passed asoperatorparameter and create a job and send span start and end event. Just wrap the std operator with a Job operator. Make sure that the wrapped operator is not added in the DAG. If the operator is wrapped with a Job operator, the Job operator will take care of that operator task inside its execution.Object ofNodeshould have either asset_uid ( {data source}.{asset path from its root}) or job_uid (Uid of next Job) as parameters.Params:span_uid: A string parameter to specify the UID of span to be created. Default value is None. Ifspan_uidis not provided a span corresponding to the job will be created with value of job_uid.job_uid: A string parameter to specify the job UID of the pipeline. Default value is None. Ifjob_uidis not provided, uid is constructed using the task id and operator name.inputs: An array parameter of Node type objects being used by job as input. Default value is empty array.outputs: An array parameter of Node type objects being returned by job as output. 
Default value is empty array.metadata: A parameter of type JobMetadata specifying the metadata of the job. Default value is None.xcom_to_event_mapper_ids: A list parameter having list of xcom keys used to send xcom variables in span event. Default value is empty list.bounded_by_span: A boolean parameter deciding whether to create a span along with the Job. Default value is True. If its is set to True to create span make sure, it has**contextparameter inside the function argument. That gives access to the context of the task. Using the context, various span events can be sent inside the function. Use span_context = context['span_context_parent'] to get you the span context.operator: A Parameter specifying the Standard airflow operator. It is a mandatory parameter.Other parameters will be the same as the airflow standard base operator. Make sure, inside a Node the type of the object which will have asset_uid ( {data source}.{asset path from its root}) or job_uid (Uid of next Job) as parameters.WARNING: Do not specify thedagparameter in std airflow operator being passed as an argument to JobOperator as the execution of operator task is taken care of by JobOperator.fromacceldata_airflow_sdk.operators.job_operatorimportJobOperatorfromacceldata_sdk.models.jobimportNode,JobMetadataget_order_agg_for_q4=PostgresOperator(task_id="get_monthly_order_aggregate_last_quarter",postgres_conn_id='example_db',sql="select * from information_schema.attributess",)get_order_agg_for_q4=JobOperator(task_id="get_monthly_order_aggregate_last_quarter",job_uid='customer.order.join.job',inputs=[Node(asset_uid='POSTGRES_LOCAL_DS.pipeline.pipeline.orders'),Node(asset_uid='POSTGRES_LOCAL_DS.pipeline.pipeline.customers')],outputs=[Node(job_uid='next_job_uid')],metadata=JobMetadata('name','team','code_location'),span_uid='monthly.order.agg.q4.span',operator=get_order_agg_for_q4,xcom_to_event_mapper_ids=['run_id','event_id'],bounded_by_span=True,dag=dag)ExecutePolicyOperator Operator :ExecutePolicyOperatoris used to execute a policy by passingpolicytypeandpolicy_id.Params:sync: A boolean parameter used to decide if the policy should be executed synchronously or asynchronously. It is a mandatory parameter. If it is set toTrueit will return only after the execution ends. If it is set toFalseit will return immediately after starting the execution.policy_type: A PolicyType parameter used to specify the policy type. It is a mandatory parameter. It is a enum which will take values from constants as PolicyType.DATA_QUALITY or PolicyType.RECONCILIATION.policy_id: A string parameter used to specify the policy id to be executed. It is a mandatory parameter.incremental: A boolean parameter used to specify if the policy execution should be incremental or full. Default value is False.failure_strategy: An enum parameter used to decide the behaviour in case of failure. Default value is DoNotFail.failure_strategytakes enum of typeFailureStrategywhich can have 3 values DoNotFail, FailOnError and FailOnWarning.DoNotFail will never throw. In case of failure it will log the error.FailOnError will Throw exception only if it's an error. 
In case of warning it return without any errors.FailOnWarning will Throw exception on warning as well as error.fromacceldata_airflow_sdk.operators.execute_policy_operatorimportExecutePolicyOperatorfromacceldata_sdk.constantsimportFailureStrategy,PolicyTypeoperator_task=ExecutePolicyOperator(task_id='torch_pipeline_operator_test',policy_type=PolicyType.DATA_QUALITY,policy_id=46,sync=True,failure_strategy=FailureStrategy.DoNotFail,dag=dag)ExecutePolicyOperatorstores the execution id of the policy executed in xcom using the key {policy_type.name}_{policy_id}_execution_id. Replace the policy_type and policy_id based on the policy.Hence, to query the result in another task you need to pull the execution id from xcom using the same key {policy_type}_{policy_id}_execution_idget_polcy_execution_resultcan be used to query the result using the execution id pulled from xcom. In this example the policy_type is const.PolicyType.DATA_QUALITY.name and the policy_id is 46.Params:policy_type: A PolicyType parameter used to specify the policy type. It is a mandatory parameter. It is a enum which can take values from constants as PolicyType.DATA_QUALITY or PolicyType.RECONCILIATION.execution_id: A string parameter specifying the execution id for which we want to query the results. It is a mandatory parameter.failure_strategy: An Enum parameter used to decide the behaviour in case of failure. Default value is DoNotFail.failure_strategytakes enum of typeFailureStrategywhich can have 3 values DoNotFail, FailOnError and FailOnWarning.DoNotFail will never throw. In case of failure it will log the error.FailOnError will Throw exception only if it's an error. In case of warning it return without any errors.FailOnWarning will Throw exception on warning as well as error.fromacceldata_sdk.torch_clientimportTorchClientfromacceldata_airflow_sdk.initialiserimporttorch_credentialsfromacceldata_sdk.constantsimportFailureStrategy,PolicyType,RuleExecutionStatusdefruleoperator_result(**context):xcom_key=f'{PolicyType.DATA_QUALITY.name}_46_execution_id'task_instance=context['ti']# pull the execution id from xcomexecution_id=task_instance.xcom_pull(key=xcom_key)ifexecution_idisnotNone:torch_client=TorchClient(**torch_credentials)result=torch_client.get_polcy_execution_result(policy_type=PolicyType.DATA_QUALITY,execution_id=execution_id,failure_strategy=FailureStrategy.DoNotFail)ifresult.execution.resultStatus==RuleExecutionStatus.ERRORED:print(result.execution.executionError)get_policy_statuscan be used to query the current status of execution.Params:policy_type: A PolicyType parameter used to specify the policy type. It is a mandatory parameter. It is an enum which can take values from constants as PolicyType.DATA_QUALITY or PolicyType.RECONCILIATION.execution_id: A string parameter specifying the execution id which we want to query the status. It is a mandatory parameter.You need to pull the execution id from xcom using the same key {policy_type.name}_{policy_id}_execution_id which was pushed byExecutePolicyOperator. Replace the policy_type and policy_id based on the policy. 
In this example the policy_type is PolicyType.DATA_QUALITY.name and the policy_id is 46.fromacceldata_sdk.torch_clientimportTorchClientfromacceldata_airflow_sdk.initialiserimporttorch_credentialsimportacceldata_sdk.constantsasconstdefruleoperator_status(**context):xcom_key=f'{const.PolicyType.DATA_QUALITY.name}_46_execution_id'task_instance=context['ti']# pull the execution id from xcomexecution_id=task_instance.xcom_pull(key=xcom_key)ifexecution_idisnotNone:torch_client=TorchClient(**torch_credentials)result=torch_client.get_policy_status(policy_type=const.PolicyType.DATA_QUALITY,execution_id=execution_id)ifresult==const.RuleExecutionStatus.ERRORED:print("Policy execution encountered an error.")Version Log0.0.1 (12/09/2022)Acceldata airflow sdk - Wrapper on apache airflowAcceldata airflow sdk provides support for observability of airflow dags in torch catalog. With the use of acceldata airflow SDK, user can e2e observability on airflow dag run in torch UI.Support for airflow 2.0.
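Putting the pieces documented above together, the following is a minimal, illustrative DAG sketch rather than an official example. It assumes the job-decorated callable is scheduled with Airflow's standard PythonOperator, and the DAG id, pipeline UID, and asset UIDs are placeholders.

```python
from datetime import datetime

from airflow.operators.python import PythonOperator
from acceldata_airflow_sdk.dag import DAG
from acceldata_airflow_sdk.decorators.job import job
from acceldata_airflow_sdk.operators.torch_initialiser_operator import TorchInitializer
from acceldata_sdk.models.job import JobMetadata, Node

dag = DAG(
    dag_id='daily_sales_observability',  # hypothetical DAG id
    schedule_interval='@daily',
    start_date=datetime(2023, 1, 1),
    catchup=False,
)

# Root task: creates the pipeline (if missing), a pipeline run and the root span.
torch_initializer = TorchInitializer(
    task_id='torch_pipeline_initializer',
    pipeline_uid='daily.sales.pipeline',  # hypothetical pipeline UID
    pipeline_name='DAILY SALES PIPELINE',
    create_pipeline=True,
    dag=dag,
)

# Job-decorated callable: the decorator creates a job bounded by a span for this task.
@job(
    job_uid='daily.sales.load',
    inputs=[Node(asset_uid='POSTGRES_DS.sales.public.orders')],      # placeholder asset UID
    outputs=[Node(asset_uid='POSTGRES_DS.sales.public.daily_agg')],  # placeholder asset UID
    metadata=JobMetadata(name='owner', team='data', code_location='...'),
)
def load_daily_sales(**context):
    # Business logic goes here; context['span_context_parent'] gives access to the span.
    pass

load_task = PythonOperator(
    task_id='load_daily_sales',
    python_callable=load_daily_sales,
    dag=dag,
)

torch_initializer >> load_task
```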
acceldata-sdk
Pipeline APIsAcceldata Torch is a complete solution to observe the quality of the data present in your data lake and warehouse. Using Torch, you can ensure that high-quality data backs your business decisions. Torch provides you with tools to measure the quality of data in a data catalog and to never miss significant data sources. All users including analysts, data scientists, and developers, can rely on Torch to observe the data flowing in the warehouse or data lake and can rest assured that there is no loss of data.Acceldata SDK is used to trigger torch catalog and pipeline APIs. By creating a Torch client, all the torch apis can be accessed.Installacceldata-sdkpypi package in a python environment.pipinstallacceldata-sdkCreate Torch ClientTorch client is used to send data to the torch servers. It consists of various methods to communicate with the torch server. Torch client have access to catalog and pipeline APIs. To create a torch client, torch url and API keys are required. To create torch API keys, go to torch uiโ€™s settings and generate keys for the client.While creating a TorchClient connection to torch by default version compatibility checks between torch and sdk is enabled. If we want we can disable that check by passingdo_version_checkas `False.fromacceldata_sdk.torch_clientimportTorchClienttorch_client=TorchClient(url='https://acceldata.host.dev:9999',access_key='******',secret_key='*****************',do_version_check=True)Pipeline APIThere are various pipeline APIs are supported through Acceldata SDK. Pipeline APIs like create pipeline, add jobs and spans, initiate pipeline run et cetera. Acceldata sdk is able to send various event during span life cycle. Hence, Acceldata sdk has full control over the pipelines.Create Pipeline And Job and span to bound the jobPipelinerepresents the ETL pipeline in its entirety and will contain Asset nodes and Jobs associated. The complete pipeline definition forms the Lineage graph for all the data assets.Job NodeorProcess Noderepresents an entity that does some job in the ETL workflow. From this representation,Jobโ€™s inputis some assets or some other Jobs, and output is few other assets or few other Jobs. Torch will use the set of Jobs definition in the workflow to create the Lineage, and also track version changes for the Pipeline.Acceldata sdk providesCreateJobclass which need to be passed tocreate_jobfunction as a parameter to create a job.Params forCreateJob:uid: uid of the job. It should be unique for the job. It is a mandatory parameter.name: name of the job. It is a mandatory parameter.NOTE: This changed in 2.4.1 releasepipeline_run_id: id of the pipeline_run for which you want to add a job. It is a mandatory parameter ifjobis being created usingpipeline. Its is not needed if job is being created usingpipeline_run.description: description of the jobinputs: (list[Node]) input for the job. This can be uid of an asset specified using asset_uid parameter of Node object or it can be uid of another job specified using job_uid parameter of Node object.outputs: (list[Node]) output for the job.This can be uid of an asset specified using asset_uid parameter of Node object or it can be uid of another job specified using job_uid parameter of Node object.meta: Metadata of the Jobcontext: context of the jobbounded_by_span: (Boolean) This has to be set to True if the job has to be bounded with a span. Default value is false. It is an optional parameter.span_uid: (String) This is uid of new span to be created. 
This is a mandatory parameter if bounded_by_span is set to True.with_explicit_time: An optional boolean parameter used when a job is bounded by a span.If set to True, the child span will be started at the specified time provided in the subsequent events.If not set, the span will be automatically started with the current time at the moment of creation.fromacceldata_sdk.torch_clientimportTorchClientfromacceldata_sdk.models.jobimportCreateJob,JobMetadata,Nodefromacceldata_sdk.models.pipelineimportCreatePipeline,PipelineMetadata,PipelineRunResult,PipelineRunStatus# Create pipelinepipeline=CreatePipeline(uid='monthly_reporting_pipeline',name='Monthly reporting Pipeline',description='Pipeline to create monthly reporting tables',meta=PipelineMetadata('Vaishvik','acceldata_sdk_code','...'),context={'key1':'value1'})torch_client=TorchClient(url="https://torch.acceldata.local",access_key="*******",secret_key="******************************",do_version_check=False)pipeline_response=torch_client.create_pipeline(pipeline=pipeline)pipeline_run=pipeline_response.create_pipeline_run()# Create a job using pipeline object.# Passing of pipeline_run_id is mandatoryjob=CreateJob(uid='monthly_sales_aggregate',name='Monthly Sales Aggregate',description='Generates the monthly sales aggregate tables for the complete year',inputs=[Node(asset_uid='datasource-name.database.schema.table_1')],outputs=[Node(job_uid='job2_uid')],meta=JobMetadata('vaishvik','backend','https://github.com/'),context={'key21':'value21'},bounded_by_span=True,pipeline_run_id=pipeline_run.id,span_uid="test_shubh")job_response=pipeline_response.create_job(job)# Create a job using pipeline_run object.# Passing of pipeline_run_id is not neededjob=CreateJob(uid='monthly_sales_aggregate',name='Monthly Sales Aggregate',description='Generates the monthly sales aggregate tables for the complete year',inputs=[Node(asset_uid='datasource-name.database.schema.table_1')],outputs=[Node(job_uid='job2_uid')],meta=JobMetadata('vaishvik','backend','https://github.com/'),context={'key21':'value21'})job_response_using_run=pipeline_run.create_job(job)Create Pipeline Run And Generate Spans And Send Span EventsPipeline run indicates the execution of the pipeline. The same pipeline can be executed multiple times and each execution (run) has new snapshot version. Each pipeline run has hierarchical span's group. ASpanis a way to group a bunch of metrics, and they are hierarchical. It can be as granular as possible. The APIs will support creating a span object from a pipeline object, and then hierarchical spans are started from parent spans. A Span typically encompasses a process or a task and can be granular. This hierarchical system is powerful enough to model extremely complex pipeline observability flows. Optionally, a span can also be associated with a Job. This way, we can track starting and completion of Job, including the failure tracking. Start and stop are implicitly tracked for a span.Acceldata sdk also has support for create new pipeline run, add spans in it. During the span life cycle, sdk is able to send some customs and standard span events to collect pipeline run metrics for observability.Params forcreate_spanfunction which is available under apipeline_runuid: uid of the span being created. This should be unique. 
This is a mandatory parameter.associatedJobUids: List of job uids with which the span needs to be associated with.context_data: This is dict of key-value pair providing custom context information related to a span.Params forcreate_child_spanfunction which is available underspan_context. This is used to create hierarchy of span by creating a span under another spanuid: uid of the span being created. This should be unique. This is a mandatory parameter.context_data: This is dict of key-value pair providing custom context information related to a span.associatedJobUids: List of job uids with which the span needs to be associated with.fromacceldata_sdk.events.generic_eventimportGenericEventfromdatetimeimportdatetime# create a pipeline run of the pipelinepipeline_run=pipeline_response.create_pipeline_run()# get root span of a pipeline runroot_span=pipeline_run.get_root_span()# create span in the pipeline runspan_context=pipeline_run.create_span(uid='monthly.generate.data.span')# check current span is root or notspan_context.is_root()# end the spanspan_context.end()# check if the current span has children or notspan_context.has_children()# create a child spanchild_span_context=span_context.create_child_span('monthly.generate.customer.span')# send custom eventchild_span_context.send_event(GenericEvent(context_data={'client_time':str(datetime.now()),'row_count':100},event_uid="order.customer.join.result"))# abort spanchild_span_context.abort()# failed spanchild_span_context.failed()# update a pipeline run of the pipelineupdatePipelineRunRes=pipeline_run.update_pipeline_run(context_data={'key1':'value2','name':'backend'},result=PipelineRunResult.SUCCESS,status=PipelineRunStatus.COMPLETED)Get Latest Pipeline RunAcceldata sdk can get the latest pipeline run of the pipeline. With use of the latest pipeline run instance, user can continue ETL pipeline and add spans, jobs, events too. Hence, Acceldata sdk has complete access on the torch pipeline service. Params forget_pipeline:pipeline_identity: String parameter used to filter pipeline. It can be either id or uid of the pipeline.pipeline=torch_client.get_pipeline('monthly.reporting.pipeline')pipeline_run=pipeline.get_latest_pipeline_run()Get Pipeline Run with a particular pipeline run idAcceldata sdk can get a pipeline run of the pipeline with a particular pipeline run id. With use of the pipeline run instance, user can continue ETL pipeline and add spans, jobs, events too. Hence, Acceldata sdk has complete access on the torch pipeline service.Params forget_pipeline_run:pipeline_run_id: run id of the pipeline runcontinuation_id: continuation id of the pipeline runpipeline_id: id of the pipeline to which the run belongs topipeline_run=torch_client.get_pipeline_run(pipeline_run_id=pipeline_run_id)pipeline=torch_client.get_pipeline(pipeline_id=pipeline_id)pipeline_run=torch_client.get_pipeline_run(continuation_id=continuation_id,pipeline_id=pipeline.id)pipeline_run=pipeline.get_run(continuation_id=continuation_id)Get Pipeline details for a particular pipeline run idAcceldata sdk can get Pipeline details for a particular pipeline run.pipeline_details=pipeline_run.get_details()Get all spans for a particular pipeline run idAcceldata sdk can get all spans for a particular pipeline run id.pipeline_run_spans=pipeline_run.get_spans()Get Pipeline Runs for a pipelineAcceldata sdk can get all pipeline runs. 
Params forget_pipeline_runs:pipeline_id: id of the pipelineruns=torch_client.get_pipeline_runs(pipeline_id)runs=pipeline.get_runs()Get all PipelinesAcceldata sdk can get all pipelines.pipelines=torch_client.get_pipelines()Delete a PipelineAcceldata sdk can delete a pipeline.delete_response=pipeline.delete()Execute policy synchronously and asynchronouslyAcceldata sdk provides utility functionexecute_policyto execute policies synchronously and asynchronously. This will return an object on whichget_resultandget_statuscan be called to get result and status of the execution respectively.Params forexecute_policy:sync: Boolean parameter used to decide if the policy should be executed synchronously or asynchronously. It is a mandatory parameter. If its is set toTrueit will return only after the execution ends. If it is set toFalseit will return immediately after starting the execution.policy_type: Enum parameter used to specify the policy type. It is a mandatory parameter. It is a enum which will take values from constants as PolicyType.DATA_QUALITY or PolicyType.RECONCILIATION.policy_id: String parameter used to specify the policy id to be executed. It is a mandatory parameter.incremental: Boolean parameter used to specify if the policy execution should be incremental or full. Default value is False.pipeline_run_id: Long parameter used to specify Run id of the pipeline run where the policy is being executed. This can be used to link the policy execution with a particular pipeline run.failure_strategy: Enum parameter used to decide the behaviour in case of failure. Default value is DoNotFail.failure_strategytakes enum of typeFailureStrategywhich can have 3 values DoNotFail, FailOnError and FailOnWarning.DoNotFail will never throw. In case of failure it will log the error.FailOnError will Throw exception only if it's an error. In case of warning it return without any errors.FailOnWarning will Throw exception on warning as well as error.To get the execution result we can callget_policy_execution_resulton torch_client or callget_resulton execution object which will return a result object.Params forget_policy_execution_result:policy_type: Enum parameter used to specify the policy type. It is a mandatory parameter. It is a enum which will take values from constants as PolicyType.DATA_QUALITY or PolicyType.RECONCILIATION.execution_id: String parameter used to specify the execution id to be queried for rsult. It is a mandatory parameter.failure_strategy: Enum parameter used to decide the behaviour in case of failure. Default value is DoNotFail.Params forget_result:failure_strategy: Enum parameter used to decide the behaviour in case of failure. Default value is DoNotFail.To get the current status we can callget_policy_statuson torch_client or callget_statuson execution object which will get the currentresultStatusof the execution.params forget_policy_status:policy_type: Enum parameter used to specify the policy type. It is a mandatory parameter. It is a enum which will take values from constants as PolicyType.DATA_QUALITY or PolicyType.RECONCILIATION.execution_id: String parameter used to specify the execution id to be queried for rsult. 
It is a mandatory parameter.get_statusdoes not take any parameter.Asynchronous execution examplefromacceldata_sdk.torch_clientimportTorchClientimportacceldata_sdk.constantsasconsttorch_credentials={'url':'https://torch.acceldata.local:5443/torch','access_key':'PJSAJALFHSHU','secret_key':'E6LLJHKGSHJJTRHGK540E5','do_version_check':'True'}torch_client=TorchClient(**torch_credentials)async_executor=torch_client.execute_policy(const.PolicyType.DATA_QUALITY,46,sync=False,failure_strategy=const.FailureStrategy.DoNotFail,pipeline_run_id=None)# Wait for execution to get final resultexecution_result=async_executor.get_result(failure_strategy=const.FailureStrategy.DoNotFail)# Get the current statusexecution_status=async_executor.get_status()Synchronous execution example.fromacceldata_sdk.torch_clientimportTorchClientimportacceldata_sdk.constantsasconsttorch_credentials={'url':'https://torch.acceldata.local:5443/torch','access_key':'PJSAJALFHSHU','secret_key':'E6LLJHKGSHJJTRHGK540E5','do_version_check':'True'}torch_client=TorchClient(**torch_credentials)# This will wait for execution to get final resultsync_executor=torch_client.execute_policy(const.PolicyType.DATA_QUALITY,46,sync=True,failure_strategy=const.FailureStrategy.DoNotFail,pipeline_run_id=None)# Wait for execution to get final resultexecution_result=sync_executor.get_result(failure_strategy=const.FailureStrategy.DoNotFail)# Get the current statusexecution_status=sync_executor.get_status()Cancel execution example.execution_result=sync_executor.cancel()Example of continuing the same pipeline run across multiple ETL scripts using continuation_idETL1 - Here a new pipeline_run is created using a continuation_id but pipeline_run is not closedfromacceldata_sdk.torch_clientimportTorchClientfromacceldata_sdk.models.pipelineimportCreatePipeline,PipelineMetadata,PipelineRunResult,PipelineRunStatus# Create pipelinepipeline_uid='monthly_reporting_pipeline'pipeline=CreatePipeline(uid=pipeline_uid,name='Monthly reporting Pipeline',description='Pipeline to create monthly reporting tables',meta=PipelineMetadata('Vaishvik','acceldata_sdk_code','...'),context={'key1':'value1'})torch_client=TorchClient(url="https://torch.acceldata.local",access_key="*******",secret_key="******************************",do_version_check=False)pipeline_response=torch_client.create_pipeline(pipeline=pipeline)# A new continuation id should be generated on every run. Same continuation id cannot be reused.cont_id="continuationid_demo_1"pipeline_run=pipeline_response.create_pipeline_run(continuation_id=cont_id)# Make sure pipeline_run is not ended using the update_pipeline_run call so that same run can be used in next ETL scriptETL2 - This script will continue the same pipeline run from ETL1fromacceldata_sdk.torch_clientimportTorchClientfromacceldata_sdk.models.pipelineimportPipelineRunResult,PipelineRunStatustorch_client=TorchClient(url="https://torch.acceldata.local",access_key="*******",secret_key="******************************",do_version_check=False)pipeline_uid='monthly_reporting_pipeline'# First get the same pipeline using the previously used UID. 
Then we will get the previously started pipeline_run using the continuation_idpipeline=torch_client.get_pipeline(pipeline_uid)# continuation_id should be a same ID used in ETL1 script so that same pipeline_run is continued in the pipeline.cont_id="continuationid_demo_1"pipeline_run=pipeline.get_run(continuation_id=cont_id)# Use this pipeline run to create span and jobs# At the end of this script close the pipeline run using update_pipeline_run if we do not want to continue the same pipeline_run furtherupdatePipelineRunRes=pipeline_run.update_pipeline_run(context_data={'key1':'value2','name':'backend'},result=PipelineRunResult.SUCCESS,status=PipelineRunStatus.COMPLETED)Datasource APIsAcceldata SDK has full access on catalog APIs as well.Datasource APITorch has support for more 15+ datasource crawling support.# Get datasourceds_res=torch_client.get_datasource('snowflake_ds_local')ds_res=torch_client.get_datasource(5,properties=True)# Get datasources based on typedatasources=torch_client.get_datasources(const.AssetSourceType.SNOWFLAKE)Assets APIsAcceldata sdk has methods to get assets in the given datasource.fromacceldata_sdk.models.create_assetimportAssetMetadata# Get asset by id/uidasset=torchclient.get_asset(1)asset=torch_client.get_asset('Feature_bag_datasource.feature_1')Asset's tags, labels, metadata and sample dataUser can add tags, labels custom metadata and also get sample data of the asset using sdk. Tags and labels can be used to filter out asset easily.# asset metadatafromacceldata_sdk.models.tagsimportAssetLabel,CustomAssetMetadataasset=torch_client.get_asset(asset_id)# Get metadata of an assetasset.get_metadata()# Get all tagstags=asset.get_tags()# Add tag assettag_add=asset.add_tag(tag='asset_tag')# Add asset labelslabels=asset.add_labels(labels=[AssetLabel('test1','demo1'),AssetLabel('test2','demo2')])# Get asset labelslabels=asset.get_labels()# Add custom metadataasset.add_custom_metadata(custom_metadata=[CustomAssetMetadata('testcm1','democm1'),CustomAssetMetadata('testcm2','democm2')])Crawler OperationsUser can start crawler as well as check for running crawler status.# Start a crawlerdatasource.start_crawler()torch_client.start_crawler('datasource_name')# Get running crawler statusdatasource.get_crawler_status()torch_client.get_crawler_status('datasource_name')Trigger policies, Profiling and sampling of an assetCrawled assets can be profiled and sampled with use of spark jobs running on the livy. 
Furthermore, Created policies (Recon + DQ) can be triggered too.importacceldata_sdk.constantsasconst# profile an asset, get profile req details, cancel profileprofile_res=asset.start_profile(profiling_type=ProfilingType.FULL)profile_req_details=profile_res.get_status()cancel_profile_res=profile_res.cancel()profile_res=asset.get_latest_profile_status()profile_req_details_by_req_id=torch_client.get_profile_status(asset_id=profile_req_details.assetId,req_id=profile_req_details.id)# sample datasample_data=asset.sample_data()# Rule execution and status# Execute policyexecute_dq_rule=torch_client.execute_policy(const.PolicyType.DATA_QUALITY,1114,incremental=False)failure_strategy=const.FailureStrategy.DoNotFail# Get policy execution resultresult=torch_client.get_policy_execution_result(policy_type=const.PolicyType.DATA_QUALITY,execution_id=execute_dq_rule.id,failure_strategy=failure_strategy)# Get policy and executefromacceldata_sdk.models.ruleExecutionResultimportRuleType,PolicyFilterrule=torch_client.get_policy(const.PolicyType.RECONCILIATION,"auth001_reconciliation")# Execute policyasync_execution=rule.execute(sync=False)# Get execution resultasync_execution_result=async_execution.get_result()# Get current execution statusasync_execution_status=async_execution.get_status()# Cancel policy execution jobcancel_rule=async_execution.cancel()# List all executions# List executions by iddq_rule_executions=torch_client.policy_executions(1114,RuleType.DATA_QUALITY)# List executions by namedq_rule_executions=torch_client.policy_executions('dq-scala',RuleType.DATA_QUALITY)# List executions by rulerecon_rule_executions=rule.get_executions()filter=PolicyFilter(policyType=RuleType.RECONCILIATION,enable=True)# List all rulesrecon_rules=torch_client.list_all_policies(filter=filter)Version Log0.0.1 (12/09/2022)Acceldata python sdkSupport for flow APIs and catalog APIs of the torch
accele
No description available on PyPI.
accelerate
Run your *raw* PyTorch training script on any kind of deviceEasy to integrate๐Ÿค— Accelerate was created for PyTorch users who like to write the training loop of PyTorch models but are reluctant to write and maintain the boilerplate code needed to use multi-GPUs/TPU/fp16.๐Ÿค— Accelerate abstracts exactly and only the boilerplate code related to multi-GPUs/TPU/fp16 and leaves the rest of your code unchanged.Here is an example:import torchimport torch.nn.functional as Ffrom datasets import load_dataset+ from accelerate import Accelerator+ accelerator = Accelerator()- device = 'cpu'+ device = accelerator.devicemodel = torch.nn.Transformer().to(device)optimizer = torch.optim.Adam(model.parameters())dataset = load_dataset('my_dataset')data = torch.utils.data.DataLoader(dataset, shuffle=True)+ model, optimizer, data = accelerator.prepare(model, optimizer, data)model.train()for epoch in range(10):for source, targets in data:source = source.to(device)targets = targets.to(device)optimizer.zero_grad()output = model(source)loss = F.cross_entropy(output, targets)- loss.backward()+ accelerator.backward(loss)optimizer.step()As you can see in this example, by adding 5-lines to any standard PyTorch training script you can now run on any kind of single or distributed node setting (single CPU, single GPU, multi-GPUs and TPUs) as well as with or without mixed precision (fp8, fp16, bf16).In particular, the same code can then be run without modification on your local machine for debugging or your training environment.๐Ÿค— Accelerate even handles the device placement for you (which requires a few more changes to your code, but is safer in general), so you can even simplify your training loop further:import torchimport torch.nn.functional as Ffrom datasets import load_dataset+ from accelerate import Accelerator- device = 'cpu'+ accelerator = Accelerator()- model = torch.nn.Transformer().to(device)+ model = torch.nn.Transformer()optimizer = torch.optim.Adam(model.parameters())dataset = load_dataset('my_dataset')data = torch.utils.data.DataLoader(dataset, shuffle=True)+ model, optimizer, data = accelerator.prepare(model, optimizer, data)model.train()for epoch in range(10):for source, targets in data:- source = source.to(device)- targets = targets.to(device)optimizer.zero_grad()output = model(source)loss = F.cross_entropy(output, targets)- loss.backward()+ accelerator.backward(loss)optimizer.step()Want to learn more? Check out thedocumentationor have a look at ourexamples.Launching script๐Ÿค— Accelerate also provides an optional CLI tool that allows you to quickly configure and test your training environment before launching the scripts. No need to remember how to usetorch.distributed.runor to write a specific launcher for TPU training! On your machine(s) just run:accelerateconfigand answer the questions asked. 
This will generate a config file that will be used automatically to properly set the default options when doingacceleratelaunchmy_script.py--args_to_my_scriptFor instance, here is how you would run the GLUE example on the MRPC task (from the root of the repo):acceleratelaunchexamples/nlp_example.pyThis CLI tool isoptional, and you can still usepython my_script.pyorpython -m torchrun my_script.pyat your convenience.You can also directly pass in the arguments you would totorchrunas arguments toaccelerate launchif you wish to not runaccelerate config.For example, here is how to launch on two GPUs:acceleratelaunch--multi_gpu--num_processes2examples/nlp_example.pyTo learn more, check the CLI documentation availablehere.Launching multi-CPU run using MPI๐Ÿค— Here is another way to launch multi-CPU run using MPI. You can learn how to install Open MPI onthis page. You can use Intel MPI or MVAPICH as well. Once you have MPI setup on your cluster, just run:mpirun-np2pythonexamples/nlp_example.pyLaunching training using DeepSpeed๐Ÿค— Accelerate supports training on single/multiple GPUs using DeepSpeed. To use it, you don't need to change anything in your training code; you can set everything using justaccelerate config. However, if you desire to tweak your DeepSpeed related args from your Python script, we provide you theDeepSpeedPlugin.fromaccelerateimportAccelerator,DeepSpeedPlugin# deepspeed needs to know your gradient accumulation steps beforehand, so don't forget to pass it# Remember you still need to do gradient accumulation by yourself, just like you would have done without deepspeeddeepspeed_plugin=DeepSpeedPlugin(zero_stage=2,gradient_accumulation_steps=2)accelerator=Accelerator(mixed_precision='fp16',deepspeed_plugin=deepspeed_plugin)# How to save your ๐Ÿค— Transformer?accelerator.wait_for_everyone()unwrapped_model=accelerator.unwrap_model(model)unwrapped_model.save_pretrained(save_dir,save_function=accelerator.save,state_dict=accelerator.get_state_dict(model))Note: DeepSpeed support is experimental for now. In case you get into some problem, please open an issue.Launching your training from a notebook๐Ÿค— Accelerate also provides anotebook_launcherfunction you can use in a notebook to launch a distributed training. This is especially useful for Colab or Kaggle notebooks with a TPU backend. Just define your training loop in atraining_functionthen in your last cell, add:fromaccelerateimportnotebook_launchernotebook_launcher(training_function)An example can be found inthis notebook.Why should I use ๐Ÿค— Accelerate?You should use ๐Ÿค— Accelerate when you want to easily run your training scripts in a distributed environment without having to renounce full control over your training loop. This is not a high-level framework above PyTorch, just a thin wrapper so you don't have to learn a new library. In fact, the whole API of ๐Ÿค— Accelerate is in one class, theAcceleratorobject.Why shouldn't I use ๐Ÿค— Accelerate?You shouldn't use ๐Ÿค— Accelerate if you don't want to write a training loop yourself. There are plenty of high-level libraries above PyTorch that will offer you that, ๐Ÿค— Accelerate is not one of them.Frameworks using ๐Ÿค— AccelerateIf you like the simplicity of ๐Ÿค— Accelerate but would prefer a higher-level abstraction around its capabilities, some frameworks and libraries that are built on top of ๐Ÿค— Accelerate are listed below:Amphionis a toolkit for Audio, Music, and Speech Generation. 
Its purpose is to support reproducible research and help junior researchers and engineers get started in the field of audio, music, and speech generation research and development.Animusis a minimalistic framework to run machine learning experiments. Animus highlights common "breakpoints" in ML experiments and provides a unified interface for them withinIExperiment.Catalystis a PyTorch framework for Deep Learning Research and Development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write yet another train loop. Catalyst provides aRunnerto connect all parts of the experiment: hardware backend, data transformations, model training, and inference logic.fastaiis a PyTorch framework for Deep Learning that simplifies training fast and accurate neural nets using modern best practices. fastai provides aLearnerto handle the training, fine-tuning, and inference of deep learning algorithms.Finetuneris a service that enables models to create higher-quality embeddings for semantic search, visual similarity search, cross-modal text<->image search, recommendation systems, clustering, duplication detection, anomaly detection, or other uses.InvokeAIis a creative engine for Stable Diffusion models, offering industry-leading WebUI, terminal usage support, and serves as the foundation for many commercial products.Korniais a differentiable library that allows classical computer vision to be integrated into deep learning models. Kornia provides aTrainerwith the specific purpose to train and fine-tune the supported deep learning algorithms within the library.Open Assistantis a chat-based assistant that understands tasks, can interact with their party systems, and retrieve information dynamically to do so.pytorch-acceleratedis a lightweight training library, with a streamlined feature set centered around a general-purposeTrainer, that places a huge emphasis on simplicity and transparency; enabling users to understand exactly what is going on under the hood, but without having to write and maintain the boilerplate themselves!Stable Diffusion web UIis an open-source browser-based easy-to-use interface based on the Gradio library for Stable Diffusion.torchkerasis a simple tool for training pytorch model just in a keras style, a dynamic and beautiful plot is provided in notebook to monitor your loss or metric.transformersas a tool for helping train state-of-the-art machine learning models in PyTorch, Tensorflow, and JAX. (Accelerate is the backend for the PyTorch side).InstallationThis repository is tested on Python 3.8+ and PyTorch 1.10.0+You should install ๐Ÿค— Accelerate in avirtual environment. If you're unfamiliar with Python virtual environments, check out theuser guide.First, create a virtual environment with the version of Python you're going to use and activate it.Then, you will need to install PyTorch: refer to theofficial installation pageregarding the specific install command for your platform. 
Then 🤗 Accelerate can be installed using pip as follows:

```bash
pip install accelerate
```

## Supported integrations

* CPU only
* multi-CPU on one node (machine)
* multi-CPU on several nodes (machines)
* single GPU
* multi-GPU on one node (machine)
* multi-GPU on several nodes (machines)
* TPU
* FP16/BFloat16 mixed precision
* FP8 mixed precision with Transformer Engine
* DeepSpeed support (Experimental)
* PyTorch Fully Sharded Data Parallel (FSDP) support (Experimental)
* Megatron-LM support (Experimental)

## Citing 🤗 Accelerate

If you use 🤗 Accelerate in your publication, please cite it by using the following BibTeX entry.

```bibtex
@Misc{accelerate,
  title =        {Accelerate: Training and inference at scale made simple, efficient and adaptable.},
  author =       {Sylvain Gugger and Lysandre Debut and Thomas Wolf and Philipp Schmid and Zachary Mueller and Sourab Mangrulkar and Marc Sun and Benjamin Bossan},
  howpublished = {\url{https://github.com/huggingface/accelerate}},
  year =         {2022}
}
```
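As referenced in the notebook section above, a slightly fuller launch sketch might look as follows. This is only an illustration, assuming a machine with 2 GPUs and a placeholder `training_function`; `args` and `num_processes` are existing `notebook_launcher` parameters, but check the documentation of your installed version.

```python
from accelerate import notebook_launcher

def training_function(learning_rate, num_epochs):
    # placeholder body: build dataloaders/model/optimizer and call accelerator.prepare() here
    print(f"training with lr={learning_rate} for {num_epochs} epochs")

# Launch two processes (e.g. one per GPU) from the notebook's last cell.
notebook_launcher(training_function, args=(1e-3, 3), num_processes=2)
```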
accelerated-numpy
# numpy-threading-extensions

Faster loops for NumPy using multithreading and other tricks. The first release will target NumPy binary and unary ufuncs. Eventually we will enable overriding other NumPy functions, and provide a C-based (non-Python) API for extending via third-party functions.

## Installation

```
pip install accelerated_numpy
```

You can also install the in-development version 0.0.1 with:

```
pip install https://github.com/Quansight/numpy-threading-extensions/archive/v0.0.1.zip
```

or latest with:

```
pip install https://github.com/Quansight/numpy-threading-extensions/archive/main.zip
```

## Documentation

To use the project:

```python
import accelerated_numpy
accelerated_numpy.initialize()
```

## Development

To run all the tests run:

```
tox
```

Note, to combine the coverage data from all the tox environments run:

| OS      | Command                                        |
|---------|------------------------------------------------|
| Windows | `set PYTEST_ADDOPTS=--cov-append` then `tox`   |
| Other   | `PYTEST_ADDOPTS=--cov-append tox`              |
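As a rough sketch of the intended workflow (not an official benchmark), you could time a binary ufunc before and after calling `initialize()`. The only package call used is the documented `accelerated_numpy.initialize()`; the array size and loop count are arbitrary:

```python
import time
import numpy as np
import accelerated_numpy

a = np.random.rand(10_000_000)
b = np.random.rand(10_000_000)

def time_add(label):
    start = time.perf_counter()
    for _ in range(20):
        np.add(a, b)  # binary ufunc, the kind targeted by the first release
    print(f"{label}: {time.perf_counter() - start:.3f}s")

time_add("stock numpy")
accelerated_numpy.initialize()  # switch supported ufunc loops to the threaded versions
time_add("accelerated_numpy")
```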
accelerated-scan
# Accelerated Scan

This package implements the fastest first-order parallel associative scan on the GPU for forward and backward passes. The scan efficiently solves first-order recurrences of the form `x[t] = gate[t] * x[t-1] + token[t]`, common in state space models and linear RNNs.

The `accelerated_scan.warp` C++ CUDA kernel uses a chunked processing algorithm that leverages the fastest GPU communication primitives available on each level of the hierarchy: warp shuffles within warps of 32 threads, and shared memory (SRAM) between warps within a thread block. One sequence per channel dimension is confined to one thread block. The derivation of Chunked Scan has been used to extend the tree-level Blelloch algorithm to blocks.

A similar implementation is available in `accelerated_scan.triton` using Triton's `tl.associative_scan` primitive. It requires Triton 2.2 for its `enable_fp_fusion` flag.

## Quick Start

```
pip install accelerated-scan
```

```python
import torch
from accelerated_scan.warp import scan  # a pure c++ kernel, faster than cub
#from accelerated_scan.triton import scan  # uses tl.associative_scan
#from accelerated_scan.ref import scan  # reference torch implementation

# sequence lengths must be a power of 2 of lengths between 32 and 65536
# hit me up if you need different lengths!
batch_size, dim, seqlen = 3, 1536, 4096
gates = 0.999 + 0.001 * torch.rand(batch_size, dim, seqlen, device="cuda")
tokens = torch.rand(batch_size, dim, seqlen, device="cuda")

out = scan(gates, tokens)
```

To ensure numerical equivalence, a reference implementation for trees is provided in Torch. It can be sped up using `torch.compile`.

## Benchmarks

See more benchmarks in nanokitchen: https://github.com/proger/nanokitchen

Forward speed of (8, 1536, seqlen), inference mode:

| SEQUENCE_LENGTH | accelerated_scan.triton (triton 2.2.0) | accelerated_scan.ref | accelerated_scan.warp |
|-----------------|----------------------------------------|----------------------|-----------------------|
| 128.0           | 0.027382                               | 0.380874             | 0.026844              |
| 256.0           | 0.049104                               | 0.567916             | 0.048593              |
| 512.0           | 0.093008                               | 1.067906             | 0.092923              |
| 1024.0          | 0.181856                               | 2.048471             | 0.183581              |
| 2048.0          | 0.358250                               | 3.995369             | 0.355414              |
| 4096.0          | 0.713511                               | 7.897022             | 0.714536              |
| 8192.0          | 1.433052                               | 15.698944            | 1.411390              |
| 16384.0         | 3.260965                               | 31.305046            | 2.817152              |
| 32768.0         | 31.459671                              | 62.557182            | 5.645697              |
| 65536.0         | 66.787331                              | 125.208572           | 11.297921             |

## Notes on Precision

When gates and tokens are sampled uniformly from 0..1, the lack of bfloat16 precision dominates the error (compared to the reference implementation).
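For orientation, the recurrence can also be written as a naive sequential loop in plain PyTorch. This is not the package's optimized kernel (nor its `accelerated_scan.ref` module), just a direct restatement of `x[t] = gate[t] * x[t-1] + token[t]` that is handy for checking shapes and numerics on small inputs; it assumes a CUDA device because the `warp` kernel runs on the GPU:

```python
import torch
from accelerated_scan.warp import scan

def naive_scan(gates: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    # gates, tokens: (batch, dim, seqlen); x[..., t] = gates[..., t] * x[..., t-1] + tokens[..., t]
    x = torch.zeros_like(tokens)
    prev = torch.zeros_like(tokens[..., 0])
    for t in range(tokens.shape[-1]):
        prev = gates[..., t] * prev + tokens[..., t]
        x[..., t] = prev
    return x

batch_size, dim, seqlen = 2, 8, 64  # seqlen must still be a power of 2 in the supported range
gates = 0.999 + 0.001 * torch.rand(batch_size, dim, seqlen, device="cuda")
tokens = torch.rand(batch_size, dim, seqlen, device="cuda")
torch.testing.assert_close(scan(gates, tokens), naive_scan(gates, tokens), rtol=1e-4, atol=1e-4)
```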
accelerated-sequence-clustering
No description available on PyPI.
accelerate-fft
Implements fft using apple's accelerate framework (vDSP)
acceleration
Failed to fetch description. HTTP Status Code: 404
acceleration2
Failed to fetch description. HTTP Status Code: 404
accelerator
The Accelerator is a tool for fast and reproducible processing of large amounts of data. Extensive documentation is available here:Reference ManualHome PageAfter installation try "ax --help".Supported EnvironmentsThe Accelerator project has been built, tested, and runs on:Ubuntu 18.04, 20.04Debian 10, 11FreeBSD 13.0but is not limited to these systems or versions.Windows is not supported, but WSL should work.LicenseCopyright 2017-2018 eBay Inc.Modifications copyright (c) 2018-2023 Carl DrouggeModifications copyright (c) 2019-2023 Anders BerkemanLicensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License athttps://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
accelerator-physics
Accelerator UtilsThis is a monolithic repository containing any functions related to doing accelerator physics in python that I think would be useful to others. Enjoy!
accelerator-toolbox
IntroductionAccelerator Toolbox is a code used for simulating particle accelerators, used particularly for synchrotron light sources. It is hosted onGithub. Its original implementation is in Matlab.pyAT is a Python interface to Accelerator Toolbox. It uses the โ€˜pass methodsโ€™ defined in Accelerator Toolbox, implemented by compiling the C code used in the AT โ€˜integratorsโ€™ into a Python extension. These pass methods are used by higher-level functions to provide physics results.See thepyAT websitefor a more detailed introduction.pyAT supports Python 3.7 to 3.11.InstallationInstall accelerator-toolbox from PyPI:$ pip install accelerator-toolboxUsageExample usage:>>> import at >>> ring = at.Lattice.load('machine_data/hmba.mat') >>> print(at.radiation_parameters(ring)) Frac. tunes: [0.2099983 0.34001317 0.00349013] Tunes: [76.2099983 27.34001317] Chromaticities: [5.73409894 3.91761206] Momentum compact. factor: 8.506669e-05 Slip factor: -8.505944e-05 Energy: 6.000000e+09 eV Energy loss / turn: 2.526189e+06 eV Radiation integrals - I1: 0.07179435013387388 m I2: 0.13844595446798158 m^-1 I3: 0.003357584058614851 m^-2 I4: -0.07375725030666251 m^-1 I5: 5.281495714523264e-07 m^-1 Mode emittances: [1.3148797e-10 nan nan] Damping partition numbers: [1.53275121 1. 1.46724879] Damping times: [0.00872477 0.0133729 0.00911427] s Energy spread: 0.000934463 Bunch length: 0.0030591 m Cavities voltage: 6000000.0 V Synchrotron phase: 2.70701 rd Synchrotron frequency: 1239.74 HzFor more examples of how to use pyAT, seepyat_examples.rst.Developer NotesDeveloper notes are indevelopers.rst.
accelerator-tools
Failed to fetch description. HTTP Status Code: 404
accelerator-utils
Accelerator UtilsThis is a monolithic repository containing any functions related to doing accelerator physics in python that I think would be useful to others. Enjoy!
accelerometer
A tool to extract meaningful health information from large accelerometer datasets. The software generates time-series and summary metrics useful for answering key questions such as how much time is spent in sleep, sedentary behaviour, or doing physical activity.InstallMinimum requirements: Python>=3.7, Java 8 (1.8)The following instructions make use of Anaconda to meet the minimum requirements:Download & installMiniconda(light-weight version of Anaconda).(Windows) Once installed, launch theAnaconda Prompt.Create a virtual environment:$condacreate-naccelerometerpython=3.9openjdkpipThis creates a virtual environment calledaccelerometerwith Python version 3.9, OpenJDK, and Pip.Activate the environment:$condaactivateaccelerometerYou should now see(accelerometer)written in front of your prompt.Installaccelerometer:$pipinstallaccelerometerYou are all set! The next time that you want to useaccelerometer, open the Anaconda Prompt and activate the environment (step 4). If you see(accelerometer)in front of your prompt, you are ready to go!UsageTo extract summary movement statistics from an Axivity file (.cwa):$accProcessdata/sample.cwa.gz<output written to data/sample-outputSummary.json><time series output written to data/sample-timeSeries.csv.gz>Movement statistics will be stored in a JSON file:{"file-name":"sample.cwa.gz","file-startTime":"2014-05-07 13:29:50","file-endTime":"2014-05-13 09:49:50","acc-overall-avg(mg)":32.78149,"wearTime-overall(days)":5.8,"nonWearTime-overall(days)":0.04,"quality-goodWearTime":1}SeeData Dictionaryfor the list of output variables.Actigraph and GENEActiv files are also supported, as well as custom CSV files. SeeUsagefor more details.To plot the activity profile:$accPlotdata/sample-timeSeries.csv.gz<output plot written to data/sample-timeSeries-plot.png>TroubleshootingSome systems may face issues with Java when running the script. If this is your case, try fixing OpenJDK to version 8:$condainstall-naccelerometeropenjdk=8Under the hoodInterpreted levels of physical activity can vary, as many approaches can be taken to extract summary physical activity information from raw accelerometer data. To minimise error and bias, our tool uses published methods to calibrate, resample, and summarise the accelerometer data.SeeMethodsfor more details.Citing our workWhen using this tool, please consider the works listed inCITATION.md.LicenceSeeLICENSE.md.AcknowledgementsWe would like to thank all our code contributors and manuscript co-authors.Contributors Graph
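The summary JSON and time-series CSV written by `accProcess` are plain files, so they can be inspected with the standard library and pandas (pandas is not a dependency of the tool itself; the file names below are the sample outputs shown above):

```python
import json
import pandas as pd

# Summary movement statistics produced by accProcess
with open("data/sample-outputSummary.json") as f:
    summary = json.load(f)
print(summary["acc-overall-avg(mg)"], summary["wearTime-overall(days)"])

# Epoch-level time series (gzipped CSV)
timeseries = pd.read_csv("data/sample-timeSeries.csv.gz", compression="gzip")
print(timeseries.head())
```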
accelo
Acceldata ML Observability SDKThe SDK helps Data organizations track their ML models, data that deliver business value.Pre-requisitesRegistering yourself with Acceldata Data Observability Cloud platformDriven through Acceldata Cloud PlatformEnabling ML Observability toolkitDriven through Acceldata Cloud PlatformGenerating API keysDriven through Acceldata ML Observability UISetting up env varsexportCLOUD_ACCESS_KEY=XXXX0000exportCLOUD_SECRET_KEY=XXXX0000exportACCELO_API_ACCESS_KEY=XXXX0000exportACCELO_API_SECRET_KEY=XXXX0000exportACCELO_API_ENDPOINT=https://some_acceldata_endpointInstall the SDKpipinstallacceloSet Go!Sample Usage PatternsBefore we delve into code, let's just see an example of a pattern in which you can use the SDK.Project CreationModesUI - Users will be able to create projects via the Catalog UI where they can either have a model view or a project viewAPI - Users can create a project in their training pipeline. If a project already exists, API throws a custom error that can be used to avoid any failures in the training pipelineModel Registration and Baseline logging (training pipeline)User registers a model against a projectModel registration API expects the project id, model name and bunch of other metadata that can be used to track models on the catalog UIPrediction logging (serving pipeline)The serving pipeline can be used to log the predictions to Acceldata datastoreThe API expects model id, model version, and predictions along with their id columns as mandatory params.Actual logging (actuals pipeline)The actuals for any features may arrive at a later point and the API provides 2 ways to log the actuals.UUIDs: generated by the API during the serving pipeline stage; but the users are expected to keep track of them and map them to the appropriate actualsID COLUMNS: If users specify certain columns to considered as the IDโ€™s, the API will be able to automatically log the actuals against the APIโ€™s and the backend services will be able to compare the actuals to predictions based on these ID COLUMNSNote: Please refer to the API documentation for more information.Basic APIsFinally, let's see how you can annotate the SDK into your production code pipelines. Below are some examples of how a Data Scientist or ML Engineer can annotate the SDK into the existing ML code and observe them using Acceldata ML Observability platform.Import the libraryfromaccelo_mlopsimportAcceloClientCreating a client with a workspaceThe workspace is the top level name that you would want to associate your organization with. This can also be thought of like a tenant name.client=AcceloClient(workspace='your_organization_name')Creating a ProjectNow, when it comes to code, the atomic unit is aProject. The project name can be a team name, domain name within a company or any other logical separation Data Science groups.client.create_project(name='marketing-team',description='All models related to the marketing team reside here. ')Register a ModelNow, assuming that you have developed a model that you want to observe using the Acceldata ML Observability platform. 
The model object is calledclassifier.model_metadata={'frequency':'DAILY','model_type':'binary_classification','performance_metric':'f1_score','model_obj':classifier}additional_params={'owner':'[email protected]','last_trained':'2021-08-01','training_job_name':'click_prediction_ml_pipeline','label':'flower_type','total_consumers':2}client.register_model(project_id=12,model_name='click_prediction_model',model_version='v1',model_metadata=model_metadata,**additional_params)Let's see what above variables mean.classifier: this is the model objectmodel_meatadata: this is a mandatory dictionary users have to pass to the register model call to make most use of the ML observability platform.additional_params: this is a optional dictionary users can use to log any additional details about the model which might be useful when viewed in the ML Catalog.Now, it's time to log the data that was used in model.Log baseline dataclient.log_baseline(model_id=client.model_id,model_version='v1',baseline_data=X_train,labels=y_train,label_name='click',id_cols=['campaign_id'],publish_date='2021-08-02')This API call logs your baseline data to Acceldata data store and will be further used for analysis that you sign up for.Log predictionsids=client.log_predictions(model_id=client.model_id,model_version='v1',feature_data=feature_data,predictions=preds,publish_date='2021-06-02')Note: As of now, we support batch predictions only but soon enough, will be able to support logging online predictions.Log actualsAt a later time, when actuals arrive, you'd be able to log them using below API.client.log_actuals(model_id=client.model_id,model_version='v1',id_cols_df=id_columns_frane,actuals=y_test,publish_date='2021-06-03')You are now done logging both metadata and the data itself.Detailed activity logs can be viewed in thead-mlops.logfile in the directory where your code file exists, however, location of the log file is configurable.What happens after you create a project and register a model?MetadataThe model and the other metadata are now part of Acceldata ML Catalog and can be viewed on the UI.DataThebaseline, prediction, actualdata are logged into the Acceldata Store. This data will be used for further analysis.DashboardYou will be able to track model performance, data drifts, etc by visiting this dashboard.AlertsYou can set alerts on charts, generate reports, etc using the dashboard or the catalog.Contact UsPlease get in touch with us [email protected] access to Acceldata catalog, dashboard, and assistance with bringing ML Observability into your organization.
accelphys
accelphysA package for basic accelerator physics. May also be useful in other areas of study but accelerator physics is the main focus area.InstallationThe package can be installed in editable mode by running the following command in the terminal:gitclonehttps://github.com/jako4295/accelphys.git pipinstall-e.
accelpy
Scalable Accelerator Interface in Python
accelrotate
No description available on PyPI.
acceltools
No description available on PyPI.
accent
Failed to fetch description. HTTP Status Code: 404
accentcolordetect
# Accentcolordetect

This package allows you to detect the user's accent color on:

* macOS (untested)
* Windows 10+

The main application of this package is to detect the accent color from your GUI Python application and apply the needed adjustments to your interface. Inspired by the darkdetect package by Alberto Sottile.

## Usage

```python
>>> import accentcolordetect
>>> accentcolordetect.accent()
((255, 140, 0), '#ff8c00')
```

## Install

```
pip install accentcolordetect
```
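Since `accent()` returns both an RGB tuple and a hex string, applying it to a GUI is a one-liner. A small sketch with tkinter — the only accentcolordetect call used is the documented `accent()`:

```python
import tkinter as tk
import accentcolordetect

rgb, hex_color = accentcolordetect.accent()

root = tk.Tk()
root.title(f"Accent color: {hex_color}")
root.configure(bg=hex_color)  # paint the window with the OS accent color
tk.Label(root, text=str(rgb), bg=hex_color).pack(padx=40, pady=40)
root.mainloop()
```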
accentdatabase
Source Code:https://github.com/accentdesign/accentdatabase
accenter
UNKNOWN
accentnotifications
Source Code:https://github.com/accentdesign/accentnotificationsInstallationFor smtp:pipinstallaccentnotifications[smtp]For twilio sms:pipinstallaccentnotifications[twiliosms]
accept
A simple library for parsing and ordering an HTTP Accept header. Includes parameter extraction.

## Installation

```
pip install accept
```

Or if you *must* use easy_install:

```
alias easy_install="pip install $1"
easy_install accept
```

## Usage

```python
>>> import accept
>>> accept.parse("text/*, text/html, text/html;level=1, */*")
[<MediaType: text/html; q=1.0; level=1>, <MediaType: text/html; q=1.0>, <MediaType: text/*; q=1.0>, <MediaType: */*; q=1.0>]

>>> d = accept.parse("application/json; version=1; q=1.0; response=raw")[0]
>>> d.media_type
'application/json'
>>> d.quality
1.0
>>> d.q
1.0
>>> d.params
{'version': '1', 'response': 'raw'}
>>> d['version']
'1'
>>> d['potato']
None
```

## Contribute

* Check for open issues or open a fresh issue to start a discussion around a feature idea or a bug. There is a Contributor Friendly tag for issues that should be ideal for people who are not very familiar with the codebase yet.
* Fork the repository on Github to start making your changes to the master branch (or branch off of it).
* Write a test which shows that the bug was fixed or that the feature works as expected.
* Send a pull request and bug the maintainer until it gets merged and published.

## History

0.1.0 (2015-01-05) — Initial Release!
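Because `parse` returns the client's media types already ordered by preference, a small content-negotiation helper only needs to walk that list. The sketch below uses only the documented `parse` function and `media_type` attribute; the wildcard handling is deliberately simplistic:

```python
import accept

SUPPORTED = ["application/json", "text/html"]

def negotiate(header, supported=SUPPORTED):
    """Return the first supported type acceptable to the client, or None."""
    for requested in accept.parse(header):  # already sorted by preference
        req_type, _, req_subtype = requested.media_type.partition("/")
        for offered in supported:
            offered_type, _, offered_subtype = offered.partition("/")
            if req_type in ("*", offered_type) and req_subtype in ("*", offered_subtype):
                return offered
    return None

print(negotiate("text/*, application/json;q=0.5"))  # -> text/html
```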
acceptable
Acceptable is a python tool to annotate and capture the metadata around your python web API. This metadata can be used for validation, documentation, testing and linting of your API code.

It works standalone, or can be hooked into Flask (beta support for Django) web apps for richer integration.

Design Goals:

* Tightly couple code, metadata, and documentation to reduce drift and increase DRY.
* Validation of JSON input and output
* Provide tools for developers to make safe changes to APIs
* Make it easy to generate API documentation.
* Tools for generating testing doubles from the API metadata.

## Usage

An example, for Flask:

```python
from acceptable import AcceptableService

service = AcceptableService('example')

foo_api = service.api('foo', '/foo', introduced_at=1, methods=['POST'])
foo_api.request_schema = <JSON Schema...>
foo_api.response_schema = <JSON Schema...>
foo_api.changelog(3, 'Changed other thing')
foo_api.changelog(2, 'Changed something')

@foo_api
def view():
    ...
```

You can use this metadata to bind the URL to a flask app:

```python
from acceptable import get_metadata

app = Flask(__name__)
get_metadata().bind_all(app)
```

You can now generate API metadata like so:

```
acceptable metadata your.import.path > api.json
```

This metadata can now be used to generate documentation, and provide API linting.

## Django

Note: Django support is very limited at the minute, and is mainly for documentation.

Marking up the APIs themselves is a little different:

```python
from acceptable import AcceptableService

service = AcceptableService('example')

# url is looked up from name, like reverse()
foo_api = service.django_api('app:foo', introduced_at=1)
foo_api.django_form = SomeForm
foo_api.changelog(3, 'Changed other thing')
foo_api.changelog(2, 'Changed something')

@foo_api.handler
class MyHandler(BaseHandler):
    allowed_methods = ['POST']
    ...
```

Acceptable will generate a JSON schema representation of the form for documentation.

To generate API metadata, you should add 'acceptable' to INSTALLED_APPS.
This will provide an โ€˜acceptableโ€™ management command:./manage.py acceptable metadata > api.json # generate metadataAnd also:./manage.py acceptable api-version api.json # inspect the current versionDocumentation (beta)One of the goals of acceptable is to use the metadata about your API to build documentation.Once you have your metadata in JSON format, as above, you can transform that into markdown documentation:acceptable render api.json --name 'My Service'You can do this in a single step:acceptable metadata path/to/files*.py | acceptable render --name 'My Service'This markdown is designed to rendered to html bydocumentation-builder <https://docs.ubuntu.com/documentation-builder/en/>:documentation-builder --base-directory docsIncludable MakefileIf you are using make files to automate your build you might find this useful.The acceptable package contains a make file fragment that can be included to give you the following targets:api-lint- Checks backward compatibility and version numbers;api-update-metadata- Check likeapi-lintthen update the saved metadata;api-version- Print the saved metadata and current API version;api-docs-markdown- Generates markdown documentation.The make file has variables for the following which you can override if needed:ACCEPTABLE_ENV- The virtual environment with acceptable installed, it defaults to$(ENV).ACCEPTABLE_METADATA- The saved metadata filename, it defaults toapi.json;ACCEPTABLE_DOCS- The directoryapi-docs-markdownwill generate documentation under, it defaults todocs.You will need to create a saved metadata manually the first time usingacceptable metadatacommand and saving it to the value ofACCEPTABLE_METADATA.The make file assumes the following variables:ACCEPTABLE_MODULESis a space separated list of modules containing acceptable annotated services;ACCEPTABLE_SERVICE_TITLEis the title of the service used byapi-docs-markdown.ACCEPTABLE_SERVICE_TITLEshould not be quoted e.g.:ACCEPTABLE_SERVICE_TITLE := Title of the ServiceTo include the file youโ€™ll need to get its path, if the above variables and conditions exist you can put this in your make file:include $(shell $(ENV)/bin/python -c 'import pkg_resources; print(pkg_resources.resource_filename("acceptable", "make/Makefile.acceptable"))' 2> /dev/null)Developmentmake testandmake toxshould run without errors.To run a single test module invoke:python setup.py test --test-suite acceptable.tests.test_moduleor:tox -epy38 -- --test-suite acceptable.tests.test_moduleโ€ฆthe latter runs โ€œtest_moduleโ€ against Python 3.8 only.
acceptance
UNKNOWN
acceptanceutils
[![Build Status](https://travis-ci.org/Brian-Williams/acceptanceutils.svg?branch=master)](https://travis-ci.org/Brian-Williams/acceptanceutils) [![codecov](https://codecov.io/gh/Brian-Williams/acceptanceutils/branch/master/graph/badge.svg)](https://codecov.io/gh/Brian-Williams/acceptanceutils)# Acceptance testing tools for use in unit/acceptance testingThis is a set of tools that may be useful for testing.### SurjectionSurjection is a useful utility when you have an arbitrarily large number of options to test and want to make sure you touch each option once without taking the time to test all combinations.Surjection in this always means minimally surjective.<!โ€” The first โ€˜imagesโ€™ is the branch name, the second is the folder in that branch โ€”> ![Set theory is fun](/../images/images/Surjection.svg.png?raw=true โ€œSurjectionโ€)### WatcherSubClassWatcher is useful for confirming hierarchies when specs require it.
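The README does not show the Surjection API itself, so the following is only a generic, library-independent illustration of the idea of a minimally surjective test plan — every value of every option is exercised at least once without enumerating the full cross product:

```python
from itertools import cycle, islice

def minimally_surjective(options):
    """Build test cases so that every value of every option appears at least once."""
    longest = max(len(values) for values in options.values())
    # cycle the shorter option lists so each row is still a complete combination
    columns = {name: list(islice(cycle(values), longest)) for name, values in options.items()}
    return [{name: columns[name][i] for name in options} for i in range(longest)]

cases = minimally_surjective({
    "protocol": ["http", "https"],
    "region": ["us", "eu", "ap"],
    "tls": [True, False],
})
for case in cases:
    print(case)  # 3 cases instead of the 2 * 3 * 2 = 12 full combinations
```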
accept-header-match
No description available on PyPI.
accepton
DocumentationPlease see thePython developer documentationfor more information.InstallationInstall from PyPI usingpip, a package manager for Python.$pipinstallacceptonDonโ€™t have pip installed? Try installing it by running this from the command line:$curlhttps://raw.github.com/pypa/pip/master/contrib/get-pip.py|pythonYou may need to run the above commands withsudo.ContributingFork itCreate your feature branch (git checkout-bmy-new-feature)Run the test suite on all supported Pythons (tox)Run the code linter to find style violations (tox-epep8)Commit your changes (git commit-am'Add some feature')Push the branch (git push originmy-new-feature)Create a new Pull Request
accept-paymob
A Python library for Paymob Acceptโ€™s API.SetupYou can install this package by using the pip tool and installing:$ pip install accept-paymobSetting up a Accept AccountSign up for Accept athttps://accept.paymob.com/.Using the Accept APIfromaccept.paymentimport*API_KEY="<API-KEY>"accept=AcceptAPI(API_KEY)# Authentication Requestauth_token=accept.retrieve_auth_token()print(auth_token)# Order RegistrationOrderData={"auth_token":auth_token,"delivery_needed":"false","amount_cents":"1100","currency":"EGP","merchant_order_id":125,# UNIQUE"items":[{"name":"ASC1515","amount_cents":"500000","description":"Smart Watch","quantity":"1"},{"name":"ERT6565","amount_cents":"200000","description":"Power Bank","quantity":"1"}],"shipping_data":{"apartment":"803","email":"[email protected]","floor":"42","first_name":"Clifford","street":"Ethan Land","building":"8028","phone_number":"+86(8)9135210487","postal_code":"01898","extra_description":"8 Ram , 128 Giga","city":"Jaskolskiburgh","country":"CR","last_name":"Nicolas","state":"Utah"},"shipping_details":{"notes":" test","number_of_packages":1,"weight":10,"weight_unit":"Kilogram","length":100,"width":100,"height":100,"contents":"product of some sorts"}}order=accept.order_registration(OrderData)print(order)# Payment Key RequestRequest={"auth_token":auth_token,"amount_cents":"1500","expiration":3600,"order_id":order.get("id"),"billing_data":{"apartment":"803","email":"[email protected]","floor":"42","first_name":"Clifford","street":"Ethan Land","building":"8028","phone_number":"+86(8)9135210487","shipping_method":"PKG","postal_code":"01898","city":"Jaskolskiburgh","country":"CR","last_name":"Nicolas","state":"Utah"},"currency":"EGP","integration_id":246701,# https://accept.paymob.com/portal2/en/PaymentIntegrations"lock_order_when_paid":"false"}payment_token=accept.payment_key_request(Request)print(payment_token)# Payments API [Kiosk, Mobile Wallets , Cash, Pay With Saved Token]identifier="cash"payment_method="CASH"transaction=accept.pay(identifier,payment_method,payment_token)print(transaction)# Auth-Capture Paymentstransaction00=accept.capture_transaction(transaction_id="7608793",amount_cents=1000)print(transaction00)# Refund Transactiontransaction01=accept.refund_transaction(transaction_id="7608793",amount_cents=10)print(transaction01)# Void Transactiontransaction02=accept.void_transaction(transaction_id="7608793")print(transaction02)# Retrieve Transactiontransaction03=accept.retrieve_transaction(transaction_id="7608793")print(transaction03)# Inquire Transactiontransaction_inquire=accept.inquire_transaction(merchant_order_id="123",order_id="10883471")print(transaction_inquire)# Trackingorder_10883471_track=accept.tracking(order_id="10883471")print(order_10883471_track)# Preparing Package# This will return a pdf file url to be printed.package=accept.preparing_package(order_id="10883471")print(package)# IFrame URLiframeURL=accept.retrieve_iframe(iframe_id="230796",payment_token=payment_token)print(iframeURL)# Loyalty Checkoutresponse=accept.loyalty_checkout(transaction_reference='',otp='123',payment_token=payment_token)print(response)
accepts
## Installation

```
$ [sudo] pip install accepts
```

## Features

* support multiple types argument
* support None argument
* human readable detailed exception message

## Examples

```python
>>> from accepts import accepts

>>> @accepts(int)
... def inc(value):
...     return value + 1

>>> inc(1)  # ok

# multiple types
>>> @accepts((int, float))
... def inc(value):
...     return value + 1

>>> inc(1.5)  # ok
>>> inc("string")
TypeError: inc() argument #0 is not instance of (<class 'int'>, <class 'float'>)

# None
>>> @accepts((int, float, type(None)))
```

readme42.com
accept-types
accept-typeshelps your application respond to a HTTP request in a way that a client prefers. TheAcceptheader of an HTTP request informs the server which MIME types the client is expecting back from this request, with weighting to indicate the most prefered. If your server can respond in multiple formats (e.g.: JSON, XML, HTML), the client can easily tell your server which is the prefered format without resorting to hacks like โ€˜&amp;format=jsonโ€™ on the end of query strings.Usageget_best_matchWhen provided with anAcceptheader and a list of types your server can respond with, this function returns the clients most prefered type. This function will only return one of the acceptable types you passed in, orNoneif no suitable type was found:fromaccept_typeimportget_best_matchdefget_the_info(request):info=gather_info()return_type=get_best_match(request.META.get('HTTP_ACCEPT'),['text/html','application/xml','text/json'])ifreturn_type=='application/xml':returnrender_xml(info)elifreturn_type=='text/json':returnrender_json(info)elifreturn_type=='text/html':returnrender_html(info)elifreturn_type==None:returnHttpResponse406()parse_headerWhen provided with anAcceptheader, this will parse it and return a sorted list of the clients accepted mime types. These will be instances of theAcceptableTypeclass.>>>fromaccept_typeimportparse_header>>>parse_header('text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8')['text/html, weight 1','application/xhtml+xml, weight 1','application/xml, weight 0.9','*/*, weight 0.8']AcceptableTypeAcceptableTypeinstances represent one of the types that a client is willing to accept. This type could include wildcards, to match more than one MIME type.>>>fromaccept_typeimportAcceptableType>>>type=AcceptableType('image/*;q=0.9')AcceptableType>>>type.mime_type'image/*'>>>type.weight0.9>>>type.matches('image/png')True>>>type.matches('text/html')False
accera
Problem at HandWriting highly optimized compute-intensive code in a traditional programming language is strenuous and time-consuming. Not only does it require advanced engineering skills such as fluency in Assembly language, but a deep understanding of computer architecture is also indispensable. Manual optimization of even the simplest numerical algorithms demands a significant engineering effort. Needless to say, a highly optimized numerical code is often prone to bugs, lacks readability, and offers little to no usability. Code maintenance becomes a nightmare resulting in the reimplementation of the same logic every time an architecture level change is introduced.Accera: An Optimized SolutionAccera is a compiler that enables you to experiment with loop optimizations without hand-writing Assembly code. With Accera, these problems and impediments can be addressed in an optimized way. It is available as a Python library and supports cross-compiling to a wide range ofprocessor targets.Accera has THREE primary goals:Performance: To guarantee the fastest implementation for any compute-intensive algorithm.Readability: To ensure effective implementation of algorithms without sacrificing the readability of code.Writability: To provide a user-friendly programming model, designed for agility and maintainability.InstallTo install for Linux, macOS, or Windows (requires Python 3.7-3.10):pipinstallacceraSee theInstall Instructionsfor more details on installing pre-built Python 3 packages and how to build Accera from the source.QuickstartIn this example, we will:Implement matrix multiplication with a ReLU activation (matmul + ReLU), commonly used in machine learning algorithms.Generate two implementations: a naive algorithm and loop-based transformations.Compare the execution time of both implementations.Run in your browserNo installation is required. This will launch a Jupyter notebook with the quickstart example running in the cloud.Run on your machineCreate a Python 3 script calledquickstart.py:importacceraasacc# define placeholder inputs/outputA=acc.Array(role=acc.Role.INPUT,shape=(512,512))B=acc.Array(role=acc.Role.INPUT,shape=(512,512))C=acc.Array(role=acc.Role.INPUT_OUTPUT,shape=(512,512))# implement the logic for matmul and relumatmul=acc.Nest(shape=(512,512,512))i1,j1,k1=matmul.get_indices()@matmul.iteration_logicdef_():C[i1,j1]+=A[i1,k1]*B[k1,j1]relu=acc.Nest(shape=(512,512))i2,j2=relu.get_indices()@relu.iteration_logicdef_():C[i2,j2]=acc.max(C[i2,j2],0.0)package=acc.Package()# fuse the i and j indices of matmul and relu, add to the packageschedule=acc.fuse(matmul.create_schedule(),relu.create_schedule(),partial=2)package.add(schedule,args=(A,B,C),base_name="matmul_relu_fusion_naive")# transform the schedule, add to the packagei,j,f,k=schedule.get_indices()ii,jj=schedule.tile({i:16,j:16})# loop tilingschedule.reorder(j,i,f,k,jj,ii)# loop reorderingplan=schedule.create_plan()plan.unroll(ii)# loop unrollingpackage.add(plan,args=(A,B,C),base_name="matmul_relu_fusion_transformed")# build a dynamically-linked package (a .dll or .so) that exports both functionsprint(package.build(name="hello_accera",format=acc.Package.Format.HAT_DYNAMIC))Ensure that you have a compiler in your PATH:Windows: Install Microsoft Visual Studio and runvcvars64.batto setup the command prompt.Linux/macOS: Install gccDon't have a compiler handy? 
We recommend trying Accera in your browser insteadInstall Accera:pipinstallacceraGenerate the library that implements two versions of matmul + ReLU:pythonquickstart.pyTo consume and compare the library functions, create a file calledbenchmark.pyin the same location:importhatlibashatimportnumpyasnp# load the package_,functions=hat.load("hello_accera.hat")# call one of the functions with test inputsA_test=np.random.rand(512,512).astype(np.float32)B_test=np.random.rand(512,512).astype(np.float32)C_test=np.zeros((512,512)).astype(np.float32)C_numpy=np.maximum(C_test+A_test@B_test,0.0)matmul_relu=functions["matmul_relu_fusion_transformed"]matmul_relu(A_test,B_test,C_test)# check correctnessnp.testing.assert_allclose(C_test,C_numpy,atol=1e-3)# benchmark all functionshat.run_benchmark("hello_accera.hat",batch_size=5,min_time_in_sec=5)Run the benchmark to get the execution time results:pythonbenchmark.pyNext StepsTheManualis the best introductory resource for the Accera Python programming model.In particular, theschedule transformationsdescribe how you can experiment with different loop transformations with just a few lines of Python code.Finally, the.hatformat is just a C header file containing the metadata. Learn more about theHAT formatandbenchmarking.How it worksIn a nutshell, Accera takes the Python code that defines the loop schedule and algorithm while converting it intoMLIRintermediate representation (IR). Accera's compiler then takes this IR through a series of MLIR pipelines to perform transformations. The result is a binary library with a C header file. The library implements the algorithms that are defined in Python and it is compatible with the target.To peek into the stages of IR transformation that Accera does, try replacingformat=acc.Package.Format.HAT_DYNAMICwithformat=acc.Package.Format.MLIR_DYNAMICinquickstart.py, re-run the script, and search the_tmpsubfolder for the intermediate*.mlirfiles. We plan to document these IR constructs in the future.DocumentationGet familiar with Accera's concepts and Python constructs in theDocumentationpage.TutorialsStep-by-step examples are available on theTutorialspage. We're working on adding more complementary examples and tutorials.ContributionsAccera is a research platform-in-progress that can certainly benefit from your contributions. We would love your feedback, recommendations, and feature requests. Not to mention that we are excited to answer your questions. Letโ€™s collaborate! Please file aGithub issueor send us a pull request. Please review theMicrosoft Code of Conductto learn more.CreditsAccera is built using several open source libraries, including:LLVM,pybind11,toml++,tomlkit,vcpkg,pyyaml, andHAT. For testing, we usednumpyandcatch2.LicenseThis project is released under theMIT License.
accera-compilers
Accera CompilersAcceraAccerais a programming model, a domain-specific programming language embedded in Python (eDSL), and an optimizing cross-compiler for compute-intensive code. Accera currently supports CPU and GPU targets and focuses on optimization of nested for-loops.Writing highly optimized compute-intensive code in a traditional programming language is a difficult and time-consuming process. It requires special engineering skills, such as fluency in Assembly language and a deep understanding of computer architecture. Manually optimizing the simplest numerical algorithms already requires a significant engineering effort. Moreover, highly optimized numerical code is prone to bugs, is often hard to read and maintain, and needs to be reimplemented every time a new target architecture is introduced. Accera aims to solve these problems.Accera has three goals:Performance: generate the fastest implementation of any compute-intensive algorithm.Readability: do so without sacrificing code readability and maintainability.Writability: a user-friendly programming model, designed for agility.accera-compilersTheaccera-compilerspackage contains pre-compiled compiler binaries used to produce optimized code using the Accera eDSL. It is not designed for standalone use, but is automatically installed when youpip install accera. You can find documentation and examples onGithub.
accera-gpu
Accera GPUAcceraAccerais a programming model, a domain-specific programming language embedded in Python (eDSL), and an optimizing cross-compiler for compute-intensive code. Accera currently supports CPU and GPU targets and focuses on optimization of nested for-loops.Writing highly optimized compute-intensive code in a traditional programming language is a difficult and time-consuming process. It requires special engineering skills, such as fluency in Assembly language and a deep understanding of computer architecture. Manually optimizing the simplest numerical algorithms already requires a significant engineering effort. Moreover, highly optimized numerical code is prone to bugs, is often hard to read and maintain, and needs to be reimplemented every time a new target architecture is introduced. Accera aims to solve these problems.Accera has three goals:Performance: generate the fastest implementation of any compute-intensive algorithm.Readability: do so without sacrificing code readability and maintainability.Writability: a user-friendly programming model, designed for agility.accera-gpuTheaccera-gpupackage contains add-ons for GPU support. You can find documentation and examples onGithub.
accera-llvm
Accera LLVMIntroductionAccerais a programming model, a domain-specific programming language embedded in Python (eDSL), and an optimizing cross-compiler for compute-intensive code. Accera currently supports CPU and GPU targets and focuses on optimization of nested for-loops.Writing highly optimized compute-intensive code in a traditional programming language is a difficult and time-consuming process. It requires special engineering skills, such as fluency in Assembly language and a deep understanding of computer architecture. Manually optimizing the simplest numerical algorithms already requires a significant engineering effort. Moreover, highly optimized numerical code is prone to bugs, is often hard to read and maintain, and needs to be reimplemented every time a new target architecture is introduced. Accera aims to solve these problems.Accera has three goals:Performance: generate the fastest implementation of any compute-intensive algorithm.Readability: do so without sacrificing code readability and maintainability.Writability: a user-friendly programming model, designed for agility.accera-llvmTheaccera-llvmpackage contains pre-compiled custom LLVM binaries used to produce optimized code using the Accera eDSL. It is not designed for standalone use, but is automatically installed when youpip install accera. You can find documentation and examples onGithub.Supported platforms:Linux (manylinux) x64macOS x64Windows x64
accern-data
Accern Data LibraryClient library for consuming Accern data feed API.PyPI page:Click hereInstallation:pip install accern-dataSample snippet:importaccern_data# Create a data client.client=accern_data.create_data_client("https://api.example.com/","SomeRandomToken")# Set a data format/mode in which the data has to be downloaded.# Split dates lets you divide files on the basis of dates.client.set_mode(mode="csv",split_dates=True)# Other modes: {"df", "json"}Set filters:client.set_filters({"provider_id":5,"entity_name":"Hurco Companies, Inc.","event":"Governance - Product Development, R&D and Innovation","entity_ticker":"HURC","entity_accern_id":"BBG000BLLFK1",})Set parameters to the download function:client.download_range(start_date="2022-01-03",output_path=".",output_pattern="data",end_date="2022-03-04")Note: To download single day's data, setend_date=Noneor can leave that unset:client.download_range(start_date="2022-01-03",output_path=".",output_pattern="data",end_date=None)ORclient.download_range(start_date="2022-01-03",output_path=".",output_pattern="data")One-liner download:accern_data.create_data_client("https://api.example.com/","SomeRandomToken").download_range(start_date="2022-01-03",output_path=".",output_pattern="data",end_date="2022-03-04",mode="csv",filters={"entity_ticker":"HURC"})Getting data using iterator:forresinclient.iterate_range(start_date="2022-01-03",end_date="2022-03-04"):do_something(res)Error logging:While downloading the data any critical error will get raised. Any non-critical errors, such as API timeouts, get silenced and API calls are repeated. To see a list of the lastnerrors use:client.get_last_silenced_errors()
accern-xyme
accern_xymeis a python/typescript library for accessing XYME functionality.Python UsageYou can installaccern_xymewith pip:pipinstall--useraccern-xymeImport it in python via:importaccern_xymexyme=accern_xyme.create_xyme_client("<URL>",token="<TOKEN>",namespace="default")print(xyme.get_dags())<URL>and<TOKEN>are the login credentials for XYME.You will need python3.6 or later.Typescript DevelopingYou can install dependency withyarn. Runyarn _postinstallto configure husky pre-commit hooks for your local environment.
accesomongo
No description available on PyPI.
access
Spatial AccessThis package provides classical and novel measures of spatial accessibility to services.For full documentation, seehere.
access2theMatrix
access2theMatrix is a Python library for accessing Scienta Omicron (NanoScience) (NanoTechnology) MATRIX Control System result files. Scanning Probe Microscopy (SPM) Image data, Single Point Spectroscopy (SPS) Curve data, Phase/Amplitude Curve data and volume Continuous Imaging Tunneling Spectroscopy (CITS) Curves data will be accessed by this library.The library access2theMatrix has the package access2thematrix which in turn contains the module access2thematrix. The class MtrxData in the access2thematrix module has the methods to open SPM Image, SPS Curve, Phase/Amplitude Curve and volume CITS Curves result files, to select one out of the four possible traces (forward/up, backward/up, forward/down and backward/down) for images and volume CITS curves, and to select one out of the two possible traces (trace, retrace) for spectroscopy curves. Includes method for experiment element parameters overview.Dependenciesaccess2theMatrix requires the NumPy (http://www.numpy.org) and the six (https://pypi.python.org/pypi/six/) library.InstallationUsing pip:> pip install access2theMatrixExample usageIn this example the MATRIX Control System has stored the acquired data in the folderc:\data. In addition to the result data files the folder must also contain the result file chain, see the MATRIX Application Manual for SPM. The image fileAu(111) bbikfe-20151110-112314--3_1.Z_mtrxwill be opened and theforward/uptrace will be selected.>>>importaccess2thematrix>>>mtrx_data=access2thematrix.MtrxData()>>>data_file=r'c:\data\Au(111) bbik fe-20151110-112314--3_1.Z_mtrx'>>>traces,message=mtrx_data.open(data_file)>>>traces{0: 'forward/up', 1: 'backward/up'}>>>im,message=mtrx_data.select_image(traces[0])>>>The variableimwill contain the data and the metadata of the selected image and theimobject has the initial attributesdata,width,height,y_offset,x_offset,angleandchannel_name_and_unit. We will continue the example by opening de curve fileAu(111) bbikfe-20151110-112314--2_1.Aux2(V)_mtrxand selecting theretracetrace.>>>data_file=r'c:\data\Au(111) bbik fe-20151110-112314--2_1.Aux2(V)_mtrx'>>>traces,message=mtrx_data.open(data_file)>>>traces{0: 'trace', 1: 'retrace'}>>>cu,message=mtrx_data.select_curve(traces[1])>>>The variablecuwill contain the data and the metadata of the selected curve and thecuobject has the initial attributesdata,referenced_by,x_data_name_and_unitandy_data_name_and_unit. The last example will open a volume CITS fileCO_Cu-111-spectra-20191028--10_1.Aux1(V)_mtrx.>>>importos>>>os.chdir('c:/data')>>>mtrx_data.open('CO_Cu-111-spectra-20191028--10_1.Aux1(V)_mtrx')({0: 'trace', 1: 'retrace', 2: 'forward/up'}, 'Successfully opened and processed data file CO_Cu-111-spectra-20191028--10_1.Aux1(V)_mtrx.')>>>mtrx_data.object_type'volume CITS'>>>im_3d=mtrx_data.volume_scan['forward/up']['trace']>>>im_3d.shape(3, 3, 251)>>>mtrx_data.channel_name_and_unit['Aux1(V)', 'V']>>>mtrx_data.z_data_name_and_unit['Spectroscopy Device 1', 'Volt']>>>The variableim_3dcontains the 3-dimensional combined scan data of the โ€˜forward/upโ€™ xy-scan with the โ€˜traceโ€™ z-scan of Spectroscopy Device 1 with channel output Aux1. The order of the dimensions is y, x, z. The z-data is inmtrx_data.scan[0].To get a list and a printable sorted text list of the experiment element parameters with their values and units, use the methodget_experiment_element_parameters.New in version 0.4.4The result file chain consist of several result file chain links. The file name is ending on_0001.mtrxfor the first part of the chain. 
This library can now use more than one chain link. A small bugfix for Python 2.7 users.New in version 0.4.3This is a bugfix release. The bug that originated in the previous version.New in version 0.4.2โ€˜Foreign Parametersโ€™ can be added to an experiment using the MATRIX Automated Task Environment (MATE). These โ€˜Foreign Parametersโ€™ can now be accessed and are treated as parameters of the pseudo-Experiment Element instance โ€˜-Foreignโ€™.Authors & affiliationsStephan J. M. Zevenhuizen[1][1]Condensed Matter and Interfaces, Debye Institute for Nanomaterials Science, Utrecht University, Utrecht, The Netherlands.
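Going back to the image example above: since `select_image` returns the scan data as an array together with its geometry, plotting a trace is straightforward. A sketch with matplotlib, using only the documented `data`, `width`, `height` and `channel_name_and_unit` attributes (the file name is the same sample file as in the example):

```python
import matplotlib.pyplot as plt
import access2thematrix

mtrx_data = access2thematrix.MtrxData()
traces, message = mtrx_data.open(r'c:\data\Au(111) bbik fe-20151110-112314--3_1.Z_mtrx')
im, message = mtrx_data.select_image(traces[0])

plt.imshow(im.data, extent=(0, im.width, 0, im.height), origin='lower', cmap='afmhot')
plt.colorbar(label=' / '.join(im.channel_name_and_unit))
plt.xlabel('x')
plt.ylabel('y')
plt.show()
```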
accessall
UNKNOWN
access-azure-keyvault
No description available on PyPI.
access-checker-sbhalodia
Access CheckerWhat Is This?This is a simple Python/Flask application intended to check if given access will be allowed or not via Cisco ACL(may contain hundreds of lines!!). This tool will take user inputs(Source IP, Source port, Protocol, Destination IP and Destination port) and will check against the user provided ACL and provide the result. This tool have both CLI and GUI options.How To Install ThisActivate your Python virtual environment by following below stepsRunpip3 install virtualenvCreate a project directory and navigate to itCreate virtual environment by runningvirtualenv -p python3 venvActivate virtual environment by runningsource venv/bin/activateMore infoHow To: Virtual environmentsInstall this package by runningpip3 install access-checker-sbhalodiaHow To Use ThisCLIRun the following command from cliExample:access-checker-cli -sip 10.1.1.10/24 -sport 22 -p tcp -dip 8.8.8.10/32 -dport 443 -f /Users/Mytestaccount/Desktop/myaclfile.aclGUIRunaccess-checker-guiNavigate tohttp://localhost:5000in your browserTestingBest effort testing has been done. No thorough testing is completed. Please conduct your own testing before using this.NotePlease follow the exact input format as suggested in GUI and CLI.
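Because the tool also installs a plain console command, it can be scripted, for example to check a whole batch of flows. The sketch below shells out to the documented `access-checker-cli` with the flags shown above; the ACL path is a placeholder to replace with your own file:

```python
import subprocess

flows = [
    ("10.1.1.10/24", "22", "tcp", "8.8.8.10/32", "443"),
    ("10.1.2.20/32", "1024", "udp", "10.9.9.9/32", "53"),
]

for sip, sport, proto, dip, dport in flows:
    result = subprocess.run(
        ["access-checker-cli", "-sip", sip, "-sport", sport, "-p", proto,
         "-dip", dip, "-dport", dport, "-f", "/path/to/myaclfile.acl"],  # placeholder ACL path
        capture_output=True, text=True,
    )
    print(f"{sip} -> {dip}:{dport}  {result.stdout.strip()}")
```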
accesschk2df
Automating the retrieval and analysis of access information using accesschk.exepip install accesschk2dfTested against Windows 10 / Python 3.10 / Anaconda 3Individuals or organizations working with access control and security configurations can benefit from using this Python module by automating the retrieval and analysis of access information using accesschk.exe and leveraging the flexibility and functionality of Python and pandas for further data processing and analysis.accesschk.exe is a command-line tool developed by Microsoft that is used to view and analyze the security settings and access permissions of various system resources, such as files, directories, registry keys, services, and more. It provides detailed information about access control lists (ACLs) and user privileges for specific resources.This module utilizes the accesschk.exe tool to retrieve access information and convert it into a pandas DataFrame. By using this module, individuals or organizations working with access control and security configurations can programmatically access and analyze access permissions in a more convenient and automated manner.Advantages of using this Python module include:Automation: The module allows for the automation of accesschk.exe functionality through Python code, enabling users to retrieve and process access information programmatically.Integration: The module integrates the functionality of accesschk.exe with pandas, a popular data manipulation library in Python. This enables users to easily perform further data analysis, transformations, and visualizations on the access information using pandas' extensive capabilities.Flexibility: Python provides a wide range of data analysis and processing libraries, making it easier to integrate the access information with other data sources and perform complex analyses or combine it with additional security-related tasks.Reproducibility: By using Python code, users can document and reproduce their access information retrieval and analysis workflows. This is especially useful for auditing, troubleshooting, or creating reports related to access permissions.df=get_accesschk_df()# print(df[:3].to_string())# aa_pid aa_exe aa_rights aa_path# 0 592 lsass.exe RW NT-AUTORITT\SYSTEM# 1 592 lsass.exe RW VORDEFINIERT\Administrators# 2 84 svchost.exe R VORDEFINIERT\Administrators
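Because the result is an ordinary pandas DataFrame, the usual filtering and grouping idioms apply. A sketch using only the columns shown above; note that the `from accesschk2df import get_accesschk_df` import path is assumed from the package name, and accesschk.exe must be available on the Windows machine:

```python
from accesschk2df import get_accesschk_df  # import path assumed from the package name

df = get_accesschk_df()

# Keep only entries that grant write access
writable = df[df["aa_rights"].str.contains("W", na=False)]

# Count writable entries per executable
print(writable.groupby("aa_exe")["aa_path"].count().sort_values(ascending=False).head(10))
```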
access-client
# Access Client

Python client to interact with accessai's various applications/services like access-face-vision.

## Installation

```
pip install access-client
```

## Usage

```python
from access_client.face_vision.client import AFV

# instantiate by giving the host and port of the access-face-vision server
afv = AFV("http://localhost:5001")
```

Now we need to create a face-group. This will be our face-index.

```python
afv.create_face_group(face_group_name="celebrities")

# Pass in the directory and face group name
afv.add_faces_to_face_group(dir="./samples/celebrities", face_group="celebrities")

# Directory structure
# **/Images/
#       A/
#           A_K_01.jpg
#           A_K_02.jpg
#       B/
#           B_S_01.jpg
#           B_S_02.jpg
```

Once faces are indexed we can run inference on it.

```python
afv.parse(img_path="./samples/celebrities/Bill Gates/Bill_Gates.jpg", face_group="celebrities")
```
access-cli-sealuzh
ACCESS Command line InterfaceA tool for verifying ACCESS course configurations locally.InstallationFirst, make sure you have docker installed and working correctly:docker run hello-worldThen, installaccess-cli:pip install access-cli-sealuzhQuick start:Run validation on the current (and all nested) folders:access-cli -AvIf you have problems relating to docker prermissions, you may need to specify an empty user or a specific user, e.g.:access-cli -Av -u= # or access-cli -Av -u=1001To check if the sample solutions work correctly by providing a command that replaces templates with solutions:access-cli -AvGs "cp -R solution/* task/"If you use theglobal filesfeature, the global files must be listed relative to the course root (unless running in the course root), for example:access-cli -AvGs "cp -R solution/* task/" -f universal/harness.pyUsageaccess-cliverifies courses, assignments and tasks for deployment on ACCESS. It checks whether any givenconfig.tomlconforms to the required schema and ensures that any referenced files, assignments or tasks actually exist, that start and end dates are sensible and that information is provided at least in English. For tasks, it also ensures that the file attributes are sensible with regard to each other (e.g., grading files should not be visible).Furthermore, it can also detect many kinds of bugs that could occur when designing tasks. In particular it can:Execute the run and test commands and ensure that the return code matches what is expectedExecute the grading command and ensure that zero points are awarded for an unmodified code templateSolve the task (typically by copying the solution over the template) and execute the grading command to ensure that the full points are awarded.All executions are done in docker containers.In its simplest form,access-cli -Awill, by default, validate configuration files and execute the run and test commands expecting a 0 return code. It will also execute the grading command on the template and expect zero points. It will attempt to read the parent course/assignment config to determine global files if possible. This will only work if you use a flat course/assignment/task directory structure.% access-cli -A > Validating task . โฐ Validation successful โฑ โœ“ ./config.tomlHowever, it cannot auto-detect what is necessary to solve a task to also check whether full points are awarded for a correction solution. In that case, you need to provide the solve-command:% access-cli -AGs "rm -R task; cp -R solution task" > Validating task . โฐ Validation successful โฑ โœ“ ./config.tomlAdd the-vflag for verbose output.Configuration file validationUnless using auto-detection,access-clionly verifies configuration files by default. 
Here's an example where access-cli is run in a course directory where the override dates are invalid:

% access-cli -l course -d ./
 > Validating course ./
⏰ Validation failed ⏱
 ✗ ./config.toml override_start is after override_end

Here's one for a task where file attributes are invalid:

% access-cli -l task -d ./
 > Validating task ./
⏰ Validation failed ⏱
 ✗ ./config.toml grading file grading.py marked as visible

Here, a course and all its assignments and tasks are verified recursively, with validation succeeding:

% access-cli -l course -d ./ -R
 > Validating course ./
 > Validating assignment ./assignment_1
 > Validating task ./assignment_1/task_1
⏰ Validation successful ⏱
 ✓ ./config.toml
 ✓ ./assignment_1/config.toml
 ✓ ./assignment_1/task_1/config.toml

Task execution validation

To validate task execution and grading, docker needs to be available to the current user. To check if this is the case, run docker run hello-world.

Here is an example where the run command does not exit with the expected return code (0), because of a typo in the run_command:

% access-cli -l task -d ./ -r0
 > Validating task ./
⏰ Validation failed ⏱
 ✗ ./ python script.p (run_command): Expected returncode 0 but got 2

Here is an example for a task where the grading command awards points for the unmodified code template (i.e., the student would get points for doing nothing):

% access-cli -l task -d ./ -g
 > Validating task ./
⏰ Validation failed ⏱
 ✗ ./ template: 1 points awarded instead of expected 0

If you also wish to check whether full points are awarded, you need to tell access-cli how to produce a valid solution by passing it a shell command. Typically, this just means copying the sample solution over the template:

% access-cli -l task -d ./ -Gs "cp solution.py script.py"
 > Validating task ./
⏰ Validation failed ⏱
 ✗ ./ solution: 1 points awarded instead of expected 2

Enabling verbose output will show the exact commands executed and the output streams produced within the docker container. Here's an example where we verify the run and test commands, as well as whether grading works correctly for both the template and the solution:

% access-cli -l task -d ./ -r0 -t0 -gGvs "cp solution.py script.py"
 > Validating task ./
╭──────────────────────────────────────────╮
│ Executing run_command in python:latest.  │
│ Expecting return code 0                  │
├──────────────────────────────────────────╯
│ python script.py
├─────╼ return code: 0
├─────╼ stdout:
├─────╼ stderr:
╰──────────────────────────────────────────
╭──────────────────────────────────────────╮
│ Executing test_command in python:latest. │
│ Expecting return code 0                  │
├──────────────────────────────────────────╯
│ python -m unittest tests.py -v
├─────╼ return code: 0
├─────╼ stdout:
├─────╼ stderr:
│ test_x_is_number (tests.PublicTestSuite.test_x_is_number) ... ok
│
│ ----------------------------------------------------------------------
│ Ran 1 test in 0.000s
│
│ OK
╰──────────────────────────────────────────
╭───────────────────────────────────────────╮
│ Executing grade_command in python:latest. │
├───────────────────────────────────────────╯
│ python -m unittest grading.py -v
├─────╼ return code: 1
├─────╼ stdout:
├─────╼ stderr:
│ test_x_is_42 (grading.PublicTestSuite.test_x_is_42) ... FAIL
│
│ ======================================================================
│ FAIL: test_x_is_42 (grading.PublicTestSuite.test_x_is_42)
│ ----------------------------------------------------------------------
│ Traceback (most recent call last):
│   File "/workspace/harness.py", line 68, in wrapper
│     return func(*args, **kwargs)
│            ^^^^^^^^^^^^^^^^^^^^^
│   File "/workspace/grading.py", line 18, in test_x_is_42
│     self.assertEqual(implementation.x, 42)
│ AssertionError: 0 != 42
│
│ ----------------------------------------------------------------------
│ Ran 1 test in 0.000s
│
│ FAILED (failures=1)
╰───────────────────────────────────────────
╭─────────────────────────────────────────────────────╮
│ Solving task by running cp solution.py script.py.   │
│ Executing grade_command in python:latest.           │
├─────────────────────────────────────────────────────╯
│ python -m unittest grading.py -v
├─────╼ return code: 0
├─────╼ stdout:
│ 42
├─────╼ stderr:
│ test_x_is_42 (grading.PublicTestSuite.test_x_is_42) ... ok
│
│ ----------------------------------------------------------------------
│ Ran 1 test in 0.000s
│
│ OK
╰─────────────────────────────────────────────────────
⏰ Validation failed ⏱
 ✗ ./ solution: 1 points awarded instead of expected 2

Note that if your task depends on global course files (as specified in the course's config.toml), and you're validating a task, then you need to tell access-cli about the global files yourself via the -f parameter and also specify the course root where the global files reside via -C, e.g.:

access-cli -l task -d ./ -r0 -t0 -gGvs "cp solution.py script.py" -f "universal/harness.py" -C "../.."

will copy ../../universal/harness.py into the docker container before grading.
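For reference, the verbose run above corresponds to a task whose files might look roughly like the following. This is only a sketch reconstructed from the output shown; the real task also uses the course's harness.py and a config.toml (whose schema is not reproduced here), so file contents are illustrative, not the actual course material.

# script.py - the student template (the wrong value ensures grading the template yields 0 points)
x = 0
print(x)

# solution.py - the sample solution copied over the template by the solve command
x = 42
print(x)

# grading.py - grading test; here the student code is simply imported as "implementation"
import unittest
import script as implementation

class PublicTestSuite(unittest.TestCase):
    def test_x_is_42(self):
        self.assertEqual(implementation.x, 42)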
access-control
Access Control

With access-control you can manage access control lists to check whether a principal has access to a context with a certain permission.

Concepts

ACL (Access Control List)

An ACL is an ordered list of ACEs (Access Control Entries). Every Context has an ACL.

ACE (Access Control Entry)

An ACE consists of:

- a Permit
- a Principal
- a Permission

Principal

A Principal represents an entity, typically a user or group. This means that a typical user can have multiple principals, like everyone, userid:1234 and group:admin.

Permit

The Permit is either ALLOW or DENY. This means that you can specify in the ACE that a Principal has to be either denied or allowed access to the Context.

Context

The Context is a resource, like a page on a website, including the context of that resource, like the folders in which the page is located. Every context has an ACL.

Permission

The Permission is an action like view, change name or create user on the Context.

Matching

To get the Permit for a combination of Context, Principal and Permission, the ACL of the context is looked up (in the specified order). When there is a match (based on Principal and Permission), the specified Permit (DENY or ALLOW) is returned. When there is no match, the first match in the ACL of a parent (like the folders) is returned. When there is still no match, DENY is returned.

Example

>>> import access_control as ac
>>> from typing import Optional

Create some principals, next to the predefined ac.principal.everyone and ac.principal.authenticated.

>>> user_1 = ac.principal.Principal('user:1')
>>> group_admin = ac.principal.Principal('group:admin')

Create some context. You can use the predefined ObjectContext, which can make a context from any object.

>>> class Page():
...     def __init__(self, name: str, parent: Optional["Page"]):
...         self.name = name
...         self.parent = parent
>>> root_page = Page('root', None)
>>> contact_page = Page('contact', root_page)
>>> context_contact_page = ac.context.ObjectContext(contact_page)
>>> context_root = ac.context.ObjectContext(root_page)

Create permissions. For the contact page you can define a view and an edit permission.

>>> view_permission = ac.permission.Permission('view')
>>> edit_permission = ac.permission.Permission('edit')

Next we need to glue them together in ACLs. The context has an `acl` attribute which holds the ACL of the context *and* of the parents of the context. A subscription_list of the `subscribe` package is used to get the ACL of a certain context. You can subscribe one or more functions to a subscription_list of the context. All ACLs are combined in the order of the subscription_list.

Only the admins can edit the page.

>>> @context_contact_page.acl_subscription_list.subscribe()
... def get_acl(context):
...     return [ac.acl.ACE(ac.permit.Permit.ALLOW, group_admin, edit_permission)]

And everyone can view everything.

>>> @context_root.acl_subscription_list.subscribe()
... def get_acl(context):
...     return [ac.acl.ACE(ac.permit.Permit.ALLOW, ac.principal.everyone, view_permission)]

When a user wants to access the page for editing, we can ask whether the user is allowed. Therefore we need to know the principals of that user.
>>> unauthenticated_user_principals = [ac.principal.everyone]
>>> admin_user_principals = {ac.principal.everyone, ac.principal.authenticated, user_1, group_admin}

Both users can access the root and contact page with the view permission.

>>> ac.context.get_permit(context_contact_page, admin_user_principals, view_permission) == ac.permit.Permit.ALLOW
True
>>> ac.context.get_permit(context_root, admin_user_principals, view_permission) == ac.permit.Permit.ALLOW
True
>>> ac.context.get_permit(context_contact_page, unauthenticated_user_principals, view_permission) == ac.permit.Permit.ALLOW
True
>>> ac.context.get_permit(context_root, unauthenticated_user_principals, view_permission) == ac.permit.Permit.ALLOW
True

The unauthenticated user has no edit permission on the contact page.

>>> ac.context.get_permit(context_contact_page, unauthenticated_user_principals, edit_permission) == ac.permit.Permit.DENY
True

The admin user does have access.

>>> ac.context.get_permit(context_contact_page, admin_user_principals, edit_permission) == ac.permit.Permit.ALLOW
True
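The matching rules described above (first matching ACE wins, parents are consulted when a context has no match, and the default is DENY) can be illustrated with a small standalone sketch. This does not use the access_control package's internal API; SimpleContext and get_permit_sketch are made-up names for this illustration only.

from enum import Enum

class Permit(Enum):
    ALLOW = "allow"
    DENY = "deny"

class SimpleContext:
    """A stand-in context: an ACL (list of (permit, principal, permission) tuples) plus a parent."""
    def __init__(self, acl, parent=None):
        self.acl = acl
        self.parent = parent

def get_permit_sketch(context, principals, permission):
    # Walk from the context up through its parents; the first matching ACE wins.
    while context is not None:
        for permit, principal, acl_permission in context.acl:
            if principal in principals and acl_permission == permission:
                return permit
        context = context.parent
    # No ACE matched anywhere in the chain: deny by default.
    return Permit.DENY

root = SimpleContext([(Permit.ALLOW, "everyone", "view")])
contact = SimpleContext([(Permit.ALLOW, "group:admin", "edit")], parent=root)

print(get_permit_sketch(contact, {"everyone"}, "view"))                  # ALLOW, inherited from root
print(get_permit_sketch(contact, {"everyone"}, "edit"))                  # DENY, no match anywhere
print(get_permit_sketch(contact, {"everyone", "group:admin"}, "edit"))   # ALLOW, matched on contact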
access-control-python3-v1
accessdb
Why I created this package?

Inserting rows into Access DB one by one takes too much time; for example, inserting 100k records this way is very slow:

cursor.execute("INSERT STATEMENT")
cursor.commit()

Bulk insertion into Access DB is faster than the above, but it is still very slow (see https://github.com/mkleehammer/pyodbc/issues/120):

cursor.executemany("Bulk INSERT STATEMENT")
cursor.commit()

Now imagine how long you would have to wait to insert 1000k records into Access DB this way.

What the package does

- Imports data from a text file into an Access database.
- Creates an Access database from a pandas dataframe very quickly.
- Primary key support.
- Can create many tables in an Access database.
- Data type support.

How to use

If you have a pandas dataframe, you can follow the example below:

import accessdb

# your dataframe
# df.to_accessdb(<DB_PATH>, <TABLE_NAME>)
df.to_accessdb(r'C:\Users\<user>\Desktop\test.accdb', 'SAMPLE')

If you have a text file, you can follow the example below:

from accessdb import create_accessdb

# create_accessdb(<DB_PATH>, <TEXT_FILE_PATH>, <TABLE_NAME>)
create_accessdb(r'C:\Users\<user>\Desktop\test.accdb', r'C:\Users\<user>\Documents\test.text', 'SAMPLE')

Installation:

pip install accessdb

Note:

- If you create the Access database from a pandas dataframe, a temporary text file is created, but it is deleted after the process completes.
- It only supports Windows.
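Once the database has been created, you can sanity-check it with pyodbc (the library referenced in the issue linked above). A minimal sketch, assuming the Microsoft Access ODBC driver is installed on Windows and reusing the path and table name from the example above:

import pyodbc

# Assumes the "Microsoft Access Driver (*.mdb, *.accdb)" ODBC driver is available.
conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\Users\<user>\Desktop\test.accdb;"
)
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM SAMPLE")  # SAMPLE is the table created above
print(cursor.fetchone()[0])                    # number of rows imported
conn.close()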
access-dict-by-dot
access_dict_by_dot

Using this package, we can access the items in a dictionary using the dot operator instead of writing dict_name["key"].

For example:

from access_dict_by_dot import AccessDictByDot

dictionary = {'key1': 'value1',
              'key2': 'value2',
              'key3': {'subkey1': 'subvalue1',
                       'subkey2': 'subvalue2'}}
d = AccessDictByDot.load(dictionary)
print(d.key1)
print(d.key3.subkey1)
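The idea behind such dot access can be illustrated in a few lines of plain Python. This is a hypothetical sketch of the concept, not the package's actual implementation (DotDict is a made-up name):

# Minimal illustration: wrap a dict so keys become attributes, recursing into nested dicts.
class DotDict:
    def __init__(self, data):
        for key, value in data.items():
            setattr(self, key, DotDict(value) if isinstance(value, dict) else value)

d = DotDict({'key1': 'value1', 'key3': {'subkey1': 'subvalue1'}})
print(d.key1)          # value1
print(d.key3.subkey1)  # subvalue1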