package (string, lengths 1-122)
package-description (string, lengths 0-1.3M)
acq4-autopatch
ACQ4-Autopatch: Automated cell patching extension for ACQ4.

Installation

Install the package into your environment with e.g. `pip install acq4_autopatch`, or:

```
git clone https://github.com/sensapex/acq4-autopatch.git
conda develop acq4-autopatch
```

Customize the following to suit your hardware and add it to the `modules:` section of your ACQ4 `default.cfg` file:

```
Autopatch:
    module: 'acq4_autopatch.module.AutopatchModule'
    config:
        imagingDevice: 'Camera'
        patchDevices:
            PatchPipette1: (0, 0)            # bottom-left quad
            PatchPipette2: (50*mm, 0)        # bottom-right quad
            PatchPipette3: (0, 50*mm)        # top-left quad
            PatchPipette4: (50*mm, 50*mm)    # top-right quad
        plateCenter: (0, 0, 0)
        wellPositions: [(0, 0), (50*mm, 0), (0, 50*mm), (50*mm, 50*mm)]
        patchStates:
            cell detect:
                maxAdvanceDistancePastTarget: 1*um
            seal:
                autoSealTimeout: 60
                pressureMode: 'auto'
            cell attached:
                autoBreakInDelay: 5.0
            clean:
                approachHeight: 3*mm
                cleanSequence: [(-35e3, 1.0), (65e3, 1.5)] * 5
                rinseSequence: [(-35e3, 3.0), (65e3, 15.0)]
        cameraChannels:
            # Use different (exposure, trigger) DAQ lines for each pipette device
            PatchPipette1: ('/Dev1/port0/line0', '/Dev1/port0/line1')
            PatchPipette2: ('/Dev2/port0/line0', '/Dev2/port0/line1')
            PatchPipette3: ('/Dev3/port0/line0', '/Dev3/port0/line1')
            PatchPipette4: ('/Dev4/port0/line0', '/Dev4/port0/line1')
```

Usage

Briefly:

1. Make sure you have an active Storage directory in the DataManager module.
2. Open the Camera module. The first time through, use this to move each pipette into its home, clean and rinse positions:
   - In the main ACQ4 Manager window, save the home position on each Manipulator.
   - In the main ACQ4 Manager window, save the clean and rinse positions on each PatchPipette.
   - Do any other calibration necessary.
3. For each pipette, open a separate TaskRunner module:
   - Enable the Clamp associated with this pipette.
   - Configure the tasks to be performed after a cell is patched.
4. Open the MultiPatch module. This is useful for monitoring.
5. Open the Autopatch module.
6. Press "Add Points" and add one in the Camera for every cell you'd like to patch. Repeat for each well.
7. Pick your acquisition protocol.
8. Press "Start".
9. Monitor status in the "Pipettes" pane or in the MultiPatch window.
10. Read through results or look at errors in the "Results" pane.

TODO: create and then link to a video explanation.

Licensing

All software copyright (c) 2019-2020 Sensapex. All rights reserved. It is offered under multiple licenses, depending on your needs. A commercial license is appropriate for development of proprietary/commercial software where you do not want to share any source code with third parties or otherwise cannot comply with the terms of the GNU GPL version 3; to purchase a commercial license, contact our sales team. The GNU General Public License (GPL) version 3 is appropriate for the development of open-source applications, provided you can comply with its terms and conditions. See GPL-3 for details.
acq-addition
No description available on PyPI.
acqdiv
ACQDIVThis repository contains the code and configuration files for transforming the child language acquisition corpora into the ACQDIV database.PublicationIf you use the database in your reasearch, please cite as follows:Jancso, Anna, Steven Moran, and Sabine Stoll. "The ACQDIV Corpus Database and Aggregation Pipeline." Proceedings of The 12th Language Resources and Evaluation Conference. 2020.Link to PaperResourcesDownload the ACQDIV database (only public corpora):To request access to the full database including the private corpora (for research purposes only!), please refer toSabine Stoll. In case of technical questions, please open an issue on this repository.CorporaOur full database consists of the following corpora:CorpusISOPublic# WordsChintang Language Corpusctnno987'673Cree Child Language Acquisition Study (CCLAS) Corpuscreyes44'751English Manchester Corpusengyes2'016'043MPI-EVA Jakarta Child Language Databaseindyes2'489'329Allen Inuktitut Child Language Corpusikeno71'191MiiPro Japanese Corpusjpnyes1'011'670Miyata Japanese Corpusjpnyes373'021Ku Waru Child Language Socialization Studymuxyes65'723Sarvasy Nungon Corpusyuwyes19'659Qaqet Child Language Documentationbyxno56'239Stoll Russian Corpusrusno2'029'704Demuth Sesotho Corpussotyes177'963Tuatschin Corpusrohno118'310Koç University Longitudinal Language Development Databaseturno1'120'077Pfeiler Yucatec Child Language Corpusyuano262'382Total10'843'735Running the pipelineFor Windows users, follow the installation/run instructions here:https://github.com/acqdiv/acqdiv/wiki/Installation-Run-instructions-for-WindowsFor Mac and Linux user, continue here to run the pipeline yourself:Install the packageCreate a virtual environment [optional]:python3-mvenvvenvsourcevenv/bin/activateYou can install the package from PyPI or directly from source:PyPIpip install acqdivFrom source# Clone [email protected]:acqdiv/acqdiv.gitcdacqdiv# Install package (for users!)pipinstall.# Developer mode (for developers!)pipinstall-rrequirements.txtGet the corporaRun the following script to download the public corpora:python util/download_public_corpora.pyThe corpora are in the foldercorpora.For the private corpora, either place the session files incorpora/<corpus_name>/{cha|toolbox}/and the metadata files (only Toolbox corpora) incorpora/<corpus_name>/imdi/or edit the paths to those files in theconfig.ini(also see below).Generate the databaseGet the configuration filesrc/acqdiv/config.iniand specify the absolute paths (without trailing slashes) for the corpora directory (corpora_dir) and the directory where the database should be written to (db_dir):[.global]# directory containing corporacorpora_dir=/absolute/path/to/corpora/dir# directory where the database is written todb_dir=/absolute/path/to/database/dir...Optionally adapt the paths for the individual corpora (sessionsandmetadata_dir).Run the pipeline specifying the absolute path to the configuration file:acqdiv load -c /absolute/path/to/config.iniGenerate the R objectInstall dependencies$ R > install.packages("RSQLite") > install.packages("rlang")Navigate tosrc/acqdiv/databaseand run:Rscript sqlite_to_r.R /absolute/path/to/sqlite-DBRun testsRun the unittests:pytest tests/unittestsRun the integrity tests on the database:pytest tests/systemtests
acqdp
Alibaba Cloud Quantum Development Platform (ACQDP)IntroductionACQDP is an open-source platform designed for quantum computing. ACQDP provides a set of tools for aiding the development of both quantum computing algorithms and quantum processors, and is powered by an efficient tensor-network-based large-scale classical simulator.Computing EnginePartially inspired by the recent quantum supremacy experiment, classical simulation of quantum circuits attracts quite a bit of attention and impressive progress has been made along this line of research to significantly improve the performance of classical simulation of quantum circuits. Key ingredients includeQuantum circuit simulation as tensor network contraction[1];Undirected graph model formalism[2];Dynamic slicing[3];Contraction tree[4];Contraction subtree reconfiguration[5].We are happy to be part of this effort.Use CasesEfficient exact contraction of intermediate-sized tensor networksDeployment on large-scale clusters for contracting complex tensor networksEfficient exact simulation of intermediate sized quantum circuitClassical simulation under different quantum noise modelsDocumentationSee full documentation here.InstallationInstallation from PyPIpipinstall-UacqdpInstallation from source codegitclonehttps://github.com/alibaba/acqdpcdadqdp pipinstall-e.ContributingIf you are interested in contributing to ACQDP feel free to contact me or create an issue on the issue tracking system.References[1]Markov, I. and Shi, Y.(2008) Simulating quantum computation by contracting tensor networks SIAM Journal on Computing, 38(3):963-981, 2008[2]Boixo, S., Isakov, S., Smelyanskiy, V. and Neven, H. (2017) Simulation of low-depth quantum circuits as complex undirected graphical models arXiv preprint arXiv:1712.05384[3]Chen, J., Zhang, F., Huang, C., Newman, M. and Shi, Y.(2018) Classical simulation of intermediate-size quantum circuits arXiv preprint arXiv:1805.01450[4]Zhang, F., Huang, C., Newman M., Cai, J., Yu, H., Tian, Z., Yuan, B., Xu, H.,Wu, J., Gao, X., Chen, J., Szegedy, M. and Shi, Y.(2019) Alibaba Cloud Quantum Development Platform: Large-Scale Classical Simulation of Quantum Circuits arXiv preprint arXiv:1907.11217[5]Gray, J. and Kourtis, S.(2020) Hyper-optimized tensor network contraction arXiv preprint arXiv:2002.01935[6]Huang, C., Zhang, F.,Newman M., Cai, J., Gao, X., Tian, Z., Wu, J., Xu, H., Yu, H., Yuan, B.,Szegedy, M., Shi, Y. and Chen, J. (2020) Classical Simulation of Quantum Supremacy Circuits arXiv preprint arXiv:2005.06787
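The first ingredient listed above, quantum circuit simulation as tensor network contraction, can be illustrated without ACQDP itself. Below is a generic NumPy sketch (not the ACQDP API): a two-qubit Bell-state circuit written as a small tensor network and contracted with einsum.

```python
# Generic illustration of "circuit simulation = tensor network contraction" (NumPy only, not ACQDP code).
import numpy as np

zero = np.array([1.0, 0.0])                              # |0> basis state
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float).reshape(2, 2, 2, 2)  # indices: out0, out1, in0, in1

# Contract the network: CNOT applied to (H|0>) tensor |0>
state = np.einsum('abij,i,j->ab', CNOT, H @ zero, zero).reshape(4)
print(state)  # [0.7071 0. 0. 0.7071] -> the Bell state (|00> + |11>)/sqrt(2)
```

Larger circuits differ mainly in that the contraction order matters enormously, which is exactly what the slicing and contraction-tree techniques cited above optimize.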
acqpack
Docs. Requirements: Linux, Windows. Code to perform acquisitions (experiment automation and hardware control). Should be combined with MMCorePy for microscope control. `config/` should be for computer-scope settings; `setup/` should be for experiment-scope settings. `from acqpack import ...` History: 0.1.0 (2017-07-03): First release on PyPI.
acquabr
Failed to fetch description. HTTP Status Code: 404
acquantum-connector
Examplefromacquantumconnector.connector.acquantumconnectorimportAcQuantumConnectorfromacquantumconnector.credentials.credentialsimportAcQuantumCredentialsfromacquantumconnector.model.backendtypeimportAcQuantumBackendTypefromacquantumconnector.model.gatesimportXGate,Measureapi=AcQuantumConnector()api.create_session(AcQuantumCredentials('username','password'))# Create Experimentexperiment_id=api.create_experiment(bit_width=4,experiment_type=AcQuantumBackendType.SIMULATE,experiment_name='Demo')print(experiment_id)# Update Experimentgates=[XGate(1,1),Measure(2,1)]api.update_experiment(experiment_id,gates)# Get Experimentexp_res=api.get_experiment(experiment_id)print(exp_res)# List Experimentsexp_list=api.get_experiments()print(exp_list)# Run Experimentapi.run_experiment(experiment_id,AcQuantumBackendType.SIMULATE,2,100)# Get Resultapi.get_result(experiment_id)# Download Resultapi.download_result(experiment_id)api.save_session()LicenseThe AcQuantumConnector isfreeandopen source, released under theApache License, Version 2.0.
acquantum-qiskit
A qiskit provider for Alibaba's quantum computer. Installation: This plugin requires Python version 3.5 and above, as well as qiskit. Installation can be done using pip. Install from PyPI: `$ pip install acquantum-qiskit`. Getting started. Contributing: We welcome contributions; simply fork the repository of this plugin, and then make a pull request containing your contribution. All contributors to this plugin will be listed as authors on the releases. We also encourage bug reports, suggestions for new features and enhancements, and even links to cool projects or applications built on this project. License: The AcQuantum Qiskit Provider is free and open source, released under the Apache License, Version 2.0.
acquifer
Acquifer-Python-API: Python package providing utility functionalities when working with an ACQUIFER Imaging Machine. Functionalities include: metadata parsing from filenames; control of the microscope (TCP/IP). Similar functions are available for Java programs (e.g. Fiji) via the acquifer-core package, distributed via the ACQUIFER update site (upon request). Installation: in a command prompt, use `pip install acquifer`. The package can be installed locally for test purposes by opening a command line in the repository directory and running `pip install -e .` (with `-e` for "editable"). This way any change to the code is directly reflected.
acquifer-napari
acquifer-napari: The acquifer-napari plugin allows loading an IM04 dataset directory as a multi-dimensional image in napari. Sliders for well, channel, time and Z are automatically rendered when there is more than one coordinate along a dimension. The plugin uses Dask-Image for efficient data loading "on request", similar to the VirtualStack in ImageJ.

Installation: via the napari plugin manager (acquifer-napari), or with pip: `pip install acquifer-napari`. Use `pip install -e .` to install in development mode, so any change in the source code is directly reflected. Use `npe2 list` to check that the plugin is correctly installed and visible by napari. For instance, here the package defines one command, which is a reader; one could have more commands, which would implement other contribution types. This should output something like the following:

```
┌─────────────────┬─────────┬──────┬───────────────────────────┐
│ Name            │ Version │ Npe2 │ Contributions             │
├─────────────────┼─────────┼──────┼───────────────────────────┤
│ acquifer-napari │ 0.0.1   │ ✅   │ commands (1), readers (1) │
└─────────────────┴─────────┴──────┴───────────────────────────┘
```

The plugin should be installed in an environment with napari installed. Napari can be started with the `napari` command in a command prompt with a system-wide Python installation. Once installed, napari can be opened in an IPython interactive session with:

```python
>> import napari
>> napari.Viewer()
```

Configurations: The file `napari.yaml` in `acquifer_napari_plugin` defines what functions of the Python package are visible to napari. The top-level `name` field must be the same as the Python package name defined in `setup.cfg`. It first defines a set of commands, which have a custom `id` and a `python_name`, the actual location of the function in the Python package (or module). Then `napari.yaml` has optional subsections `readers`, `writers`, `widgets`, which reference some of the commands previously defined, to notify napari that they implement those standard functions. For instance, one first defines a command `myReader` pointing to `myPackage.myReader`, and then references that command by its id in the `readers` section. See https://napari.org/stable/plugins/first_plugin.html#add-a-napari-yaml-manifest

Issues: If you encounter any problems, please file an issue along with a detailed description.
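Following the Configurations notes above, a minimal napari.yaml could look roughly like this; the id, python_name and title are illustrative placeholders, not necessarily the values acquifer-napari actually uses.

```yaml
# Hypothetical manifest sketch; field names follow the npe2 schema, values are placeholders.
name: acquifer-napari                      # must match the package name in setup.cfg
contributions:
  commands:
    - id: acquifer-napari.get_reader       # custom command id
      python_name: acquifer_napari.reader:napari_get_reader   # actual location of the function
      title: Read IM04 dataset
  readers:
    - command: acquifer-napari.get_reader  # reference the command defined above
      accepts_directories: true
      filename_patterns: ['*']
```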
acquire
Acquireacquireis a tool to quickly gather forensic artifacts from disk images or a live system into a lightweight container. This makesacquirean excellent tool to, among others, speedup the process of digital forensic triage. It usesdissectto gather that information from the raw disk, if possible.acquiregathers artifacts based on modules. These modules are paths or globs on a filesystem which acquire attempts to gather. Multiple modules can be executed at once, which have been collected together inside a profile. These profiles (used with--profile) arefull,default,minimalandnone. Depending on what operating system gets detected, different artifacts are collected.The most basic usage ofacquireis as follows:user@dissect~$sudoacquireThe tool requires administrative access to read raw disk data instead of using the operating system for file access. However, there are some options available to use the operating system as a fallback option. (e.g--fallbackor--force-fallback)For more information, please seethe documentation.RequirementsThis project is part of the Dissect framework and requires Python.Information on the supported Python versions can be found in the Getting Started section ofthe documentation.Installationacquireis available onPyPI.pipinstallacquireBuild and test instructionsThis project usestoxto build source and wheel distributions. Run the following command from the root folder to build these:tox-ebuildThe build artifacts can be found in thedist/directory.toxis also used to run linting and unit tests in a self-contained environment. To run both linting and unit tests using the default installed Python version, run:toxFor a more elaborate explanation on how to build and test the project, please seethe documentation.ContributingThe Dissect project encourages any contribution to the codebase. To make your contribution fit into the project, please refer tothe development guide.Copyright and licenseDissect is released as open source by Fox-IT (https://www.fox-it.com) part of NCC Group Plc (https://www.nccgroup.com).Developed by the Dissect Team ([email protected]) and made available athttps://github.com/fox-it/acquire.License terms: AGPL3 (https://www.gnu.org/licenses/agpl-3.0.html). For more information, see the LICENSE file.
acquire-imaging
Acquirepython-mpipinstallacquire-imagingAcquire (acquire-imagingon PyPI) provides high-speed, multi-camera, video streaming for up to2cameras and image acquisition with a programming interface for streaming video data directly to Python, cloud-friendly file formats, and visualization platforms, such asnapari.NoteThis is an early stage project. If you find it interesting, please reach out!Acquire supports the following cameras (currently only on Windows):Hamamatsu Orca Fusion BT (C15440-20UP)Vieworks VC-151MX-M6H00FLIR Blackfly USB3 (BFLY-U3-23S6M-C)FLIR Oryx 10GigE (ORX-10GS-51S5M-C)Acquire also supports the following output file formats:TiffOME-ZarrforZarr v2Zarr v3For testing and demonstration purposes, Acquire provides a few simulated video sources.UsageCheck out our documentationhere.The providednapariplugin (code here) is a good example of how to stream for visualization.DevelopmentWe welcome contributors. The following will help you get started building the code.EnvironmentRequiresCMake 3.23+ (download pageor viachocolatey)A C++20 compiler (Microsoft Visual Studio Communitydownload page, or clang)Rust (via rustup, seeinstall page)conda (optional; viaminiconda)libclang >= v5.0 (on windows viachocochoco install llvmor, on osx, viabrewbrew install llvm)It's strongly recommended you create a python environment for developmentcondacreate--nameacquirepython=3.11 condaactivateacquireBuildcondaactivateacquire gitsubmoduleupdate--init--recursive pipinstallmaturin maturinbuild-ipythonImportantWhen updating the 'acquire-video-runtime' (the c api), you need to manually trigger a rebuild by touchingwrapper.h.gitsubmoduleupdate# updates acquire-video-runtimetouchwrapper.h# will trigger a rebuildpython-mbuildThis package depends on a submodule (acquire-video-runtime) and binaries from the following Acquire drivers:acquire-driver-commonacquire-driver-hdcamacquire-driver-egrabberacquire-driver-zarracquire-driver-spinnakerThe build script will automatically try to fetch the binaries from GitHub releases. In order to configure which release of each driver to use, you can set the value indrivers.json:{"acquire-driver-common":"0.1.0","acquire-driver-hdcam":"0.1.0","acquire-driver-egrabber":"0.1.0","acquire-driver-zarr":"0.1.0","acquire-driver-spinnaker":"0.1.0"}These values can be set to a specific version, or tonightlyfor nightly builds.Developpipinstall-e".[testing]"pytest-s--tb=short--log-cli-level=0This project usespre-committo run required checks as git hooks.pipinstallpre-commit pre-commitinstallTroubleshootingMaturin can't find a python interpreterMaturinis a command line tool associated withpyo3. It helps automate the build and packaging process. It's invoked bysetuptoolsduring a build.Double-check you've activated the right conda environment.Trymaturin build -i pythonThis seems to happen on windows in anaconda environments when multiple python interpreters are available on the path.It seems to happen less frequently when invoked via pip -pip install -e .will end up invoking maturin.Working with an editable install, how do I update the build?It depends on what you changed:acquire-video-runtime(c/c++ code):touch wrapper.h; maturin developrust code:maturin developZarr V3 tests are failingYou should make sure that the following environment variables are set:ZARR_V3_EXPERIMENTAL_API: 1 ZARR_V3_SHARDING: 1
acquirest
AcquiRest: AcquiRest is a web service that turns a PostgreSQL database into a RESTful API. Free software: MIT license. Documentation: https://acquirest.readthedocs.io. Features: TODO. Credits: This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template. History: 0.1.0 (2022-02-22): First release on PyPI.
acquisition-case-transform
No description available on PyPI.
acquisition-decisions-legacy
No description available on PyPI.
acquisition-decisions-sc
No description available on PyPI.
acquisition-extractor
No description available on PyPI.
acquisition-ruling-phrase
No description available on PyPI.
acquisition-sanitizer
No description available on PyPI.
acquisition-statute-parser
No description available on PyPI.
acr
UNKNOWN
acr122u-websocket
ACR122U Websocket: This project enables you to connect a USB ACR122U NFC card scanner to a computer and access it using Socket.IO.

Features:
- Read UUIDs from NFC cards and send them over Socket.IO.
- Websocket messages to start and stop the polling for cards.
- Websocket messages to give a confirmation or error beep and light signal.
- Automatically reconnect to the reader when interrupted.

Installation: You can install this package from PyPI: `pip install acr122u-websocket`.

Usage: Connect the ACR122U reader to the computer, then run the app: `python -m acr122u_websocket.app`

API: You can connect to the webserver using Socket.IO. These are the available events:

polling: Start or stop the polling.
- Request: `start` to start the polling; `stop` to stop the polling.
- Reply: `no card reader connected` if no card reader is connected; `polling started` if the polling has (already) started; `polling stopped` if the polling has (already) stopped; `invalid message` if the message is neither `start` nor `stop`.

status indicator: Set the status indicator.
- Request: `confirm` to play the confirming status; `error` to play the error status.
- Reply: `no card reader connected` if no card reader is connected; `confirm status set` if the confirm beep and light have been shown; `error status set` if the error beep and light have been shown; `invalid message` if the message is neither `confirm` nor `error`.

card scanned: Broadcasts when a card has been scanned.
- Broadcast: `{"uuid": [..]}`: an object containing the UUID in the form of a list of integers.

Example: See an example webpage at test.html. This page is also served on http://localhost:8080/test. A minimal Python client is sketched below.
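A minimal client sketch, assuming the python-socketio package and the default http://localhost:8080 address; the event and message names are the ones documented above.

```python
# Minimal Socket.IO client sketch (the `python-socketio` package and the URL are assumptions).
import socketio

sio = socketio.Client()

@sio.on('card scanned')
def on_card_scanned(data):
    print('Card UUID bytes:', data['uuid'])    # broadcast payload: {"uuid": [..]}
    sio.emit('status indicator', 'confirm')    # ask the reader for a confirmation beep/light

sio.connect('http://localhost:8080')
sio.emit('polling', 'start')                   # start polling for cards
sio.wait()                                     # keep listening for broadcasts
```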
acra
Failed to fetch description. HTTP Status Code: 404
acr-aiosmtpd
This is a server for SMTP and related protocols, similar in utility to the standard library’s smtpd.py module, but rewritten to be based on asyncio for Python 3.
acrawler
🔍 A powerful web-crawling framework, based on aiohttp.FeatureWrite your crawler in one Python script with asyncioSchedule task with priority, fingerprint, exetime, recrawl…Middleware: add handlers before or after task’s executionSimple shortcuts to speed up scriptingParse html conveniently withParselParse with rules and chained processorsSupport JavaScript/browser-automation withpyppeteerStop and Resume: crawl periodically and persistentlyDistributed work support with RedisInstallationTo install, simply use pip:$pipinstallacrawler(Optional)$pipinstalluvloop#(only Linux/macOS, for faster asyncio event loop)$pipinstallaioredis#(if you need Redis support)$pipinstallmotor#(if you need MongoDB support)$pipinstallaiofiles#(if you need FileRequest)DocumentationDocumentation and tutorial are available online athttps://acrawler.readthedocs.io/and in thedocsdirectory.Sample CodeScrape imdb.comfromacrawlerimportCrawler,Request,ParselItem,Handler,register,get_loggerclassMovieItem(ParselItem):log=Truecss={# just some normal css rules# see Parsel for detailed information"date":".subtext a[href*=releaseinfo]::text","time":".subtext time::text","rating":"span[itemprop=ratingValue]::text","rating_count":"span[itemprop=ratingCount]::text","metascore":".metacriticScore span::text",# if you provide a list with additional functions,# they are considered as field processor function"title":["h1::text",str.strip],# the following four fules is for getting all matching values# the rule starts with [ and ends with ] comparing to normal rules"genres":"[.subtext a[href*=genres]::text]","director":"[h4:contains(Director) ~ a[href*=name]::text]","writers":"[h4:contains(Writer) ~ a[href*=name]::text]","stars":"[h4:contains(Star) ~ a[href*=name]::text]",}classIMDBCrawler(Crawler):config={"MAX_REQUESTS":4,"DOWNLOAD_DELAY":1}asyncdefstart_requests(self):yieldRequest("https://www.imdb.com/chart/moviemeter",callback=self.parse)defparse(self,response):yield fromresponse.follow(".lister-list tr .titleColumn a::attr(href)",callback=self.parse_movie)defparse_movie(self,response):url=response.url_stryieldMovieItem(response.sel,extra={"url":url.split("?")[0]})@register()classHorrorHandler(Handler):family="MovieItem"logger=get_logger("horrorlog")asyncdefhandle_after(self,item):ifitem["genres"]and"Horror"initem["genres"]:self.logger.warning(f"({item['title']}) is a horror movie!!!!")@MovieItem.bind()defprocess_time(value):# a self-defined field processing function# process time to minutes# '3h 1min' -> 181ifvalue:res=0segs=value.split(" ")forseginsegs:ifseg.endswith("min"):res+=int(seg.replace("min",""))elifseg.endswith("h"):res+=60*int(seg.replace("h",""))returnresreturnvalueif__name__=="__main__":IMDBCrawler().run()Scrape quotes.toscrape.com# Scrape quotes from 
http://quotes.toscrape.com/fromacrawlerimportParser,Crawler,ParselItem,Requestlogger=get_logger("quotes")classQuoteItem(ParselItem):log=Truedefault={"type":"quote"}css={"author":"small.author::text"}xpath={"text":['.//span[@class="text"]/text()',lambdas:s.strip("“")[:20]]}classAuthorItem(ParselItem):log=Truedefault={"type":"author"}css={"name":"h3.author-title::text","born":"span.author-born-date::text"}classQuoteCrawler(Crawler):main_page=r"quotes.toscrape.com/page/\d+"author_page=r"quotes.toscrape.com/author/.*"parsers=[Parser(in_pattern=main_page,follow_patterns=[main_page,author_page],item_type=QuoteItem,css_divider=".quote",),Parser(in_pattern=author_page,item_type=AuthorItem),]asyncdefstart_requests(self):yieldRequest(url="http://quotes.toscrape.com/page/1/")if__name__=="__main__":QuoteCrawler().run()Seeexamples.TodoReplace parsel with parselxclean redundant handlersCralwer’s name for distinguishingUse dynaconf as configuration managerAdd delta_key support for requestMonitor all crawlers in webWrite detailed DocumentationTesting
acrawler-cfscrape
acrawler-cfscrape: The handler works with aCrawler and cloudscraper.

Installation: `$ pip install acrawler_cfscrape`

Usage: Add the handler:

```python
class MyCrawler(Crawler):
    middleware_config = {
        "acrawler_cfscrape.CfscrapeHandler": True,
    }
    config = {
        "CFS_COOKIES_FILE": Path.home() / ".cfscookies",
        "CFS_URL": "http://www.example.com",
        "CFS_PROXIES": None,
    }
```
acrawler-prometheus
acrawler-prometheus: The handler works with aCrawler and Prometheus. It exports statistics for:
- concurrent requests
- task (Requests, Items) counts
- queue status

Installation: `$ pip install acrawler_prometheus`

Usage: Add the handler:

```python
class MyCrawler(Crawler):
    middleware_config = {
        "acrawler_prometheus.PromExporter": 100,
    }
    config = {"PROMETHEUS_INTERVAL": 5}
```

Available config:

```python
PROMETHEUS_ADDR = "localhost"
PROMETHEUS_PORT = 8000
PROMETHEUS_INTERVAL = 1  # exporting interval, in seconds
```
acrawriter
No description available on PyPI.
acrcasppi-ml
acrcasppi_ml predicts protein-protein interaction between an Anti-CRISPR protein (Acr) and a CRISPR-associated protein (Cas), based on a machine learning algorithm.

Input file: The user needs to provide pairs of Acr and Cas proteins as input. The first and second columns should contain the Acr and Cas protein, respectively. An example input file can be obtained from the following link: https://github.com/snehaiasri/acrcasppi/blob/main/example.csv.

Usage: After installation, perform the following steps to use the package (a consolidated sketch follows the list):
1. Import the package: `import acrcasppi_ml as acp`
2. Save the input file as a dataframe, providing the full path of the input file: `df = pd.read_csv("example.csv")`
3. Convert the dataframe into a numpy array: `var = df.to_numpy()`
4. Call the predict function: `x = acp.predict(var)`
5. To call the predict_proba function: `y = acp.predict_proba(var)`
6. Call the gen_file function to generate the output file (output.txt) in your current working directory: `acp.gen_file(var)`

Help: To see the documentation of each function, use the help command, for example `help(acr_cas_ppi)` or `help(predict)`.

Requirements: Python >3.9, numpy, pickle-mixin, sklearn.
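Put together, the steps above amount to a short script; the file name and the final print call are illustrative, everything else follows the documented functions.

```python
# Consolidated sketch of the usage steps listed above.
import pandas as pd
import acrcasppi_ml as acp

df = pd.read_csv("example.csv")    # column 1: Acr protein, column 2: Cas protein
var = df.to_numpy()

labels = acp.predict(var)          # predicted interaction class per Acr-Cas pair
probs = acp.predict_proba(var)     # prediction probabilities per pair
acp.gen_file(var)                  # writes output.txt to the current working directory

print(labels, probs)
```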
acrclient
Python ACR Client module: Contains a simple client for calling the v2 endpoints of the ACRCloud API.

Installation:

```
poetry add acrclient
# or on old setup style projects
pip -m install acrclient
```

Usage:

```python
>>> from acrclient import Client
>>> client = Client(bearer_token="bearer-token")
```

Development:

```
# setup a dev env
python -m venv env
. env/bin/activate
# install a modern poetry version
python -m pip install poetry>=1.2.0
# install deps and dev version
poetry install
# make changes, run tests
poetry run pytest
```

Release Management: The CI/CD setup uses semantic commit messages following the conventional commits standard. There is a GitHub Action in `.github/workflows/semantic-release.yaml` that uses go-semantic-commit to create new releases. The commit message should be structured as follows:

```
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```

The commit contains the following structural elements, to communicate intent to the consumers of your library:
- fix: a commit of the type `fix` gets released with a PATCH version bump
- feat: a commit of the type `feat` gets released as a MINOR version bump
- BREAKING CHANGE: a commit that has a footer `BREAKING CHANGE:` gets released as a MAJOR version bump
- types other than `fix:` and `feat:` are allowed and don't trigger a release

If a commit does not contain a conventional commit style message you can fix it during the squash and merge operation on the PR. Once a commit has landed on the main branch a release will be created and automatically published to PyPI using the GitHub Action in `.github/workflows/release.yaml`, which uses poetry to publish the package to PyPI.

License: This package is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, version 3 of the License.

Copyright: Copyright (c) 2023 Radio Bern RaBe.
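As a concrete instance of the commit format described in the Release Management notes above (the message content itself is invented for illustration):

```
feat(client): add configurable request timeout

BREAKING CHANGE: Client() now requires the base_url argument
```

Here the `feat` type alone would trigger a MINOR release, and the `BREAKING CHANGE:` footer escalates it to a MAJOR release.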
acrcloud
No description available on PyPI.
acr-cloud
EelEel is a little Python library for making simple Electron-like offline HTML/JS GUI apps, with full access to Python capabilities and libraries.Eel hosts a local webserver, then lets you annotate functions in Python so that they can be called from Javascript, and vice versa.Eel is designed to take the hassle out of writing short and simple GUI applications. If you are familiar with Python and web development, probably just jump tothis examplewhich picks random file names out of the given folder (something that is impossible from a browser).EelIntroInstallUsageDirectory StructureStarting the appApp optionsChrome/Chromium flagsExposing functionsEello, World!Return valuesCallbacksSynchronous returnsAsynchronous PythonBuilding distributable binary with PyInstallerMicrosoft EdgeIntroThere are several options for making GUI apps in Python, but if you want to use HTML/JS (in order to use jQueryUI or Bootstrap, for example) then you generally have to write a lot of boilerplate code to communicate from the Client (Javascript) side to the Server (Python) side.The closest Python equivalent to Electron (to my knowledge) iscefpython. It is a bit heavy weight for what I wanted.Eel is not as fully-fledged as Electron or cefpython - it is probably not suitable for making full blown applications like Atom - but it is very suitable for making the GUI equivalent of little utility scripts that you use internally in your team.For some reason many of the best-in-class number crunching and maths libraries are in Python (Tensorflow, Numpy, Scipy etc) but many of the best visualization libraries are in Javascript (D3, THREE.js etc). Hopefully Eel makes it easy to combine these into simple utility apps for assisting your development.InstallInstall from pypi withpip:pipinstalleelUsageDirectory StructureAn Eel application will be split into a frontend consisting of various web-technology files (.html, .js, .css) and a backend consisting of various Python scripts.All the frontend files should be put in a single directory (they can be further divided into folders inside this if necessary).my_python_script.py <-- Python scripts other_python_module.py static_web_folder/ <-- Web folder main_page.html css/ style.css img/ logo.pngStarting the appSuppose you put all the frontend files in a directory calledweb, including your start pagemain.html, then the app is started like this;importeeleel.init('web')eel.start('main.html')This will start a webserver on the default settings (http://localhost:8000) and open a browser tohttp://localhost:8000/main.html.If Chrome or Chromium is installed then by default it will open in that in App Mode (with the--appcmdline flag), regardless of what the OS's default browser is set to (it is possible to override this behaviour).App optionsAdditional options can be passed toeel.start()as keyword arguments.Some of the options include the mode the app is in (e.g. 'chrome'), the port the app runs on, the host name of the app, and adding additional command line flags.As of Eel v0.11.0, the following options are available tostart():mode, a string specifying what browser to use (e.g.'chrome','electron','edge','custom'). Can also beNoneorFalseto not open a window.Default:'chrome'host, a string specifying what hostname to use for the Bottle server.Default:'localhost')port, an int specifying what port to use for the Bottle server. 
Use0for port to be picked automatically.Default:8000.block, a bool saying whether or not the call tostart()should block the calling thread.Default:Truejinja_templates, a string specifying a folder to use for Jinja2 templates, e.g.my_templates.Default:Nonecmdline_args, a list of strings to pass to the command to start the browser. For example, we might add extra flags for Chrome;eel.start('main.html', mode='chrome-app', port=8080, cmdline_args=['--start-fullscreen', '--browser-startup-dialog']).Default:[]size, a tuple of ints specifying the (width, height) of the main window in pixelsDefault:Noneposition, a tuple of ints specifying the (left, top) of the main window in pixelsDefault:Nonegeometry, a dictionary specifying the size and position for all windows. The keys should be the relative path of the page, and the values should be a dictionary of the form{'size': (200, 100), 'position': (300, 50)}.Default: {}close_callback, a lambda or function that is called when a websocket to a window closes (i.e. when the user closes the window). It should take two arguments; a string which is the relative path of the page that just closed, and a list of other websockets that are still open.Default:Noneapp, an instance of Bottle which will be used rather than creating a fresh one. This can be used to install middleware on the instance before starting eel, e.g. for session management, authentication, etc.Exposing functionsIn addition to the files in the frontend folder, a Javascript library will be served at/eel.js. You should include this in any pages:<scripttype="text/javascript"src="/eel.js"></script>Including this library creates aneelobject which can be used to communicate with the Python side.Any functions in the Python code which are decorated [email protected] [email protected]_python_function(a,b):print(a,b,a+b)...will appear as methods on theeelobject on the Javascript side, like this...console.log("Calling Python...");eel.my_python_function(1,2);// This calls the Python function that was decoratedSimilarly, any Javascript functions which are exposed like this...eel.expose(my_javascript_function);functionmy_javascript_function(a,b,c,d){if(a<b){console.log(c*d);}}can be called from the Python side like this...print('Calling Javascript...')eel.my_javascript_function(1,2,3,4)# This calls the Javascript functionWhen passing complex objects as arguments, bear in mind that internally they are converted to JSON and sent down a websocket (a process that potentially loses information).Eello, World!See full example in:examples/01 - hello_worldPutting this together into aHello, World!example, we have a short HTML page,web/hello.html:<!DOCTYPE html><html><head><title>Hello, World!</title><!-- Include eel.js - note this file doesn't exist in the 'web' directory --><scripttype="text/javascript"src="/eel.js"></script><scripttype="text/javascript">eel.expose(say_hello_js);// Expose this function to Pythonfunctionsay_hello_js(x){console.log("Hello from "+x);}say_hello_js("Javascript World!");eel.say_hello_py("Javascript World!");// Call a Python function</script></head><body>Hello, World!</body></html>and a short Python scripthello.py:importeel# Set web files folder and optionally specify which file types to check for eel.expose()# *Default allowed_extensions are: ['.js', '.html', '.txt', '.htm', '.xhtml']eel.init('web',allowed_extensions=['.js','.html'])@eel.expose# Expose this function to Javascriptdefsay_hello_py(x):print('Hello from%s'%x)say_hello_py('Python World!')eel.say_hello_js('Python World!')# Call 
a Javascript functioneel.start('hello.html')# Start (this blocks and enters loop)If we run the Python script (python hello.py), then a browser window will open displayinghello.html, and we will see...Hello from Python World! Hello from Javascript World!...in the terminal, and...Hello from Javascript World! Hello from Python World!...in the browser console (press F12 to open).You will notice that in the Python code, the Javascript function is called before the browser window is even started - any early calls like this are queued up and then sent once the websocket has been established.Return valuesWhile we want to think of our code as comprising a single application, the Python interpreter and the browser window run in separate processes. This can make communicating back and forth between them a bit of a mess, especially if we always had to explicitlysendvalues from one side to the other.Eel supports two ways of retrievingreturn valuesfrom the other side of the app, which helps keep the code concise.CallbacksWhen you call an exposed function, you can immediately pass a callback function afterwards. This callback will automatically be called asynchrounously with the return value when the function has finished executing on the other side.For example, if we have the following function defined and exposed in Javascript:eel.expose(js_random);functionjs_random(){returnMath.random();}Then in Python we can retrieve random values from the Javascript side like so:defprint_num(n):print('Got this from Javascript:',n)# Call Javascript function, and pass explicit callback functioneel.js_random()(print_num)# Do the same with an inline lambda as callbackeel.js_random()(lambdan:print('Got this from Javascript:',n))(It works exactly the same the other way around).Synchronous returnsIn most situations, the calls to the other side are to quickly retrieve some piece of data, such as the state of a widget or contents of an input field. In these cases it is more convenient to just synchronously wait a few milliseconds then continue with your code, rather than breaking the whole thing up into callbacks.To synchronously retrieve the return value, simply pass nothing to the second set of brackets. So in Python we would write:n=eel.js_random()()# This immediately returns the valueprint('Got this from Javascript:',n)You can only perform synchronous returns after the browser window has started (after callingeel.start()), otherwise obviously the call with hang.In Javascript, the language doesn't allow us to block while we wait for a callback, except by usingawaitfrom inside anasyncfunction. So the equivalent code from the Javascript side would be:asyncfunctionrun(){// Inside a function marked 'async' we can use the 'await' keyword.letn=awaiteel.py_random()();// Must prefix call with 'await', otherwise it's the same syntaxconsole.log("Got this from Python: "+n);}run();Asynchronous PythonEel is built on Bottle and Gevent, which provide an asynchronous event loop similar to Javascript. A lot of Python's standard library implicitly assumes there is a single execution thread - to deal with this, Gevent can "monkey patch" many of the standard modules such astime.This monkey patching is done automatically when you callimport eel. If you need monkey patching you shouldimport gevent.monkeyand callgevent.monkey.patch_all()beforeyouimport eel. 
Monkey patching can interfere with things like debuggers so should be avoided unless necessary.For most cases you should be fine by avoiding usingtime.sleep()and instead using the versions provided bygevent. For convenience, the two most commonly needed gevent methods,sleep()andspawn()are provided directly from Eel (to save importingtimeand/orgeventas well).In this example...importeeleel.init('web')defmy_other_thread():whileTrue:print("I'm a thread")eel.sleep(1.0)# Use eel.sleep(), not time.sleep()eel.spawn(my_other_thread)eel.start('main.html',block=False)# Don't block on this callwhileTrue:print("I'm a main loop")eel.sleep(1.0)# Use eel.sleep(), not time.sleep()...we would then have three "threads" (greenlets) running;Eel's internal thread for serving the web folderThemy_other_threadmethod, repeatedly printing"I'm a thread"The main Python thread, which would be stuck in the finalwhileloop, repeatedly printing"I'm a main loop"Building distributable binary with PyInstallerIf you want to package your app into a program that can be run on a computer without a Python interpreter installed, you should usePyInstaller.Configure a virtualenv with desired Python version and minimum necessary Python packagesInstall PyInstallerpip install PyInstallerIn your app's folder, runpython -m eel [your_main_script] [your_web_folder](for example, you might runpython -m eel hello.py web)This will create a new folderdist/Valid PyInstaller flags can be passed through, such as excluding modules with the flag:--exclude module_name. For example, you might runpython -m eel file_access.py web --exclude win32com --exclude numpy --exclude cryptographyWhen happy that your app is working correctly, add--onefile --noconsoleflags to build a single executable fileConsult thedocumentation for PyInstallerfor more options.Microsoft EdgeFor Windows 10 users, Microsoft Edge (eel.start(.., mode='edge')) is installed by default and a useful fallback if a preferred browser is not installed. See the examples:A Hello World example using Microsoft Edge:examples/01 - hello_world-Edge/Example implementing browser-fallbacks:examples/07 - CreateReactApp/eel_CRA.py
acrcloudclient
acrcloud: ACR Cloud Python Client. API ACRCloud doc: https://docs.acrcloud.com/reference/console-api

ACRClient example usage: list and remove audios that were created in 2021:

```python
from acrcloud import ACRCloudClient
import os

ACR_TOKEN = os.getenv("ACR_TOKEN")

def log(bucket, msg):
    """Local log function"""
    print(f"B:{bucket.name} - {msg}")

acr = ACRCloudClient(ACR_TOKEN)
for bucket in acr.list_buckets(search="my-bucket-"):
    print("Processing bucket:", bucket.name)
    audios = bucket.list_audios(order="asc")
    for a in audios:
        if "2021" in a.created_at:
            a.delete()
            log(bucket, f"Delete audio: {a.id} - {a.created_at}")
        else:
            log(bucket, "Not found any audio from 2021")
            break
```

Environment variables: ACR_CLOUD_TOKEN. Roadmap plan: handle date fields with datetime objects; handle TOKEN through default environment variables; publish to pip.
acre
acre
acrilib
Table of ContentsOverviewProgramming IdomsthreadedSingleton and NamedSingletonSequenceTimedSizedRotatingHandlerDecoratorsData TypesMediatorsetup toolsChange HistoryOverviewacrilibis a python library providing useful programming patterns and tools.acrilibstarted as Acrisel’s internal idioms and utilities for programmers. The main key is that this library is completely independent. It does not use any external packages beside what provided by Python.It includes:programming idioms that are repeatedly used by programmers.helpers functions for logging and other utilities.We decided to contribute this library to Python community as a token of appreciation to what this community enables us.We hope that you will find this library useful and helpful as we find it.If you have comments or insights, please don’t hesitate to contact us [email protected] Idomsthreadeddecorator for methods that can be executed as a thread. RetriveAsycValue callable class used in the example below provide means to access results. One can provide their own callable to pass results.examplefromacrisimportthreaded,RetriveAsycValuefromtimeimportsleepclassThreadedExample(object):@threadeddefproc(self,id_,num,stall):s=numwhilenum>0:print("%s:%s"%(id_,s))num-=1s+=stallsleep(stall)print("%s:%s"%(id_,s))returnsexample outputprint("starting workers")te1=ThreadedExample().proc('TE1',3,1)te2=ThreadedExample().proc('TE2',3,1)print("collecting results")te1_callback=RetriveAsycValue('te1')te1.addCallback(te1_callback)te2_callback=RetriveAsycValue('te2')te2.addCallback(te2_callback)print('joining t1')te1.join()print('joined t1')print('%scallback result:%s'%(te1_callback.name,te1_callback.result))result=te1.syncResult()print('te1 syncResult :%s'%result)result=te2.syncResult()print('te2 syncResult :%s'%result)print('%scallback result:%s'%(te2_callback.name,te2_callback.result))will produce:startingworkersTE1:3TE2:3collectingresultsjoiningt1TE1:4TE2:4TE1:5TE2:5TE1:6TE2:6joinedt1te1callbackresult:6te1syncResult:6te2syncResult:6te2callbackresult:6Singleton and NamedSingletonmeta class that creates singleton footprint of classes inheriting from it.Singleton examplefromacrisimportSingletonclassSequence(Singleton):step_id=0def__call__(self):step_id=self.step_idself.step_id+=1returnstep_idexample outputA=Sequence()print('A',A())print('A',A())B=Sequence()print('B',B())will produce:A0A1B2NamedSingleton examplefromacrisimportSingletonclassSequence(NamedSingleton):step_id=0def__init__(self,name=''):self.name=namedef__call__(self,):step_id=self.step_idself.step_id+=1returnstep_idexample outputA=Sequence('A')print(A.name,A())print(A.name,A())B=Sequence('B')print(B.name,B())will produce:A0A1B0Sequencemeta class to produce sequences. 
Sequence allows creating different sequences using name tags.examplefromacrisimportSequenceA=Sequence('A')print('A',A())print('A',A())B=Sequence('B')print('B',B())A=Sequence('A')print('A',A())print('A',A())B=Sequence('B')print('B',B())example outputA0A1B0A2A3B1TimedSizedRotatingHandlerTBDDecoratorsUseful decorators for production and debug.traced_methodlogs entry and exit of function or method.fromacrisimporttraced_methodtraced=traced_method(print,print_args=True,print_result=True)classOper(object):def__init__(self,value):self.value=valuedef__repr__(self):returnstr(self.value)@traceddefmul(self,value):self.value*=valuereturnself@traceddefadd(self,value):self.value+=valuereturnselfo=Oper(3)print(o.add(2).mul(5).add(7).mul(8))would result with the following output:[add][entering][args:(2)][kwargs:{}][trace_methods.py.Oper(39)][add][exiting][timespan:0:00:00.000056][result:5][trace_methods.py.Oper(39)][mul][entering][args:(5)][kwargs:{}][trace_methods.py.Oper(34)][mul][exiting][timespan:0:00:00.000010][result:25][trace_methods.py.Oper(34)][add][entering][args:(7)][kwargs:{}][trace_methods.py.Oper(39)][add][exiting][timespan:0:00:00.000007][result:32][trace_methods.py.Oper(39)][mul][entering][args:(8)][kwargs:{}][trace_methods.py.Oper(34)][mul][exiting][timespan:0:00:00.000008][result:256][trace_methods.py.Oper(34)]256Data Typesvaries derivative of Python data typesMergeChainedDictSimilar to ChainedDict, but merged the keys and is actually derivative of dict.a={1:11,2:22}b={3:33,4:44}c={1:55,4:66}d=MergedChainedDict(c,b,a)print(d)Will output:{1:55,2:22,3:33,4:66}MediatorClass interface to generator allowing query of has_next()ExamplefromacrisimportMediatordefyrange(n):i=0whilei<n:yieldii+=1n=10m=Mediator(yrange(n))foriinrange(n):print(i,m.has_next(3),next(m))print(i,m.has_next(),next(m))Example Output0True01True12True23True34True45True56True67True78False89False9Traceback(mostrecentcalllast):File"/private/var/acrisel/sand/acris/acris/acris/example/mediator.py",line19,in<module>print(i,m.has_next(),next(m))File"/private/var/acrisel/sand/acris/acris/acris/acris/mediator.py",line38,in__next__value=next(self.generator)StopIterationsetup toolsMethods to use in standard python environmentChange HistoryVersion 1.0Initial publication to open source
acrilog
Table of ContentsOverviewTimedSizedRotatingHandlerexampleMpLogger and LevelBasedFormatterexampleExample outputClarification of parametersnameproecess_keyfile_prefix and file_suffixfile_modeconsolidatekwargsChange Historyv0.9v1.0v1.1v2.0Next StepsOverviewacrilogis a python library encapsulating multiprocessing logging into practical use.acrilogstarted as Acrisel’s internal utility for programmers.It included:Time and size rotating handler.Multiprocessing logging queue serverThe library makes it easier to add logging in a multiprocessing environment where processes are split among multiple Python source codes.We decided to contribute this library to Python community as a token of appreciation to what this community enables us.We hope that you will find this library useful and helpful as we find it.If you have comments or insights, please don’t hesitate to contact us [email protected] TimedSizedRotatingHandler is combining TimedRotatingFileHandler with RotatingFileHandler. Usage as handler with logging is as defined in Python’s logging how-toexampleimportlogging# create loggerlogger=logging.getLogger('simple_example')logger.setLevel(logging.DEBUG)# create console handler and set level to debugch=logging.TimedRotatingFileHandler()ch.setLevel(logging.DEBUG)# create formatterformatter=logging.Formatter('%(asctime)s-%(name)s-%(levelname)s-%(message)s')# add formatter to chch.setFormatter(formatter)# add ch to loggerlogger.addHandler(ch)# 'application' codelogger.debug('debug message')logger.info('info message')logger.warn('warn message')logger.error('error message')logger.critical('critical message')MpLogger and LevelBasedFormatterMultiprocessor logger using QueueListener and QueueHandler It uses TimedSizedRotatingHandler as its logging handlerIt also uses acris provided LevelBasedFormatter which facilitate message formats based on record level. LevelBasedFormatter inherent from logging.Formatter and can be used as such in customized logging handlers.exampleWithin main processimporttimeimportrandomimportloggingfromacrisimportMpLoggerimportosimportmultiprocessingasmpdefsubproc(limit=1,logger_info=None):logger=MpLogger.get_logger(logger_info,name="acrilog.subproc",)foriinrange(limit):sleep_time=3/random.randint(1,10)time.sleep(sleep_time)logger.info("proc [%s]:%s/%s- sleep%4.4ssec"%(os.getpid(),i,limit,sleep_time))level_formats={logging.DEBUG:"[%(asctime)s][%(levelname)s][%(message)s][%(module)s.%(funcName)s(%(lineno)d) ]",'default':"[%(asctime)s][%(levelname)s][%(message)s]",}mplogger=MpLogger(logging_level=logging.DEBUG,level_formats=level_formats,datefmt='%Y-%m-%d,%H:%M:%S.%f')mplogger.start(name='main_process')logger=MpLogger.get_logger(mplogger.logger_info())logger.debug("starting sub processes")procs=list()forlimitin[1,1]:proc=mp.Process(target=subproc,args=(limit,mplogger.logger_info(),))procs.append(proc)proc.start()forprocinprocs:ifproc:proc.join()logger.debug("sub processes completed")mplogger.stop()Example output[2016-12-19,11:39:44.953189][DEBUG][startingsubprocesses][mplogger.<module>(45)][2016-12-19,11:39:45.258794][INFO][proc[932]:0/1-sleep0.3sec][2016-12-19,11:39:45.707914][INFO][proc[931]:0/1-sleep0.75sec][2016-12-19,11:39:45.710487][DEBUG][subprocessescompleted][mplogger.<module>(56)]Clarification of parametersnamenameidentifies the base name for logger. Note the this parameter is available in both MpLogger init method and in its start method.MpLogger init’snameargument is used for consolidated logger whenconsolidateis set. 
It is also used for private logger of the main process, if one not provided when callingstart()method.proecess_keyprocess_keydefines one or more logger record field that would be part of the file name of the log. In case it is used, logger will have a file per records’ process key. This will be in addition for a consolidated log, ifconsolidateis set.By default, MpLogger usesnameas the process key. If something else is provided, e.g.,processName, it will be concatenated tonameas postfix.file_prefix and file_suffixAllows to distinguish among sets of logs of different runs by setting one (or both) offile_prefixandfile_suffix. Usually, the use of PID and granular datetime as prefix or suffix would create unique set of logs.file_modefile_modelet program define how logs will be opened. In default, logs are open in append mode. Hense, history is collected and file a rolled overnight and by size.consolidateconsolidate, when set, will create consolidated log from all processing logs. Ifconsolidatedis set andstart()is called withoutname, consolidation will be done into the main process.kwargskwargsare named arguments that will passed to FileHandler. This include:file_mode='a', for RotatingFileHandler maxBytes=0, for RotatingFileHandler backupCount=0, for RotatingFileHandler and TimedRotatingFileHandler encoding='ascii', for RotatingFileHandler and TimedRotatingFileHandler delay=False, for TimedRotatingFileHandler when='h', for TimedRotatingFileHandler interval=1, TimedRotatingFileHandler utc=False, TimedRotatingFileHandler atTime=None, for TimedRotatingFileHandlerChange Historyv0.9added ability to pass logger_info to subprocess,exposed encoding parameterv1.0replacedforce_globalwithconsolidateto genrerate consolidated logaddnameargument to MpLogger.start(). This will return logger with that name for the main process.MpLogger.__init__()nameargument will be used for consolidated log.v1.1addfile_prefixandfile_suffixas MpLogger parameters.fix bug when logdir is None.v2.0added NwLogger starting a server logger with NwLoggerClientHandler for remote processes.Next StepsCluster support using TCP/IPLogging monitor and alert
acris
Table of ContentsOverviewProgramming IdomsthreadedSingleton and NamedSingletonSequenceTimedSizedRotatingHandlerMpLogger and LevelBasedFormatterDecoratorsData TypesResourcePoolVirtual ResourcePoolMediatorUtilitiescommdir.pybee.pycsv2xlsx.pymail.pyprettyxml.pysshcmdtouchmrunMisccamel2snake and snake2camelxlsx2rstChange HistoryVersion 2.3Version 2.2Version 3.0Overviewacrisis a python library providing useful programming patterns and tools.acrisstarted as Acrisel’s internal idioms and utilities for programmers.It included:programming idioms that are repeatedly used by programmers.utilities that helps programmers and administrators manage their environmentsWe decided to contribute this library to Python community as a token of appreciation to what this community enables us.We hope that you will find this library useful and helpful as we find it.If you have comments or insights, please don’t hesitate to contact us [email protected] IdomsthreadedNote:inherent from acrilib decorator for methods that can be executed as a thread. RetriveAsycValue callable class used in the example below provide means to access results. One can provide their own callable to pass results.examplefromacrisimportthreaded,RetriveAsycValuefromtimeimportsleepclassThreadedExample(object):@threadeddefproc(self,id_,num,stall):s=numwhilenum>0:print("%s:%s"%(id_,s))num-=1s+=stallsleep(stall)print("%s:%s"%(id_,s))returnsexample outputprint("starting workers")te1=ThreadedExample().proc('TE1',3,1)te2=ThreadedExample().proc('TE2',3,1)print("collecting results")te1_callback=RetriveAsycValue('te1')te1.addCallback(te1_callback)te2_callback=RetriveAsycValue('te2')te2.addCallback(te2_callback)print('joining t1')te1.join()print('joined t1')print('%scallback result:%s'%(te1_callback.name,te1_callback.result))result=te1.syncResult()print('te1 syncResult :%s'%result)result=te2.syncResult()print('te2 syncResult :%s'%result)print('%scallback result:%s'%(te2_callback.name,te2_callback.result))will produce:startingworkersTE1:3TE2:3collectingresultsjoiningt1TE1:4TE2:4TE1:5TE2:5TE1:6TE2:6joinedt1te1callbackresult:6te1syncResult:6te2syncResult:6te2callbackresult:6Singleton and NamedSingletonNote:inherent from acrilib meta class that creates singleton footprint of classes inheriting from it.Singleton examplefromacrisimportSingletonclassSequence(Singleton):step_id=0def__call__(self):step_id=self.step_idself.step_id+=1returnstep_idexample outputA=Sequence()print('A',A())print('A',A())B=Sequence()print('B',B())will produce:A0A1B2NamedSingleton examplefromacrisimportSingletonclassSequence(NamedSingleton):step_id=0def__init__(self,name=''):self.name=namedef__call__(self,):step_id=self.step_idself.step_id+=1returnstep_idexample outputA=Sequence('A')print(A.name,A())print(A.name,A())B=Sequence('B')print(B.name,B())will produce:A0A1B0SequenceNote:inherent from acrilib meta class to produce sequences. 
Sequence allows creating different sequences using name tags.examplefromacrisimportSequenceA=Sequence('A')print('A',A())print('A',A())B=Sequence('B')print('B',B())A=Sequence('A')print('A',A())print('A',A())B=Sequence('B')print('B',B())example outputA0A1B0A2A3B1TimedSizedRotatingHandleruse acrilog instead.MpLogger and LevelBasedFormatteruse acrilog instead.DecoratorsNote:inherent from acrilib Useful decorators for production and debug.traced_methodlogs entry and exit of function or method.fromacrisimporttraced_methodtraced=traced_method(print,print_args=True,print_result=True)classOper(object):def__init__(self,value):self.value=valuedef__repr__(self):returnstr(self.value)@traceddefmul(self,value):self.value*=valuereturnself@traceddefadd(self,value):self.value+=valuereturnselfo=Oper(3)print(o.add(2).mul(5).add(7).mul(8))would result with the following output:[add][entering][args:(2)][kwargs:{}][trace_methods.py.Oper(39)][add][exiting][timespan:0:00:00.000056][result:5][trace_methods.py.Oper(39)][mul][entering][args:(5)][kwargs:{}][trace_methods.py.Oper(34)][mul][exiting][timespan:0:00:00.000010][result:25][trace_methods.py.Oper(34)][add][entering][args:(7)][kwargs:{}][trace_methods.py.Oper(39)][add][exiting][timespan:0:00:00.000007][result:32][trace_methods.py.Oper(39)][mul][entering][args:(8)][kwargs:{}][trace_methods.py.Oper(34)][mul][exiting][timespan:0:00:00.000008][result:256][trace_methods.py.Oper(34)]256Data TypesNote:inherent from acrilib varies derivative of Python data typesMergeChainedDictSimilar to ChainedDict, but merged the keys and is actually derivative of dict.a={1:11,2:22}b={3:33,4:44}c={1:55,4:66}d=MergedChainedDict(c,b,a)print(d)Will output:{1:55,2:22,3:33,4:66}ResourcePoolResource pool provides program with interface to manager resource pools. 
This is used as means to funnel processing.ResourcePoolRequestor object can be used to request resource set resides in multiple pools.ResourcePoolRequestors object manages multiple requests for multiple resources.Sync Exampleimporttimefromacrisimportresource_poolasrpfromacrisimportThreadedimportqueuefromdatetimeimportdatetimeclassMyResource1(rp.Resource):passclassMyResource2(rp.Resource):passrp1=rp.ResourcePool('RP1',resource_cls=MyResource1,policy={'resource_limit':2,}).load()rp2=rp.ResourcePool('RP2',resource_cls=MyResource2,policy={'resource_limit':1,}).load()@Threaded()defworker_awaiting(name,rp):print('[%s]%sgetting resource'%(str(datetime.now()),name))r=rp.get()print('[%s]%sdoing work (%s)'%(str(datetime.now()),name,repr(r)))time.sleep(4)print('[%s]%sreturning%s'%(str(datetime.now()),name,repr(r)))rp.put(*r)r1=worker_awaiting('>>> w11-direct',rp1)r2=worker_awaiting('>>> w21-direct',rp2)r3=worker_awaiting('>>> w22-direct',rp2)r4=worker_awaiting('>>> w12-direct',rp1)Sync Example Output[2016-12-1113:06:14.659569]>>>w11-directgettingresource[2016-12-1113:06:14.659640]>>>w11-directdoingwork([Resource(name:MyResource1)])[2016-12-1113:06:14.659801]>>>w21-directgettingresource[2016-12-1113:06:14.659834]>>>w21-directdoingwork([Resource(name:MyResource2)])[2016-12-1113:06:14.659973]>>>w22-directgettingresource[2016-12-1113:06:14.660190]>>>w12-directgettingresource[2016-12-1113:06:14.660260]>>>w12-directdoingwork([Resource(name:MyResource1)])[2016-12-1113:06:18.662362]>>>w11-directreturning[Resource(name:MyResource1)][2016-12-1113:06:18.662653]>>>w21-directreturning[Resource(name:MyResource2)][2016-12-1113:06:18.662826]>>>w12-directreturning[Resource(name:MyResource1)][2016-12-1113:06:18.662998]>>>w22-directdoingwork([Resource(name:MyResource2)])[2016-12-1113:06:22.667149]>>>w22-directreturning[Resource(name:MyResource2)]Async Exampleimporttimefromacrisimportresource_poolasrpfromacrisimportThreadedimportqueuefromdatetimeimportdatetimeclassMyResource1(rp.Resource):passclassMyResource2(rp.Resource):passrp1=rp.ResourcePool('RP1',resource_cls=MyResource1,policy={'resource_limit':2,}).load()rp2=rp.ResourcePool('RP2',resource_cls=MyResource2,policy={'resource_limit':1,}).load()classCallback(object):def__init__(self,notify_queue):self.q=notify_queuedef__call__(self,resources=None):self.q.put(resources)@Threaded()defworker_callback(name,rp):print('[%s]%sgetting resource'%(str(datetime.now()),name))notify_queue=queue.Queue()r=rp.get(callback=Callback(notify_queue))ifnotr:print('[%s]%sdoing work before resource available'%(str(datetime.now()),name,))print('[%s]%swaiting for resources'%(str(datetime.now()),name,))ticket=notify_queue.get()r=rp.get(ticket=ticket)print('[%s]%sdoing work (%s)'%(str(datetime.now()),name,repr(r)))time.sleep(2)print('[%s]%sreturning (%s)'%(str(datetime.now()),name,repr(r)))rp.put(*r)r1=worker_callback('>>> w11-callback',rp1)r2=worker_callback('>>> w21-callback',rp2)r3=worker_callback('>>> w22-callback',rp2)r4=worker_callback('>>> w12-callback',rp1)Async Example 
Output[2016-12-1113:08:24.410447]>>>w11-callbackgettingresource[2016-12-1113:08:24.410539]>>>w11-callbackdoingwork([Resource(name:MyResource1)])[2016-12-1113:08:24.410682]>>>w21-callbackgettingresource[2016-12-1113:08:24.410762]>>>w21-callbackdoingwork([Resource(name:MyResource2)])[2016-12-1113:08:24.410945]>>>w22-callbackgettingresource[2016-12-1113:08:24.411227]>>>w22-callbackdoingworkbeforeresourceavailable[2016-12-1113:08:24.411273]>>>w12-callbackgettingresource[2016-12-1113:08:24.411334]>>>w22-callbackwaitingforresources[2016-12-1113:08:24.411452]>>>w12-callbackdoingwork([Resource(name:MyResource1)])[2016-12-1113:08:26.411901]>>>w11-callbackreturning([Resource(name:MyResource1)])[2016-12-1113:08:26.412200]>>>w21-callbackreturning([Resource(name:MyResource2)])[2016-12-1113:08:26.412505]>>>w22-callbackdoingwork([Resource(name:MyResource2)])[2016-12-1113:08:26.416130]>>>w12-callbackreturning([Resource(name:MyResource1)])[2016-12-1113:08:28.416001]>>>w22-callbackreturning([Resource(name:MyResource2)])Requestor Exampleimporttimefromacrisimportresource_poolasrpfromacrisimportThreadedimportqueuefromdatetimeimportdatetimeclassMyResource1(rp.Resource):passclassMyResource2(rp.Resource):passrp1=rp.ResourcePool('RP1',resource_cls=MyResource1,policy={'resource_limit':2,}).load()rp2=rp.ResourcePool('RP2',resource_cls=MyResource2,policy={'resource_limit':2,}).load()classCallback(object):def__init__(self,notify_queue):self.q=notify_queuedef__call__(self,ready=False):self.q.put(ready)@Threaded()defworker_callback(name,rps):print('[%s]%sgetting resource'%(str(datetime.now()),name))notify_queue=queue.Queue()callback=Callback(notify_queue,name=name)request=rp.Requestor(request=rps,callback=callback)ifrequest.is_reserved():resources=request.get()else:print('[%s]%sdoing work before resource available'%(str(datetime.now()),name,))print('[%s]%swaiting for resources'%(str(datetime.now()),name,))notify_queue.get()resources=request.get()print('[%s]%sdoing work (%s)'%(str(datetime.now()),name,repr(resources)))time.sleep(2)print('[%s]%sreturning (%s)'%(str(datetime.now()),name,repr(resources)))request.put(*resources)r1=worker_callback('>>> w11-callback',[(rp1,1),])r2=worker_callback('>>> w21-callback',[(rp1,1),(rp2,1)])r3=worker_callback('>>> w22-callback',[(rp1,1),(rp2,1)])r4=worker_callback('>>> w12-callback',[(rp1,1),])Requestor Example 
Output[2016-12-1306:27:54.924629]>>>w11-callbackgettingresource[2016-12-1306:27:54.925094]>>>w21-callbackgettingresource[2016-12-1306:27:54.925453]>>>w22-callbackgettingresource[2016-12-1306:27:54.926188]>>>w12-callbackgettingresource[2016-12-1306:27:54.932922]>>>w11-callbackdoingwork([Resource(name:MyResource1)])[2016-12-1306:27:54.933709]>>>w12-callbackdoingwork([Resource(name:MyResource1)])[2016-12-1306:27:54.938425]>>>w22-callbackdoingworkbeforeresourceavailable[2016-12-1306:27:54.938548]>>>w22-callbackwaitingforresources[2016-12-1306:27:54.939256]>>>w21-callbackdoingworkbeforeresourceavailable[2016-12-1306:27:54.939267]>>>w21-callbackwaitingforresources[2016-12-1306:27:56.936881]>>>w11-callbackreturning([Resource(name:MyResource1)])[2016-12-1306:27:56.937543]>>>w12-callbackreturning([Resource(name:MyResource1)])[2016-12-1306:27:56.947615]>>>w22-callbackdoingwork([Resource(name:MyResource2),Resource(name:MyResource1)])[2016-12-1306:27:56.948587]>>>w21-callbackdoingwork([Resource(name:MyResource2),Resource(name:MyResource1)])[2016-12-1306:27:58.949812]>>>w22-callbackreturning([Resource(name:MyResource2),Resource(name:MyResource1)])[2016-12-1306:27:58.950064]>>>w21-callbackreturning([Resource(name:MyResource2),Resource(name:MyResource1)])Virtual ResourcePoolLike ResourcePool, VResourcePool manages resources. The main difference between the two is that ResourcePool manages physical resource objects. VResourcePool manages virtual resources (VResource) that only represent physical resources. VResources can not be activated or deactivated.One unique property VResourcePool enables is that request could be returned by quantity.Virtual Requestors Exampleimporttimefromacrisimportvirtual_resource_poolasrpfromacris.threadedimportThreadedfromacris.mploggerimportcreate_stream_handlerimportqueuefromdatetimeimportdatetimeclassMyResource1(rp.Resource):passclassMyResource2(rp.Resource):passrp1=rp.ResourcePool('RP1',resource_cls=MyResource1,policy={'resource_limit':2,}).load()rp2=rp.ResourcePool('RP2',resource_cls=MyResource2,policy={'resource_limit':1,}).load()classCallback(object):def__init__(self,notify_queue,name=''):self.q=notify_queueself.name=namedef__call__(self,received=False):self.q.put(received)requestors=rp.Requestors()@Threaded()defworker_callback(name,rps):print('[%s]%sgetting resource'%(str(datetime.now()),name))notify_queue=queue.Queue()callback=Callback(notify_queue,name=name)request_id=requestors.reserve(request=rps,callback=callback)ifnotrequestors.is_reserved(request_id):print('[%s]%sdoing work before resource available'%(str(datetime.now()),name,))notify_queue.get()resources=requestors.get(request_id)print('[%s]%sdoing work (%s)'%(str(datetime.now()),name,repr(resources)))time.sleep(1)print('[%s]%sreturning (%s)'%(str(datetime.now()),name,repr(resources)))requestors.put_requested(rps)r2=worker_callback('>>> w21-callback',[(rp1,1),(rp2,1)])r1=worker_callback('>>> w11-callback',[(rp1,1),])r3=worker_callback('>>> w22-callback',[(rp1,1),(rp2,1)])r4=worker_callback('>>> w12-callback',[(rp1,1),])Virtual Requestor Example 
Output[2016-12-1614:27:53.224110]>>>w21-callbackgettingresource[2016-12-1614:27:53.224750]>>>w11-callbackgettingresource[2016-12-1614:27:53.225567]>>>w22-callbackgettingresource[2016-12-1614:27:53.226220]>>>w12-callbackgettingresource[2016-12-1614:27:53.237146]>>>w11-callbackdoingwork([Resource(name:MyResource1)])[2016-12-1614:27:53.238361]>>>w12-callbackdoingworkbeforeresourceavailable[2016-12-1614:27:53.241046]>>>w21-callbackdoingworkbeforeresourceavailable[2016-12-1614:27:53.242350]>>>w22-callbackdoingwork([Resource(name:MyResource1),Resource(name:MyResource2)])[2016-12-1614:27:54.238443]>>>w11-callbackreturning([Resource(name:MyResource1)])[2016-12-1614:27:54.246868]>>>w22-callbackreturning([Resource(name:MyResource1),Resource(name:MyResource2)])[2016-12-1614:27:54.257040]>>>w12-callbackdoingwork([Resource(name:MyResource1)])[2016-12-1614:27:54.259858]>>>w21-callbackdoingwork([Resource(name:MyResource1),Resource(name:MyResource2)])[2016-12-1614:27:55.258659]>>>w12-callbackreturning([Resource(name:MyResource1)])[2016-12-1614:27:55.262741]>>>w21-callbackreturning([Resource(name:MyResource1),Resource(name:MyResource2)])MediatorNote:inherent from acrilib Class interface to generator allowing query of has_next()ExamplefromacrisimportMediatordefyrange(n):i=0whilei<n:yieldii+=1n=10m=Mediator(yrange(n))foriinrange(n):print(i,m.has_next(3),next(m))print(i,m.has_next(),next(m))Example Output0True01True12True23True34True45True56True67True78False89False9Traceback(mostrecentcalllast):File"/private/var/acrisel/sand/acris/acris/acris/example/mediator.py",line19,in<module>print(i,m.has_next(),next(m))File"/private/var/acrisel/sand/acris/acris/acris/acris/mediator.py",line38,in__next__value=next(self.generator)StopIterationUtilitiescommdir.pyusage:commdir.py[-h][--dir1DIR1][--dir2DIR2][--quiet][--out[REPORT]][--follow][--detailed][--sync-cmd][--merge][--total][--ignore[PATTERN[PATTERN...]]]Reportsdifferencesindirectorystructureandcontent.commdir.pywillexitwith0ifdirectoriesfoundthesame.otherwise,itwillexitwith1.optionalarguments:-h,--helpshowthishelpmessageandexit--dir1DIR1sourcefolderforthecomparison--dir2DIR2targetfolderforthecomparison--quietavoidwritinganyreportout,default:False--out[REPORT]filetowritereportto,default:stdout--followfollowlinkswhenwalkingfolders,default:False--detailedprovidedetailedfileleveldiff,default:False--sync-cmdprovidecommandsthatwouldaligndirsandfiles,default:False--mergewhensync-cmd,sethowdiffcommandswouldberesolved,default:dir1isbase.--totaloutputssummary.--ignore[PATTERN[PATTERN...]]patterntoignoreexample:pythoncommdir.py--dir1my_folder--dir2other_folder--ignore__pycache__.*DS_Storecommdir.py also provides access to its underlined function commdir:commdir(dir1,dir2,ignore=[],detailed=False,followlinks=False,quiet=False,bool_result=True)compares two directory structures and their files.commdir walks through two directories, dir1 and dir2. While walking, it aggregates information on the difference between the two structures and their content.If bool_result is True, commdir will return True if difference was found. When False, it would return a DiffContent namedtuple with the following fields:diff (boolean)folders_only_in_dir1 (list)folders_only_in_dir2 (list)files_only_in_dir1 (list)files_only_in_dir2 (list)diff_files (list)diff_detail (list)Args:dir1, dir2: two directories structure to compare. ignore: list of regular expression strings to ignore, when directory is ignored, all its sub folders are ignored too. 
detailed: if set, will generate detailed file level comparison. followlinks: if set, symbolic links will be followed. quiet: if set, information will not be printed to stdio. bool_result: instruct how the function would respond to caller (True: boolean or False: DiffContent)commdir example output----------------------------foldersonlyinother_folder----------------------------static/admin/fontsstatic/admin/js/vendorstatic/admin/js/vendor/jquerystatic/admin/js/vendor/xregexp-----------------------filesonlyinmy_folder-----------------------docs/._example.rstdocs/._user_guide.rst--------------------------filesonlyinother_folder--------------------------static/admin/css/fonts.cssstatic/admin/fonts/LICENSE.txtstatic/admin/fonts/README.txtffstatic/admin/img/LICENSEstatic/admin/js/vendor/jquery/jquery.jsstatic/admin/js/vendor/jquery/jquery.min.jsstatic/admin/js/vendor/xregexp/xregexp.min.js----------------filesdifferent:----------------.pydevprojectui/settings/prod.pyui/wsgi.pypersonalenv.xml--------Summary:--------Foldersonlyinmy_folder:0Filesonlyinmy_folder:2Foldersonlyinother_folder:4Filesonlyinother_folder:7Filesdifferent:4bee.pyutility to run commands on multiple hosts and collect responses.usage:bee.py[-h]-cCOMMAND[-pPARALLEL]-tHOST[-uUSERNAME][--sudo-userUSERNAME][--keep-log]Sendssshcommandtomultipledestinations.optionalarguments:-h,--helpshowthishelpmessageandexit-cCOMMAND,--commandCOMMANDcommandtoexecuteoversshchannel-pPARALLEL,--parallelPARALLELnumberofparallelsessiontoopen-tHOST,--targetHOSTdestinationhosttorunagainst-uUSERNAME,--userUSERNAMEusertouseforsshauthentication--sudo-userUSERNAMEsudousertousetoruncommands--keep-logindicatesbeetokeephostlogsinsteadofdeletingcsv2xlsx.pyconverts multiple CSV file to XLSX file. Each CSV file will end on its own sheet.usage:csv2xlsx.py[-h][-dDELIMITER][-oOUTFILE]CSV[CSV...]CreatesExcelfilefromoneormoreCSVfiles.IfmultipleCSVareprovided,theywiullbemappedtoseparatedsheets.If"-"isprovided,inputwillbeacquirefromstdin.positionalarguments:CSVcsvfilestomergeinxlsx;if-,stdinisassumedoptionalarguments:-h,--helpshowthishelpmessageandexit-dDELIMITER,--delimiterDELIMITERselectdelimitercharacter-oOUTFILE,--outOUTFILEoutputxlsxfilenamemail.pysend mail utility and function APIusage:mail.py[-h][-aATTACHMENT][-oFILE]-sSUBJECT[-bBODY][-fMAILFROM][-cCC]-tRECIPIENTSendthecontentsofadirectoryasaMIMEmessage.Unlessthe-ooptionisgiven,theemailissentbyforwardingtoyourlocalSMTPserver,whichthendoesthenormaldeliveryprocess.YourlocalmachinemustberunninganSMTPserver.optionalarguments:-h,--helpshowthishelpmessageandexit-aATTACHMENT,--attachATTACHMENTMailthecontentsofthespecifieddirectoryorfile,Onlytheregularfilesinthedirectoryaresent,andwedon't recurse to subdirectories.-oFILE,--outputFILEPrintthecomposedmessagetoFILEinsteadofsendingthemessagetotheSMTPserver.-sSUBJECT,--subjectSUBJECTSubjectforemailmessage(required).-bBODY,--bodyBODYBobytextforthemessage(optional).-fMAILFROM,--mailfromMAILFROMThevalueoftheFrom:header(optional);ifnotprovided$USER@$HOSTNAMEwillbeuseassender-cCC,--maliccCCThevalueoftheCC:header(optional)-tRECIPIENT,--mailtoRECIPIENTATo:headervalue(atleastonerequired)prettyxml.pyReformat XML in hierarchical structure.usage:pretty-xml.py[-h][-oOUTFILE][XML[XML...]]PrettyprintsXMLfilethatisnotpretty.positionalarguments:XMLXMLfilestoprettyprint;if-ornoneprovided,stdinisassumedoptionalarguments:-h,--helpshowthishelpmessageandexit-oOUTFILE,--outOUTFILEoutputfilename;defaultstostdoutsshcmdRuns single shh command on remote 
hostdefsshcmd(cmd,host,password,)Args:cmd:commandtoexecutehost:remotehosttorunonpassword:user's password on remote hosttouchUNIX like touch with ability to create missing folders.touch(path,times=None,dirs=False)Args:path:totouchtimes:a2-tupleoftheform(atime,mtime)whereeachmemberisanintorfloatexpressingseconds.defaultstocurrenttime.dirs:ifset,createmissingfoldersmrunRuns UNIX command on multiple directories.usage:mrun.py[-h][--cwd[DIR[DIR...]]][--exceptionTAG][--nostop][--verbose]...Runcommandinmultipledirectories.Example:mrun--cwddir1dir2--gitadd..positionalarguments:cmdcommandtorun.optionalarguments:-h,--helpshowthishelpmessageandexit--cwd[DIR[DIR...]]pathwherecommandshouldcdto;orfilethatcongainglistofdirectoriestooperateon.--exceptionTAGtagexceptionmessage.--nostopcontinueeveniffailedtoruninoneplace.--verbose,-vprintmessagesasitgoes.Misccamel2snake and snake2camelcamel2snake(name) and snake2camel(name) will convert name from camel to snake and from snake to camel respectively.xlsx2rstxlsx2rst is a utility and function to convert xlsx to restructuredtext.usage:xlsx2rst.py[-h][-oRST][-s[SHEET[SHEET...]]][--start-row[NUMBER]][--end-row[NUMBER]][--start-col[NUMBER]][--end-col[NUMBER]][-r[NUMBER]][--one-file]XLSXConvertsxlsxworkbookintorestructuredtextformatpositionalarguments:XLSXxlsxfilestoconvertoptionalarguments:-h,--helpshowthishelpmessageandexit-oRST,--outputRSTdestinationrstfile-s[SHEET[SHEET...]],--sheet[SHEET[SHEET...]]listofsheets;defaulttoallavailablesheets--start-row[NUMBER]tablestartrow,defaultsto1--end-row[NUMBER]tablestartcol,defaultsto1--start-col[NUMBER]tablestartrow,defaultsto0--end-col[NUMBER]tablestartcol,defaultsto0-r[NUMBER],--header[NUMBER]headerrowcount--one-filewhenset,singlefileiscreatedChange HistoryVersion 2.3Improvement in how threaded passes result.Add xlsx2rst utility.Fix bug with MpLogger multiprocessing queue (changed to use Manager().)Version 2.2MpLogger was change to have single log instead of two (error and debug).MpLogger add new arguments: name, console, force_global, etc.Version 3.0MpLogger moved to acrilog projectSome functions moved to acrilib projectAdded mrun for execute command on multiple directories (for git operations)
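Returning to the commdir utility documented in the Utilities section above, here is a minimal sketch of calling its commdir function programmatically rather than as a script. The import path is an assumption (commdir ships as commdir.py, so adjust the import to wherever it lives in your installation); the arguments and the DiffContent fields follow the description above.

from acris.commdir import commdir   # assumed import location for the commdir function

# bool_result=False asks for the DiffContent namedtuple instead of a plain boolean
diff = commdir("my_folder", "other_folder",
               ignore=["__pycache__.*", ".*DS_Store"],
               detailed=True, quiet=True, bool_result=False)

if diff.diff:
    print("folders only in my_folder:", diff.folders_only_in_dir1)
    print("files only in other_folder:", diff.files_only_in_dir2)
    print("files that differ:", diff.diff_files)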
acrl
almgren-chrissDeep reinforcement learning for optimal execution of portfolio transactions.InstallationpipinstallacrlUsagefromcollectionsimportdequeimportnumpyasnpimportacrlasscafromacrl.agentimportAgentenv=sca.MarketEnvironment()agent=Agent(state_size=env.observation_space_dimension(),action_size=env.action_space_dimension(),random_seed=0,)liquidation_time=60n_trades=60risk_aversion=1e-6episodes=10000shortfall_hist=np.array([])shortfall_deque=deque(maxlen=100)forepisodeinrange(episodes):current_state=env.reset(seed=episode,liquid_time=liquidation_time,num_trades=n_trades,lamb=risk_aversion,)env.start_transactions()foriinrange(n_trades+1):action=agent.act(current_state,add_noise=True)new_state,reward,done,info=env.step(action)agent.step(current_state,action,reward,new_state,done)current_state=new_stateifinfo.done:shortfall_hist=np.append(shortfall_hist,info.implementation_shortfall)shortfall_deque.append(info.implementation_shortfall)breakif(episode+1)%100==0:print("\rEpisode [{}/{}]\tAverage Shortfall: ${:,.2f}".format(episode+1,episodes,np.mean(shortfall_deque)))print("\nAverage Implementation Shortfall: ${:,.2f}\n".format(np.mean(shortfall_hist)))
acrm
Arch Linux Custom Repository Manager
This program is intended to manage an Arch Linux custom repository hosted on a server accessible through rsync (for example, on a VPS or a NAS with a /path/to/www folder served over the web by a web server).
It works by synchronizing the remote repository onto the local machine using rsync, then mainly uses repo-add to manage this repository, and finally synchronizes it back to the remote server.
It must be run on an Arch Linux distribution, and simply behaves as a wrapper around some common programs:

| Program | From package | Used for |
| --- | --- | --- |
| uname | core/coreutils | Detecting the architecture of the local machine |
| rsync | extra/rsync | Synchronizing the repository |
| tar | core/tar | Reading inside the repository database |
| GnuPG | core/gnupg | Signing the packages and the repository |
| repo-add | core/pacman | Managing the packages in the repository |

[!NOTE] The CLI of this program is made with cleo, used by poetry.
Global options

| Option | Short | Required | Description |
| --- | --- | --- | --- |
| host | H | yes | Specify the host on which the repository is hosted |
| remote_root | r | yes | The path to the repository, on the remote host |
| user | u | no | The remote user who owns the repository. Defaults to the current local user |
| repository | d | no | The name of the repository. Defaults to the name of the directory |

[!IMPORTANT] Currently, some options are required by the CLI as it is the only way to pass information to the ACRM.
Commands
ls
List all the packages with their version in the repository.
acrm -H my_vps_or_nas -r /home/vps_user/path/to/my/repository ls
+------ repository ------+
| Package name | version |
+--------------+---------+
| acrm         | 0.1.0-1 |
+--------------+---------+
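Building on the ls example above, an invocation that passes all four global options explicitly might look like this (the user and repository names are placeholders):

acrm -H my_vps_or_nas -r /home/vps_user/path/to/my/repository -u vps_user -d my_repository ls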
acro
ACRO: Tools for the Automatic Checking of Research Outputs
Statistical agencies and other custodians of secure facilities such as Trusted Research Environments (TREs) routinely require the checking of research outputs for disclosure risk. This can be a time-consuming and costly task, requiring skilled staff.
ACRO (Automatic Checking of Research Outputs) is an open source tool for automating the statistical disclosure control (SDC) of research outputs. ACRO assists researchers and output checkers by distinguishing between research output that is safe to publish, output that requires further analysis, and output that cannot be published because of substantial disclosure risk.
It does this by providing a light-weight 'skin' that sits over well-known analysis tools, in a variety of languages researchers might use. This adds functionality to:
identify potentially disclosive outputs against a range of commonly used disclosure tests;
suppress outputs where required;
report reasons for suppression;
produce simple summary documents TRE staff can use to streamline their workflow.
See the project wiki for details.
Coding standards
These are also described in the project wiki.
This work was funded by UK Research and Innovation under Grant Number MC_PC_23006 as part of Phase 1 of the DARE UK (Data and Analytics Research Environments UK) programme (https://dareuk.org.uk/), delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific project was Semi-Automatic Checking of Research Outputs (SACRO).
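The description above stays deliberately high level. As a purely illustrative sketch of what the "skin" idea can look like in practice, the snippet below assumes an ACRO session object whose table methods mirror pandas and a finalise step that writes the outputs together with the checking summary; these names are assumptions, not confirmed by this text, so consult the project wiki for the actual API.

import pandas as pd
from acro import ACRO            # assumed entry point; not shown in the text above

# toy stand-in data for the sketch
df = pd.DataFrame({"region": ["north", "south", "north", "south"],
                   "grant_type": ["A", "A", "B", "B"]})

acro = ACRO()                                           # hypothetical session object wrapping the analysis
table = acro.crosstab(df["region"], df["grant_type"])   # assumed pandas-style table method, checked against disclosure rules
acro.finalise("outputs")                                # assumed step producing the summary documents mentioned above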
acrobotics
AcroboticsQuickly test motion planning ideas is the goal, and Python seems like a great language for rapid prototyping. There are great libraries for robot simulation and related task, but installing them is can be a hassle and very dependent on operating system and python version. The drawback is that I have to write a lot of stuff myself. I'm not sure if it is useful to do this. But it will be fun and I will learn a bunch.This library provides robot kinematics and collision checking for serial kinematic chains. The idea is that this library can be easily swapped by another one providing the same functionality.The acro part comes fromACROa robotics research group at KU Leuven in Belgium.InstallationpipinstallacroboticsOr for developmentgitclonehttps://github.com/JeroenDM/acrobotics.gitcdacrobotics pythonsetup.pydevelopNo Windows support for the moment becausepython-fclis not supported. :( In the future I will possibly switch topybullet. In the meantime, usewindows subsystem for linux. MacOS is not tested yet.Gettings started(Code for example below:examples/getting_started.py)This library has three main tricks.Robot kinematicsT = robot.fk(joint_values)IKSolution = robot.ik(T)Forward kinematics are implemented in a genericRobotKinematicsclass.importacroboticsasabrobot=ab.Kuka()joint_values=[0.1,0.2,0.3,0.4,0.5,0.6]T_fk=robot.fk(joint_values)Analytical inverse kinematics only for specific robots:ik_solution=robot.ik(T_fk)# T_fk is a numpy 4x4 arrayprint(f"Inverse kinematics successful?{ik_solution.success}")forqinik_solution.solutions:print(q)Inversekinematicssuccessful?True[0.1-1.09497272.841592652.877788280.79803563-1.99992985][0.1-1.09497272.84159265-0.26380438-0.798035631.1416628][0.10.20.30.40.50.6][0.10.20.3-2.74159265-0.5-2.54159265]Collision checkingbool = robot.is_in_collision(joint_values, planning_scene)First create a planning scene with obstacles the robot can collide with.fromacrolib.geometryimporttranslationtable=ab.Box(2,2,0.1)T_table=translation(0,0,-0.2)obstacle=ab.Box(0.2,0.2,1.5)T_obs=translation(0,0.5,0.55)scene=ab.Scene([table,obstacle],[T_table,T_obs])Then create a list of robot configurations for wich you want to check collision with the planning scene.importnumpyasnpq_start=np.array([0.5,1.5,-0.3,0,0,0])q_goal=np.array([2.5,1.5,0.3,0,0,0])q_path=np.linspace(q_start,q_goal,10)And then you could do:print([robot.is_in_collision(q,scene)forqinq_path])[False,False,False,False,True,True,True,True,False,False]Visualizationrobot.plot(axes_handle, joint_values)robot.animate_path(figure_handle, axes_handle, joint_path)fromacrolib.plottingimportget_default_axes3dfig,ax=get_default_axes3d()scene.plot(ax,c="green")robot.animate_path(fig,ax,q_path)More detailsThere's a more in depth explanation in the jupyter-notebooks in the examples folder.Most of the usefull stuff can be imported similar to common numpy usage:importacroboticsasabFor more advanced classes, such asRobotto create a custom robot, you have to explicitly import them:fromacrobotics.robotimportRobotfromacrobotics.linkimportDHLink,JointType,LinkAnd motion planning?The package implements a basic sampling-based and optimization-based planner. Examples on how to use them can be found in the test folder, intest_planning_sampling_based.pyandtest_planning_optimization_based.py. However, there is a non-trivial amount of setting types you have to supply to get it working. These appeared after a major refactor in an attempt to make to code more maintainable, but we went a bit overboard in the settings department...
acrocord
A python API for managing postgresql databaseInstall and setupInstall third party packages:sudoaptinstallpython3-devlibpq-devunixodbc-devTo install the packagepython3-mpipinstallacrocorduse python in the proper environment e.g. in conda powershellSetup database configuration and connectionSSL connection:if SSL connection is required, put the certificates given by administrator in/home/$USER/.postgresqlcreate the folder if it does not already existDefault connection configuration can be saved inconnections.cfgin the folder/home/$USER/.postgresql/for linux user or typicallyC:\Users\$USER\.postgresqlfor windows user.Example ofconnections.cfg:[connection-name]user=USERNAMEdbname=DATABASENAMEport=PORThost=HOSTssh=Falsepassword=PASSWORD[!TIP]thehostfield does not recognize ssh alias, use ip addresstheportfield is typically 5432 or 5433the name of the database isdbnameThen in python the connection can directly be instantiate using the keywordconnection-namefromacrocordimportConnectDatabasedb=ConnectDatabase()db.connect(connection="connection-name")Alternatively, you can use the following syntaxfromacrocordimportConnectDatabasedb=ConnectDatabase()connection=dict(user="USERNAME",print_sql_cmd=True,dbname="DATABASENAME",port="PORT",host="HOST",ssh=False)db.connect(print_sql_cmd=True,connection=connection)Simple usageimportpandasaspd# create schema (i.e. an independent database: requires privileges)# write table in schema# read table as pandas dataframedb.create_schema("SCHEMA")db.write_table(pd.DataFrame(1,index=[1,2,3],columns=[1,2,3]),"SCHEMA.NAME")db.read_table("SCHEMA.NAME")[!CAUTION]If the password is trivial (for local connection), add password field to the dictionaryconnectionPassword field can be added inconnections.cfgfileIf no password is provided python will open an log in windowNo password is needed with ssl connectionOther topicsDeploy database: install postresgql server and create database on premiseManage spatial data using postgis: how to install postgis and manipulate spatial dataAuthorEurobios Mews labs
acrod
Automatic Computation for Robot Design (ACRoD)DescriptionThis repository is dedicated to develop functions for automatic computations for designing robotic manipulators.Currently available functionsJacobian formulation for planar and spatial manipulators around a given end-effector point. (This is useful in performing optimisation of Jacobian-based performance parameters of any non-redundant robot directly from its robot-topology matrix)Statement of need:Jacobian formulation is highly used in dimensional synthesis of robotic manipulators, which deals with optimal design of robot's dimensional parameters (link-lengths and joint-orientations). For a given topological structure, formulating Jacobian around a given end-effector point for designing optimal dimensional parameters would require only the topological information, as every other step can be automated. Formulating Jacobian for serial manipulators is easy but for parallel manipulators and serial-parallel hybrid manipulators it is more complicated and often tedious. ACRoD automates this task of formulating Jacobian for a given end-effector point by running all the required steps in the background. The targetted audience includes researchers and engineers working on performing dimensional synthesis of manipulators (especially for multiple manipultors in bulk for comparison) to compute optimal dimensions for operation around a single end-effector point, and those needing to verify the DOF of a topological structure (especially for those special cases of kinematic mechanisms where Chebychev–Grübler–Kutzbach criterion fails to accurately determine the DOF) by analysing the Jacobian.InstallationThe package can be installed from PyPI by using the following command via terminal.pipinstallacrodUsageJacobian for robotic manipulators2R Planar Serial Robot (as an example)The topological information of a robot is to be specified by using its robot-topology matrix, as definedhere. For a planar 2R serial manipulator (as shown in the above figure), the robot topology matrix is given by$$\left[\begin{matrix} 9 & 1 & 0 \\ 1 & 9 & 1 \\ 0 & 1 & 9 \end{matrix}\right]$$The corresponding Jacobian function can be formulated as follows.Firstly, the required functions are imported as shown below.fromacrod.jacobianimportJacobianfromnumpyimportarrayThe robot-topology matrix for 3R planar serial manipulator is defined and jacobian information is processed via the imported jacobian class as follows.M=array([[9,1,0],[1,9,1],[0,1,9]])jac=Jacobian(M,robot_type='planar')Jacobian function is generated as shown below.jacobian_function=jac.get_jacobian_function()In the process of generating the above jacobian function, other attributes of the jacobian object also are updated. Symbolic Jacobian matrices can be extracted from the attributes. Since this is a serial robot, the matrix $J_a$ itself would be the Jacobian matrix of the manipulator. 
The matrix $J_a$ is extracted fromJaattribute of the jacobian object as follows.symbolic_jacobian=jac.Jasymbolic_jacobianIn an ipynb file of JupyterLab, the above code would produce the following output.$$\left[\begin{matrix}- a_{y} + r_{(1,2)y} & - a_{y} + r_{(2,3)y} \\ a_{x} - r_{(1,2)x} & a_{x} - r_{(2,3)x} \\ 1 & 1\end{matrix}\right]$$The above Jacobian is based on the notations defined and describedhere.Active joint velocities, in the corresponding order, can be viewed by running the following lines.active_joint_velocities=jac.active_joint_velocities_symbolicactive_joint_velocitiesIn an ipynb file of JupyterLab, the above code would produce the following output.$$\left[\begin{matrix}\dot{\theta}_{(1,2)} \\ \dot{\theta}_{(2,3)}\end{matrix}\right]$$Robot dimensional parameters can be viewed by running the below line.robot_dimensional_parameters=jac.parameters_symbolicrobot_dimensional_parametersIn an ipynb file of JupyterLab, the above code would produce the following output.$$\left[\begin{matrix}r_{(1,2)x} \\ r_{(1,2)y} \\ r_{(2,3)x} \\ r_{(2,3)y}\end{matrix}\right]$$Robot end-effector parameters can be viewed by running the below line.robot_endeffector_parameters=jac.endeffector_variables_symbolicrobot_endeffector_parametersIn an ipynb file of JupyterLab, the above code would produce the following output.$$\left[\begin{matrix}a_{x} \\ a_{y}\end{matrix}\right]$$Sample computation of Jacobian for the configuration corresponding to the parameters shown below:End-effector point: $\textbf{a}=\hat{i}+2\hat{j}$Locations of joints: $\textbf{r}_{(1,2)}=3\hat{i}+4\hat{j}$ and $\textbf{r}_{(2,3)}=5\hat{i}+6\hat{j}$For the given set of dimensional parameters of the robot, the numerical Jacobian can be computed as follows. Firstly, we need to gather the configuration parameters in Python list format, in a particular order. The robot dimensional parameters fromjac.parameters_symbolicare found (as shown earlier) to be in the order of $r_{(1,2)x}$, $r_{(1,2)y}$, $r_{(2,3)x}$ and $r_{(2,3)y}$. Hence the configuration parameters are to be supplied in the same order, as a list. Thus, the computation can be performed as shown below.end_effector_point=[1,2]configuration_parameters=[3,4,5,6]jacobian_at_the_given_configuration=jacobian_function(end_effector_point,configuration_parameters)jacobian_at_the_given_configurationThe output produced by running the above code, is shown below.array([[2,4],[-2,-4],[1,1]])Mathematical concepts behind formulating the Jacobian can be foundhere.Dimensional SynthesisFor dimensional synthesis, at least a performance parameter is required. One commonly used performance parameter in dimensional synthesis is the condition number. From the above Jacobian function, the condition number can be found by computing the ratio of maximum singular value and minimum singular value. This condition number has the bounds $(1,\infty)$. When the condition number is 1, that signifies the best performance in the context of condition number. 
The computation of condition number from a given Jacobian can be achieved by the code shown below:fromnumpy.linalgimportsvddefcondition_number_func(jacobian_matrix):_,singular_values,_=svd(jacobian_matrix)condition_number=singular_values.max()/singular_values.min()returncondition_numberFor reference if we take the joint at the fixed link to be at the origin, the dimensional synthesis for optimal performance around the end-effector point $\textbf{a}=\hat{i}+2\hat{j}$ can be performed by the code shown below:fromscipy.optimizeimportminimizefromnumpyimporthstack,onesend_effector_point=[1,2]base_reference_point=[0,0]r12=base_reference_pointjac_fun=lambday:jacobian_function(end_effector_point,hstack((base_reference_point,y)))condition_number=lambdaz:condition_number_func(jac_fun(z))initial_guess=ones(len(jac.parameters)-len(base_reference_point))res=minimize(condition_number,initial_guess)r23=res.xThe link lengths $l_2$ and $l_3$ are given by $l_2 = \lVert \textbf{r}_{12}-\textbf{r}_{23} \rVert$ and $l_3 = \lVert\textbf{r}_{23}-\textbf{a}\rVert$. By using the code below, the link lengths of 2R robot can be computed.fromnumpy.linalgimportnorml2=norm(r23-r12)l3=norm(r23-end_effector_point)print(l2,l3,res.fun)Output:3.46410161532893172.2360679761553771.0000000007904777The above output shows that for $l_2=3.464$ and $l_3=2.236$, the robot has the condition number approximately equal to $1.0$, which signifies optimal performance.ExamplesSome examples (along with their mathematical derivations) can be foundhere.Full DocumentationFor full documentation, visit the documentation pagehere.Community GuidelinesFor contribution to the software:In order to contribute to the software, please consider using thepull request featureof GitHub.For reporting issues with the software:For reporting issues or problems, please useissues.For support:For any further support (including installation, usage, etc.), feel free to contact via suneeshjacob-at-gmail-dot-com.
acrolib
Installation
First install Cython, which setup.py needs to build the package.

pip install cython

In some cases you also have to install additional dependencies.

sudo apt install python3-dev
pip install wheel

Using pip
Then install the package.

pip install acrolib

From source

git clone https://github.com/JeroenDM/acrolib.git
cd acrolib
python setup.py build
python setup.py install

If you want to edit the package and test the changes, you can replace the last line with:

python setup.py develop

Acrolib
General utilities for writing motion planning algorithms at ACRO. This library is aimed at miscellaneous functions and classes that cannot be grouped into a larger package.
Dynamic Programming
Solve a specific type of Deterministic Markov Decision Process. It uses a value function that must be minimized instead of maximized, and it assumes a sequential linear graph structure.
Quaternion
Extension to the pyquaternion package.
Sampling
A sampler class to generate uniform random or deterministic samples. Deterministic samples are generated using a Halton Sequence.
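The three components above are listed without snippets, and their interfaces are not shown in this text. The only acrolib calls confirmed elsewhere in this document are the geometry and plotting helpers used by the acrobotics examples, so the sketch below sticks to those; the comments about return values are inferences from that usage rather than documented facts.

# these two imports appear verbatim in the acrobotics examples in this document
from acrolib.geometry import translation
from acrolib.plotting import get_default_axes3d

T = translation(0, 0, 0.5)       # presumably a 4x4 homogeneous transform, as acrobotics passes it for scene placement
fig, ax = get_default_axes3d()   # a matplotlib figure and 3D axes pair, as used for robot.animate_path
print(T)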
acrome
OverviewThis library provides easy-to-use Python modules for interacting with Acrome Robotics products.ModulesController ModuleThis module provides a hardware abstraction layer between Acrome Controller board and application code and enables users to develop their own applications without hassle of implementing a communication protocol or struggling with any timing requirements. This module also provides safety limitations for Acrome Robotics products and users don't have to worry about any mechanical or electrical limit of the products while working on their application code.The controller module provides 6 different classes for interacting with 5 different products and the Acrome Controller board itself.Controller ClassThis class provides an interface with the Acrome Controller board. For basic communication checks and configuration via 4 different methods.__init__(self, portname="/dev/serial0", baudrate=115200)Return:NoneThis is the constructor of the Controller class.portnameargument is the serial/COM port of the host computer which is connected to the Acrome Controller board. Since the board is designed with Raspberry Pi in mind, default portname is/dev/serial0to provide out of the box support for Raspberry Pi.baudrateargument must not be changed by the user since different baudrates are not supported by the hardware, yet.ping(self)Return:booleanThis method provides a basic ping functionality between the board and the host computer as the name suggests. If the communication succeeded method returns true, otherwise false.reboot(self)Return:NoneThis method immediately reboots the Acrome Controller board when called.enter_bootloader(self)Return:NoneWhen this method called, the Acrome Controller board boots into the embedded bootloader to provide a firmware update. When bootloader activated, the board does not respond to any other command rather than specific instruction for bootloader operation.get_latest_version(self)Return:string / NoneThis method returns the latest firmware version available as a string with a 'v' suffix. (Example: v0.1.0)fetch_fw_binary(self, version='')Return:booleanThis method fetches the given firmware version from related repository. When version argument is not given by the user, fetches the latest version available. User must provide version information as a string and with a suffix 'v'. Returns True on success.update_fw_binary(self, baudrate=115200)Return:NoneThis method initiates the firmware download procedure. This procedure must not be interrupted since it may brick the hardware. Baudrate argument can be selected between 1200 and 115200. Update procedure with low baudrates may take some time. Serial port in use on the host computer must support EVEN parity to work properly. When used with a Raspberry Pi, ttyAMA0 should be used as the serial port since ttyS0 does not support parity bits.get_board_info(self)Return:dictThis method returns a dictionary that contains information about the underlaying hardware configuration and status. Since gathering that information interrupts the any other operation at the hardware, calling it in any control loop might affect the system performance and should be avoided.OneDOF ClassThis class provides an interface with the OneDOF via Acrome Controller.__init__(self, portname="/dev/serial0", baudrate=115200)Return:NoneThis is the constructor of the OneDOF class. 
Please refer to the Controller class constructor for argument descriptions.set_speed(self, speed)Return:NoneThis method provides an interface to set speed of the OneDOF motor. Available range is from -1000 to 1000.enable(self)Return:NoneThis method enables the power stage of the OneDOF motor and should be called prior to setting speed.reset_encoder_mt(self)Return:NoneThis method resets the encoder of the DC motor on the OneDOF.reset_encoder_shaft(self)Return:NoneThis method resets the encoder on the shaft of OneDOF.update(self)Return:NoneThis method syncronizes the variables both on host computer and hardware side. Should be called prior to read of any attribute or called after any write/set operation to make latest values available immediately.motor_encThis attribute returns the current value of encoder on the DC motor.Note:This attribute might be always 0 according to your product configuration.shaft_encThis attribute returns the current value of encoder on the OneDOF shaft.imuThis attribute returns the current roll, pitch and yaw values in degrees in a form of Python list.Note:This attribute is only available on the products that shipped with an BNO055 Absolute Orientation Sensor. Products with MPU6050 IMU is not supported yet and will return 0.BallBeam ClassThis class provides an interface with Ball and Beam via Acrome Controller.__init__(self, portname="/dev/serial0", baudrate=115200)Return:NoneThis is the constructor of the BallBeam class. Please refer to the Controller class constructor for argument descriptions.set_servo(self, servo)Return:NoneThis method provides an interface to set angle of the servo motor on Ball and Beam. Available range is from -1000 to 1000.update(self)Return:NoneThis method syncronizes the variables both on host computer and hardware side. Should be called prior to read of any attribute or called after any write/set operation to make latest values available immediately.positionThis attribute returns the current value of the ball position on the beam.BallBalancingTable ClassThis class provides an interface with Ball Balancing Table via Acrome Controller.__init__(self, portname="/dev/serial0", baudrate=115200)Return:NoneThis is the constructor of the BallBalancingTable class. Please refer to the Controller class constructor for argument descriptions.set_servo(self, x, y)Return:NoneThis method provides an interface to set angles of the servo motors on Ball Balancing Table. Available range is from -1000 to 1000 for each axis.update(self)Return:NoneThis method syncronizes the variables both on host computer and hardware side. Should be called prior to read of any attribute or called after any write/set operation to make latest values available immediately.positionThis attribute returns a list that contains the current coordinates (x, y) of the ball position on the touch screen.Delta ClassThis class provides an interface with Delta Robot via Acrome Controller.__init__(self, portname="/dev/serial0", baudrate=115200)Return:NoneThis is the constructor of the Delta class. Please refer to the Controller class constructor for argument descriptions.set_motors(self, motors)Return:NoneThis method provides an interface to set angles of the motors on Delta Robot. 
Available range is from 310 to 810 for each motor.motorsargument must be a list of 3 integers.pick(self, magnet)Return:NoneThis method controls the state of electromagnet which is attached to the Delta Robot.magnetargument is a boolean and when set toTrue, enables the magnet to pick the coin and when set toFalse, disables the magnet to release it.update(self)Return:NoneThis method syncronizes the variables both on host computer and hardware side. Should be called prior to read of any attribute or called after any write/set operation to make latest values available immediately.positionThis attribute returns a list of 3 integers that contains the current values of the motor positions. List elements are Motor 1 Position, Motor 2 Position, and Motor 3 Position respectively.Stewart, StewartEncoder and StewartEncoderHR ClassesThese classes provides an interface with Stewart Platforms via Acrome Controller. While Stewart Platform uses analog position feedback, StewartEncoder and StewartEncoderHR uses incremental encoders for position feedback. StewartEncoder and StewartEncoderHR only differs in communication structure. StewartEncoderHR provides 32 bits wide encoder resolution while StewartEncoder provides only 16 bits. 16 bits encoder resolution is enough for 4" and 8" versions of Stewart Platforms and no need to bloat serial communication with extra 16 bits of data per encoder.__init__(self, portname="/dev/serial0", baudrate=115200)Return:NoneThis is the constructor of the Stewart class. Please refer to the Controller class constructor for argument descriptions.enable(self)Return:NoneThis method enables the power stages of the Stewart Platform motors and should be called prior to setting speed.reset_encoder(self, motor_num=[1,2,3,4,5,6])Return:NoneThis method resets the encoder of the motor at the given index to 0.Note:This method is only available in StewartEncoder and StewartEncoderHR classes.set_motors(self, motors)Return:NoneThis method provides an interface to set speeds of the motors on Stewart Platform. Available range is from -1000 to 1000 for each motor.motorsargument must be a list of 6 integers.update(self)Return:NoneThis method syncronizes the variables both on host computer and hardware side. Should be called prior to read of any attribute or called after any write/set operation to make latest values available immediately.positionThis attribute returns a list of 6 integers that contains the current values of the motor positions. List elements are ordered as starting from Motor 1 Position to Motor 6 Position.imuThis attribute returns the current roll, pitch and yaw values in degrees in a form of Python list.Note:This attribute is only available on the products that shipped with an BNO055 Absolute Orientation Sensor. Products with MPU6050 IMU is not supported yet and will return 0.
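The class reference above has no end-to-end snippet, so here is a short usage sketch for the OneDOF class that uses only the constructor, methods and attributes documented above. The import line is an assumption, since the module layout is not shown in this text; adjust it to match the installed package.

from acrome.controller import OneDOF   # import path assumed
import time

onedof = OneDOF(portname="/dev/serial0")   # defaults match the constructor documented above

onedof.enable()                 # the power stage must be enabled before setting a speed
onedof.reset_encoder_shaft()    # zero the encoder on the shaft

onedof.set_speed(300)           # speed range is -1000 to 1000
time.sleep(1.0)
onedof.set_speed(0)

onedof.update()                 # sync values from the board before reading attributes
print("shaft encoder:", onedof.shaft_enc)
print("roll/pitch/yaw:", onedof.imu)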
acrome-smd
Python LibraryOverviewThis library provides easy-to-use Python modules and methods for interfacing with Acrome Smart Motor Driver products.Embrace the world of motor control with simplicity using our SMD Python Library. Designed specifically for controlling SMDs, this library provides a seamless experience no matter your skill level in how you control motors.Whether your project requires basic speed adjustments or precise position control, quickly and easily leverage the flexibility of Python to effortlessly implement a wide variety of motor control strategies.SMD Python Library takes your projects to the next level by offering seamless integration with SMD Sensor modules. With this library, you can increase the functionality and efficiency of your project by effortlessly collecting data from SMD sensor modules via SMD.Develop your projects with "Acrome Smart Motor Drivers" and a computer that can run your Python code.You can reach the Acrome Smart Motors Drivers documentationhere.InstallationTo useAcrome Smart Motor Driverswith python library, follow the installation steps below. Library is compatible with Python 3.x and can be installed on both Windows and Linux systems.PrerequisitesBefore you begin, make sure you have the following prerequisites:Python 3.x:Python Official WebsiteInstallationWindowsOpen a Command Prompt with administrative privileges.Install SMD library usingpip(Python package manager) by running the following command:pipinstallacrome-smdWait for the installation to complete. Pip willmnmnmnmn automatically download and install the library along with any required dependencies.LinuxOpen a terminal.Install SMD library using pip (Python package manager) by running the following command:pipinstallacrome-smdWait for the installation to complete. Pip will automatically download and install SMD Library along with any required dependencies.VerificationTo verify that SMD library has been successfully installed, open a Python interpreter and run the following command:importsmdimportsmd.redIf no errors are raised, the installation was successful.UpgradingTo upgrade SMD Library to the latest version, you can use the following pip command:pipinstallacrome-smdUsageImport the SMD Library: First, import the SMD library at the beginning of your Python script:fromsmd.redimport*Initialize SMD:Create an instance of the Master class by initializing it with the appropriate settings. This instance represents your SMD and allows you to control it.ID=0# Set the ID of your SMART MOTOR DRIVERSerialPort='/dev/ttyUSB0'# Replace with your specific serial port ( for ex 'COM3'.)baudrate=115200# Set the baud rate for serial communicationmyMaster=Master(SerialPort,baudrate)#create a master objectprint(master.scan())#prints ID list of connected SMDsfromsmd.redimport*importtimeMASTER_PORT="/dev/ttyUSB0"#depending on operating system, port, etc. may vary depending on themaster=Master(MASTER_PORT)#creating master objectprint(master.scan())#prints ID list of connected SMDsID=master.attached()[0]#getting ID of first SMD from scanned ones.#ID = 0 You can use directly this if it has never been changed before.Configure SMD:#rpm and cpr values are depend on the motor you use.master.set_shaft_rpm(ID,10000)master.set_shaft_cpr(ID,64)#starts autotune for setting PID values of control algorithmsmaster.pid_tuner(ID)You can configure and use theSMDusing specific methods belonging to the master class, just like in the code above.You can access all sample codes fromhere. 
Please read full documentation to use all features of aSMDFirmware UpdateThe following methods provide users with ability to update firmware of their SMDs. To use these methods users must have an internet connection.Users should not disconnect power from the device or it may break the device.get_latest_fw_version(self)Return:Latest firmware versionThis method gets the latest firmware version from the Github servers.update_fw_version(self, id: int, version='')Return:True if the firmware is updatedThis method updates the firmware version with respect to given version string and ID.idargument is the device ID of the connected driver.versionargument is the version to be updated. If version string is not given, driver is updated to the latest version available on Github.ControlPID Tune and Control ParametersThe control modes on the SMD operate with PID control. Therefore, correctly tuning the P, I, and D constants is crucial for accurate control. The device features an autotune capability to automatically set these values. Alternatively, users can manually input these values if desired.AutotuneTo utilize the autotune feature on the device, it's essential to ensure that the motor is in a freely rotatable position. This is because the card continuously rotates the motor during the autotuning process.Following this, the next step is to input the motor's CPR (Counts Per Revolution) and RPM (Revolutions Per Minute) values into the card using the provided methods below. Failing to do this accurately may result in incorrect calculations.set_shaft_cpr(self, id: int, cpr: float)Return:NoneThis method sets the count per revolution (CPR) of the motor output shaft.idargument is the device ID of the connected driver.cprargument is the CPR value of the output shaftset_shaft_rpm(self, id: int, rpm: float)Return:NoneThis method sets the revolution per minute (RPM) value of the output shaft at 12V rating.idargument is the device ID of the connected driver.rpmargument is the RPM value of the output shaft at 12VAfter completing these steps, you should initiate the tuning process using thepid_tuner()method. Please note that immediately after calling this method, the motors will start rotating with varying speeds.pid_tuner(self, id: int)Return:NoneThis method starts a PID tuning process. Shaft CPR and RPM valuesmustbe configured beforehand. If CPR and RPM values are not configured, motors will not spin.idargument is the device ID of the connected driver.Once thepid_tuner()method is initiated, the state of the torque (whether it's enabled or not) does not affect motor operation. There is no need to use theenable_torque()function.An Example of Autotunefromsmd.redimport*importtimeMASTER_PORT="/dev/ttyUSB0"#depending on operating system, port, etc. may vary depending on themaster=Master(MASTER_PORT)#creating master objectprint(master.scan())#prints ID list of connected SMDsID=master.attached()[0]#getting ID of first SMD from scanned ones. You can use directly ID = 0 if it has never been changed before.master.set_shaft_rpm(ID,10000)#rpm and cpr values are depend on the motor you use.master.set_shaft_cpr(ID,64)master.pid_tuner(ID)#starts autotune for setting PID values of control algorithmsSetting PID ValuesManual input of the necessary constants for PID control is also possible. For this, separate P, I, and D constants should be configured for each control mode. Please note that each mode utilizes its own set of constants to control the motor. 
There are dedicated methods for configuring these constants for each control mode.set_control_parameters_position(self, id: int, p=None, i=None, d=None, db=None, ff=None, ol=None)Return:NoneThis method sets the control block parameters for position control mode. Only assigned parameters are written, `None`'s are ignored. The default max output limit is 950. `id` argument is the device ID of the driver. `p` argument is the the proportional gain. Defaults to None. `i` argument is the integral gain. Defaults to None. `d` argument is the derivative gain. Defaults to None. `db` argument is the deadband (of the setpoint type) value. Defaults to None. `ff` argument is the feedforward value. Defaults to None. `ol` argument is the maximum output limit. Defaults to None.set_control_parameters_velocity(self, id: int, p=None, i=None, d=None, db=None, ff=None, ol=None)Return:NoneThis method sets the control block parameters for velocity control mode. Only assigned parameters are written, `None`'s are ignored. The default max output limit is 950. `id` argument is the device ID of the driver. `p` argument is the the proportional gain. Defaults to None. `i` argument is the integral gain. Defaults to None. `d` argument is the derivative gain. Defaults to None. `db` argument is the deadband (of the setpoint type) value. Defaults to None. `ff` argument is the feedforward value. Defaults to None. `ol` argument is the maximum output limit. Defaults to None.set_control_parameters_torque(self, id: int, p=None, i=None, d=None, db=None, ff=None, ol=None)Return:NoneThis method sets the control block parameters for torque control mode. Only assigned parameters are written, `None`'s are ignored. The default max output limit is 950. `id` argument is the device ID of the driver. `p` argument is the the proportional gain. Defaults to None. `i` argument is the integral gain. Defaults to None. `d` argument is the derivative gain. Defaults to None. `db` argument is the deadband (of the setpoint type) value. Defaults to None. `ff` argument is the feedforward value. Defaults to None. `ol` argument is the maximum output limit. Defaults to None.Getting PID Values and Control valuesThe P, I, and D constants and other values entered for control modes can be obtained. 
This can be achieved by using the methods provided below.get_control_parameters_position(self, id: int)Return:Returns the list [P, I, D, Feedforward, Deadband, OutputLimit]This method gets the position control block parameters.idargument is the device ID of the driver.get_control_parameters_velocity(self, id: int)Return:Returns the list [P, I, D, Feedforward, Deadband, OutputLimit]This method gets the velocity control block parameters.idargument is the device ID of the driver.get_control_parameters_torque(self, id: int)Return:Returns the list [P, I, D, Feedforward, Deadband, OutputLimit]This method gets the torque control block parameters.idargument is the device ID of the driver.you can see the PID values after then autotune with code below.fromsmd.redimport*importtimeMASTER_PORT="/dev/ttyUSB0"master=Master(MASTER_PORT)#creating master objectprint(master.scan())ID=0#ID of the SMD connected and autotuned.print(master.get_control_parameters_position(ID))print(master.get_control_parameters_velocity(ID))Brushed DC Motor ControlsThe SMD Red has 4 control modes:PWM Control:This mode provides power to a brushed DC motor using PWM signals.Position Control:In this mode, the brushed motor moves to the desired positions using information from the encoder.Velocity Control:This mode ensures that the motor rotates at the desired speed using data from the encoder.Torque Control:This mode allows the motor to apply a specific torque by drawing the desired current.Except for thePWM Control mode, all of these control modes operate with PID control. Therefore, it is essential to configure the PID values before starting the motors in these control modes. Without proper PID tuning, the motors may not work at all or may not perform as desired. You can find the necessary information for setting PID values in thePID Tunesection of the documentation.Control MethodsRegardless of which control mode you choose to use, there are two essential methods that you need to be aware of. One is theset_operation_mode()method, which allows you to select the motor control mode you want to use. The other isenable_torque(), which enables or disables the motor rotation.set_operation_mode(self, id: int, mode: OperationMode)Return:NoneThis method sets the operation mode of the driver. Operation mode may be one of the following:OperationMode.PWM,OperationMode.Position,OperationMode.Velocity,OperationMode.Torque.idargument is the device ID of the connected driver.enable_torque(self, id: int, en: bool)Return:NoneThis method enables or disables power to the motor which is connected to the driver.idargument is the device ID of the connected driver.enargument is a boolean.Trueenables the torque while Falsedisables.PWM Controlset_duty_cycle(self, id: int, pct: float):Return:NoneThis method sets the duty cycle to the motor for PWM control mode in terms of percentage. Negative values will change the motor direction.idargument is the device ID of the driver.idargument is the duty cycle percentage.An Example of PWM Controlfromsmd.redimport*MASTER_PORT="COM10"master=Master(MASTER_PORT)#creating master objectprint(master.scan())ID=0master.set_operation_mode(ID,0)#sets the operating mode to 0 represents PWM control mode.master.set_duty_cycle(ID,50)#sets the duty cycle to 50 percentmaster.enable_torque(ID,True)#enables the motor torque to start rotatingPosition Controlset_position_limits(self, id: int, plmin: int, plmax: int)Return:NoneThis method sets the position limits of the motor in terms of encoder ticks. 
Default for min is -2,147,483,648 and for max is 2,147,483,647. The torque is disabled if the value is exceeded, so a tolerance factor should be taken into consideration when setting these values.idargument is the device ID of the connected driver.plminargument is the minimum position limit.plmaxargument is the maximum position limit.get_position_limits(self, id: int)Return:Min and max position limitsThis method gets the position limits of the motor in terms of encoder ticks.idargument is the device ID of the connected driver.set_position(self, id: int, sp: int)Return:NoneThis method sets the desired setpoint for the position control in terms of encoder ticks.idargument is the device ID of the driver.spargument is the position control setpoint.get_position(self, id: int)Return:Current position of the motor shaftThis method gets the current position of the motor from the driver in terms of encoder ticks.idargument is the device ID of the driver.An Example of Position Controlfromsmd.redimport*importtimeMASTER_PORT="COM10"master=Master(MASTER_PORT)#creating master objectprint(master.scan())ID=0master.set_shaft_rpm(ID,10000)#rpm and cpr values depend on the motor you use.master.set_shaft_cpr(ID,64)master.set_control_parameters_position(ID,10,0,8)#SMD ID, Kp, Ki, Kdmaster.set_operation_mode(ID,1)#sets the operating mode to 1, which represents Position control mode.master.enable_torque(ID,True)#enables the motor torque to start rotatingwhileTrue:master.set_position(ID,5000)#sets the setpoint to 5000 encoder ticks.time.sleep(1.2)master.set_position(ID,0)#sets the setpoint to 0 encoder ticks. Motor goes to starttime.sleep(1.2)You should enter the PID values for Position Control Mode, or simply autotune the SMD once at start-up. CPR and RPM values should also be entered so that the SMD can calculate the necessary variables. If you don't, the motor cannot rotate.Velocity Controlset_velocity_limit(self, id: int, vl: int)Return:NoneThis method sets the velocity limit for the motor output shaft in terms of RPM. The velocity limit applies only in velocity mode. Default velocity limit is 65535.idargument is the device ID of the connected driver.vlargument is the new velocity limit (RPM).get_velocity_limit(self, id: int)Return:Velocity limitThis method gets the velocity limit from the driver in terms of RPM.idargument is the device ID of the connected driver.set_velocity(self, id: int, sp: int)Return:NoneThis method sets the desired setpoint for the velocity control in terms of RPM. `id` argument is the device ID of the driver. `sp` argument is the velocity control setpoint (RPM).get_velocity(self, id: int)Return:Current velocity of the motor shaftThis method gets the current velocity of the motor output shaft from the driver in terms of RPM. `id` argument is the device ID of the driver.An Example of Velocity Controlfromsmd.redimport*MASTER_PORT="COM10"master=Master(MASTER_PORT)#creating master objectprint(master.scan())ID=0master.set_shaft_rpm(ID,10000)#rpm and cpr values depend on the motor you use.master.set_shaft_cpr(ID,64)master.set_control_parameters_velocity(ID,10,1,0)#SMD ID, Kp, Ki, Kdmaster.set_operation_mode(ID,2)#sets the operating mode to 2, which represents Velocity control mode.master.set_velocity(ID,2000)#sets the setpoint to 2000 RPM.master.enable_torque(ID,True)#enables the motor torque to start rotatingYou should enter the PID values for Velocity Control Mode, or simply autotune the SMD once at start-up. CPR and RPM values should also be entered so that the SMD can calculate the necessary variables.
If you don't then the motor cannot rotate.Torque Controlset_torque_limit(self, id: int, tl: int)Return:NoneThis method sets the torque limit of the driver in terms of milliamps (mA).idargument is the device ID of the connected driver.tlargument is the new torque limit (mA).get_torque_limit(self, id: int)Return:Torque limit (mA)This method gets the torque limit from the driver in terms of milliamps (mA).idargument is the device ID of the connected driver.set_torque(self, id: int, sp: int)Return:NoneThis method sets the desired setpoint for the torque control in terms of milliamps (mA).idargument is the device ID of the driver.get_torque(self, id: int)Return:Current drawn from the motor (mA)This method gets the current drawn from the motor from the driver in terms of milliamps (mA).idargument is the device ID of the driver.An Example of Torque Controlfromsmd.redimport*MASTER_PORT="COM10"master=Master(MASTER_PORT)#creating master objectprint(master.scan())ID=0master.set_shaft_rpm(ID,10000)#rpm and cpr values are depend on the motor you use.master.set_shaft_cpr(ID,64)master.set_control_parameters_torque(ID,10,0.1,0)#SMD ID, Kp, Ki, Kd#master.set_torque_limit(220)master.set_operation_mode(ID,3)#sets the operating mode to 3 represents Torque control mode.master.set_torque(ID,80)#sets the setpoint to 80 mili amps(mA).master.enable_torque(ID,True)#enables the motor torque to start rotatingYou must enter the PID values of the Torque Control Mode. Since Auto tune does not produce these values, you must set them yourself.If you do not do this, the motor cannot rotate properly.Base methodsRed ClassMethods of theRedclass are used for the underlying logic of the Master class. As such, it is not recommended for users to callRedclass methods explicitly. Users may create instances of the class in order to attach to the master. Thus, only__init__constructor is given here.__init__(self, ID: int):This is the initalizer for Red class which represents an object of SMD (Smart Motor Drivers) driver.IDargument is the device ID of the created driver.Master Class__init__(self, portname, baudrate=115200)Return:NoneThis is the initializer for Master class which controls the serial bus.portnameargument is the serial/COM port of the host computer which is connected to the Acrome Smart Motor Drivers via Mastercard.baudrateargument specifies the baudrate of the serial port. User may change this value to something between 3.053 KBits/s and 12.5 MBits/s. However, it is up to the user to select a value which is supported by the user's host computer.update_driver_baudrate(self, id: int, br: int):Return:NoneThis method updates the baudrate of the driver, saves it to EEPROM and resets the driver board. Once the board is up again, the new baudrate is applied.idargument is the device ID of the connected driver.brargument is the user entered baudrate value. This value must be between 3.053 KBits/s and 12.5 MBits/s.get_driver_baudrate(self, id: int):Return:The baudrate of the driver with given IDThis method reads the baudrate of the driver in bps.idargument is the device ID of the connected driver.update_master_baudrate(self, br: int):Return:NoneThis method updates the baudrate of the host computer's serial port and should be called after changing the baudrate of the driver board to sustain connection.brargument is the user entered baudrate value. This value must be between 3.053 KBits/s and 12.5 MBits/s.attach(self, driver: Red):Return:NoneThis method attaches an instance of Red class to the master. 
If a device ID is not attached to the master beforehand, methods of the master class will not work on the given device ID.driverargument is an instance of the Red class. Argument must be an instance with a valid device ID.detach(self, id: int):Return:NoneThis method removes the driver with the given devic ID from thee master. Any future action to the removed device ID will fail unless it is re-attached.set_variables(self, id: int, idx_val_pairs=[], ack=False)Return:List of the acknowledged variables or NoneThis method updates the variables of the driver board with respect to given index/value pairs.idargument is the device ID of the connected driver.idx_val_pairsargument is a list, consisting of lists of parameter indexes and their value correspondents.get_variables(self, id: int, index_list: list)Return:List of the read variables or NoneThis method reads the variables of the driver board with respect to given index list.idargument is the device ID of the connected driver.index_listargument is a list with every element is a parameter index intended to read.set_variables_sync(self, index: Index, id_val_pairs=[])Return:List of the read variables or NoneThis method updates a specific variable of the multiple driver boards at once.indexargument is the parameter to be updated.id_val_pairsargument is a list, consisting of lists of device IDs and the desired parameter value correspondents.scan(self)Return:List of the connected driver device IDs.This method scans the serial port, detects and returns the connected drivers.reboot(self, id: int)Return:NoneThis method reboots the driver with given ID. Any runtime parameter or configuration which is not saved to EEPROM is lost after a reboot. EEPROM retains itself.idargument is the device ID of the connected driver.factory_reset(self, id: int)Return:NoneThis method clears the EEPROM config of the driver and restores it to factory defaults.idargument is the device ID of the connected driver.eeprom_write(self, id: int, ack=False)Return:NoneThis method clears the EEPROM config of the driver and restores it to factory defaults.idargument is the device ID of the connected driver.ping(self, id: int)Return:True or FalseThis method sends a ping package to the driver and returnsTrueif it receives an acknowledge otherwiseFalse.idargument is the device ID of the connected driver.reset_encoder(self, id: int)Return:NoneThis method resets the encoder counter to zero.idargument is the device ID of the connected driver.enter_bootloader(self, id: int)Return:NoneThis method puts the driver into bootloader. After a call to this function, firmware of the driver can be updated with a valid binary or hex file. To exit the bootloader, unplug - plug the driver from power or press the reset button.idargument is the device ID of the connected driver.get_driver_info(self, id: int)Return:Dictionary containing version infoThis method reads the hardware and software versions of the driver and returns as a dictionary.idargument is the device ID of the connected driver.update_driver_id(self, id: int, id_new: int)Return:NoneThis method updates the device ID of the driver temporarily.eeprom_write(self, id:int)method must be called to register the new device ID.idargument is the device ID of the connected driver.id_newargument is the new intended device ID of the connected driver.set_user_indicator(self, id: int)Return:NoneThis method sets the user indicator color on the RGB LED for 5 seconds. 
The user indicator color is cyan.idargument is the device ID of the connected driver.SMD ModulesSMD Modules BasicTo use SMD modules, you should initially utilize the following scanning function. This function returns which modules are connected to the SMD. Each module has a type and an ID, and through this scanning process, you can learn these properties of the connected modules. When the board is powered up for the first time, this scan is automatically performed once, but afterward, this command should be used manually.scan_sensors(self, id: int)Return:List of connected sensorsThis method scans and returns the sensor IDs which are currently connected to a driver.idargument is the device ID of the connected driver.Button Moduleget_button(self, id: int, index: Index)Return:Returns the button stateThis method gets the button module data with given index.idargument is the device ID of the driver.indexargument is the protocol index of the button module.Light Moduleget_light(self, id: int, index: Index):Return:Returns the ambient light measurement (in lux)This method gets the ambient light module data with given index.idargument is the device ID of the driver.indexargument is the protocol index of the ambient light module.Buzzer Moduleset_buzzer(self, id: int, index: Index, en: bool):Return:NoneThis method enables/disables the buzzer module with given index.idargument is the device ID of the driver.indexargument is the protocol index of the buzzer module.enargument enables or disables the buzzer. (Enable = 1, Disable = 0)Joystick Moduleget_joystick(self, id: int, index: Index):Return:Returns the joystick module analogs and button dataThis method gets the joystick module data with given index.idargument is the device ID of the driver.indexargument is the protocol index of the joystick module.Example of Joystick Module Usagefromsmd.redimport*importtimem=Master("/dev/ttyUSB0")m.attach(Red(0))m.scan_modules(0)# It continuously receives data from the joystick module.whileTrue:joystick=m.get_joystick(0,Index.Joystick_1)joystick_X=joystick[0]joystick_Y=joystick[1]joystick_button=joystick[2]Distance Moduleget_distance(self, id: int, index: Index):Return:Returns the distance from the ultrasonic distance module (in cm)This method gets the ultrasonic distance module data with given index.idargument is the device ID of the driver.indexargument is the protocol index of the ultrasonic distance module.QTR Moduleget_qtr(self, id: int, index: Index):Return:Returns qtr module data: [Left(bool), Middle(bool), Right(bool)]This method gets the qtr module data with given index.idargument is the device ID of the driver.indexargument is the protocol index of the qtr module.Servo Moduleset_servo(self, id: int, index: Index, val: int):Return:NoneThis method moves servo module to a desired position.idargument is the device ID of the driver.indexargument is the protocol index of the servo module.valargument is the value to write to the servo (0, 255).Potantiometer Moduleget_potantiometer(self, id: int, index: Index):Return:Returns the ADC conversion from the potantiometer moduleThis method gets the potantiometer module data with given index.idargument is the device ID of the driver.indexargument is the protocol index of the potantiometer module.RGB Led ModuleThe setRGB() method is used to control an RGB Led module by specifying the intensity or color values for each of the RGB components.set_rgb(self, id: int, index: Index, color: Colors):Return:NoneThis method sets the colour emitted from the RGB module.idargument is 
the device ID of the driver.indexargument is the protocol index of the RGB module.colorargument is the color for RGB from Colors class.Colors available in RGB sensor module :NO_COLOR,RED,GREEN,BLUE,WHITE,YELLOW,CYAN,MAGENTA,ORANGE,PURPLE,PINK,AMBER,TEAL,INDIGOThe method and colors can be used as in the example below for the RGB module.Example of RGB Module Usagefromsmd.redimport*importtimem=Master("/dev/ttyUSB0")m.attach(Red(0))m.scan_modules(0)m.set_rgb(0,Index.RGB_1,Colors.RED)time.sleep(0.5)m.set_rgb(0,Index.RGB_1,Colors.GREEN)time.sleep(0.5)m.set_rgb(0,Index.RGB_1,Colors.BLUE)time.sleep(0.5)m.set_rgb(0,Index.RGB_1,Colors.PURPLE)time.sleep(0.5)IMU Moduleget_imu(self, id: int, index: Index):Return:Returns roll, pitch anglesThis method gets the IMU module data (roll, pitch).idargument is the device ID of the driver.indexargument is the protocol index of the IMU module.Example of IMU Module Usagefromsmd.redimport*importtimem=Master("/dev/ttyUSB0")m.attach(Red(0))m.scan_modules(0)# It continuously receives data from the IMU module.whileTrue:IMU=m.get_imu(0,Index.IMU_1)roll=IMU[0]pitch=IMU[1]
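The module getters and setters above all follow the same calling pattern as the joystick, RGB and IMU examples. Below is a minimal sketch that combines two of them, sounding the buzzer when the ultrasonic distance module reports an obstacle closer than 10 cm. It reuses the same setup calls as the examples above; the member names Index.Distance_1 and Index.Buzzer_1 are assumptions modelled on the Index.Joystick_1 / Index.RGB_1 convention, not verified names.

```python
from smd.red import *
import time

m = Master("/dev/ttyUSB0")
m.attach(Red(0))
m.scan_modules(0)  # discover connected modules, as in the examples above

while True:
    # get_distance returns the measured distance in cm (Index.Distance_1 is an assumed index name)
    distance_cm = m.get_distance(0, Index.Distance_1)
    # set_buzzer expects Enable = 1 / Disable = 0 (Index.Buzzer_1 is an assumed index name)
    m.set_buzzer(0, Index.Buzzer_1, 1 if distance_cm < 10 else 0)
    time.sleep(0.1)
```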
acron
Lightweight scheduler for python asyncioBased on croniter to support the crontab syntax.InstallationInstalling acron.$pipinstallacronUsageTo get started you need at least one job. Use the top levelacron.runfunction for simple scheduling.importasyncioimportacronasyncdefdo_the_thing():print("Doing the thing")do_thing=acron.SimpleJob(name="Do the thing",schedule="0/1 * * * *",func=do_the_thing,)asyncio.run(acron.run({do_thing}))For more advanced use cases, theSchedulerclass can be used as async context manager. Callscheduler.wait()to keep it running forever. To submit jobs callscheduler.update_jobs(jobs)with the complete set of jobs.Running a simple example running a function every hour…importasyncioimportdataclassesfromacron.schedulerimportScheduler,[email protected](frozen=True)classThingData:foo:boolasyncdefdo_the_thing(data:ThingData):print(f"Doing the thing{data}")asyncdefrun_jobs_forever():do_thing=Job[ThingData](name="Do the thing",schedule="0/1 * * * *",data=ThingData(True),func=do_the_thing,)asyncwithScheduler()asscheduler:awaitscheduler.update_jobs({do_thing})awaitscheduler.wait()if__name__=="__main__":try:asyncio.run(run_jobs_forever())exceptKeyboardInterrupt:print("Bye.")Specifying a timezoneFor python 3.9+ you can use the standard library’szoneinfomodule to specify a timezone.importzoneinfoasyncwithScheduler(tz=zoneinfo.ZoneInfo("Europe/Berlin"))asscheduler:...For earlier python versions you can use a third party library likepytz.importpytzasyncwithScheduler(tz=pytz.timezone("Europe/Berlin"))asscheduler:...Job contextIt is possible to retrieve the context for the scheduled job from the running job function usingjob_context(). This returns aJobContextcontaining a reference to theScheduledJob. Thejob_context()function is implemented using contextvars to provide the correct context to the matching asyncio task.asyncdefmy_job_func():job_id=acron.job_context().scheduled_job.idjob_name=acron.job_context().scheduled_job.job.nameprint(f"Running job{job_id!r}, scheduled with id{job_id}")Local developmentThe project uses poetry to run the test, the linter and to build the artifacts.The easiest way to start working on acron is to use docker with the dockerfile included in the repository (manual usage of poetry is explained here:https://python-poetry.org/docs/).To use docker, first generate the docker image. Run this command from the top level directory in the repository:docker build -t acron-builder -f docker/Dockerfile .Now you can use it to build or run the linter/tests:$aliasacron-builder="docker run --rm -it -v$PWD/dist:/build/dist acron-builder"$acron-builderrunpytesttests=============================================================================================== test session starts ================================================================================================ platform linux -- Python 3.9.7, pytest-5.4.3, py-1.10.0, pluggy-0.13.1 rootdir: /build plugins: asyncio-0.15.1 collected 4 items tests/test_acron.py .... [100%] ================================================================================================ 4 passed in 0.04s =================================================================================================$acron-builderbuildBuilding acron (0.1.0) - Building sdist - Built acron-0.1.0.tar.gz - Building wheel - Built acron-0.1.0-py3-none-any.whl$lsdistacron-0.1.0-py3-none-any.whl acron-0.1.0.tar.gzDebuggingDebug logging can be enabled by setting theACRON_DEBUGenvironment variable toTRUE.
acronym
ACRONYM (Acronym CReatiON for You and Me)
acronym-alias
AcronymA smart alias management system to shorten your shell commands.Explore the docs »Report Bug·Request FeatureElevator pitchaliasis a POSIX shell command that replaces a single word with a string. Aslinuxizedescribesalias:If you often find yourself typing a long command on the terminal, then you will find bash aliases handy... Bash aliases are essentially shortcuts that can save you from having to remember long commands and eliminate a great deal of typing when you are working on the command line.So aliases are no doubt a boon for productive shell usage. The problemacronymattempts to solve is the difficulty of keeping track of what aliases you've defined in your possibly long shell configuration, and the difficulty of maintaining a consistent naming pattern.Acronymsolves this by greatly simplifying the process of defining new aliases in a standard and efficient way. Instead of having to edit your shell configuration, pick a memorable name that doesn't conflict with other aliases, and add thealiascommand, you would simply use theacronym addinvocation to automatically use the command's acronym, or seeusagefor greater versatility.DemonstrationThis demo showcases how a very long command withsudoand specific flags can be easily shortened to a two letter alias.https://user-images.githubusercontent.com/68311366/179607402-bbbd1114-0cf8-4aa3-b20d-1b6989ee0e26.mp4The best way to do this example withoutacronymin my opinion is:echo 'alias pu="sudo pacman -Syu --noconfirm --color=auto"' >> /path/to/aliases.shTo view them,cat /path/to/aliases.sh, where the output is in the formatalias a="b" alias m="n" alias x="y"And while this was the system I used before writing this tool,acronymallows for much needed abstraction. Note that theacronymcommands come pre-registered for convenience, so to add an alias isaa x, to change the alias name isac x with y, to remove it isar y, and to print your aliases in toml format isap, where the output is in the format[acronym] aa = "acronym add" ar = "acronym rm" ae = "acronym edit" ... [pacman] pu = "sudo pacman -Syu --noconfirm --color=auto"InstallationInstall packageWith pip:pip install acronym-aliasWith AUR helper:yay -S acronymSource the aliases in shellrcWith install scriptacronym installManually edit rc (usepip show acronymto find install dir, which is either under~/.local/lib/...or/usr/lib/...). ~/.local/lib/python3.10/site-packages/acronym/data/aliases.shIf you're using zsh and want completion, add this line too:fpath+=(~/.local/share/zsh/site-functions)UsageUsage: acronym [OPTIONS] COMMAND [ARGS]... Note: The main file, aliases.toml, is structured as the following: [jupyter] jn = "jupyter notebook" jl = "jupyter lab" Where [jupyter] is the section, jn is the alias, and "jupyter notebook" is the command. Options: add ... --flags Include command line flags in auto-generated acronym. rm ... --section Delete whole sections instead of aliases from aliases.toml. -h, --help Show this message and exit. Commands: add Add provided CMD with auto-generated alias, or add multiple with comma separation. Keywords: "CMD as ALIAS" to give custom ALIAS. "CMD under SECTION" to give custom SECTION for organization purposes. See usage examples for more explanation. rm Remove provided aliases. edit Directly edit aliases.toml with $EDITOR. change Change OLD alias name with NEW. suggest Suggest pre-defined aliases based on shell command history. print Pretty print given sections of aliases.toml, or print all contents if no args given.
Usage Examples: Add "git reset --hard" as an acronymed alias (ignoring flags) $ acronym add git reset --hard gr = "git reset --hard" Add cmd (including flags) using "--flags" flag $ acronym add git reset --hard --flags grh = "git reset --hard" Add cmd with custom alias name "greset" using "as" keyword $ acronym add git reset --hard as greset greset = "git reset --hard" Add cmd under section "etc", instead of section "git" using "under" keyword $ acronym add git reset --hard under etc gr = "git reset --hard" Add multiple aliases by comma separation (with same rules as above) $ acronym add git reset --hard --flags, jupyter notebook grh = "git reset --hard" jn = "jupyter notebook" Remove aliases "gc" and "asdf" $ acronym rm gc asdf Remove sections "jupyter" and "etc" $ acronym rm jupyter etc --section Edit the configuration file $ acronym edit Replace alias "gr" with "greset" without changing its command $ acronym change gr with greset Get suggestions for more aliases based on shell history file $ acronym suggest Print sections "pip" and "apt" $ acronym print pip apt [pip] ... [apt] ...
acronymmaker
AcronymMaker in PythonDescriptionAcronymMakercreates awesome acronyms for your projects. Let us briefly describe how it works with some vocabulary.Atokenis a set of words from which one word must appear in the acronym built byAcronymMaker. Said differently, there must be a letter in common between a word from the set and the built acronym. This letter may be either the first letter of a word in the token or any letter, depending on theletter selection strategythat is given toAcronymMaker.Additionally, tokens may beoptional. In this case,AcronymMakerwill try to match a letter from the words in the optional token to a letter in the acronym, but the acronym will still be accepted if it fails to do so.To find an acronym for a given sequence of tokens,AcronymMakeruses adictionary, i.e., a set of known words, in which it looks for acronyms. A word in the dictionary is said to beexplained(as an acronym) by the sequence of tokens if there is a letter in the word for each word in each (non-optional) token. In this case, we say that the letter isexplainedby the corresponding word.Moreover, there are two ways to explain a word as an acronym: either by following the order of the tokens in the specified sequence, or without considering this order.AcronymMakersupports both of them (independently).Finally, note that there may be unexplained letters in the acronym. Their number may be limited, by limiting both the number of consecutive unused letters and the number of overall unused letters in a word. If one of these limits is exceeded, then the word will not be considered as explained.RequirementsThis project provides a Python implementation ofAcronymMaker, you thus needPython 3on your computer to run it.You may installAcronymMakeron your computer along with all itsdependenciesthanks topipwith the following command line:python3-mpipinstallacronymmakerHow to useAcronymMakerThere are two ways to use the Python implementation ofAcronymMaker. This section describes both of them.Command-Line InterfaceAcronymMakercomes with a command-line interface that has the following usage:acronymmaker [-l {all,first}] [-m {ordered,ordered-greedy,unordered}] [-c <nb>] [-u <nb>] -d <dict> [<dict> ...]Let us now describe the parameters of the command line above.The parameter-l(--select-letters) allows specifying whether only thefirstletter orallthe letters of a word from a token may be used to explain a letter of the acronym.The parameter-m(--matching-strategy) allows specifying whether the tokens must be consideredorderedorunordered. The strategyordered-greedyalso considers the tokens in order, but using a more efficient algorithm that may however miss matching acronyms which would have been found by theorderedstrategy.The parameters-c(--max-consecutive-unused) and-u(--max-total-unused) allow specifying the maximum numbers of unused letters in the acronym, by limiting the number of consecutive and overall unexplained letters, respectively.The parameter-d(--dictionary) allows specifying the path to the dictionary file(s) from whichAcronymMakerwill look for acronyms. You may find such dictionarieshere. This is the only required parameter.Once the command-line application has started, a prompt asks you to enter your tokens, separated by blank spaces. Each token defines a set of words separated with slashes (/), and may end with a question mark (?) to specify that the token is optional. When you pressEnter, the matching acronyms are displayed in the console. 
You may then enter new sequences of tokens if you wish to find other acronyms, or you may exit the application with eitherCtrl-CorCtrl-D.Python APIYou may also want to directly interact with the Python API ofAcronymMaker. Theacronymmakerpackage provides the function and classes to programmatically set up an instance ofAcronymMakersimilarly to what is proposed for the command-line interface.First, there are two functions corresponding to the letter selection strategies. Both of them are in theacronymmaker.selectionmodule.fromacronymmaker.selectionimportselect_all_letters,select_first_letterThere are also three classes corresponding to the matching strategies. They are defined in theacronymmaker.matchingmodule.fromacronymmaker.matchingimportGreedyOrderedMatchingStrategyfromacronymmaker.matchingimportRegexBasedOrderedMatchingStrategyfromacronymmaker.matchingimportUnorderedMatchingStrategyAll these strategies define a constructor that takes as parameters the maximum number of consecutive unused letters and the maximum number of overall unused letters, as this can be seen in the example below.matching_strategy=UnorderedMatchingStrategy(max_consecutive_unused=3,max_total_unused=5)It is now possible to instantiate anAcronymMakerto create your acronyms. First, you need to importAcronymMaker.fromacronymmaker.makerimportAcronymMakerThen, you may instantiate anAcronymMakeras follows.my_acronyms=[]maker=AcronymMaker(select_all_letters,matching_strategy,my_acronyms.append)Themakerinitialized above will append tomy_acronymsall the acronyms it will identify. You may of course provide any callback function as third parameter to the constructor ofAcronymMaker. The only requirement for this function is that it must take as parameter an instance ofAcronym(from theacronymmaker.matchingmodule). You can for instance print it, display it on a GUI, etc.Then, you need to tell to the instance ofAcronymMakerwhat are the words that are authorized as acronyms (a.k.a. the "dictionary"). We provide a set of dictionary astext files, but you can of course use your own set of words.To add new words, you can either add them one at a time, or all of them at once.maker.add_known_word('foo')maker.add_known_words(['bar','baz'])Then, you need to provide the list of the tokens for which to find an acronym. To this end, theTokenBuilder, defined inacronymmaker.token, makes easier the creation of aToken.builder=TokenBuilder(select_all_letters)builder.add_word('foo')builder.add_word('bar')builder.set_optional(True)token=builder.build()Once you have built all your tokens, put them in a list, saytokens. Finally, pass this list as parameter to thefind_acronymmethod ofmaker. It will try to explain each word of the dictionary, and will invoke the callback function specified when creating the instance ofAcronymMakereach time it successfully explains a word with the correspondingAcronyminstance.maker.find_acronyms(tokens)To deal with the instances ofAcronymthat are produced by this method and stored in the listmy_acronyms(in this example), you may be particularly interested in the following methods:get_word()gives the word that is explained as an acronym, with each explained letter upper-cased.get_explanations()gives the list of explanations of the acronym, i.e., all the possible combinations of words in the tokens that explain the word as an acronym. Moreover, each letter corresponding to an explained letter of the acronym are upper-cased.
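To make the callback-based flow concrete, here is a minimal sketch that prints every acronym collected in the my_acronyms list from the snippets above, using only the get_word() and get_explanations() methods described in this section:

```python
# Assumes maker, tokens and my_acronyms were created as shown above:
# maker = AcronymMaker(select_all_letters, matching_strategy, my_acronyms.append)
maker.find_acronyms(tokens)

for acronym in my_acronyms:
    # get_word() returns the explained dictionary word with explained letters upper-cased
    print(acronym.get_word())
    # get_explanations() lists the token/word combinations that explain the acronym
    for explanation in acronym.get_explanations():
        print("  explained by:", explanation)
```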
acros
test api
acrosort-tex
acrosort-texacrosort-texis a Python command-line app that sorts the acronyms in your.texfile by their short form.InstallationYou can install acrosort-tex using pip (note the underscore):pipinstallacrosort_texUsageTo useacrosort-tex, you first need a.texfile with a list of acronyms (see theexamplesfolder for an example file).It doesn't matter if there are other TeX commands before or after theacronymblock.To sort the acronyms, run the following command:acrosort<input_file.tex><output_file.tex>For example:acrosortexamples/List_Of_Abbreviations.texsorted_acronyms.texThis will create a new.texfile calledsorted_acronyms.texwith the sorted acronyms, while everything else is left untouched.It will also use the longest key to set the width of the short-form column in the acronym block.Licenseacrosort_texis licensed under the MIT License. See the LICENSE file for more information.
across
No description available on PyPI.
across-py
AcrossAcross is the fastest, cheapest and most secure cross-chain bridge. It is a system that uses UMA contracts to quickly move tokens across chains. This contains various utilities to support applications on across.How to useGet suggested fees from online APIUse across official API to get suggested fees.>>>importacross>>>a=across.AcrossAPI()>>>a.suggested_fees("0x7f5c764cbc14f9669b88837ca1490cca17c31607",10,1000000000){'slowFeePct':'43038790000000000','instantFeePct':'5197246000000000'}Fee CalculatorCalculates lp fee percentages when doing a transfer.fromacross.fee_calculatorimport(calculate_apy_from_utilization,calculate_realized_lp_fee_pct,)fromacross.utilsimporttoBNWeirate_model={"UBar":toBNWei("0.65"),"R0":toBNWei("0.00"),"R1":toBNWei("0.08"),"R2":toBNWei("1.00"),}interval={"utilA":0,"utilB":toBNWei(0.01),"apy":615384615384600,"wpy":11830749673498}apy_fee_pct=calculate_apy_from_utilization(rate_model,interval["utilA"],interval["utilB"])assertapy_fee_pct==interval["apy"]realized_lp_fee_pct=calculate_realized_lp_fee_pct(rate_model,interval["utilA"],interval["utilB"])assertrealized_lp_fee_pct==interval["wpy"]LP Fee CalculatorGet lp fee calculations by timestamp.fromacrossimportLpFeeCalculatorfromweb3importWeb3provider=Web3.WebsocketProvider("{YOUR-PROVIDER-ADDRESS}")calculator=LpFeeCalculator(provider)token_address="0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2"# WETH on mainnetbridge_pool_address="0x7355Efc63Ae731f584380a9838292c7046c1e433"# WETH BridgePool on mainnetamount="1000000000000000000"# 1 ETHtimestamp=1645000000# timestamp in secondspercent=calculator.get_lp_fee_pct(token_address,bridge_pool_address,amount,timestamp)print(percent)How to build and testInstall poetry and install the dependencies:pip3installpoetry poetryinstall# testpython-munittest# local install and testpip3installtwine python3-mtwineupload--repositorytestpypidist/* pip3install--index-urlhttps://test.pypi.org/simple/--no-depsacross
acrosure-sdk
# Acrosure Python SDK![Acrosure](./static/Acrosure-color.png)Python version 2 and 3 SDK for connecting with Acrosure Insurance Gateway## InstallationInstall via pip:`pip install acrosure_sdk`## Requirements* Python 2.7.1+* `requests` library## Getting StartedImport AcrosureClient into your project.```pythonfrom acrosure_sdk import AcrosureClient```Instantiate with an API key from [Acrosure Dashboard](https://dashboard.acrosure.com).```pythonacrosure_client = AcrosureClient(token = '<your_api_key>')```## Basic UsageAcrosureClient provides several objects such as `application`, `product`, etc. and associated APIs.Any data will be inside an response object with `data` key, along with meta data, such as:```json{"data": { ... },"status": "ok",...}```### Application#### GetGet application with specified id.```pythonapplication = acrosure_client.application.get('<application_id>')```#### CreateCreate an application.```pythoncreated_application = acrosure_client.application.create(productId = '<product_id>', # requiredbasic_data = {},package_options = {},additional_data = {},package_code = '<package_code>',attachments = [])```#### UpdateUpdate an application.```pythonupdatedApplication = acrosure_client.application.update(application_id = '<application_id>', # requiredbasic_data = {},package_options = {},additional_data = {},package_code = '<package_code>',attachments = [])```#### Get packagesGet current application available packages.```pythonpackages = acrosure_client.application.get_packages('<application_id>')```#### Select packageSelect package for current application.```pythonupdated_application = acrosure_client.application.select_package(application_id = '<application_id>',package_code = '<package_code>')```#### Get packageGet selected package of current application.```pythoncurrent_package = acrosure_client.application.get_package('<application_id>')```#### Get 2C2P hashGet 2C2P hash.```pythonreturned_hash = acrosure_client.application.get_2c2p_hash(application_id = '<application_id>',args = '<arguments>')```#### SubmitSubmit current application.```pythonsubmitted_application = acrosure_client.application.submit('<application_id>')```#### ConfirmConfirm current application._This function needs secret API key._```pythonconfirmed_application = acrosure_client.application.confirm('<application_id>')```#### ListList your applications (with or without query).```pythonapplications = acrosure_client.application.list(query)```### Product#### GetGet product with specified id.```pythonproduct = acrosure_client.product.get('<product_id>')```#### ListList your products (with or without query).```tproducts = acrosure_client.product.list(query)```### Policy#### GetGet policy with specified id.```pythonpolicy = acrosure_client.policy.get('<policy_id>')```#### ListList your policies (with or without query).```pythonpolicies = acrosure_client.policy.list(query)```### Data#### GetGet values for a handler (with or without dependencies, please refer to Acrosure API Document).```python// Without dependenciesvalues = acrosure_client.data.get(handler = '<some_handler>')// With dependenciesvalues = acrosure_client.data.get(handler = '<some_handler>',dependencies = ['<dependency_1>', '<dependency_2>'])```### Team#### Get infoGet current team information.```pythonteam_info = acrosure_client.team.get_info()```### Other functionality#### Verify webhook signatureVerify webhook signature by specify signature and raw data string```pythonis_signature_valid = acrosure_client.verify_webhook(signature = '<signature>',data 
= '<raw_data>')```## Advanced UsagePlease refer to [this document](https://github.com/Acrosure/acrosure-python-sdk/wiki/Acrosure-Python-SDK) for AcrosureClient usage.And refer to [Acrosure API Document](https://docs.acrosure.com/docs/api-overall.html) for more details on Acrosure API.## Associated Acrosure API endpoints### Application```/applications/get/applications/list/applications/create/applications/update/applications/get-packages/applications/get-package/applications/select-package/applications/submit/applications/confirm/applications/get-hash```### Product```/products/get/products/list```### Policy```/policies/get/policies/list```### Data```/data/get```### Team```/teams/get-info```
acrt
No description available on PyPI.
acru-l
AWS Cloud Resource Utils - Library (ACRU-L)PronouncedAh-crew-el (ə-kroo͞′l)An open source framework for collecting and reusing AWS CDK constructs and stacks.Why?The problem with infrastructure as code ...Monorepos... Snowflake code...Confounding application source code with devopsA strict interface and reuse patternsInstallationpoetry add -D acru-lpip install acru-lUsageCore ConceptsResources - Extended constructsServices - Collections of Resources that build a service interfaceStacks - Collections of ServicesResourcesExtended constructs with set defaultsServicesTBDStacksTBDAppsTBD
acru-l-toolkit
ACRU-L ToolkitPronouncedAh-crew-el (ə-kroo͞′l)ToolkitPartner library toACRU-L.This project houses utilities that make sense to use both in production application code (e.g. handler base classes) and in the main ACRU-L project, which is intended to only be used in development / devops environments.
acrv-datasets
~Please note this is only abetarelease at this stage~ACRV Datasets: dataset integration for Best of ACRV projectsNote: support will be added soon for datasets that require end-users accept of licensing agreementsThe ACRV Datasets package is a light wrapper for generically managing datasets. The package supports any dataset, as long as it has a public URL. We emphasise that we do not own the datasets accessed through this package, we simply provide easy access and integration for projects like theBest of ACRV codebases.Datasets are defined in a YAML file, and there is full support for grouping sub-datasets together. For example,'coco'can be used to refer to 13 different COCO datasets with a single identifier. You can also easily add your own datasets simply by editing the same datasets YAML file. Once added, datasets can be downloaded and accessed from Python with simple function calls.Our code is free to use, and licensed under BSD-3. If you use any datasets in your work, you must appropriately referencethe original dataset authors! Please seedataset referencesbelow.Installing the ACRV Datasets packageWe offer the following methods for installing the ACRV Datasets package:Through our Conda and Pip packages: single command installs the package and Python dependences (these are equivalent as there are no system dependencies)Directly from source: allows easy editing and extension of our code, but you take care of building and all dependenciesConda and PipThe ACRV Datasets package has no system dependencies, so installation is the same for both Conda & Pip package management systems.For Pip, simply install via:u@pc:~$ pip install acrv-datasetsInstallation via Conda is the same once you haveConda installedon your system, and are inside aConda environment. From there, simply run:u@pc:~$ conda install acrv-datasetsFrom sourceInstalling from source is very similar to thepipmethod above due to the package only containing Python code. Simply clone the repository, enter the directory, and install viapip:u@pc:~$ pip install -e .Note: the editable mode flag (-e) is optional, but allows you to immediately use any changes you make to the code in your local Python ecosystem.Downloading & accessing datasetsThis package exposes a simple Python interface that automatically handles downloading, extracting, and accessing datasets. All of this complexity is hidden behind a single user action: getting datasets. For example to "get" the NYU dataset:importacrv_datasetsasadnyu_location=ad.get_datasets(['nyu'])When callingget_datasets(), the dataset will be downloaded and extracted if it doesn't already exist. For example the exact same call above works if you don't already have the'nyu'dataset, it will just block and report progress while it gathers the dataset.Datasets are stored in a default directory, which can be configured via the following code:importacrv_datasetsasadad.set_datasets_directory('/mnt/hdd/acrv_datasets')From this point on, all dataset operations would be performed in the/mnt/hdd/acrv_datasetsdirectory. If no location has been set, a default will be used which is printed in yellow before all operations. 
You can also explicitly override the dataset directory for single operations:importacrv_datasetsasadad.get_datasets(['nyu'],'mnt/hdd2/other_location')You can see a live list of supported datasets, and access a dictionary containing each dataset's details, with the following code:importacrv_datasetsasaddetails=ad.supported_datasets()The module can also be accessed directly from the command line using thepython3 -m acrv_datasets ...syntax. Equivalent commands for the above Python are shown below:u@pc:~$ python3 -m acrv_datasets --datasets nyuu@pc:~$ python3 -m acrv_datasets --set-default-datasets-directory /mnt/hdd/acrv_datasetsu@pc:~$ python3 -m acrv_datasets --datasets nyu --datasets-directory /mnt/hdd/acrv_datasetsu@pc:~$ python3 -m acrv_datasets --supported-datasetsThere is also a help flag which documents the supported syntax:u@pc:~$ python3 -m acrv_datasets --helpAdding your own datasetsNew datasets can be added by making additions to the'datasets.yaml'file. All that is needed is a unique dataset identifier, and a public URL.A detailed description of the syntax for adding new datasets is provided at the top of the file:Datasets are listed in named groups. The group name is the top level key, the dataset name is the second level key, and the public URL is the second level value. The group name & dataset name combine to form a unique dataset identifier.For example, the following would specify a 2014 & 2021 version of my dataset called 'my_dataset' (with the unique identifiers 'my_dataset/2014' & 'my_dataset/2021' respectively):my_dataset:2014:https://my_dataset.hosting/2014.tgz2021:https://my_dataset.hosting/2021.tgzFor brevity the dataset name can be omitted if there is only 1 dataset in a group. For example, the following gives a dataset with the identifier 'my_simple_dataset':my_simple_dataset:https://my_dataset.hosting/simply.tgzDataset referencesWe again emphasise that you are required to meet all of the licensing terms of the specific dataset if you wish to use the dataset in your own work (we merely provide simplified access).Below is a list of all datasets identifiers currently available grouped by their owner, with a link provided. Please follow the owner's citation instructions if using their datasets in your research:NYUv2:nyuPascal VOC:vocSBD:sbdCOCO:coco,coco/train2014,coco/val2014,coco/train2014,coco/val2014,coco/annotations_trainval2014,coco/test2015,coco/train2017,coco/val2017,coco/annotations_trainval2017,coco/captions,coco/vqa_questions_train,coco/vqa_questions_val,coco/vqa_questions_test,coco/vqa_annotations_train,coco/vqa_annotations_valGloVe:gloveTODO???:trainval36
acrylamid
Acrylamid is a mixture ofnanoc,PyblosxomandPelicanlicensed under BSD Style, 2 clauses. It is actively developed athttps://github.com/posativ/acrylamid/.Why?it is reallyfastdue incremental buildssupport forJinja2andMakotemplatesmanyMarkdownextensions and customreStructuredTextdirectivesMathML, enhanced typography and hyphenation using soft-hyphensOh, and it can also generate a static blog with articles, static pages, tags, RSS/Atom feeds (also per tag), article listing and a sitemap.Why the name “Acrylamid”?I’m studying bioinformatics and I was experimenting with Acrylamide at this time. I’m really bad at naming. If you have a better name, please tell me! Two requirements: reasonably speakable and tab-completion after 3 characters.OverviewWith Acrylamid you can write your weblog entries with your editor of choice in Markdown, reStructuredText or textile. With several content filters you can pimp your HTML (typography, math, hyphenation). Acrylamid provides a very sophisticated CLI and integrates perfectly with any DVCes. It generates completely static HTML you can host everywhere.supported markup languagesMarkdownand additional extensions (support forMathML, deletion/insertion, sub- and supscript, syntax highlighting …)reStructuredTextwith directives for syntax highlighting and youtube video embeddingtextile,discount, all dialects supported bypandocand plain HTMLYou miss one? Extend Acrylamid inless than 30 LoC!other filterssupport for Jinja2 and Mako directly in postings (before they get processed)typography(andsmartypants)TeX hyphenationsummarize abilityacronym detectionthat automatically replace acronyms and abbreviationsblogging featuresyou like theYAML front matterfromJekyllornanoc? First choice in Acrylamid!coming fromPelican? Acrylamid has also support for metadata in the native format of Markdown, reStructuredText and even Pandoc.support for translations (oh, and did I mention the language dependend hyphenation feature?).a few HTML5 themes, seeTheming.internal webserver with automatic compiling when something has changed.assets management, includingLESSandSASSconversion.uni-directional PingBack support.static site search.what is missingNo comments. You have to useDisqusorthis approach.Quickstarteasy_install -U acrylamidThis installs Acrylamid withJinja2as templating engine. ForMakouseeasy_install-Uacrylamid[mako]. This installs two additional but not required dependencies:Markdownandtranslitcodec. To get a list of all supported modules, head over toadditional supported modules.If you rather use non-ascii characters, you’re better off with:easy_install -U acrylamid python-magic unidecodeInitialize the base structure, editconf.pyandlayouts/and compile with:$ acrylamid init myblog # --mako, defaults to --jinja2 create myblog/conf.py ... $ cd myblog/ $ acrylamid compile && acrylamid view create [0.05s] output/articles/index.html create [0.37s] output/2012/die-verwandlung/index.html create [0.00s] output/index.html create [0.00s] output/tag/die-verwandlung/index.html create [0.00s] output/tag/franz-kafka/index.html create [0.03s] output/atom/index.html create [0.04s] output/rss/index.html create [0.00s] output/sitemap.xml create output/style.css 9 new, 0 updated, 0 skipped [0.72s] * Running on http://127.0.0.1:8000/Real World Examples?Practicing web development– Mark van Lent [source]mecker. mecker. 
mecker.– Martin Zimmermann [source]Groovematic– Isman Firmansyah [source]Christoph Polcin– Christoph Polcin [source,theme]CommandsSeecommandsfor a detailed overview.$ acrylamid --help usage: acrylamid [-h] [-v] [-q] [-C] [--version] ... positional arguments: init initializes base structure in DIR compile compile blog view fire up built-in webserver autocompile automatic compilation and serving new create a new entry check run W3C or validate links deploy run task import import content from URL or FILE info short summary ping notify ressources optional arguments: -h, --help show this help message and exit -v, --verbose more verbose -q, --quiet less verbose -C, --no-color disable color --version show program's version number and exitNeed Help?Join#acrylamidonFreenode! If you found a bug, please report it onGitHub Issues. The project has also a mailing list [Archive], just send an email [email protected] you have subscribed .
acryl-datahub
Theacryl-datahubpackage contains a CLI and SDK for interacting with DataHub, as well as an integration framework for pulling/pushing metadata from external systems.See theDataHub docs.
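Beyond the CLI, the SDK can push metadata directly. A minimal sketch with the REST emitter, assuming a DataHub instance at http://localhost:8080 and using a placeholder hive dataset name (see the DataHub docs for the authoritative API):

```python
from datahub.emitter.mce_builder import make_dataset_urn
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.metadata.schema_classes import DatasetPropertiesClass

# Connects to a local DataHub GMS endpoint (no auth token configured here).
emitter = DatahubRestEmitter(gms_server="http://localhost:8080")

# Propose a description for a placeholder dataset and emit it over REST.
mcp = MetadataChangeProposalWrapper(
    entityUrn=make_dataset_urn(platform="hive", name="fct_users_created", env="PROD"),
    aspect=DatasetPropertiesClass(description="Dataset emitted via the acryl-datahub SDK."),
)
emitter.emit(mcp)
```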
acryl-datahub-actions
⚡ DataHub Actions FrameworkWelcome to DataHub Actions! The Actions framework makes responding to realtime changes in your Metadata Graph easy, enabling you to seamlessly integrateDataHubinto a broader events-based architecture.For a detailed introduction, check out theoriginal announcementof the DataHub Actions Framework at the DataHub April 2022 Town Hall. For a more in-depth look at use cases and concepts, check outDataHub Actions Concepts.QuickstartTo get started right away, check out theDataHub Actions QuickstartGuide.PrerequisitesThe DataHub Actions CLI commands are an extension of the basedatahubCLI commands. We recommend first installing thedatahubCLI:python3-mpipinstall--upgradepipwheelsetuptools python3-mpipinstall--upgradeacryl-datahub datahub--versionNote that the Actions Framework requires a version ofacryl-datahub>= v0.8.34InstallationNext, simply install theacryl-datahub-actionspackage from PyPi:python3-mpipinstall--upgradepipwheelsetuptools python3-mpipinstall--upgradeacryl-datahub-actions datahubactionsversionConfiguring an ActionActions are configured using a YAML file, much in the same way DataHub ingestion sources are. An action configuration file consists of the followingAction Pipeline Name (Should be unique and static)Source ConfigurationsTransform + Filter ConfigurationsAction ConfigurationPipeline Options (Optional)DataHub API configs (Optional - required for select actions)With each component being independently pluggable and configurable.# 1. Required: Action Pipeline Name name: <action-pipeline-name> # 2. Required: Event Source - Where to source event from. source: type: <source-type> config: # Event Source specific configs (map) # 3a. Optional: Filter to run on events (map) filter: event_type: <filtered-event-type> event: # Filter event fields by exact-match <filtered-event-fields> # 3b. Optional: Custom Transformers to run on events (array) transform: - type: <transformer-type> config: # Transformer-specific configs (map) # 4. Required: Action - What action to take on events. action: type: <action-type> config: # Action-specific configs (map) # 5. Optional: Additional pipeline options (error handling, etc) options: retry_count: 0 # The number of times to retry an Action with the same event. (If an exception is thrown). 0 by default. failure_mode: "CONTINUE" # What to do when an event fails to be processed. Either 'CONTINUE' to make progress or 'THROW' to stop the pipeline. Either way, the failed event will be logged to a failed_events.log file. failed_events_dir: "/tmp/datahub/actions" # The directory in which to write a failed_events.log file that tracks events which fail to be processed. Defaults to "/tmp/logs/datahub/actions". # 6. Optional: DataHub API configuration datahub: server: "http://localhost:8080" # Location of DataHub API # token: <your-access-token> # Required if Metadata Service Auth enabledExample: Hello WorldAn simple configuration file for a "Hello World" action, which simply prints all events it receives, is# 1. Action Pipeline Name name: "hello_world" # 2. Event Source: Where to source event from. source: type: "kafka" config: connection: bootstrap: ${KAFKA_BOOTSTRAP_SERVER:-localhost:9092} schema_registry_url: ${SCHEMA_REGISTRY_URL:-http://localhost:8081} # 3. Action: What action to take on events. action: type: "hello_world"We can modify this configuration further to filter for specific events, by adding a "filter" block.# 1. Action Pipeline Name name: "hello_world" # 2. Event Source - Where to source event from. 
source: type: "kafka" config: connection: bootstrap: ${KAFKA_BOOTSTRAP_SERVER:-localhost:9092} schema_registry_url: ${SCHEMA_REGISTRY_URL:-http://localhost:8081} # 3. Filter - Filter events that reach the Action filter: event_type: "EntityChangeEvent_v1" event: category: "TAG" operation: "ADD" modifier: "urn:li:tag:pii" # 4. Action - What action to take on events. action: type: "hello_world"Running an ActionTo run a new Action, just use theactionsCLI commanddatahub actions -c <config.yml>Once the Action is running, you will seeAction Pipeline with name '<action-pipeline-name>' is now running.Running multiple ActionsYou can run multiple actions pipeline within the same command. Simply provide multiple config files by restating the "-c" command line argument.For example,datahub actions -c <config-1.yaml> -c <config-2.yaml>Running in debug modeSimply append the--debugflag to the CLI to run your action in debug mode.datahub actions -c <config.yaml> --debugStopping an ActionJust issue a Control-C as usual. You should see the Actions Pipeline shut down gracefully, with a small summary of processing results.Actions Pipeline with name '<action-pipeline-name' has been stopped.Supported EventsTwo event types are currently supported. Read more about them below.Entity Change Event V1Metadata Change Log V1Supported Event SourcesCurrently, the only event source that is officially supported iskafka, which polls for events via a Kafka Consumer.Kafka Event SourceSupported ActionsBy default, DataHub supports a set of standard actions plugins. These can be found inside the foldersrc/datahub-actions/plugins.Some pre-included Actions includeHello WorldExecutorDevelopmentBuild and TestNotice that we support all actions command using a separatedatahub-actionsCLI entry point. Feel free to use this during development.# Build datahub-actions module ./gradlew datahub-actions:build # Drop into virtual env cd datahub-actions && source venv/bin/activate # Start hello world action datahub-actions actions -c ../examples/hello_world.yaml # Start ingestion executor action datahub-actions actions -c ../examples/executor.yaml # Start multiple actions datahub-actions actions -c ../examples/executor.yaml -c ../examples/hello_world.yamlDeveloping a TransformerTo develop a new Transformer, check out theDeveloping a Transformerguide.Developing an ActionTo develop a new Action, check out theDeveloping an Actionguide.ContributingContributing guidelines follow those of themain DataHub project. We are accepting contributions for Actions, Transformers, and general framework improvements (tests, error handling, etc).ResourcesCheck out theoriginal announcementof the DataHub Actions Framework at the DataHub April 2022 Town Hall.LicenseApache 2.0
acryl-datahub-airflow-plugin
Datahub Airflow PluginSeethe DataHub Airflow docsfor details.
acryl-datahub-classify
datahub-classifyPredict InfoTypes forDataHub.Installationpython3 -m pip install --upgrade acryl-datahub-classifyAPIpredict_infotypesThis API populates infotype proposal(s) for each input column by using metadata, values & confidence level threshold. Following are the input and output contractAPI InputAPI expects following parameters in the outputcolumn_infos- This is a list of ColumnInfo objects. Each ColumnInfo object contains metadata (col_name, description, datatype, etc) and values of a column.confidence_level_threshold- If the infotype prediction confidence is greater than the confidence threshold then the prediction is considered as a proposal. This is the common threshold for all infotypes.global_config- This dictionary contains configuration details about all supported infotypes. Refer sectionInfotype Configurationfor more information.infotypes- This is a list of infotypes that is to be processed. This is an optional argument, if specified then it will override the default list of all supported infotypes. If user is interested in only few infotypes then this list can be specified with correct infotype names. Infotype names are case sensitive.minimum_values_threshold- Minimum number of column values required for processing. This is an optional argument, default is 50.API OutputAPI returns a list of ColumnInfo objects of length same as input ColumnInfo objects list. A populated list of Infotype proposal(s), if any, is added in the ColumnInfo object itself with a variable name asinfotype_proposals. The infotype_proposals list contains InfotypeProposal objects which has following informationinfotype- A proposed infotype name.confidence_level- Overall confidence of the infotype proposal.debug_info- confidence score of each prediction factor involved in the overall confidence score calculation. Refer sectionDebug Informationfor more information.Convention:Ifinfotype_proposalslist is non-empty then it indicates that there is at least one infotype proposal with confidence greater thanconfidence_level_threshold.Infotype ConfigurationInfotype configuration is a dictionary with all infotypes at root level key. Each infotype has following configurable parameters (value of each parameter is a dictionary)Prediction_Factors_and_Weights- This is a dictionary that specifies the weight of each prediction factor which will be used in the final confidence calculation. 
Following are the prediction factorsNameDescriptionDatatypeValuesExcludeName- optional exact match list for column names to exclude from classification for this info_typeName- regex list which is to be matched against column nameDescription- regex list which is to be matched against column descriptionDatatype- list of datatypes to be matched against column datatypeValues- this dictionary contains following informationprediction_type- values evaluation model (regex/library)regex- regex list which is to be matched against column valueslibrary- library name which is to be used to evaluate column valuesSample Infotype Configuration Dictionary{'<Infotype1>':{'Prediction_Factors_and_Weights':{'Name':0.4,'Description':0,'Datatype':0,'Values':0.6},'Name':{'regex':[<regexpatterns>]},'Description':{'regex':[<regexpatterns>]},'Datatype':{'type':[<listofdatatypes>]},'Values':{'prediction_type':'regex/library','regex':[<regexpatterns>],'library':[<libraryname>]}},'<Infotype2>':{......}}Debug InformationA debug information is associated with each infotype proposal, it provides details about confidence score from each prediction factor involved in overall confidence score calculation. This is a dictionary with following four prediction factors as keyNameDescriptionDatatypeValues{'Name':0.4,'Description':0.2,'Values':0.6,'Datatype':0.3}Supported InfotypesBelow Infotypes are supported out of the box.AgeGenderPerson Name / Full NameEmail AddressPhone NumberStreet AddressCredit-Debit Card NumberInternational Bank Account NumberVehicle Identification NumberUS Social Security NumberIpv4 AddressIpv6 AddressSwift CodeUS Driving License NumberRegex based custom infotypes are supported. Specify custom infotype configuration in format mentionedhere.AssumptionsIf value prediction factor weight is non-zero (indicating values should be used for infotype inspection) then a minimum 50 non-null column values should be present.DevelopmentSet up your Python environmentcddatahub-classify ../gradlew:datahub-classify:installDev# OR pip install -e ".[dev]"sourcevenv/bin/activateRunnning testspytesttests/--capture=no--log-cli-level=DEBUGSanity check code before committing# Assumes: pip install -e ".[dev]" and venv is activatedblacksrc/tests/ isortsrc/tests/ flake8src/tests/ mypysrc/tests/Build and Test../gradlew:datahub-classify:buildYou can also run these steps via the gradle build:../gradlew:datahub-classify:lint ../gradlew:datahub-classify:lintFix ../gradlew:datahub-classify:testQuick
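Putting the contract above together, a minimal sketch of calling predict_infotypes is shown below. This is illustrative only: the import paths, the Metadata/ColumnInfo constructors and the "Email_Address" infotype key are assumptions inferred from the parameter descriptions in this README, so check them against the package source.

```python
# Hedged sketch: module paths and constructors below are assumed, not confirmed by the README.
from datahub_classify.helper_classes import ColumnInfo, Metadata      # assumed import path
from datahub_classify.infotype_predictor import predict_infotypes     # assumed import path

# One ColumnInfo per column: metadata (name, description, datatype) plus sample values.
column = ColumnInfo(
    metadata=Metadata(meta_info={
        "Name": "customer_email",
        "Description": "Primary contact email for the customer",
        "Datatype": "str",
        "Dataset_Name": "customers",
    }),
    values=["jane.doe@example.com", "john@example.org"],  # ideally >= minimum_values_threshold values
)

# global_config follows the "Infotype Configuration" dictionary format described above.
email_config = {
    "Email_Address": {  # infotype key assumed
        "Prediction_Factors_and_Weights": {"Name": 0.4, "Description": 0.0, "Datatype": 0.0, "Values": 0.6},
        "Name": {"regex": [".*email.*"]},
        "Description": {"regex": [".*email.*"]},
        "Datatype": {"type": ["str"]},
        "Values": {"prediction_type": "regex", "regex": [r"[^@\s]+@[^@\s]+\.[^@\s]+"], "library": []},
    }
}

results = predict_infotypes(
    column_infos=[column],
    confidence_level_threshold=0.6,
    global_config=email_config,
    infotypes=["Email_Address"],   # optional subset of supported infotypes
    minimum_values_threshold=2,    # lowered here only so the toy sample above is processed
)

for col in results:
    for proposal in (col.infotype_proposals or []):
        print(proposal.infotype, proposal.confidence_level, proposal.debug_info)
```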
acryl-datahub-cloud
No description available on PyPI.
acryl-datahub-tc
Introduction to Metadata Ingestion. Find Integration Source. Integration Options: DataHub supports both push-based and pull-based metadata integration. Push-based integrations allow you to emit metadata directly from your data systems when metadata changes, while pull-based integrations allow you to "crawl" or "ingest" metadata from the data systems by connecting to them and extracting metadata in a batch or incremental-batch manner. Supporting both mechanisms means that you can integrate with all your systems in the most flexible way possible. Examples of push-based integrations include Airflow, Spark, Great Expectations and Protobuf Schemas. This allows you to get low-latency metadata integration from the "active" agents in your data ecosystem. Examples of pull-based integrations include BigQuery, Snowflake, Looker, Tableau and many others. This document describes the pull-based metadata ingestion system that is built into DataHub for easy integration with a wide variety of sources in your data stack. Getting Started. Prerequisites: before running any metadata ingestion job, you should make sure that the DataHub backend services are all running. You can either run ingestion via the UI or via the CLI. You can reference the CLI usage guide given there as you go through this page. Core Concepts. Sources: please see our Integrations page to browse our ingestion sources and filter on their features. Data systems that we are extracting metadata from are referred to as Sources. The Sources tab on the left in the sidebar shows you all the sources that are available for you to ingest metadata from. For example, we have sources for BigQuery, Looker, Tableau and many others. Metadata Ingestion Source Status: we apply a Support Status to each Metadata Source to help you understand the integration reliability at a glance. Certified Sources are well-tested and widely adopted by the DataHub Community; we expect the integration to be stable with few user-facing issues. Incubating Sources are ready for DataHub Community adoption but have not been tested for a wide variety of edge cases; we eagerly solicit feedback from the Community to strengthen the connector, and minor version changes may arise in future releases. Testing Sources are available for experimentation by DataHub Community members, but may change without notice. Sinks: sinks are destinations for metadata. When configuring ingestion for DataHub, you're likely to be sending the metadata to DataHub over either the REST (datahub-rest) or the Kafka (datahub-kafka) sink. In some cases, the File sink is also helpful to store a persistent offline copy of the metadata during debugging. The default sink that most of the ingestion systems and guides assume is the datahub-rest sink, but you should be able to adapt all of them for the other sinks as well! Recipes: a recipe is the main configuration file that puts it all together.
It tells our ingestion scripts where to pull data from (source) and where to put it (sink).:::tip Name your recipe with.dhub.yamlextension likemyrecipe.dhub.yamlto use vscode or intellij as a recipe editor with autocomplete and syntax validation.Make sure yaml plugin is installed for your editor:For vscode installRedhat's yaml pluginFor intellij installofficial yaml plugin:::Sinceacryl-datahubversion>=0.8.33.2, the default sink is assumed to be a DataHub REST endpoint:Hosted at "http://localhost:8080" or the environment variable${DATAHUB_GMS_URL}if presentWith an empty auth token or the environment variable${DATAHUB_GMS_TOKEN}if present.Here's a simple recipe that pulls metadata from MSSQL (source) and puts it into the default sink (datahub rest).# The simplest recipe that pulls metadata from MSSQL and puts it into DataHub# using the Rest API.source:type:mssqlconfig:username:sapassword:${MSSQL_PASSWORD}database:DemoData# sink section omitted as we want to use the default datahub-rest sinkRunning this recipe is as simple as:datahubingest-crecipe.dhub.yamlor if you want to override the default endpoints, you can provide the environment variables as part of the command like below:DATAHUB_GMS_URL="https://my-datahub-server:8080"DATAHUB_GMS_TOKEN="my-datahub-token"datahubingest-crecipe.dhub.yamlA number of recipes are included in theexamples/recipesdirectory. For full info and context on each source and sink, see the pages described in thetable of plugins.Note that one recipe file can only have 1 source and 1 sink. If you want multiple sources then you will need multiple recipe files.Handling sensitive information in recipesWe automatically expand environment variables in the config (e.g.${MSSQL_PASSWORD}), similar to variable substitution in GNU bash or in docker-compose files. For details, seehttps://docs.docker.com/compose/compose-file/compose-file-v2/#variable-substitution. This environment variable substitution should be used to mask sensitive information in recipe files. As long as you can get env variables securely to the ingestion process there would not be any need to store sensitive information in recipes.Basic Usage of CLI for ingestionpipinstall'acryl-datahub[datahub-rest]'# install the required plugindatahubingest-c./examples/recipes/mssql_to_datahub.dhub.ymlThe--dry-runoption of theingestcommand performs all of the ingestion steps, except writing to the sink. This is useful to validate that the ingestion recipe is producing the desired metadata events before ingesting them into datahub.# Dry rundatahubingest-c./examples/recipes/example_to_datahub_rest.dhub.yml--dry-run# Short-formdatahubingest-c./examples/recipes/example_to_datahub_rest.dhub.yml-nThe--previewoption of theingestcommand performs all of the ingestion steps, but limits the processing to only the first 10 workunits produced by the source. This option helps with quick end-to-end smoke testing of the ingestion recipe.# Previewdatahubingest-c./examples/recipes/example_to_datahub_rest.dhub.yml--preview# Preview with dry-rundatahubingest-c./examples/recipes/example_to_datahub_rest.dhub.yml-n--previewBy default--previewcreates 10 workunits. But if you wish to try producing more workunits you can use another option--preview-workunits# Preview 20 workunits without sending anything to sinkdatahubingest-c./examples/recipes/example_to_datahub_rest.dhub.yml-n--preview--preview-workunits=20ReportingBy default, the cli sends an ingestion report to DataHub, which allows you to see the result of all cli-based ingestion in the UI. 
This can be turned off with the--no-default-reportflag.# Running ingestion with reporting to DataHub turned offdatahubingest-c./examples/recipes/example_to_datahub_rest.dhub.yaml--no-default-reportThe reports include the recipe that was used for ingestion. This can be turned off by adding an additional section to the ingestion recipe.source:# source configssink:# sink configs# Add configuration for the datahub reporterreporting:-type:datahubconfig:report_recipe:falseTransformationsIf you'd like to modify data before it reaches the ingestion sinks – for instance, adding additional owners or tags – you can use a transformer to write your own module and integrate it with DataHub. Transformers require extending the recipe with a new section to describe the transformers that you want to run.For example, a pipeline that ingests metadata from MSSQL and applies a default "important" tag to all datasets is described below:# A recipe to ingest metadata from MSSQL and apply default tags to all tablessource:type:mssqlconfig:username:sapassword:${MSSQL_PASSWORD}database:DemoDatatransformers:# an array of transformers applied sequentially-type:simple_add_dataset_tagsconfig:tag_urns:-"urn:li:tag:Important"# default sink, no config neededCheck out thetransformers guideto learn more about how you can create really flexible pipelines for processing metadata using Transformers!Using as a library (SDK)In some cases, you might want to construct Metadata events directly and use programmatic ways to emit that metadata to DataHub. In this case, take a look at thePython emitterand theJava emitterlibraries which can be called from your own code.Programmatic PipelineIn some cases, you might want to configure and run a pipeline entirely from within your custom Python script. Here is an example of how to do it.programmatic_pipeline.py- a basic mysql to REST programmatic pipeline.DevelopingSee the guides ondeveloping,adding a sourceandusing transformers.CompatibilityDataHub server uses a 3 digit versioning scheme, while the CLI uses a 4 digit scheme. For example, if you're using DataHub server version 0.10.0, you should use CLI version 0.10.0.x, where x is a patch version. We do this because we do CLI releases at a much higher frequency than server releases, usually every few days vs twice a month.For ingestion sources, any breaking changes will be highlighted in therelease notes. When fields are deprecated or otherwise changed, we will try to maintain backwards compatibility for two server releases, which is about 4-6 weeks. The CLI will also print warnings whenever deprecated options are used.
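The "Programmatic Pipeline" section above references an example script (programmatic_pipeline.py) without inlining it. Below is a minimal hedged sketch of such a script; it assumes the Pipeline.create API from the open-source acryl-datahub package and reuses the mysql and datahub-rest config keys from the recipes shown earlier, so verify the exact field names against the linked example.

```python
# Minimal sketch of a programmatic ingestion pipeline (mysql -> DataHub REST).
# Config keys mirror the YAML recipe format above; API assumed to match acryl-datahub.
from datahub.ingestion.run.pipeline import Pipeline

pipeline = Pipeline.create(
    {
        "source": {
            "type": "mysql",
            "config": {
                "username": "sa",
                "password": "${MSSQL_PASSWORD}",   # environment variable substitution, as in recipes
                "database": "DemoData",
                "host_port": "localhost:3306",
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://localhost:8080"},
        },
    }
)

pipeline.run()                   # execute the ingestion
pipeline.raise_from_status()     # fail loudly if the run reported errors
pipeline.pretty_print_summary()  # print a human-readable run summary
```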
acryl-executor
Acryl ExecutorRemote execution agent used for running DataHub tasks, such as ingestion powered through the UI.python3-mvenv--upgrade-depsvenvsourcevenv/bin/activate pip3install.NotesBy default, this library comes with a set of default task implementations:RUN_INGEST TaskSubprocessProcessIngestionTask- Executes a metadata ingestion run by spinning off a subprocess. Supports ingesting from a particular version, and with a specific plugin (based on the platform type requested)InMemoryIngestionTask- Executes a metadata ingestion run using the datahub library in the same process. Not great for production where we can see strange dependency conflicts when certain packages are executed together. Use this for testing, as it has no ability to check out a specific DataHub Metadata Ingestion plugin.
acrylic
Have you ever wanted a simple and intuitive way to work with colors in python? Then this library is for you!acrylicis a python package that you can use to manage colors, convert between different color formats, and work with color schemes and palettes.Currently supported color formats are:rgb,hsl,hsv,ryb,hex,nameSmall example:fromacrylicimportColor,RANDOM# Define a color using rgb valuesorange=Color(rgb=[247,177,79])# Use saturation from that color to create a new random color with hsvrandom_color=Color(hsv=[RANDOM,orange.hsv.s,98])# Print the random color's value in hexprint(random_color.hex)# Output: '#50FAF0'check outmore examplesbelow.acrylicalso has support forcolor schemes, support for more color schemes and functions to generate color palettes will be added in the future.complementary=cyan.scheme(Schemes.COMPLEMENTARY)shades=cyan.scheme(Schemes.SHADES)color_palette=[cyan,*complementary,*shades]More about color schemeshereHow to Installacryliccan be installed using pip:pipinstallacrylicIt has no dependencies and works with Python >=3.6DocumentationDefining ColorsYou can create a new color like this:fromacrylicimportColorcyan=Color(rgb=[83,237,229])The same syntax can be used to give input in any of the supported color formats. Currently supported formats arergb,hsv,hsl,hexandryb. Example:color=Color(rgb=[127,255,212])color=Color(hsl=[160,100,75])color=Color(hsv=[160,50,100])color=Color(hex='#7fffd4')color=Color(name='aquamarine')color=Color(ryb=[0,77,128])All values forrgbandrybshould be between0-255The value of hue forhsvandhslshould be between0.0-360.0and the other two components should be between0.0-100.0.Values forhexshould be a string representing 6-digit hex numberValues fornameshould be a string representing a valid CSS3 color nameConverting between color formatsAny instance ofColor()is automatically converted to every supported color format when its created, so there is no need to manually convert from one format to another. For any color, no matter how it was created, you can get its value in any format like this:cyan=Color(rgb=[83,237,229])print(cyan.rgb)print(cyan.hsv)print(cyan.hsl)print(cyan.hex)print(cyan.name)print(cyan.ryb)This makes converting from sayrgbtohslas easy as doing:hsl_values=Color(rgb=[83,237,229]).hslAccessing values of colorsWhen accessing these attributes for a color, it returns the values back as anamedtupleinstance. This behaves exactly as a normaltuplewould, but has an added benefit that its values can be accessed directly via the dot notation. Example:>>>cyan=Color(rgb=[83,237,229])>>>cyan.rgb# returns a namedtuple containing the valuesRgb(r=83,g=237,b=229)>>>[xforxincyan.rgb]# can be iterated over like a normal tuple[83,237,229]>>>cyan.rgb[1]# items can be accessed via index237>>>r,g,b=cyan.rgb# items can be unpacked>>>cyan.rgb.r,cyan.rgb.g# items can also be accessed via their name(83,237)Additional ways to define a colorIn addition to the default way to create a color,Color()offers additional methods that would enhance your ability to create colors.For example, to create a random color:fromacrylicimportColor,RANDOMrandom_color=Color(rgb=RANDOM)Creating a color with a random hue, but fixed saturation and value:random_hue=Color(hsv=[RANDOM,65,95])(for aesthetically pleasing random colors, checkexample 2below)Any of the components can be given as a list of 2 values like[a, b]instead of a single value. When given a range, a valuea <= value <= bwill randomly be picked for that component. 
For example to create a cyan color where saturation is randomly picked between 30 to 70:random_cyan=Color(hsv=[176,(30,70),95])Giving both values for a range isn't required, you can use this to just set the upper or lower limit by setting the other half toRANDOM:random_cyan=Color(hsv=[176,(RANDOM,70),95])random_cyan=Color(hsv=[176,(30,RANDOM),95])Note: Immutability and HashibilityAll instances of colors are immutable, meaning their values can't be changed once they are defined. This means that each instance ofColor()represents a specific color and will always represent that color. If you feel the need to modify a color, this can easily be done as:old_color=Color(rgb=[83,237,229])# change hue, but not saturation or valuenew_color=Color(hsv=[230,old_color.hsv.s,old_color.hsv.v])All instances of colors are also hashable. They can be safely used as keys fordict()s and can be added toset()to efficiently find unique colors or to test membership.>>>colors={Color(hex='#7fffd4'):'Can be used in dict() keys!'}>>>Color(name='aquamarine')incolorsTrue>>>colors[Color(rgb=[127,255,212])]'Can be used in dict() keys!'As a result of colors being immutable and hashable, colors that represent the sameRGBvalues will always be unambiguously equal to each other. This prevents a lot of bugs that can randomly appear when working with floathsv/hslvalues and removes the inconsistencies in the conversion algorithm that converts betweenrgbandhsv/hsl. An example that demonstrates this:>>>Color(hsl=[236.94,9.29,84.54])==Color(hsl=[240.0,8.86,84.51])TrueThis results inTruebecause both of thesehslvalues map to the samergbvalue(212, 212, 219)and thus represent the same color.Color schemesTheColor()class also provides some convenience functions to work with color schemes. In the future, these would also be used to build color palettes. For now, the corresponding colors from a color scheme for a specific color can be generated like this:fromacrylicimportColor,Schemescyan=Color(rgb=[83,237,229])complementary_color=cyan.scheme(Schemes.COMPLEMENTARY)cyan_triads=cyan.scheme(Schemes.TRIADIC)cyan_shades=cyan.scheme(Schemes.SHADES)Taking inspiration from traditional art where most of these color schemes originated from, these are calculated using theryb(red-yellow-blue) color wheel by default. 
To use the rgb (red-green-blue) color wheel instead you can pass in_rgb=True to the .scheme() function. For a list of all the available color schemes and their explanations, check this page.

Example Use Cases

Create a color using RGB, use its saturation to create a new color, and print its value as a hex string:

orange = Color(rgb=[247, 177, 79])
cyan = Color(hsv=[176.5, orange.hsv.s, 98])
print(cyan.hex)  # Output: '#50FAF0'

Generating random aesthetically pleasing colors, which for example can be used to color the default profile pictures for users of an app:

def aesthetic_color():
    return Color(hsl=[RANDOM, (65, RANDOM), (60, 75)])

(If you have ever tried generating random colors by randomizing rgb values, you would know how badly that works.)

Finding unique colors:

test_set = set([Color(rgb=[61, 245, 245]),
                Color(hex='#3DF5F5'),
                Color(hsl=[180, 89.8, 60]),
                Color(hsl=[179.8, 90.2, 60.1])])
print(test_set)  # Output: {Color(rgb=(61, 245, 245))}

The set contains only one color as all those colors map to the same rgb values.

Sorting all the pixels in an image horizontally by hue:

from acrylic import Color
from PIL import Image

original_image = Image.open('test.jpg')
sorted_image = original_image.copy()
pixels = original_image.load()

for y in range(sorted_image.height):
    row = [Color(rgb=pixels[n, y]) for n in range(sorted_image.width)]
    sorted_row = sorted(row, key=lambda c: c.hsl.h)
    for x, c in enumerate(sorted_row):
        sorted_image.putpixel((x, y), c.rgb)

This example also illustrates how easy it is to integrate acrylic with other libraries and seamlessly switch between rgb and hsl.

Contributions: All contributions to acrylic are welcome and appreciated! Ways in which you can contribute are: report an issue (here), raise a pull request (here), request new features, spread the word about acrylic!

License: MIT License. Copyright (c) 2020 - 2022 Arsh. License.txt
acryl-iceberg-legacy
Iceberg Python. Iceberg is a Python library for programmatic access to iceberg table metadata as well as data access. The intention is to provide a functional subset of the java library. Getting Started: Iceberg python is currently in development; for development and testing purposes the best way to install the library is to perform the following steps:

git clone https://github.com/apache/iceberg.git
cd iceberg/python_legacy
pip install -e .

Testing: testing is done using tox. The config can be found in tox.ini within the python directory of the iceberg project.

# simply run tox from within the python dir
tox

Get in Touch: Email: [email protected], or file a github incident.
acryl-PyHive
PyHivePyHive is a collection of PythonDB-APIandSQLAlchemyinterfaces forPrestoandHive.UsageDB-APIfrompyhiveimportpresto# or import hive or import trinocursor=presto.connect('localhost').cursor()cursor.execute('SELECT * FROM my_awesome_data LIMIT 10')printcursor.fetchone()printcursor.fetchall()DB-API (asynchronous)frompyhiveimporthivefromTCLIService.ttypesimportTOperationStatecursor=hive.connect('localhost').cursor()cursor.execute('SELECT * FROM my_awesome_data LIMIT 10',async=True)status=cursor.poll().operationStatewhilestatusin(TOperationState.INITIALIZED_STATE,TOperationState.RUNNING_STATE):logs=cursor.fetch_logs()formessageinlogs:printmessage# If needed, an asynchronous query can be cancelled at any time with:# cursor.cancel()status=cursor.poll().operationStateprintcursor.fetchall()In Python 3.7asyncbecame a keyword; you can useasync_instead:cursor.execute('SELECT * FROM my_awesome_data LIMIT 10',async_=True)SQLAlchemyFirst install this package to register it with SQLAlchemy (seesetup.py).fromsqlalchemyimport*fromsqlalchemy.engineimportcreate_enginefromsqlalchemy.schemaimport*# Prestoengine=create_engine('presto://localhost:8080/hive/default')# Trinoengine=create_engine('trino://localhost:8080/hive/default')# Hiveengine=create_engine('hive://localhost:10000/default')logs=Table('my_awesome_data',MetaData(bind=engine),autoload=True)printselect([func.count('*')],from_obj=logs).scalar()# Hive + HTTPS + LDAP or basic Authengine=create_engine('hive+https://username:password@localhost:10000/')logs=Table('my_awesome_data',MetaData(bind=engine),autoload=True)printselect([func.count('*')],from_obj=logs).scalar()Note: query generation functionality is not exhaustive or fully tested, but there should be no problem with raw SQL.Passing session configuration# DB-APIhive.connect('localhost',configuration={'hive.exec.reducers.max':'123'})presto.connect('localhost',session_props={'query_max_run_time':'1234m'})trino.connect('localhost',session_props={'query_max_run_time':'1234m'})# SQLAlchemycreate_engine('presto://user@host:443/hive',connect_args={'protocol':'https','session_props':{'query_max_run_time':'1234m'}})create_engine('trino://user@host:443/hive',connect_args={'protocol':'https','session_props':{'query_max_run_time':'1234m'}})create_engine('hive://user@host:10000/database',connect_args={'configuration':{'hive.exec.reducers.max':'123'}},)# SQLAlchemy with LDAPcreate_engine('hive://user:password@host:10000/database',connect_args={'auth':'LDAP'},)RequirementsInstall usingpip install 'pyhive[hive]'for the Hive interface andpip install 'pyhive[presto]'for the Presto interface.pip install 'pyhive[trino]'for the Trino interfacePyHive works withPython 2.7 / Python 3For Presto: Presto installFor Trino: Trino installFor Hive:HiveServer2daemonChangelogSeehttps://github.com/dropbox/PyHive/releases.ContributingPlease fill out the Dropbox Contributor License Agreement athttps://opensource.dropbox.com/cla/and note this in your pull request.Changes must come with tests, with the exception of trivial things like fixing comments. See .travis.yml for the test environment setup.Notes on project scope:This project is intended to be a minimal Hive/Presto client that does that one thing and nothing else. Features that can be implemented on top of PyHive, such integration with your favorite data analysis library, are likely out of scope.We prefer having a small number of generic features over a large number of specialized, inflexible features. 
For example, the Presto code takes an arbitraryrequests_sessionargument for customizing HTTP calls, as opposed to having a separate parameter/branch for eachrequestsoption.TestingRun the following in an environment with Hive/Presto:./scripts/make_test_tables.sh virtualenv --no-site-packages env source env/bin/activate pip install -e . pip install -r dev_requirements.txt py.testWARNING: This drops/creates tables namedone_row,one_row_complex, andmany_rows, plus a database calledpyhive_test_database.Updating TCLIServiceThe TCLIService module is autogenerated using aTCLIService.thriftfile. To update it, thegenerate.pyfile can be used:python generate.py <TCLIServiceURL>. When left blank, the version for Hive 2.3 will be downloaded.
acryl-sqlglot
SQLGlot is a no-dependency SQL parser, transpiler, optimizer, and engine. It can be used to format SQL or translate between20 different dialectslikeDuckDB,Presto/Trino,Spark/Databricks,Snowflake, andBigQuery. It aims to read a wide variety of SQL inputs and output syntactically and semantically correct SQL in the targeted dialects.It is a very comprehensive generic SQL parser with a robusttest suite. It is also quiteperformant, while being written purely in Python.You can easilycustomizethe parser,analyzequeries, traverse expression trees, and programmaticallybuildSQL.Syntaxerrorsare highlighted and dialect incompatibilities can warn or raise depending on configurations. However, it should be noted that SQL validation is not SQLGlot’s goal, so some syntax errors may go unnoticed.Learn more about SQLGlot in the APIdocumentationand the expression treeprimer.Contributions are very welcome in SQLGlot; read thecontribution guideto get started!Table of ContentsInstallVersioningGet in TouchFAQExamplesFormatting and TranspilingMetadataParser ErrorsUnsupported ErrorsBuild and Modify SQLSQL OptimizerAST IntrospectionAST DiffCustom DialectsSQL ExecutionUsed ByDocumentationRun Tests and LintBenchmarksOptional DependenciesInstallFrom PyPI:pip3install"sqlglot[rs]"# Without Rust tokenizer (slower):# pip3 install sqlglotOr with a local checkout:make installRequirements for development (optional):make install-devVersioningGiven a version numberMAJOR.MINOR.PATCH, SQLGlot uses the following versioning strategy:ThePATCHversion is incremented when there are backwards-compatible fixes or feature additions.TheMINORversion is incremented when there are backwards-incompatible fixes or feature additions.TheMAJORversion is incremented when there are significant backwards-incompatible fixes or feature additions.Get in TouchWe'd love to hear from you. Join our communitySlack channel!FAQI tried to parse SQL that should be valid but it failed, why did that happen?You need to specify the dialect to read the SQL properly, by default it is SQLGlot's dialect which is designed to be a superset of all dialectsparse_one(sql, dialect="spark"). If you tried specifying the dialect and it still doesn't work, please file an issue.I tried to output SQL but it's not in the correct dialect!You need to specify the dialect to write the sql properly, by default it is in SQLGlot's dialectparse_one(sql, dialect="spark").sql(dialect="spark").I tried to parse invalid SQL and it should raise an error but it worked! Why didn't it validate my SQL.SQLGlot is not a validator and designed to be very forgiving, handling things like trailing commas.ExamplesFormatting and TranspilingEasily translate from one dialect to another. 
For example, date/time functions vary between dialects and can be hard to deal with:importsqlglotsqlglot.transpile("SELECT EPOCH_MS(1618088028295)",read="duckdb",write="hive")[0]'SELECT FROM_UNIXTIME(1618088028295 / 1000)'SQLGlot can even translate custom time formats:importsqlglotsqlglot.transpile("SELECT STRFTIME(x, '%y-%-m-%S')",read="duckdb",write="hive")[0]"SELECT DATE_FORMAT(x, 'yy-M-ss')"As another example, let's suppose that we want to read in a SQL query that contains a CTE and a cast toREAL, and then transpile it to Spark, which uses backticks for identifiers andFLOATinstead ofREAL:importsqlglotsql="""WITH baz AS (SELECT a, c FROM foo WHERE a = 1) SELECT f.a, b.b, baz.c, CAST("b"."a" AS REAL) d FROM foo f JOIN bar b ON f.a = b.a LEFT JOIN baz ON f.a = baz.a"""print(sqlglot.transpile(sql,write="spark",identify=True,pretty=True)[0])WITH`baz`AS(SELECT`a`,`c`FROM`foo`WHERE`a`=1)SELECT`f`.`a`,`b`.`b`,`baz`.`c`,CAST(`b`.`a`ASFLOAT)AS`d`FROM`foo`AS`f`JOIN`bar`AS`b`ON`f`.`a`=`b`.`a`LEFTJOIN`baz`ON`f`.`a`=`baz`.`a`Comments are also preserved on a best-effort basis when transpiling SQL code:sql="""/* multilinecomment*/SELECTtbl.cola /* comment 1 */ + tbl.colb /* comment 2 */,CAST(x AS INT), # comment 3y -- comment 4FROMbar /* comment 5 */,tbl # comment 6"""print(sqlglot.transpile(sql,read='mysql',pretty=True)[0])/* multilinecomment*/SELECTtbl.cola/* comment 1 */+tbl.colb/* comment 2 */,CAST(xASINT),/* comment 3 */y/* comment 4 */FROMbar/* comment 5 */,tbl/* comment 6 */MetadataYou can explore SQL with expression helpers to do things like find columns and tables:fromsqlglotimportparse_one,exp# print all column references (a and b)forcolumninparse_one("SELECT a, b + 1 AS c FROM d").find_all(exp.Column):print(column.alias_or_name)# find all projections in select statements (a and c)forselectinparse_one("SELECT a, b + 1 AS c FROM d").find_all(exp.Select):forprojectioninselect.expressions:print(projection.alias_or_name)# find all tables (x, y, z)fortableinparse_one("SELECT * FROM x JOIN y JOIN z").find_all(exp.Table):print(table.name)Read theast primerto learn more about SQLGlot's internals.Parser ErrorsWhen the parser detects an error in the syntax, it raises a ParseError:importsqlglotsqlglot.transpile("SELECT foo( FROM bar")sqlglot.errors.ParseError: Expecting ). Line 1, Col: 13. 
select foo( FROM bar ~~~~Structured syntax errors are accessible for programmatic use:importsqlglottry:sqlglot.transpile("SELECT foo( FROM bar")exceptsqlglot.errors.ParseErrorase:print(e.errors)[{'description':'Expecting )','line':1,'col':16,'start_context':'SELECT foo( ','highlight':'FROM','end_context':' bar','into_expression':None,}]Unsupported ErrorsPrestoAPPROX_DISTINCTsupports the accuracy argument which is not supported in Hive:importsqlglotsqlglot.transpile("SELECT APPROX_DISTINCT(a, 0.1) FROM foo",read="presto",write="hive")APPROX_COUNT_DISTINCTdoesnotsupportaccuracy'SELECT APPROX_COUNT_DISTINCT(a) FROM foo'Build and Modify SQLSQLGlot supports incrementally building sql expressions:fromsqlglotimportselect,conditionwhere=condition("x=1").and_("y=1")select("*").from_("y").where(where).sql()'SELECT * FROM y WHERE x = 1 AND y = 1'You can also modify a parsed tree:fromsqlglotimportparse_oneparse_one("SELECT x FROM y").from_("z").sql()'SELECT x FROM z'There is also a way to recursively transform the parsed tree by applying a mapping function to each tree node:fromsqlglotimportexp,parse_oneexpression_tree=parse_one("SELECT a FROM x")deftransformer(node):ifisinstance(node,exp.Column)andnode.name=="a":returnparse_one("FUN(a)")returnnodetransformed_tree=expression_tree.transform(transformer)transformed_tree.sql()'SELECT FUN(a) FROM x'SQL OptimizerSQLGlot can rewrite queries into an "optimized" form. It performs a variety oftechniquesto create a new canonical AST. This AST can be used to standardize queries or provide the foundations for implementing an actual engine. For example:importsqlglotfromsqlglot.optimizerimportoptimizeprint(optimize(sqlglot.parse_one("""SELECT A OR (B OR (C AND D))FROM xWHERE Z = date '2021-01-01' + INTERVAL '1' month OR 1 = 0"""),schema={"x":{"A":"INT","B":"INT","C":"INT","D":"INT","Z":"STRING"}}).sql(pretty=True))SELECT("x"."a"<>0OR"x"."b"<>0OR"x"."c"<>0)AND("x"."a"<>0OR"x"."b"<>0OR"x"."d"<>0)AS"_col_0"FROM"x"AS"x"WHERECAST("x"."z"ASDATE)=CAST('2021-02-01'ASDATE)AST IntrospectionYou can see the AST version of the sql by callingrepr:fromsqlglotimportparse_oneprint(repr(parse_one("SELECT a + 1 AS z")))Select(expressions=[Alias(this=Add(this=Column(this=Identifier(this=a,quoted=False)),expression=Literal(this=1,is_string=False)),alias=Identifier(this=z,quoted=False))])AST DiffSQLGlot can calculate the difference between two expressions and output changes in a form of a sequence of actions needed to transform a source expression into a target one:fromsqlglotimportdiff,parse_onediff(parse_one("SELECT a + b, c, d"),parse_one("SELECT c, a - b, d"))[Remove(expression=Add(this=Column(this=Identifier(this=a,quoted=False)),expression=Column(this=Identifier(this=b,quoted=False)))),Insert(expression=Sub(this=Column(this=Identifier(this=a,quoted=False)),expression=Column(this=Identifier(this=b,quoted=False)))),Keep(source=Identifier(this=d,quoted=False),target=Identifier(this=d,quoted=False)),...]See also:Semantic Diff for SQL.Custom DialectsDialectscan be added by 
subclassingDialect:fromsqlglotimportexpfromsqlglot.dialects.dialectimportDialectfromsqlglot.generatorimportGeneratorfromsqlglot.tokensimportTokenizer,TokenTypeclassCustom(Dialect):classTokenizer(Tokenizer):QUOTES=["'",'"']IDENTIFIERS=["`"]KEYWORDS={**Tokenizer.KEYWORDS,"INT64":TokenType.BIGINT,"FLOAT64":TokenType.DOUBLE,}classGenerator(Generator):TRANSFORMS={exp.Array:lambdaself,e:f"[{self.expressions(e)}]"}TYPE_MAPPING={exp.DataType.Type.TINYINT:"INT64",exp.DataType.Type.SMALLINT:"INT64",exp.DataType.Type.INT:"INT64",exp.DataType.Type.BIGINT:"INT64",exp.DataType.Type.DECIMAL:"NUMERIC",exp.DataType.Type.FLOAT:"FLOAT64",exp.DataType.Type.DOUBLE:"FLOAT64",exp.DataType.Type.BOOLEAN:"BOOL",exp.DataType.Type.TEXT:"STRING",}print(Dialect["custom"])<class '__main__.Custom'>SQL ExecutionOne can even interpret SQL queries using SQLGlot, where the tables are represented as Python dictionaries. Although the engine is not very fast (it's not supposed to be) and is in a relatively early stage of development, it can be useful for unit testing and running SQL natively across Python objects. Additionally, the foundation can be easily integrated with fast compute kernels (arrow, pandas). Below is an example showcasing the execution of a SELECT expression that involves aggregations and JOINs:fromsqlglot.executorimportexecutetables={"sushi":[{"id":1,"price":1.0},{"id":2,"price":2.0},{"id":3,"price":3.0},],"order_items":[{"sushi_id":1,"order_id":1},{"sushi_id":1,"order_id":1},{"sushi_id":2,"order_id":1},{"sushi_id":3,"order_id":2},],"orders":[{"id":1,"user_id":1},{"id":2,"user_id":2},],}execute("""SELECTo.user_id,SUM(s.price) AS priceFROM orders oJOIN order_items iON o.id = i.order_idJOIN sushi sON i.sushi_id = s.idGROUP BY o.user_id""",tables=tables)user_idprice14.023.0See also:Writing a Python SQL engine from scratch.Used BySQLMeshFugueibismysql-mimicQuerybookQuokkaSplinkDocumentationSQLGlot usespdocto serve its API documentation.A hosted version is on theSQLGlot website, or you can build locally with:make docs-serveRun Tests and Lintmake style # Only linter checks make unit # Only unit tests make check # Full test suite & linter checksBenchmarksBenchmarksrun on Python 3.10.12 in seconds.Querysqlglotsqlglotrssqlfluffsqltreesqlparsemoz_sql_parsersqloxidetpch0.00944 (1.0)0.00590 (0.625)0.32116 (33.98)0.00693 (0.734)0.02858 (3.025)0.03337 (3.532)0.00073 (0.077)short0.00065 (1.0)0.00044 (0.687)0.03511 (53.82)0.00049 (0.759)0.00163 (2.506)0.00234 (3.601)0.00005 (0.073)long0.00889 (1.0)0.00572 (0.643)0.36982 (41.56)0.00614 (0.690)0.02530 (2.844)0.02931 (3.294)0.00059 (0.066)crazy0.02918 (1.0)0.01991 (0.682)1.88695 (64.66)0.02003 (0.686)7.46894 (255.9)0.64994 (22.27)0.00327 (0.112)Optional DependenciesSQLGlot usesdateutilto simplify literal timedelta expressions. The optimizer will not simplify expressions like the following if the module cannot be found:x+interval'1'month
acryo
acryoacryois an extensible cryo-EM/ET toolkit for Python.The purpose of this library is to make data analysis of cryo-EM/ET safer, efficient, reproducible and customizable for everyone. Scientists can avoid the error-prone CLI-based data handling, such as writing out the results to the files every time and manage all the result just by the file names.📘 DocumentationInstallUse pippipinstallacryo-UFrom sourcegitclonegit+https://github.com/hanjinliu/acryo.gitcdacryo pipinstall-e.FeaturesOut-of-core and parallel processing during subtomogram averaging/alignment to make full use of CPU.Extensible and ready-to-use alignment models.Manage subtomogram loading tasks from single or multiple tomograms in the same API.Tomogram and tilt series simulation.Masked PCA clustering.Code SnippetimportpolarsasplfromacryoimportSubtomogramLoader,Molecules# acryo objectsfromacryo.tiltimportsingle_axis# missing wedge modelfromacryo.pipeimportsoft_otsu# data input pipelines# construct a loaderloader=SubtomogramLoader.imread("path/to/tomogram.mrc",molecules=Molecules.from_csv("path/to/molecules.csv"),)# filter out bad alignment in polars wayloader_filt=loader.filter(pl.col("score")>0.7)# averagingavg=loader_filt.average(output_shape=(48,48,48))# alignmentaligned_loader=loader.align(template=avg,# use the average as templatemask=soft_otsu(sigma=2,radius=2),# apply soft-Otsu to template to make the masktilt=single_axis((-45,45),axis="y"),# range of tilt series degrees.cutoff=0.5,# lowpass filtering cutoffmax_shifts=(4,4,4),# search space limits)
aCrypt
aCrypt - Python. Ciphering made easy.

Features: Encode a string into numbers using a secret key. Decode encoded strings into their original form using a secret key. Generate a valid aCrypt key.

Usage | Params | Return Value
create_key() | None: None | int: key
cipher() | str: message, int: key | str: ciphered
decipher() | str: message, int: key | str: deciphered

Examples: Let's say you have private JSON data that you need to encode:

# lets imagine this is the data
data = {"username": "ilovecats", "password": "meow123"}

Of course though, writing this un-encoded data can be very unsafe. So the only way to protect it is to cipher it.

from json import dumps, loads
from os import getenv

data = {"username": "ilovecats", "password": "meow123"}

# turn the JSON into a string we can encode
data = dumps(data)

# using a key we can store in our environment variables, we can cipher it
encoded_data = cipher(data, getenv("CIPHER_KEY"))
# you can also keep the key in a variable, though this is unsafe as users can just use the key

# voila! The data has been ciphered:
print(encoded_data)

And to decipher the data and use it, you can do the reverse:

from json import dumps, loads
from os import getenv

# decipher the data
decoded_data = decipher(encoded_data, getenv("CIPHER_KEY"))

# turn the string back into json data
data = loads(decoded_data)

# now you can use it!
print("Your username is " + data["username"] + "!")

Credits: Thank you to the repl.it community for providing such amazing services for free. Thank you to Atticus Kuhn for pointing out safety concerns on the project. Thank you to AmazingMech2418 (https://repl.it/@AmazingMech2418), for showing me the world of cryptography. Thank you StealthHydra179 (https://repl.it/@StealthHydra179), for being the only person who cared about programming in my school. Thank you Giothecoder (https://repl.it/@Giothecoder), for being there when I needed you most.

Change Log: 0.0.1 - Cipher was added, deciphering was unfinished. 0.0.2 - Deciphering finished with lots of bugs. 0.0.3 - Atticus Kuhn pointed out a safety bug, and thus it was patched. 0.0.4 - Bug fixes. 0.0.5 - Added Credits. 0.0.6 - AmazingMech2418 pointed out a huge safety feature that should be added. 0.0.7 - Bug fixes. 0.0.8 - Test Version. 0.0.9 - Test Version. 0.1.0 - Update 0.0.6 was revisited and implemented. 0.1.1 - Added changelog. 0.1.3 - Bug fixes. 0.1.4 - Made key generation more efficient. 0.1.5 - Added examples.
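The table above lists create_key(), but the examples never show it. A minimal round-trip sketch is below; the import line is an assumption (this README never shows where cipher, decipher and create_key are imported from), so check the package's actual module name.

```python
# Sketch only: the import path is assumed, as the README omits it.
from aCrypt import create_key, cipher, decipher  # module name assumed

key = create_key()                # generate a valid aCrypt key (an int)
secret = cipher("meow123", key)   # encode the string into numbers
original = decipher(secret, key)  # decode it back with the same key

assert original == "meow123"
print(key, secret, original)
```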
acs
NOTE: NOT FOR PRODUCTION USE. Please note these scripts are intended to allow experimentation with Azure Container Service. They are not intended for production use. A set of convenience scripts for creating and testing ACS clusters. These scripts can also be helpful in working out how to use the REST API interfaces for managing applications on an ACS cluster.
# Usage
See the [documentation](http://rgardler.github.io/acs-cli).
# Development
## Prerequisites
Python 3: apt-get install python
[PIP](https://pip.pypa.io/en/stable/installing/)
Azure CLI installed and configured to access the test subscription
* install Node and NPM
* sudo npm install azure-cli -g
## Preparing
To install all libraries and development dependencies:
` sudo pip install -e . `
` sudo pip install -e .[test] `
## General Use
You can use `acs --help` for basic help, or see the [documentation](http://rgardler.github.com/acs-cli).
# Developing
## Adding a command
To add a top level command representing a new feature follow these steps (in this example the new command is called Foo; a sketch of what foo.py might look like appears below):
Add the command `foo` and its description to the "Commands" section of the docstring for acs/cli.py
Copy acs/commands/command.tmpl to acs/commands/foo.py
* Add the subcommands and options to the docstring of the foo.py file
* Implement each command in a method using the same name as the command
Add the foo.py import to acs/commands/__init__.py
Copy tests/command/test_command.tmpl to test/command/test_foo.py
* Implement the tests
Run the tests with `python setup.py test` and iterate as necessary
Install the package with `python setup.py install`
## Adding a subcommand
Subcommands are applied to commands; to add a subcommand do the following:
Add the subcommand to the docstring of the relevant command class (e.g. foo.bar)
Add a method with the same name as the subcommand
Add a test
Run the tests with `python setup.py test` and iterate as necessary
Install the package with `python setup.py install`
## Testing
Run tests using [py.test](http://pytest.org/latest) and [coverage](https://pypi.python.org/pypi/pytest-cov):
` python setup.py test `
Note, by default this does not run the slow tests (like creating the cluster and installing features). You must therefore first have run the full suite of tests at least once. You can do this with:
` py.test --runslow `
## Releasing
Cut a release and publish to the [Python Package Index](https://pypi.python.org/pypi): install [twine](http://pypi.python.org/pypi/twine) and then run:
` python setup.py sdist bdist_wheel twine upload dist/* `
This will build both a source tarball and a wheel build, which will run on all platforms.
### Updating Documentation
To build and publish the documentation:
` cd docs make gh-pages cd .. `
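A hedged sketch of the command module produced by the "Adding a command" steps is shown here. The real starting point is acs/commands/command.tmpl; the docstring layout, class name and method shapes below are assumptions for illustration only.

```python
# acs/commands/foo.py -- hypothetical sketch following the "Adding a command" steps above.
# The authoritative template is acs/commands/command.tmpl; shapes here are assumptions.
"""Operations on the imaginary Foo feature.

Usage:
  foo bar [--help]
  foo baz [--help]
"""


class Foo(object):
    """Implements the `acs foo` command and its subcommands."""

    def bar(self):
        # one method per subcommand, named after the subcommand
        print("running `acs foo bar` against the cluster")

    def baz(self):
        print("running `acs foo baz` against the cluster")
```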
acsaver
acfunsdk - AcSaver. acfunsdk is an unofficial Python library for the AcFun danmaku video site. acsaver is an add-on component of acfunsdk that provides offline saving of site content. ‼ Requires ffmpeg, which is mainly used for downloading videos; it is recommended to download it from the official site https://ffmpeg.org/download.html. The ffmpeg and ffprobe executables need to be added to the environment variables (PATH) or copied to the directory the tool is run from. Dependencies (listed in requirements.txt): acfunsdk>=0.9.5. Download and HTML page rendering: filetype>=1.1, jinja2>=3.1, pillow>=9.1. Command line and output control: rich>=12.5, click>=8.1. Bundled and modified (located in the utils folder): ffmpeg_progress_yield. About Me: ♂ just here for the big bananas 🍌
acs-axiom
A prototype utility for validating/applying metadata templates for scientific data.
acs-cli
What is it?A command line tool for accessing Alfresco Content Services repository servers through the public REST APIs.The motivation for building this tool is two-fold: firstly as an interesting way for me to learn python; and secondly it’s the tool I always wish existed. The code probably isn’t verypythonicor well organised, but hopefully this will get better :-)InstallationUse (python3) pip to install:pip3 install acs-cliTo try this out in docker with a self-destructing temporary container:mward@holly:~$ docker run -it --rm ubuntu:16.04 root@f957e9b7154f:/# apt update && apt install -y python3 python3-pip root@f957e9b7154f:/# pip3 install acs-cliWarningThis is a proof of concept and must be consideredalphaquality software at best.Getting helpYou can ask for help at the program, API/command or subcommand levels, for example:# Get help on the program: $ acs --help # Get help on using the sites API: $ acs sites --help # Get help on using list-sites: $ acs sites list-sites --help # Get help on using the login command: $ acs login --helpShell tab completionTab completion can be enabled by adding the following to your.bashrc:eval "$(register-python-argcomplete acs)"Example usageWithout any arguments, you may log in tohttp://localhost:8080/alfrescousing the username ‘admin’ and will be prompted for a password.mward@holly:~$ acs login Logging in admin to http://localhost:8080/alfresco Password: mward@holly:acs-cli$Use the--usernameor--passwordoptions to log in with different credentials:mward@holly:~$ acs login --username=asmith --password=ban4n4@!Once logged in, APIs may be exercised by using the general format:acs api-collection api-command [options...] <arguments...>Here we see site creation:mward@holly:~$ acs sites create-site --id accounting --title 'Accounting Collaboration' --description 'Site for collaboration relating to the accounting process' --visibility PRIVATE { "entry": { "id": "accounting", "guid": "ee6d721d-e3b0-4299-a51f-afd4b59bfece", "visibility": "PRIVATE", "preset": "site-dashboard", "description": "Site for collaboration relating to the accounting process", "title": "Accounting Collaboration", "role": "SiteManager" } }…and here we see the people API being used to create a person entity:mward@holly:~$ acs people create-person --id bsmith --first-name Brian --email [email protected] --password password --json-data '{ "lastName":"Smith", "properties":{"papi:jabber":"[email protected]"} }' { "entry": { "id": "bsmith", "company": {}, "lastName": "Smith", "aspectNames": [ "papi:comms" ], "firstName": "Brian", "properties": { "papi:jabber": "[email protected]" }, "enabled": true, "email": "[email protected]", "emailNotificationsEnabled": true } }Note: the custom propertypapi:jabberhas previously been enabled in this example, by installing a custom dynamic model into the repository server. The custom model’s properties/aspects are not normally available.The--json-dataproperty can carry an arbitrary JSON payload to be sent to the REST API endpoint. You can mix and match this with the convenient named arguments (e.g.--email), however if a key is supplied in both methods then an error will be raised.All API operations accept the--queryoption to specify a JMESPath expression. 
Here for example, we choose to only display theidandemailfields of the returnedentryobject:mward@holly:~$ acs people get-person --person-id=jbloggs --query 'entry.[id,email]' [ "jbloggs", "[email protected]" ]And here, we use the--queryoption to viewid,firstNameandemailofeachentry in the list of people:mward@holly:~$ acs people list-people --query='list.entries[].entry.[id,firstName,email]' [ [ "admin", "Administrator", "[email protected]" ], [ "guest", "Guest", null ], [ "jbloggs", "Joe", "[email protected]" ] ]Anylistoperation that may be paged can be used with the--max-itemsand--skip-countoptions, used here to show two results after skipping the first 4. This may be thought of as showing thethirdpage of results.mward@holly:~$ acs people list-people --query='list.entries[].entry.[firstName]' --max-items=2 --skip-count=4 [ [ "Joe10" ], [ "Joe11" ] ]Thesiteslist-sitesAPI command may be used to list “sites”. This is a paged API and here we use it without the--max-itemsand--skip-countoptions which default to 10 and 0 respectively:mward@holly:~$ acs sites list-sites --query='list.entries[].entry' [ { "title": "accounts", "role": "SiteManager", "guid": "80dbd63c-3dbf-4005-bd16-e324fa8b4517", "id": "accounts", "visibility": "PUBLIC", "preset": "site-dashboard" }, { "title": "Sample: Web Site Design Project", "guid": "b4cff62a-664d-4d45-9302-98723eac1319", "id": "swsdp", "visibility": "PUBLIC", "description": "This is a Sample Alfresco Team site.", "preset": "site-dashboard" } ]In this example, we create a folder within the “My Files” folder for jbloggs:mward@holly:~$ acs nodes create-node --node-id=-my- --node-type=cm:folder --name=my_notes --json-data '{"properties":{"cm:title":"My daily notes"}}' { "entry": { "createdByUser": { "displayName": "Joe Bloggs", "id": "jbloggs" }, "modifiedAt": "2017-04-07T13:36:55.848+0000", "id": "190a4896-1492-4142-9cd3-7f80d8012514", "createdAt": "2017-04-07T13:36:55.848+0000", "modifiedByUser": { "displayName": "Joe Bloggs", "id": "jbloggs" }, "properties": { "cm:title": "My daily notes" }, "name": "my_notes", "aspectNames": [ "cm:titled", "cm:auditable" ], "isFile": false, "isFolder": true, "parentId": "29dd6a63-da4c-4f96-8edb-ad9808fa198b", "nodeType": "cm:folder" } }The alias-my-is used for the node where the child folder will be created.The common server-side filtering and projection API parameters are supported, using similarly named command line options such as--include,--fieldsand--where. For example, we can use the where clause to view site membership of PRIVATE sites, and restrict the returned fields toroleandid:mward@holly:~$ acs sites list-site-memberships --person-id admin --where="(visibility='PUBLIC')" --fields=id,role { "list": { "pagination": { "count": 19, "totalItems": 19, "hasMoreItems": false, "maxItems": 100, "skipCount": 0 }, "entries": [ { "entry": { "role": "SiteManager", "id": "swsdp" } }, { "entry": { "role": "SiteManager", "id": "site-e26b050b" } }, { "entry": { ...You may ask what the difference between these and the--queryoptions are. The--queryoption provides support for post-response filtering and manipulation through the JMESPath query language. JMESPath is a very powerful yet simple way to manipulate the response object but since it is performed as a post-response processing stage, may affect paging (e.g. the paging details may say that there are more results than are present in the final results). 
Also by using the server-side functionality ofinclude,fieldsandwhere, you may conserve bandwidth by reducing the number of results transmitted “over the wire”.
acsclient
No description available on PyPI.
acs-download
ACS Download

(badges: "try on binder" linking to https://mybinder.org/v2/gh/chekos/acs_download/binder?urlpath=lab%2Ftree%2Fexamples%2Fnotebooks%2F00_Downloading_Data.ipynb, PyPI version, Travis CI build status, and Read the Docs documentation status)

Download American Community Survey (ACS) complete Public Use Micro Sample (PUMS) data files from census FTP server.

* Free software: MIT license
* Documentation: https://acs-download.readthedocs.io

Usage
-----

.. code:: python

    import acs_download as acs

    acs.get_data(
        year = 2017,
        state = 'California',
        download_path = '../data/raw/',
        extract = True,
        extract_path = '../data/interim/',
    )

This will download ACS PUMS data file of California to your ``../data/raw/`` folder and extract it to ``../data/interim/`` folder. ``acs_download`` uses pypi package ``us``, which uses ``jellyfish``, to handle ``state`` input so you can use variations. (Animated example: https://raw.githubusercontent.com/chekos/acs_download/master/static/acs_download_example1.gif)

Features
--------

* TODO

Credits
-------

This package was created with Cookiecutter (https://github.com/audreyr/cookiecutter) and the audreyr/cookiecutter-pypackage project template (https://github.com/audreyr/cookiecutter-pypackage).

History
-------

0.1.0 (2019-04-10): First release on PyPI.
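As a small illustration of the state-handling note in the usage section above (a sketch: it only assumes that get_data accepts the same keyword arguments shown there, and "should" resolve the variations thanks to the us/jellyfish handling):

```python
import acs_download as acs

# These spellings should all resolve to the same state thanks to the `us`/`jellyfish` handling
for state in ["California", "CA", "california"]:
    acs.get_data(year=2017, state=state, download_path="../data/raw/")
```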
acse-9-irp-wafflescore
acse-9-independent-research-project-wafflescore
acsefunctions
No description available on PyPI.
acsets
PyACSetsA Catlab-compatible implementation of acsets in python.Included also is an AlgebraicJulia-compatible implementation of petri nets.🚀 InstallationThe most recent code and data can be installed directly from GitHub with:$pipinstallgit+https://github.com/AlgebraicJulia/py-acsets.git👐 ContributingContributions, whether filing an issue, making a pull request, or forking, are appreciated. SeeCONTRIBUTING.mdfor more information on getting involved.👋 Attribution⚖️ LicenseThe code in this package is licensed under the MIT License.🍪 CookiecutterThis package was created with@audreyfeldroy'scookiecutterpackage using@cthoyt'scookiecutter-snekpacktemplate.🛠️ For DevelopersSee developer instructionsThe final section of the README is for if you want to get involved by making a code contribution.Development InstallationTo install in development mode, use the following:$gitclonegit+https://github.com/AlgebraicJulia/py-acsets.git $cdpy-acsets $pipinstall-e.🥼 TestingAfter cloning the repository and installingtoxwithpip install tox, the unit tests in thetests/folder can be run reproducibly with:$toxAdditionally, these tests are automatically re-run with each commit in aGitHub Action.📖 Building the DocumentationThe documentation can be built locally using the following:$gitclonegit+https://github.com/AlgebraicJulia/py-acsets.git $cdpy-acsets $tox-edocs $opendocs/build/html/index.htmlThe documentation automatically installs the package as well as thedocsextra specified in thesetup.cfg.sphinxplugins liketexextcan be added there. Additionally, they need to be added to theextensionslist indocs/source/conf.py.📦 Making a ReleaseAfter installing the package in development mode and installingtoxwithpip install tox, the commands for making a new release are contained within thefinishenvironment intox.ini. Run the following from the shell:$tox-efinishThis script does the following:UsesBump2Versionto switch the version number in thesetup.cfg,src/acsets/version.py, anddocs/source/conf.pyto not have the-devsuffixPackages the code in both a tar archive and a wheel usingbuildUploads to PyPI usingtwine. Be sure to have a.pypircfile configured to avoid the need for manual input at this stepPush to GitHub. You'll need to make a release going with the commit where the version was bumped.Bump the version to the next patch. If you made big changes and want to bump the version by minor, you can usetox -e bumpversion minorafter.
acs_examine_student_assignment
AboutConsole app and Python API for automated assignment examination of ourACSstudents.InstallationTo install acs_examine_student_assignment run:$ pip install acs_examine_student_assignmentConsole app usageQuick start:$ acs_examine_student_assignment <computer>Show help:$ acs_examine_student_assignment --helpPython API usageQuick start:>>> import logging >>> logging.basicConfig(level=logging.DEBUG, format="[%(levelname)s] %(message)s") >>> from acs_examine_student_assignment import examine_student_assignment >>> examine_student_assignment('extracted', 's200')ContributeIf you find any bugs, or wish to propose new featuresplease let us know.If you’d like to contribute, simply forkthe repository, commit your changes and send a pull request. Make sure you add yourself toAUTHORS.
acs_extract_student_assignments
About

Console app and Python API for extracting assignments from exam archives of our ACS students.

Installation

To install acs_extract_student_assignments run:

$ pip install acs_extract_student_assignments

Console app usage

Quick start:

$ acs_extract_student_assignments

Show help:

$ acs_extract_student_assignments --help

Python API usage

Quick start:

>>> import logging
>>> logging.basicConfig(level=logging.DEBUG, format="[%(levelname)s] %(message)s")
>>> from acs_extract_student_assignments import find_student_assignments, store_student_assignments
>>> student_assignments = find_student_assignments('archives')
>>> store_student_assignments(student_assignments, 'extracted')

Contribute

If you find any bugs, or wish to propose new features, please let us know.

If you'd like to contribute, simply fork the repository, commit your changes and send a pull request. Make sure you add yourself to AUTHORS.
acshell
AtCoder ShellThis is the shell library to execute AtCoder submission on your commandline, for Python/PyPy user.Basic operationIf you want to know more information about commands, executeacsh help. (Note that the command results are output in Japanese.)1. Installationpipinstallacshell2. Loginacshlogin# acsh lgYour account credentials are stored for a certain period of time.3. Create "Contest folder"acshloadagc001# acsh ld agc001The "contest folder" of specified contest is created where you are currently. Each task folder is generated in the contest folder.4. Write your codeImplement the answers in the file likeagc001_a.pycreated in the folder for each task.5. Run your codes with published testcasesConfirm the formats of the command arguments below.acshtest[task][num][lang]acshcheck[task][lang]# acsh t# acsh coptionrequiredvalue formattaskYestask code such asA,BnumYes (intest)number of testcase as integerlangxlanguage(pythonorpypy)6. Submit your codesConfirm the formats of the command arguments below. Unlike the test running, you have to specify which language you submit codes as.acshsubmit[task][lang]# acsh soptionrequiredvalue formattaskYestask code such asA,BlangYeslanguage(pythonorpypy)7. Confirm results of your submissionacshrecent# for recent submissionacshstatus# for your contest scores# acsh rc# acsh rsAdditional OperationsSetup your cheat sheetsFollow the steps below to use your own prepared cheat-sheet codes, Please check the arguments with theacsh helpcommand.Setupopen the config folder by executingacsh edit-cheat/acsh ec.write your cheat sheets, and place the folder opened.Usageexecuteacsh add-cheat <task_code> <cheat_name (without extension)>so add the specified cheat file in the task folder.writefrom <filename> import <func/class name>in your code.The cheat files will be merged with main code file when you submit.Setup initial codesIf you want to set a template code, addinitial.pyin the cheat-sheet setup shown above.Pyenv managementIf you want to run codes by pypy, you should use pyenv for python version management. (This package can be run on python3.8, but you should use 3.11 for the exact debug in contests.)Install the latest version of3.11.*andpypy3.10-7.3.*.execute the command below.pyenvABCA: The Python version you usually useB: The version of installed3.11.*C: The version of installedpypy3.10-7.3.*
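As an illustration of the cheat-sheet workflow described above, a cheat file might look like this (the file name, function and task code below are made up; the merge-on-submit behaviour is the one documented in the cheat-sheet section):

```python
# union_find.py -- a reusable cheat sheet kept in the folder opened by `acsh edit-cheat`
class UnionFind:
    """Minimal disjoint-set union structure for contest use."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # path halving keeps the trees shallow
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)
```

After `acsh add-cheat A union_find` copies the file into the task folder, the solution file only needs `from union_find import UnionFind`; the cheat file is merged into the main code file when you run `acsh submit`, as described above.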
acsia-smtpclient
ACSIA SMTPClient

A simple SMTP client based on Python.

Usage

acsia_smtpclient --from demarcog83 --to [email protected] --subject ciao --body "my email content" --host smtp.gmail.com -p 587 --username [email protected] --password "*********" --use-tls --use-ssl

Author

Giuseppe De Marco
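The example invocation above corresponds roughly to the following standard-library sketch; this is only an illustration of what the flags map to, not how acsia_smtpclient is implemented:

```python
import smtplib
from email.message import EmailMessage

# Values taken from the CLI example above (addresses are placeholders).
msg = EmailMessage()
msg["From"] = "demarcog83"
msg["To"] = "[email protected]"
msg["Subject"] = "ciao"
msg.set_content("my email content")

# --host / -p / --use-tls map to an SMTP connection upgraded with STARTTLS.
with smtplib.SMTP("smtp.gmail.com", 587) as server:
    server.starttls()
    server.login("[email protected]", "*********")
    server.send_message(msg)
```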
acslib
Access Control Systems LibraryA library for interacting with Access Control Systems like Genetec or Ccure9kFree software: MITDocumentation:https://github.com/ncstate-sat/acslibFeaturesTODO
acsone.recipe.odoo.pydev
An extension toanybox.recipe.odoothat generatesOdooprojects for theEclipse PyDevIDE.ContentsWhat it isHow to use itSupported optionsBehind the curtainContributorsChange history2.0 (2014-12-08)1.2 (2014-10-13)1.1 (2014-09-08)1.0 (2014-05-30)DownloadCode repository:http://github.com/acsone/acsone.recipe.odoo.pydevReport bugs athttp://github.com/acsone/acsone.recipe.odoo.pydev/issuesWhat it isThis buildout recipe is an extension to the fully featured recipe developed by Anybox:anybox.recipe.odoo.It generates a ready-to-use Eclipse PyDev Project, pointing to all dependencies required to develop, run and debug yourOdooserver as well as your own addons.The generated project is fully configured, including a preset PYTHONPATH so as to support debugging, pep8 import checks, auto completionHow to use itSince the recipe is an extension toanybox.recipe.odoo, the first step, if not done, is to add youranybox.recipe.odoo, configuration tobuildout.cfgand include it in${buildout:parts}.An example:[buildout] ... parts = ... openerp [openerp] recipe = anybox.recipe.odoo:server version = git https://github.com/odoo/odoo.git odoo 7.0 addons = ... ....Another example using git and Odoo V8:[buildout] ... parts = ... openerp [openerp] recipe = anybox.recipe.odoo[bzr]:server version = git https://github.com/odoo/odoo.git odoo 8.0 addons = ... ....A good practice is to use the inheritance mechanism of buildout to define your development environment in an other file such asdevel.cfg:[buildout] extends = buildout.cfg parts = ... pydevproject [pydevproject] <= openerp recipe = acsone.recipe.odoo.pydev project-name = my_project_name python-version = python 2.7 python-interpreter = Default eggs += any_additional_egg_you_wantThen prepare your virtualenv and install zc.buildout$ virtualenv$ bin/pip install zc.buildoutTo run the recipe and generate your project, run$ bin/buildout install pydevprojectThe launch eclipse, import the project and you are ready to go. To debug, use bin/start_openerp_pydev in the eclipse debug configuration.Supported optionsThese match the options of a PyDev Project.nameThe project name. This is just for Eclipse and can be anything you want.python-versionThe combination of interpreter and grammar version. E.g.python 2.7(default ispython 2.7)python-interpreterThe interpreter name, as configured in the the Eclipse Preferences for PyDev. UsuallyDefaultis fine. (default isDefault)Behind the curtainIn addition to the startup scripts and configuration file generated byanybox.recipe.odoo., this recipe generates the two files that define a PyDev Project:.project.pydevproject.While eggs and their dependencies are declared as external libraries, the server and its addons are declared as source folders. In the same time, the recipe uses in background thecollective.recipe.omeletterecipe to build a unified directory structure of declared addons, symlinking to the actual contents, in order to allow proper pep8 check and auto completion. This directory structure is also declared as external dependency to avoid confusion between source folder and the unified directory structure.It’s a know issue that when same addons are both in the PYTHONPATH and addons_path (it’s the case with the generated project definition), it’s not possible to start the server due to import errors. 
To avoid this problem, the recipe adds to the generated scripts specific code that removes from sys.path the entries that are also in addons_path (see the sketch at the end of this description).

Contributors

Laurent Mignon (ACSONE SA/NV), Author

Change history

2.0 (2014-12-08)
- github #5: Extends anybox.recipe.odoo in place of anybox.recipe.openerp

1.2 (2014-10-13)
- github #4: Pydev > 3.7 and Odoo 8.0 compatibility
- github #3: Incorrect Pythonpath in Eclipse

1.1 (2014-09-08)
- github #1: support new addons layout on github. The eclipse syntax analyser also scans the addons directory at the root of the cloned directory if it exists. BTW, code completion is fully functional with the odoo sources distribution from github.

1.0 (2014-05-30)
- First release [ACSONE SA/NV]

Download
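The sys.path clean-up referred to above could look roughly like this; it is an illustrative sketch only, not the code the recipe actually injects into the generated start_openerp_pydev script:

```python
import sys

def remove_addons_paths_from_sys_path(addons_path):
    """Drop sys.path entries that also appear in the Odoo addons_path,
    so the same addon is not importable through two different routes."""
    addons_dirs = {p.strip() for p in addons_path.split(',') if p.strip()}
    sys.path[:] = [p for p in sys.path if p not in addons_dirs]
```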
acsoo
This is a set of command-line utilities to facilitate the Odoo development workflow at Acsone.It assumes the project is a setuptools-based python package that can be packaged and installed with pip.ContentsInstallationWhat we have hereInitialize a new projectacsoo pr-statusacsoo tagacsoo tag-requirementsacsoo checklogDeprecated commandsacsoo addonsacsoo freezeacsoo wheelacsoo releaseacsoo flake8acsoo pylintacsoo.cfgUseful linksMaintainerChanges3.1.0 (2021-01-04)3.0.2 (2020-10-14)3.0.1 (2020-07-29)3.0.0 (2020-07-01)2.1.0 (2020-05-25)2.0.0 (2020-01-15)1.9.0 (2019-02-28)1.8.3 (2019-01-22)1.8.2 (2018-11-05)1.8.1 (2018-10-30)1.8.0 (2018-10-29)1.7.1 (2018-07-15)1.7.0 (2018-06-04)1.6.0 (2018-02-16)1.5.0 (2017-09-19)1.4.3 (2017-06-16)1.4.2 (2017-06-16)1.4.1 (2017-06-14)1.4.0 (2017-06-13)1.3.0 (2017-06-04)1.2.2 (2017-05-30)1.2.1 (2017-05-27)1.1.0 (2017-05-25)1.0.1 (2017-05-21)Criteria for tools to be included here:being small wrappers around standard commands (git,pip, etc)yet being sufficiently non-trivial to be error-prone or time consuming when done manuallybeing used across several Acsone Odoo projectsInstallationpipinstall--useracsooorpipxinstallacsooNoteSinceacsoohas a lot of dependencies that are not required at runtime, for your application, it is not recommanded to install it in the same virtualenv as your project.To enable bash completion, add this line in your.bashrc:eval"$(_ACSOO_COMPLETE=sourceacsoo)"What we have hereBelow, the list of available commands with a few examples.Useacsoo--helporacsoo <command>--helpfor more information.Initialize a new projectmrbobacsoo:templates/projectcd{projectname}mkvirtualenv{projectname}-a.acsoo pr-statusLook for git references of the formrefs/pull/NNN/headin requirement files and print the corresponding GitHub pull request status.acsoo tagTag the current project after ensuring everything has been commited to git.acsoo tag-requirementsTag all VCS requirements found inrequirements.txt, so the referenced commits are not lost in case of VCS garbage collection.acsoo checklogCheck if an odoo log file contains error, with the possibility to ignore some errors based on regular expressions.acsoochecklogodoo.logodoo-dmydb-ibase--stop-after-init|acsoochecklogacsoochecklog--ignore"WARNING.*blah"odoo.logDeprecated commandsacsoo addonsacsoo addons is deprecated: use `manifestoo <https://pypi.org/project/manifestoo>`_ instead: it is more robust and has better test coverage.A set of commands to print addons lists, useful when running tests.acsooaddonslistacsooaddonslist-dependsacsoo freezeDeprecated: use `pip-deepfreeze <https://pypi.org/project/pip-deepfreeze>`_ instead.Just like pip freeze, except it outputs only dependencies of the provided distribution name.acsoo wheelThis command is deprecated, use pip >= 20.1 and do not use editable VCS dependencies. `pip wheel -e . -r requirements.txt –wheel-dir=release` will then give the same result, including caching of pinned VCS dependencies.Build wheels for all dependencies found inrequirements.txt, plus the project in the current directory.The main advantage of this command (compared to a regularpip wheel -r requirements.txt -e . –wheel_dir=release –src src), was that it maintains a cache of git dependencies that are pinned with a sha1.acsoo releaseThis command is deprecated. Releasing is automated via .gitlab-ci. 
See the `build` stage in the project template.Performacsoo tag,acsoo tag_requirementsandacsoo wheelin one command.acsoo flake8This command is deprecated, use a .flake8 file in your project, in combination with pre-commit. See the project template for a reasonable default.Runflake8with sensible default for Odoo code.It is possible to pass additional options to theflake8command, eg:acsooflake8----ignoreE24,W504acsoo pylintThis command is deprecated, use a .pylintrc file in your project, in combination with pre-commit. See the project template for a reasonable default.Runpylinton detected Odoo addons in odoo/addons, odoo_addons or the current directory. It automatically uses thepylint-odooplugin and runs with a reasonable configuration, including an opinionated set of disabled message.It is possible to pass additional options to thepylintcommand, eg:acsoopylint----disablemissing-final-newlineThis command returns an non-zero exit code if any message is reported. It is however possibly to display messages while reporting success, eg:acsoopylint--expectedapi-one-deprecated:2,line-too-longThe above command succeeds despite having exactly 2api-one-deprecatedor any number ofline-too-longmessages being reported.It is also possible to force failure on messages that areexpectedin the default configuration, eg to fail onfixmeerrors, just expect 0fixmemessages, like this:acsoopylint--expectedfixme:0acsoo.cfgA file namedacsoo.cfgat the project root helps you set sensible defaults.Here is a minimal example:[acsoo]trigram=xyzseries=10.0version=1.5.0And a more elaborate example:[acsoo]trigram=xyzseries=11.0version=1.5.2pushable=github.com:acsonegithub.com:mozaik[checklog]ignore=WARNING .* module .*:description is empty !WARNING:unable to set column .* of table account_analytic_account not nullUseful linkspypi page:https://pypi.python.org/pypi/acsonecode repository:https://github.com/acsone/acsooreport issues at:https://github.com/acsone/acsoo/issuesMaintainerThis project is maintained by ACSONE SA/NV.Changes3.1.0 (2021-01-04)pr-status: detect PR status from github pull requests URLs found in requirement files; useful to detect them in commentsVarious improvements to the project template (in gitlab-ci.yml, and pre-commit config, mostly)#75,#77.In acsoo tag, do not complain about empty directories#76.Deprecateacsoo freezein favor ofpip-deepfreeze.Deprecateacsoo addonsin favor ofmanifestoo.Add license and development status check to project template.Remove bumpversion from acsoo dependencies. This project is now replaced by bump2versions, and it’s better to install it separately with pipx.3.0.2 (2020-10-14)Lift setuptools version restriction. It has https issues with pypi.org since 2020-10-14. his means odoo-autodiscover>=2 must be used on Odoo<=10 projects. 
See alsohttps://github.com/acsone/setuptools-odoo/issues/10.3.0.1 (2020-07-29)[REM] Remove mrbob dependency as pip >= 20.2 make install crash3.0.0 (2020-07-01)[DEL] drop python 2 support (previous versions of acsoo are still available on PyPI, and for regular use, the python 3 version works for Odoo 8, 9, 10 projects too)[ADD] acsoo freeze to limit pip freeze output to dependencies of a given distribution[ADD] acsoo pr-status to print the status of GitHub pull requests found in requirement files with revision of the form refs/pull/NNN/head[DEL] deprecateacsoo wheel(now supported by pip natively) andacsoo release(which is automated in GitLab CI)[IMP] project template: ci.skip when pushing translation updates2.1.0 (2020-05-25)[IMP] project template: better and simpler isort config[IMP] project template: merge request templates[IMP] support non-editable VCS requirements in tag_requirements command[DEL] remove –force option of tag_requirements command as it does nothing useful, since all we want of this command is to make sure that a tag is present[MNT] declarative setuptools configuration[IMP] pin flake8 to version 3.7.9 (reminder: acsoo flake8 is deprecated, use pre-commit instead)[IMP] pin pylint-odoo to version 3.1.0 (reminder: acsoo pylint is deprecated, use pre-commit instead)2.0.0 (2020-01-15)[IMP] project template: publish html coverage in gitlab-ci[IMP] project template: branch coverage in gitlab-ci[IMP] project template: pre-commit cache in gitlab-ci[DEL] deprecate acsoo pylint in favor of pre-commit and per project .pylintrc[IMP] Odoo 13 support[IMP] project template: rename requirements-dev.txt to requirements.txt.in, better reflecting that these are input requirements and not requirements for the development environment[IMP] project template: update copyright year[IMP] project template: remove module_auto_update, use click-odoo-update instead1.9.0 (2019-02-28)[IMP] project template: use pre-commit (black, isort, flake8)[FIX] project template: fail on click-odoo-update error[FIX] project template: fix deploy log file[FIX] acsoo pylint: compatibility with pylint 21.8.3 (2019-01-22)[FIX] acsoo pylint: Adapt config to also work with pytlint-odoo 2.0.1[IMP] project template: use click-odoo-update1.8.2 (2018-11-05)[IMP] project template: better way to declare python version in .gitlab-ci.ymlFix acsoo tag for Odoo 121.8.1 (2018-10-30)[IMP] ignore pylint C0303 (https://github.com/PyCQA/pylint/issues/289)1.8.0 (2018-10-29)[IMP] acsoo wheel: add –no-deps, so we can build requirements.txt without fetching dependencies, and later install the project with –no-index and –find-links=release/ so as to detect missing dependencies (#38)[IMP] acsoo wheel: add –exclude-project option (to build requirements.txt without the current project), in preparation of #44[IMP] acsoo wheel: use a cache of editable git dependencies[IMP] acsoo wheel: use pip wheel -e . 
to build project instead of setup.py bdist_wheel, since the reason we were doing that has apparently been resolved in recent pip version (pip issue 3499 referred in a comment is apparently unrelated unfortunately, so I’m not sure why we were doing that exactly, probablyhttps://github.com/pypa/pip/issues/3500)[IMP] flake8: ignore W503 and W504 by default (line break around logical operators)[IMP] project template: Odoo 12 support[IMP] project template: pin acsoo version[IMP] project template: acsoo wheel –no-deps, so, combined with pip install –no-index in the test stage, it verifies that all dependencies are included in requirements.txt1.7.1 (2018-07-15)[IMP] project template: add makepot in .gitlab-ci.yml[IMP] pylint: whitelist lxml c library1.7.0 (2018-06-04)[IMP] more python 3 and Odoo 11 support[IMP] project template: build stage in gitlab-ci[IMP] project template: new style deploy / upgrade (using checksum upgrades and click-odoo-upgrade script)[IMP] project template: enforce odoo-autodiscover>=2 and do not use it for Odoo >= 11[IMP] add –dry-run option to acsoo tag and tag_requirements[IMP] make the list of places where tag_requirements can push configurable[IMP] project template: on demand installation of acsoo and ssh-agent[IMP] project template: use click-odoo-initdb in gitlab-ci1.6.0 (2018-02-16)[IMP] checklog: add –no-err-if-empty option[IMP] python 3 support[IMP] preliminary Odoo 11 support[IMP] project template: various improvements[IMP] refactoring of get_installable_addons() method for better reusability1.5.0 (2017-09-19)[IMP] tag_requirements: fetch more aggressively; this solves the errors trying to write ref with non existent object[IMP] tag: always tag requirements when doing acsoo tag[IMP] tag: tag requirements before tagging project, so if something fails when tagging the requirements the project is not tagged and the release build is not triggered.[ADD] addons: add –separator option (and fix tests that were not testing much)[IMP] addons: consider current dir as addons dir candidate[IMP] pylint: look for module to test in current dir by default, using the same algorithm asaddons list[IMP] pylint: support python 3 style odoo/addons namespace (without __init__.py)1.4.3 (2017-06-16)[IMP] checklog: consider ignore lines starting with # as comments[FIX] checklog: the previous release broke checklog color output1.4.2 (2017-06-16)[IMP] checklog: fail if no log record found in input[IMP] checklog: echo with click to be less sensitive to unicode issues1.4.1 (2017-06-14)[FIX] regression in acsoo release1.4.0 (2017-06-13)[IMP] colored logging[IMP] major change to acsoo tag and tag_editable_requirements. These changes make it easier to work with a CI-driven release process that is triggered on new tags. 
The usual manualacsoo releaseprocess should be mostly unimpacted by these changes.tag_editable_requirementsis nowtag_requirements.the tags structure has changed from{series}-{trigram}_{version}to{series}-{trigram}-{req_sha}-{egg}, where{req_sha}is the sha of the last change torequirements.txt.tag_requirementsincludes the egg name in the tag so different commits in the same repo can be tagged (before, all addons in a given dependency repo had to be on the same commit).when a tag for the given series, trigram and egg already exists on the dependency commit,tag_requirementsdoes not attempt to create another tag (this avoids creating useless tags or forced tags) and this is sufficient because the sole purpose of these dependency tags is to avoid commits to be garbage collected.acsoo tagnow invokestag_requirements. In most cases however this will not place additional tags on dependencies, because the normal workflow is to invoketag_requirementsas soon asrequirements.txtis updated.tag_requirementsautomatically transforms http(s) urls into ssh urls for the purpose of pushing tags. This allows to maximize the use of http(s) urls in requirements so CI and scripts do not require ssh access to the public dependencies. This currently only works for the acsone organization on github but the mechanism is easy to extend, should the need arise.1.3.0 (2017-06-04)[IMP] flake8: read additionalflake8-optionsin acsoo configuration file.[IMP] template: series-dependent odoo command in.gitlab.ci.yml.[IMP] template: createdb in.gitlab-ci.ymlbecause Odoo 8 does not do it by itself.[ADD] addons list-depends:--excludeoption1.2.2 (2017-05-30)[FIX] regression intag,tag_editable_requirementsandreleasecommands.1.2.1 (2017-05-27)[IMP] add possibility to provide main config file as option.[IMP] checklog: read default options from[checklog]section of config file.[IMP] pylint: read default options from[pylint]section of config file.[IMP] pylint: the module or package to lint may be provided with-m.[IMP] flake8: read default options from[flake8]section of config file. The only option so far isconfigto provide an alternate flake8 configuration file. This is useful so developer only need to typeacsoo flake8locally, even when a specific configuration is needed, so it’s trivial to run locally with the same config as in CI.1.1.0 (2017-05-25)[IMP] pylint: BREAKING the package to test must be provided explicitly, as soon as additional pylint options are provided, so as to enable easy local testing of a subset of a project. Examples:acsoo pylint---dsome-messageodoo,acsoo pylint--odoo.addons.xyz;[IMP] pylint: disable more code complexity errors:too-many-nested-blocks,too-many-return-statements.[IMP] pylint: display messages causing failure last, so emails from CI. that show the last lines of the log are more relevant.[IMP] pylint: display summary of messages that did not cause failure, also when there is no failure.[ADD]acsoo addons listandacsoo addonslist-depends.[ADD]acsoo checklog.1.0.1 (2017-05-21)First public release.
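For illustration, the ignore-regex behaviour of acsoo checklog described above (offending log lines are matched against regular expressions, and ignore lines starting with # are treated as comments) could be approximated with a few lines of Python; this is a sketch, not acsoo's actual implementation:

```python
import re
import sys

def check_log(lines, ignore_patterns):
    """Return WARNING/ERROR/CRITICAL log lines that match none of the ignore regexes."""
    patterns = [re.compile(p) for p in ignore_patterns if p and not p.startswith('#')]
    levels = (' WARNING ', ' ERROR ', ' CRITICAL ')
    return [
        line for line in lines
        if any(lvl in line for lvl in levels)
        and not any(p.search(line) for p in patterns)
    ]

if __name__ == '__main__':
    offending = check_log(open('odoo.log'), [r'WARNING.*blah'])
    sys.exit(1 if offending else 0)
```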
acsploit
ACsploit: a tool for generating worst-case inputs for algorithmsByTwo Six TechnologiesACsploit is an interactive command-line utility to generate worst-case inputs to commonly used algorithms. These worst-case inputs are designed to result in the target program utilizing a large amount of resources (e.g. time or memory).ACsploit is designed to be easy to use and contribute to. Future features will include adding arbitrary constraints to inputs, creating an API, and hooking into running programs to feed worst-case input directly to functions of interest.Join us on the ACsploit Slackhere!UsageStart ACsploit withpython3 acsploit.py. From there, you can use thehelpcommand to see what commands are available. You can callhelpon any of them to learn more about how to use that command, such ashelp set.To see the available exploits, use theshowcommand. To stage one for use, useuse [exploit_name]. To see a description of the exploit, runinfo. At any point, you can runoptionsto see the current input, output, and exploit options, and then useset [option_name] [value]to set an option. To see detailed descriptions of the options, useoptions describe.Tab completion is enabled for exploit and option names.Finally, userunto generate output from the exploit.ACsploit supports abbreviated commands, bash commands using!,CTRL+Rhistory search, and more.Command-line Options--load-file SCRIPTruns the commands inSCRIPTas if they had been entered in an interactive ACsploit session and then exits.#can be used for comments as in Python.--debugenables debug mode, in which ACsploit prints stack-traces when errors occur.DocumentationDocuments are generated using pdoc3 and can be found in thedocsdirectory.Generating DocumentsRunpip3 install pdoc3to install the documentation dependencies and then runpython generate_docs.pyWarningCaution should be used in generating and accessing ACsploit exploits. Using unreasonable exploit parameters may cause denial of service on generation. Additionally, the canned exploits (e.g. compression bombs) may cause denial of service if accessed by relevant applications.TestsTests for ACsploit can be invoked from inside theacsploitdirectory by runningpython -m pytest test. Alternatively, individual tests can be invoked by runningpython -m pytest test/path/to/test.py.To run the tests and obtain an HTML coverage report run the following:python -m pytest --cov=. --cov-report html:cov test/Finally to run the tests in parellel the-nflag can be used followed by the number of tests to run in parallel. On Linux and Mac the following works:python -m pytest -n`nproc` --cov=. --cov-report html:cov test/Contributing to ACsploitWe welcome community contributions to all aspects of ACsploit! For guidelines on contributing, please seeCONTRIBUTING.mdLicenseAcsploit is available under the 3-clause BSD license (seeLICENSE)
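A --load-file script is simply the interactive commands listed above, one per line; for example (the exploit and option names are placeholders, use `show` and `options` to discover real ones):

```
# example.acs -- run with: python3 acsploit.py --load-file example.acs
# exploit and option names below are placeholders
use some_exploit_name
set some_option some_value
run
```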
acss
# Assetto Corsa Stats ServiceThis project contains the following components:- cron script: this is the script that reads and processes driver results and stores data in a sqlite db file.- stats service: this is a wsgi service used to interact with the stored data and the minimal http interface AC provides## Stats serviceExposed paths:- */api/server_info*: return information for the configured server- */api/tracks*: return a list of unique tracks stored in db- */api/tracks/{track_name}/bestlaps*: return a list of all the best laps for a specific track.- */api/tracks/{track_name}/{car_names}/bestlaps*: return a list of all the best laps for a specific track and cars.Example: ```[..., {"car_name": "ferrari_458_gt2", "track_name": "monza", "driver_name": "FooBar", "best_lap": 426007}, ...]```## Installation```python setup.py install (virtualenv is recommended)```### Example crontab job```*/2 * * * * /opt/assetto_stats_service/python27/bin/acss_cron /path/to/assetto_corsa/dedicated/server/results /opt/assetto_stats_service/acss.db```### Example supervisord service configuration:```[program:assetto_corsa_stats]command=/opt/assetto_stats_service/python27/bin/acssd /opt/assetto_stats_service/etc/acss.confdirectory=/opt/assetto_stats_servicenumprocs=1user=nobody```
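A quick way to query those endpoints from Python using only the standard library (the host and port are assumptions; substitute wherever your wsgi service is actually bound):

```python
import json
from urllib.request import urlopen

BASE = "http://localhost:8000"  # adjust to your stats service address

# /api/tracks/{track_name}/bestlaps returns the documented list of lap dicts
with urlopen(f"{BASE}/api/tracks/monza/bestlaps") as resp:
    laps = json.load(resp)

for lap in laps:
    print(lap["driver_name"], lap["car_name"], lap["best_lap"])
```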
acss-core
ACSS CoreACSS (Accelerator Control and Simulation Services) provides an environment for scheduling and orchestrating of multiple intelligent agents, training and tuning of ML models, handling of data streams and for software testing and verification.User specific services are located at github (https://github.com/desy-ml/ml-pipe-services).DependenciesDocker and docker-compose >= 1.28.0 are required.Install Core ServicesClone the acss-services repository.git clone https://github.com/desy-ml/ml-pipe-servicesConfigure Core ServicesTo install the core services of ACSS you need to set the following environment values in a .env file.ACSS_EXTERNAL_HOST_ADDR=localhost ACSS_DB_PW=xxxx ACSS_DB_USER=xxxx ACSS_CONFIG_FILEPATH = /path/to/ml-pipe-config.yaml PATH_TO_ACSS_SERVICES_ROOT=/path/to/ml-pipe-servicesACSS_CONFIG_FILEPATH is the path to the yaml config file, which look like this:observer: # used to check if jbb is done url: observer:5003 event_db_pw: xxxx # event_db_url: event_db_usr: root register: # registers all services url: register:5004 simulation: # sql database which maps the machine parameter sim_db_pw: xxxx sim_db_usr: root sim_db_url: simulation_database:3306 msg_bus: # message bus # external_host_addr: localhost broker_urls: kafka_1:9092,kafka_2:9096In production replace ACSS_EXTERNAL_HOST_ADDR=localhost with the server url and set PATH_TO_ACSS_SERVICES_ROOT to the location of the cloned ml-pipe-services repository. The environment values ACSS_DB_PW and ACSS_DB_USER define the credentials for the databases used by ACSS.Build Docker imagesOpen the root folder of the acss-core project:cd /path/to/projectTo build all docker images run:makebuild-allThis can take a while...Notes: After changing code you just need to rebuild the service images, which is much faster.make build-service-imagesYou can check if all core services are started correctly by executing:docker-compose -p pipeline psIn the project root folder.Stop Core ServicesTo stop de Core Services just runmakedownTests locallyNote: Docker and docker-compose => 2.80 is required to run tests locallyRun all tests:make tests ENV_FILE=.envRun end to end tests:make e2e-tests ENV_FILE=.envRun integration tests:make integration-tests ENV_FILE=.envRun unit testsmake unit-tests ENV_FILE=.envAdditional StuffMaxwellLog in via ssh to max-wgs.desy.deInstall python 3.8.8wget https://repo.anaconda.com/archive/Anaconda3-2021.05-Linux-x86_64.sh sh Anaconda3-2021.05-Linux-x86_64.sh export PATH=$HOME/anaconda3/bin:$PATHPyTine and K2I2K_os on Machine PETRA IIIThe machine observer and controller for PETRA III are using the PetraAdapter which is using the libs PyTine and K2I2K_os. The Path to this libs have to be added to the PYTHONPATH.For PyTine have a look athttps://confluence.desy.de/display/HLC/Developing+with+Python.K2I2K_os can be cloned via git from:gitclonehttps://[email protected]/scm/pihp/petra3.optics.tools.gitJupyter notebookTo use KafkaPipeClient in a Jupyter notebook you need to add the virtual environment to Jupyter.First activate the python virtual environment.For PipenvpipenvshellStart the jupyter notebook:pipinstall--useripykernelNote: You have to add the virtual environment to jupyter. First, activate the virtual environment. Then run:python-mipykernelinstall--user--name=<myenv>
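For reference, the file pointed to by ACSS_CONFIG_FILEPATH above is ordinary yaml; a minimal sketch of reading it from Python (illustrative only, not the loader ACSS itself uses; requires pyyaml):

```python
import os
import yaml  # pyyaml

# ACSS_CONFIG_FILEPATH is the env value described in the configuration section above
with open(os.environ["ACSS_CONFIG_FILEPATH"]) as f:
    config = yaml.safe_load(f)

broker_urls = config["msg_bus"]["broker_urls"].split(",")
sim_db_url = config["simulation"]["sim_db_url"]
```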
acs_student_attendance
About

Console app and Python API for analyzing and reporting the lab attendance of our ACS students.

Installation

To install acs_student_attendance run:

$ pip install acs_student_attendance

Console app usage

Quick start:

$ acs_student_attendance stud_auth.log semester-config.yml

Show help:

$ acs_student_attendance --help

Python API usage

Quick start:

>>> from acs_student_attendance.analysis import StudentAttendanceAnalysisWithExport
>>> log_lines = open('stud_auth.log')
>>> analyzer = StudentAttendanceAnalysisWithExport('semester-config.yml')
>>> results = analyzer(log_lines)

Contribute

If you find any bugs, or wish to propose new features, please let us know.

If you'd like to contribute, simply fork the repository, commit your changes and send a pull request. Make sure you add yourself to AUTHORS.
acs_student_mail_harvester
About

Console app and Python API for harvesting email addresses of our ACS students during their first week of ACS lab coursework.

Instructions for teaching assistants

These steps should be performed with each of your student groups, at the very start of their first week in the ACS labs:

1. Copy the examples/student-info.ini file into ispitni_materijaliA/.eXXXXX/ and ispitni_materijaliB/.eXXXXX/
2. Ask the administrator to switch the lab to the "exam" mode (aka "provera")
3. Wait for all of the students to login
4. Instruct the students to:
   - Locate the student-info.ini file in their $HOME/$STUDENT_ID directory
   - Update the contents of the file with their own information
   - Save the file and close the editor
   - Logout
5. Ask the administrator to collect the exam.tar archive and switch the lab to the "normal" mode

Please send your collected exam archives to your professors as soon as you can.

Installation

To install acs_student_mail_harvester run:

$ pip install acs_student_mail_harvester

Console app usage

Quick start:

$ acs_student_mail_harvester students.csv --tar-path=examples/

Show help:

$ acs_student_mail_harvester --help

Python API usage

Quick start:

>>> from acs_student_mail_harvester import get_student_info, store_results_to
>>> student_info = get_student_info('examples/')
>>> store_results_to('students.csv', student_info)

Contribute

If you find any bugs, or wish to propose new features, please let us know.

If you'd like to contribute, simply fork the repository, commit your changes and send a pull request. Make sure you add yourself to AUTHORS.
acstools
Python Tools for HST ACS (Advanced Camera for Surveys) Data
acstore
ACStore, or Attribute Container Storage, provides a stand-alone implementation to read and write attribute container storage files.
ac-stubs
Assetto Corsa stubs library for the “ac” object. Useful for autocompletion in an IDE during development.
acsuite-orangechannel
acsuiteaudiocutter(.py) replacement for VapourSynth.Allows for easy frame-based cutting/trimming/splicing of audio files using VapourSynth clip information.Includes some extra tools for working with audio files or timestamps.Functions:eztrim(clip, trims, audio_file[, outfile, ffmpeg_path=, quiet=, timecodes_file=])importvapoursynthasvscore=vs.corefromacsuiteimporteztrimfile=r'/BDMV/STREAM/00003.m2ts'afile=r'/BDMV/STREAM/00003.wav'# pre-extracted with TSMuxer or similarsrc=core.lsmas.LWLibavSource(file)# for the example, we will assume the src clip is 100 frames long (0-99)trimmed_clip=src[3:22]+src[23:40]+src[48]+src[50:-20]+src[-10:-5]+src[97:]# `clip` arg should be the uncut/untrimmed source that you are trimming fromeztrim(src,[(3,22),(23,40),(48,49),(50,-20),(-10,-5),(97,None)],afile)Output:Uses the file extension of the inputaudio_fileto output a cut/trimmed audio file with the same extension. If nooutfileis given, defaults toaudio_file_cut.ext.concat(audio_files, outfile[, ffmpeg_path=, quiet=])concat(['file.aac','file2.aac'],'outfile.aac')Will concatenate a list of audio files (paths given as strings) into one file using FFmpeg.Utility Functions:f2ts(f, src_clip=[, precision=, timecodes_file=])Useful for finding the timestamp for a frame number.fromfunctoolsimportpartialimportvapoursynthasvscore=vs.coreclip=core.std.BlankClip()ts=partial(f2ts,src_clip=clip)ts(5),ts(9),ts(clip.num_frames),ts(-1)# ('00:00:00.208', '00:00:00.375', '00:00:10.000', '00:00:09.958')clip_to_timecodes(src_clip)Returns a list of timecodes for VFR clips. Used as a fallback whentimecodes_fileis not given tof2tsoreztrim.Getting StartedDependenciesFFmpegVapourSynth R49+InstallingArch LinuxInstall theAUR packagevapoursynth-tools-acsuite-gitwith your favorite AUR helper:$yay-Svapoursynth-tools-acsuite-gitGentoo LinuxInstall via theVapourSynth portage tree.Windows / OtherUse thePython Package Index (PyPI / pip):python3-mpipinstall--user--upgradeacsuite-orangechannelor simplypipinstallacsuite-orangechannelif you are able to use apipexecutable directly.Help!Check out thedocumentationor use Python's builtinhelp():help('acsuite')
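The optional outfile argument shown in the eztrim signature above controls where the cut audio is written; a minimal variant of the example (same source paths as the example above, single trim, frame numbers made up):

```python
import vapoursynth as vs
from acsuite import eztrim

core = vs.core
src = core.lsmas.LWLibavSource(r'/BDMV/STREAM/00003.m2ts')

# keep only frames 1000-2999 and write the cut audio to an explicit path
eztrim(src, [(1000, 3000)], r'/BDMV/STREAM/00003.wav', outfile='00003_cut.wav')
```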
acs-wrapper
acs
American Community Survey (ACS) Wrapper
acsylla
AcsyllaA composition ofasync+cassandra+scyllawords.A high performance Python Asyncio client library for Cassandra and ScyllaDBUnder the hoodacsyllahas modern, feature-rich and shard-awareC/C++ clientlibrary forCassandraandScyllaDB.Table of ContentsFeaturesCompatibilityInstallBuild your own packageClusterConfiguration optionsConfiguration methodsSessionMethods of Session objectStatementMethods of Statement objectPreparedStatementMethods of PreparedStatement objectBatchMethods of Batch objectResultMethods of Result objectRowMethods of Row objectExamplesBasic usageBinding ParametersNon Prepared StatementPrepared StatementUse prepared statement and pagingConfigure Shard-Awareness connectionSSL ExampleRetrieving metadataConfigure loggingSet log levelSet callback for capture log messagesExecution profilesTracingDevelopingFeaturesShard-AwarenessAsynchronous APISimple, Prepared, and Batch statementsAsynchronous I/O, parallel execution, and request pipeliningConnection poolingAutomatic node discoveryAutomatic reconnectionConfigurable load balancingWorks with any cluster sizeAuthenticationSSLLatency-aware routingPerformance metricsTuples and UDTsNested collectionsRetry policiesClient-side timestampsData typesIdle connection heartbeatsSupport for materialized view and secondary index metadataSupport for clustering key order,frozen<>and Cassandra version metadataWhitelist/blacklist DC, and whitelist/blacklist hosts load balancing policiesCustom authenticatorsReverse DNS with SSL peer identity verification supportRandomized contact pointsSpeculative executionCompatibilityThis driver works exclusively with the Cassandra Query Language v3 (CQL3) and Cassandra's native protocol. The current version works with:Scylla and Scylla EnterpriseApache Cassandra® versions 2.1, 2.2 and 3.0+Python 3.7, 3.8, 3.9, 3.10 and 3.11 for Linux and MacOSInstallThere is an Beta realease compabitble with Python 3.7, 3.8, 3.9, 3.10 and 3.11 for Linux and MacOS environments uploaded as a Pypi package. Use the following command for installing it:pipinstallacsyllaBuild your own packageYou can build your own package for any supported python version forx86_64andaarch64Linux.Example for build wheel for Python 3.11 aarch64 from master branchcurl-Ohttps://raw.githubusercontent.com/acsylla/acsylla/master/bin/build.sh curl-Ohttps://raw.githubusercontent.com/acsylla/acsylla/master/bin/build_in_docker.sh chmod+xbuild.shbuild_in_docker.sh ./build_in_docker.sh3.11masteraarch64ClusterTheClusterobject describes a Cassandra/ScyllaDB cluster’s configuration. Thedefault cluster object is good for most clustersand only requires a single or multiple list of contact points in order to establish a session connection.For example:cluster = acsylla.create_cluster('127.0.0.1, 127.0.0.2')Once a session is connected using a cluster object its configuration is constant.Modifying the cluster object configuration once a session is established does not alter the session’s configuration.Configuration optionsList of named arguments to configure cluster withacsylla.create_clusterhelper.contact_points:Sets contact points. This MUST be set. White space is striped from the contact points.Examples:“127.0.0.1”, “127.0.0.1,127.0.0.2”, “server1.domain.com”port:Sets the port.Default:9042local_address:Sets the local address to bind when connecting to the cluster, if desired. IP address to bind, or empty string for no binding. 
Only numeric addresses are supported; no resolution is done.local_port_range_min:Sets the range of outgoing port numbers (ephemeral ports) to be used when establishing the shard-aware connections. This is applicable when the routing of connection to shard is based on the client-side port number.When application connects to multiple CassCluster-s it is advised to assign mutually non-overlapping port intervals to each. It is assumed that the supplied range is allowed by the OS (e.g. it fits inside /proc/sys/net/ipv4/ip_local_port_range on *nix systems)Default:49152local_port_range_max:Seelocal_port_range_minDefault:65535username:Set username for plain text authentication.password:Set password for plain text authentication.connect_timeout:Sets the timeout for connecting to a node.Default:5secondsrequest_timeout:Sets the timeout for waiting for a response from a node. Use 0 for no timeout.Default:12secondsresolve_timeout:Sets the timeout for waiting for DNS name resolution.Default:2secondslog_level:Sets the log level. Available levels:disabledcriticalerrorwarninfodebugtraceDefault:warnlogging_callback:Sets a callback function to catch log messages.Default:An internal logger with "acsylla" name.logging.getLogger('acsylla')ssl_enable:Enable SSL connectionDefault:Falsessl_cert:Set client-side certificate chain. This is used to authenticate the client on the server-side. This should contain the entire Certificate chain starting with the certificate itselfssl_private_key:Set client-side private key. This is used to authenticate the client on the server-side.ssl_private_key_password:Password forssl_private_keyssl_trusted_cert:Adds a trusted certificate. This is used to verify the peer’s certificate.ssl_verify_flags:Sets verification performed on the peer’s certificate.NONENo verification is performedPEER_CERTCertificate is present and validPEER_IDENTITYIP address matches the certificate’s common name or one of its subject alternative names. This implies the certificate is also present.PEER_IDENTITY_DNSHostname matches the certificate’s common name or one of its subject alternative names. This implies the certificate is also present.Default:PEER_CERTprotocol_version:Sets the protocol version. The driver will automatically downgrade to the lowest supported protocol version.Default:acsylla.ProtocolVersion.V4oracsylla.ProtocolVersion.DSEV1when using the DSE driver with DataStax Enterprise.use_beta_protocol_version:Use the newest beta protocol version. This currently enables the use of protocol versioncyacsylla.ProtocolVersion.V5orcyacsylla.ProtocolVersion.DSEV2when using the DSE driver with DataStax Enterprise.Default:Falseconsistency:Sets default consistency level of statement.acsylla.ConsistencyDefault:LOCAL_ONEserial_consistency:Sets default serial consistency level of statement.acsylla.ConsistencyDefault:ANYqueue_size_io:Sets the size of the fixed size queue that stores pending requests.Default:8192core_connections_per_host:Sets the number of connections made to each server in each IO thread.Default:1constant_reconnect_delay_ms:Configures the cluster to use a reconnection policy that waits a constant time between each reconnection attempt. Time in milliseconds to delay attempting a reconnection; 0 to perform a reconnection immediately.Default:Not setexponential_reconnect_base_delay_ms:The base delay (in milliseconds) to use for scheduling reconnection attempts. 
Configures the cluster to use a reconnection policy that waits exponentially longer between each reconnection attempt; however will maintain a constant delay once the maximum delay is reached.Note:A random amount of jitter (+/- 15%) will be added to the pure exponential delay value. This helps to prevent situations where multiple connections are in the reconnection process at exactly the same time. The jitter will never cause the delay to be less than the base delay, or more than the max delay.Default:2000exponential_reconnect_max_delay_ms:The maximum delay to wait between two reconnection attempts. Seeexponential_reconnect_max_delay_msDefault:60000coalesce_delay_us:Sets the amount of time, in microseconds, to wait for new requests to coalesce into a single system call. This should be set to a value around the latency SLA of your application’s requests while also considering the request’s roundtrip time. Larger values should be used for throughput bound workloads and lower values should be used for latency bound workloads.Default:200usnew_request_ratio:Sets the ratio of time spent processing new requests versus handling the I/O and processing of outstanding requests. The range of this setting is 1 to 100, where larger values allocate more time to processing new requests and smaller values allocate more time to processing outstanding requests.Default:50max_schema_wait_time_ms:Sets the maximum time to wait for schema agreement after a schema change is made (e.g. creating, altering, dropping a table/keyspace/view/index etc).Default:10000millisecondstracing_max_wait_time_ms:Sets the maximum time to wait for tracing data to become available.Default:15millisecondstracing_retry_wait_time_ms:Sets the amount of time to wait between attempts to check to see if tracing is available.Default:3millisecondstracing_consistency:Sets the consistency level to use for checking to see if tracing data is available.Default:ONEload_balance_round_robin:Configures the cluster to use round-robin load balancing. The driver discovers all nodes in a cluster and cycles through them per request. All are considered local.load_balance_dc_aware:The primary data center to try first. Configures the cluster to use DC-aware load balancing. For each query, all live nodes in a primary ‘local’ DC are tried first, followed by any node from other DCs.Note:This is the default, and does not need to be called unless switching an existing from another policy or changing settings. Without further configuration, a default local_dc is chosen from the first connected contact point, and no remote hosts are considered in query plans. If relying on this mechanism, be sure to use only contact points from the local DC.token_aware_routing:Configures the cluster to use token-aware request routing or not. This routing policy composes the base routing policy, routing requests first to replicas on nodes considered ‘local’ by the base load balancing policy.Important:Token-aware routing depends on keyspace metadata. For this reason enabling token-aware routing will also enable retrieving and updating keyspace schema metadata.Default:True(enabled).token_aware_routing_shuffle_replicas:Configures token-aware routing to randomly shuffle replicas. 
This can reduce the effectiveness of server-side caching, but it can better distribute load over replicas for a given partition key.Note:Token-aware routingtoken_aware_routingmust be enabled for the setting to be applicable.Default:True(enabled).latency_aware_routing:Configures the cluster to use latency-aware request routing or not. This routing policy is a top-level routing policy. It uses the base routing policy to determine locality (dc-aware) and/or placement (token-aware) before considering the latency.Default:False(disabled).latency_aware_routing_settings:Configures the settings for latency-aware request routing. Instance ofacsylla.LatencyAwareRoutingSettingsDefault:exclusion_threshold2.0Controls how much worse the latency be compared to the average latency of the best performing node before it penalized.scale_ms100 millisecondsControls the weight given to older latencies when calculating the average latency of a node. A bigger scale will give more weight to older latency measurements.retry_period_ms10,000 millisecondsThe amount of time a node is penalized by the policy before being given a second chance when the current average latency exceeds the calculated threshold (exclusion_threshold * best_average_latency).update_rate_ms100 millisecondsThe rate at which the best average latency is recomputed.min_measured50The minimum number of measurements per-host required to be considered by the policywhitelist_hosts:Sets whitelist hosts. The first call sets the whitelist hosts and any subsequent calls appends additional hosts. Passing an empty string will clear and disable the whitelist. White space is striped from the hosts.This policy filters requests to all other policies, only allowing requests to the hosts contained in the whitelist. Any host not in the whitelist will be ignored and a connection will not be established. This policy is useful for ensuring that the driver will only connect to a predefined set of hosts.Examples: “127.0.0.1”, “127.0.0.1,127.0.0.2”blacklist_hosts:Sets blacklist hosts. The first call sets the blacklist hosts and any subsequent calls appends additional hosts. Passing an empty string will clear and disable the blacklist. White space is striped from the hosts.This policy filters requests to all other policies, only allowing requests to the hosts not contained in the blacklist. Any host in the blacklist will be ignored and a connection will not be established. This policy is useful for ensuring that the driver will not connect to a predefined set of hosts.Examples: “127.0.0.1”, “127.0.0.1,127.0.0.2”whitelist_dc:Same aswhitelist_hosts, but whitelist all hosts of a dcExamples: “dc1”, “dc1,dc2”blacklist_dc:Same asblacklist_hosts, but blacklist all hosts of a dcExamples: “dc1”, “dc1,dc2”tcp_nodelay:Enable/Disable Nagle’s algorithm on connections.Default:Truetcp_keepalive_sec:Set keep-alive delay in seconds.Default:disabledtimestamp_gen:"server_side" or "monotonic" Sets the timestamp generator used to assign timestamps to all requests unless overridden by setting the timestamp on a statement or a batch.Default:Monotonically increasing, client-side timestamp generator.heartbeat_interval_sec:Sets the amount of time between heartbeat messages and controls the amount of time the connection must be idle before sending heartbeat messages. 
This is useful for preventing intermediate network devices from dropping connections.Default:30 secondsidle_timeout_sec:Sets the amount of time a connection is allowed to be without a successful heartbeat response before being terminated and scheduled for reconnection.Default:60 secondsretry_policy:May be set todefaultorfallthroughSets the retry policy used for all requests unless overridden by setting a retry policy on a statement or a batch.defaultThis policy retries queries in the following cases:On a read timeout, if enough replicas replied but data was not received.On a write timeout, if a timeout occurs while writing the distributed batch logOn unavailable, it will move to the next hostIn all other cases the error will be returned. This policy always uses the query’s original consistency level.fallthroughThis policy never retries or ignores a server-side failure. The error is always returned.Default:defaultThis policy will retry on a read timeout if there was enough replicas, but no data present, on a write timeout if a logged batch request failed to write the batch log, and on a unavailable error it retries using a new host. In all other cases the default policy will return an error.retry_policy_logging:This policy logs the retry decision of its child policy. Logging is done using INFO level.Default:Falseuse_schema:Enable/Disable retrieving and updating schema metadata. If disabled this is allows the driver to skip over retrieving and updating schema metadata andsession.get_metadata()will always return an empty object. This can be useful for reducing the startup overhead of short-lived sessions.Default:True (enabled)hostname_resolution:Enable retrieving hostnames for IP addresses using reverse IP lookup. This is useful for authentication (Kerberos) or encryption (SSL) services that require a valid hostname for verification.Default:False (disabled)randomized_contact_points:Enable/Disable the randomization of the contact points list.Important:This setting should only be disabled for debugging or tests.Default:True (enabled)speculative_execution_policy:Enable constant speculative executions with the supplied settingsacsylla.SpeculativeExecutionPolicy.max_reusable_write_objects:Sets the maximum number of “pending write” objects that will be saved for re-use for marshalling new requests. These objects may hold on to a significant amount of memory and reducing the number of these objects may reduce memory usage of the application.The cost of reducing the value of this setting is potentially slower marshalling of requests prior to sending.Default:Max unsigned integer valueprepare_on_all_hosts:Prepare statements on all available hosts.Default:Trueno_compact:Enable the NO_COMPACT startup option. This can help facilitate uninterrupted cluster upgrades where tables using COMPACT_STORAGE will operate in “compatibility mode” for BATCH, DELETE, SELECT, and UPDATE CQL operations.Default:Falsehost_listener_callback:Sets a callback for handling host state changes in the cluster. Note: The callback is invoked only when state changes in the cluster are applicable to the configured load balancing policy(s). NOT IMPLEMENTED YETapplication_name:Set the application name. This is optional; however it provides the server with the application name that can aid in debugging issues with larger clusters where there are a lot of client (or application) connections.application_version:Set the application version. 
This is optional; however it provides the server with the application version that can aid in debugging issues with large clusters where there are a lot of client (or application) connections that may have different versions in use.client_id:Set the client id. This is optional; however it provides the server with the client ID that can aid in debugging issues with large clusters where there are a lot of client connections.Default:UUID v4 generatedmonitor_reporting_interval_sec:Sets the amount of time between monitor reporting event messages.Default:300 seconds.cloud_secure_connection_bundle:Absolute path to DBaaS credentials file.Sets the secure connection bundle path for processing DBaaS credentials. This will pre-configure a cluster using the credentials format provided by the DBaaS cloud provider.Note:contact_pointsandssl_enableshould not used in conjunction withcloud_secure_connection_bundle.Example:"/path/to/secure-connect-database_name.zip"Default:Nonedse_gssapi_authenticator:Enables GSSAPI authentication for DSE clusters secured with the DseAuthenticator. Instance ofacsylla.DseGssapiAuthenticatordse_gssapi_authenticator_proxy:Enables GSSAPI authentication with proxy authorization for DSE clusters secured with the DseAuthenticator. Instance ofacsylla.DseGssapiAuthenticatorProxydse_plaintext_authenticator:Enables plaintext authentication for DSE clusters secured with the DseAuthenticator. Instance ofacsylla.DsePlaintextAuthenticatordse_plaintext_authenticator_proxy:Enables plaintext authentication with proxy authorization for DSE clusters secured with the DseAuthenticator. Instance ofacsylla.DsePlaintextAuthenticatorProxyConfiguration methodsFor full list of methods to configureClusterseebase.pySessionA session object is used to execute queries and maintains cluster state through the control connection. The control connection is used to auto-discover nodes and monitor cluster changes (topology and schema). Each session also maintains multiple pools of connections to cluster nodes which are used to query the cluster.importacsyllacluster=acsylla.create_cluster(['localhost'])session=awaitcluster.create_session(keyspace="acsylla")Methods ofSessionobjectasync def close(self):Closes the session instance, outputs a close future which can be used to determine when the session has been terminated. This allows in-flight requests to finish.async def set_keyspace(self, keyspace: str) -> "Result":Sets the keyspace for sessiondef get_client_id(self) -> str:Get the client id.def get_metadata(self):ReturnsMetadatainstance class for retrieving metadata from cluster.async def create_prepared(self, statement: str, timeout: Optional[float] = None) -> PreparedStatement:Create a prepared statement.By providing atimeoutall requests built by the prepared statement will use it, otherwise timeout provided during theClusterinstantantation will be used. Value expected is seconds.async def execute(self, statement: "Statement") -> ResultExecutes an statement and returns theResultinstance.async def execute_batch(self, batch: Batch) -> Result:Executes a batch of statements.def metrics(self) -> SessionMetrics:Returns the metrics related to the session.def speculative_execution_metrics(self) -> SpeculativeExecutionMetrics:Returns speculative execution performance metrics gathered by the driver.StatementA statement object is an executable query. It represents either a regular (adhoc) statement or a prepared statement. 
It maintains the queries’ parameter values along with query options (consistency level, paging state, etc.)Methods ofStatementobjectdef add_key_index(self, index: int) -> None:Adds a key index specifier to this a statement. When using token-aware routing, this can be used to tell the driver which parameters within a non-prepared, parameterized statement are part of the partition key.Use consecutive calls for composite partition keys.This is not necessary for prepared statements, as the key parameters are determined in the metadata processed in the prepare phase.def reset_parameters(self, count: int) -> None:Clear and/or resize the statement’s parameters.def bind(self, index: int, value: SupportedType) -> None:Binds the value to a specific index parameter.If an invalid type is used for a prepared statement this will raise immediately an error. If a none prepared exception is used error will be raised later during the execution statement.If an invalid index is used this will raise immediately an errordef bind_by_name(self, name: str, value: SupportedType) -> None:Binds the the value to a specific parameter by name.If an invalid type is used for this will raise immediately an error. If an invalid name is used this will raise immediately an errordef bind_list(self, values: Sequence[SupportedType]) -> None:Binds the values into all parameters from left to right.For types supported and errors that this function might raise take a look at theStatement.bindfunction.def bind_dict(self, values: Mapping[str, SupportedType]) -> None:Binds the values into all parameter names. Names are the keys of the mapping provided. For types supported and errors that this function might raise takelook at theStatement.bind_dictfunction. Note: This method are only allowed for statements created using prepared statementsdef set_page_size(self, page_size: int) -> None:Sets the statement's page size.def set_page_state(self, page_state: bytes) -> None:Sets the statement's paging state. This can be used to get the next page of data in a multi-page query.Warning:The paging state should not be exposed to or come from untrusted environments. The paging state could be spoofed and potentially used to gain access to other data.def set_timeout(self, timeout: float) -> None:Sets the statement's timeout in seconds for waiting for a response from a node.Default:Disabled (use the cluster-level request timeout)def set_consistency(self, timeout: float) -> None:Sets the statement’s consistency level.Default:LOCAL_ONEdef set_serial_consistency(self, timeout: float) -> None:Sets the statement’s serial consistency level.Default:Not setdef set_timestamp(self, timestamp: int):Sets the statement’s timestamp.def set_is_idempotent(self, is_idempotent: bool):Sets whether the statement is idempotent. Idempotent statements are able to be automatically retried after timeouts/errors and can be speculatively executed.def set_retry_policy(self, retry_policy: str, retry_policy_logging: bool = False):Sets the statement’s retry policy.May be set todefaultorfallthroughdefaultThis policy retries queries in the following cases:On a read timeout, if enough replicas replied but data was not received.On a write timeout, if a timeout occurs while writing the distributed batch logOn unavailable, it will move to the next hostIn all other cases the error will be returned. This policy always uses the query’s original consistency level.fallthroughThis policy never retries or ignores a server-side failure. 
## PreparedStatement

A statement that has been prepared cluster-side (it has been pre-parsed and cached).

### Methods of `PreparedStatement` object

Use the `session.create_prepared()` coroutine for creating a new instance of `PreparedStatement`.

```python
prepared = await session.create_prepared("SELECT id, value FROM test")
statement = prepared.bind(page_size=10)
```

- `def bind(self, page_size: Optional[int] = None, page_state: Optional[bytes] = None, execution_profile: Optional[str] = None) -> Statement`: Returns a new `Statement` built from the prepared statement.
- `def set_execution_profile(self, statement: Statement, name: str) -> None`: Sets the execution profile to execute the statement with. Note: An empty string will clear the execution profile from the statement.

## Batch

A group of statements that are executed as a single batch.

### Methods of `Batch` object

Use the `acsylla.create_batch_logged()`, `acsylla.create_batch_unlogged()` and `acsylla.create_batch_counter()` factories for creating a new instance.

- `def set_consistency(self, consistency: int)`: Sets the batch's consistency level.
- `def set_serial_consistency(self, consistency: int)`: Sets the batch's serial consistency level.
- `def set_timestamp(self, timestamp: int)`: Sets the batch's timestamp.
- `def set_request_timeout(self, timeout_ms: int)`: Sets the batch's timeout for waiting for a response from a node. Default: Disabled (use the cluster-level request timeout)
- `def set_is_idempotent(self, is_idempotent)`: Sets whether the statements in a batch are idempotent. Idempotent batches are able to be automatically retried after timeouts/errors and can be speculatively executed.
- `def set_retry_policy(self, retry_policy: str, retry_policy_logging: bool = False)`: Sets the batch's retry policy. May be set to `default` or `fallthrough`.
  - `default`: This policy retries queries in the following cases: on a read timeout, if enough replicas replied but data was not received; on a write timeout, if a timeout occurs while writing the distributed batch log; on unavailable, it will move to the next host. In all other cases the error will be returned. This policy always uses the query's original consistency level.
  - `fallthrough`: This policy never retries or ignores a server-side failure. The error is always returned.
  - Default: `default` (it will retry on a read timeout if there were enough replicas but no data present, on a write timeout if a logged batch request failed to write the batch log, and on an unavailable error it retries using a new host; in all other cases it returns the error).
  - `retry_policy_logging`: If set to `True`, this policy logs the retry decision of its child policy. Logging is done using the `INFO` level. Default: `False`
- `def set_tracing(self, enabled: bool)`: Sets whether the batch should use tracing.
- `def add_statement(self, statement: Statement) -> None`: Adds a new statement to the batch.
- `def set_execution_profile(self, name: str) -> None`: Sets the execution profile to execute the statement with. Note: An empty string will clear the execution profile from the statement.
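A minimal sketch of building and executing a logged batch with the methods above (not part of the original examples; it assumes the `test` table used elsewhere in this README):

```python
import asyncio
import acsylla


async def main():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    # A logged batch goes through the batch log, so the grouped
    # statements are applied all-or-nothing.
    batch = acsylla.create_batch_logged()
    batch.add_statement(
        acsylla.create_statement("INSERT INTO test (id, value) VALUES (1, 1)"))
    batch.add_statement(
        acsylla.create_statement("INSERT INTO test (id, value) VALUES (2, 2)"))
    await session.execute_batch(batch)
    await session.close()

asyncio.run(main())
```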
## Result

The result of a query.

### Methods of `Result` object

Use the `session.execute()` coroutine to get the result of a query.

- `def count(self) -> int`: Returns the total number of rows in the result.
- `def column_count(self) -> int`: Returns the total number of columns returned.
- `def columns_names(self)`: Returns the column names.
- `def first(self) -> Optional["Row"]`: Returns the first row, or None if there are no rows.
- `def all(self) -> Iterable["Row"]`: Returns all rows of the result as an iterator. If there are no rows, the iterator yields nothing.
- `def has_more_pages(self) -> bool`: Returns True if there are still pages to be fetched.
- `def page_state(self) -> bytes`: Returns a token with the page state for continuing to fetch new results. Before calling this method, first check whether there are more results using the `has_more_pages` function; if there are, use the token returned by this function as an argument to the statement factories to fetch the next page.

## Row

A collection of column values.

### Methods of `Row` object

Provides access to a row of a `Result`.

```python
result = await session.execute(statement)
for row in result:
    print(row.as_dict())
```

- `def as_dict(self) -> dict`: Returns the row as a dict.
- `def as_list(self) -> list`: Returns the row as a list.
- `def as_tuple(self) -> tuple`: Returns the row as a tuple.
- `def as_named_tuple(self) -> tuple`: Returns the row as a named tuple.
- `def column_count(self) -> int`: Returns the column count.
- `def column_value(self, name: str) -> SupportedType`: Returns the value of the column called `name`. Raises a `CassException`-derived exception if the column can not be found. The type is inferred by the Cassandra driver and converted, if supported, to a Python type or one of the extended types provided by Acsylla.
- `def column_value_by_index(self, index)`: Returns the column value by column index. Raises an exception if the column can not be found.
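The different `Row` accessors above can be mixed freely. A small sketch (not part of the original examples; it assumes the `test` table used elsewhere in this README):

```python
import asyncio
import acsylla


async def main():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    result = await session.execute(
        acsylla.create_statement("SELECT id, value FROM test WHERE id=100"))
    row = result.first()
    if row is not None:
        print(row.as_dict())                 # e.g. {'id': 100, 'value': ...}
        print(row.as_named_tuple())          # the same row as a named tuple
        print(row.column_value_by_index(1))  # value of the second column
    await session.close()

asyncio.run(main())
```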
## Examples

The driver includes several examples in the `examples` directory.

### Basic usage

The following snippet shows the minimum needed to create a new `Session` object for the keyspace `acsylla` and then perform a query that reads a set of rows. For more info see `base.py` and `factories.py`. Acsylla supports all native datatypes, including `Collections` and `UDT`.

```python
import asyncio
import acsylla


async def main():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    statement = acsylla.create_statement("SELECT id, value FROM test WHERE id=100")
    result = await session.execute(statement)
    row = result.first()
    value = row.column_value("value")
    await session.close()

asyncio.run(main())
```

### Binding Parameters

The '?' marker is used to denote the bind variables in a query string. This can be used for both regular and prepared parameterized queries.

#### Non Prepared Statement

In addition to adding the bind marker to your query string, your application must also provide the number of bind variables to `acsylla.create_statement()` via the `parameters` kwarg when constructing a new statement.

```python
import asyncio
import acsylla


async def bind_by_index():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    statement = acsylla.create_statement(
        "INSERT INTO test (id, value) VALUES (?, ?)", parameters=2)
    statement.bind(0, 1)
    statement.bind(1, 1)
    await session.execute(statement)


async def bind_list():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    statement = acsylla.create_statement(
        "INSERT INTO test (id, value) VALUES (?, ?)", parameters=2)
    statement.bind_list([1, 1])
    await session.execute(statement)

asyncio.run(bind_by_index())
asyncio.run(bind_list())
```

#### Prepared Statement

Bind variables can be bound by the marker's index or by name and must be supplied for all bound variables.

```python
import asyncio
import acsylla


async def bind_by_index():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    prepared = await session.create_prepared(
        "INSERT INTO test (id, value) VALUES (?, ?)")
    statement = prepared.bind()
    statement.bind(0, 1)
    statement.bind(1, 1)
    await session.execute(statement)


async def bind_by_name():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    prepared = await session.create_prepared(
        "INSERT INTO test (id, value) VALUES (?, ?)")
    statement = prepared.bind()
    statement.bind_by_name("id", 1)
    statement.bind_by_name("value", 1)
    await session.execute(statement)


async def bind_list():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    prepared = await session.create_prepared(
        "INSERT INTO test (id, value) VALUES (?, ?)")
    statement = prepared.bind()
    statement.bind_list([0, 1])
    await session.execute(statement)


async def bind_dict():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    prepared = await session.create_prepared(
        "INSERT INTO test (id, value) VALUES (?, ?)")
    statement = prepared.bind()
    statement.bind_dict({'id': 1, 'value': 1})
    await session.execute(statement)


async def bind_named_parameters():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    prepared = await session.create_prepared(
        "INSERT INTO test (id, value) VALUES (:test_id, :test_value)")
    statement = prepared.bind()
    statement.bind_dict({'test_id': 1, 'test_value': 1})
    await session.execute(statement)

asyncio.run(bind_by_index())
asyncio.run(bind_by_name())
asyncio.run(bind_list())
asyncio.run(bind_dict())
asyncio.run(bind_named_parameters())
```

### Use prepared statement and paging

```python
import asyncio
import acsylla


async def main():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    prepared = await session.create_prepared("SELECT id, value FROM test")
    statement = prepared.bind(page_size=10)
    while True:
        result = await session.execute(statement)
        print(result.columns_names())  # ['id', 'value']
        for row in result:
            print(dict(row))       # {'id': 1, 'value': 'test'}
            print(list(row))       # [('id', 1), ('value', 'test')]
            print(row.as_list())   # [1, 'test']
            print(row.as_tuple())  # (1, 'test')
        if result.has_more_pages():
            statement.set_page_size(100)  # you can change statement settings on the fly
            statement.set_page_state(result.page_state())
        else:
            break

asyncio.run(main())
```
### Example of paging results with an async generator

```python
import asyncio
import acsylla


class AsyncResultGenerator:
    def __init__(self, session, statement):
        self.session = session
        self.statement = statement

    async def __aiter__(self):
        result = await self.session.execute(self.statement)
        while True:
            if result.has_more_pages():
                self.statement.set_page_state(result.page_state())
                future_result = asyncio.create_task(
                    self.session.execute(self.statement))
                await asyncio.sleep(0)
            else:
                future_result = None
            for row in result:
                yield dict(row)
            if future_result is not None:
                result = await future_result
            else:
                break


def find(session, statement):
    return AsyncResultGenerator(session, statement)


async def main():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    prepared = await session.create_prepared("SELECT id, value FROM test")
    statement = prepared.bind(page_size=10)
    async for res in find(session, statement):
        print(res)

asyncio.run(main())
```

### Configure a shard-aware connection to a ScyllaDB cluster

```python
import acsylla

cluster = acsylla.create_cluster(
    ['node1', 'node2', 'node3'],
    port=19042,                   # default: 9042
    core_connections_per_host=8,  # default: 1
    local_port_range_min=49152,   # default: 49152
    local_port_range_max=65535    # default: 65535
)
```

### SSL Example

```python
import acsylla

with open('./certs/client.cert.pem') as f:
    ssl_cert = f.read()
with open('./certs/client.key.pem') as f:
    ssl_private_key = f.read()
with open('./certs/trusted.cert.pem') as f:
    ssl_trusted_cert = f.read()

cluster = acsylla.create_cluster(
    ['localhost'],
    ssl_enabled=True,
    ssl_cert=ssl_cert,
    ssl_private_key=ssl_private_key,
    ssl_trusted_cert=ssl_trusted_cert,
    ssl_verify_flags=acsylla.SSLVerifyFlags.PEER_IDENTITY
)
```

### Retrieving metadata

```python
import asyncio
import acsylla


async def main():
    cluster = acsylla.create_cluster(['localhost'])
    session = await cluster.create_session(keyspace="acsylla")
    metadata = session.get_metadata()
    for keyspace in metadata.get_keyspaces():
        keyspace_metadata = metadata.get_keyspace_meta(keyspace)
        print('\n\n'.join(keyspace_metadata.as_cql_query(formatted=True)))
    await session.close()

asyncio.run(main())
```

### Configure logging

#### Set log level

Available levels:

- disabled
- critical
- error
- warn
- info
- debug
- trace

```python
import logging
import asyncio
import acsylla

logging.basicConfig(format="[%(levelname)1.1s %(asctime)s] (%(name)s) %(message)s")


async def main():
    cluster = acsylla.create_cluster(['localhost'], log_level='trace')
    session = await cluster.create_session(keyspace="acsylla")
    cluster.set_log_level('info')
    await session.close()

asyncio.run(main())
```

#### Set a callback to capture log messages

```python
import asyncio
import acsylla


def on_log_message(msg):
    print(msg.time_ms, msg.log_level, msg.file, msg.line, msg.function, msg.message)


async def main():
    cluster = acsylla.create_cluster(
        ['localhost'],
        log_level='debug',
        logging_callback=on_log_message)
    session = await cluster.create_session(keyspace="acsylla")
    await session.close()

asyncio.run(main())
```

### Execution profiles

```python
import asyncio
import acsylla


async def main():
    cluster = acsylla.create_cluster(['localhost'])
    cluster.create_execution_profile(
        'test_profile',
        request_timeout=200,
        load_balance_round_robin=True,
        whitelist_hosts='localhost',
        retry_policy='default',
        retry_policy_logging=True,
    )
    session = await cluster.create_session(keyspace="acsylla")

    # For statement
    statement = acsylla.create_statement(
        "SELECT id, value FROM test WHERE id=100",
        execution_profile="test_profile")
    # or
    statement.set_execution_profile('test_profile')
    await session.execute(statement)

    # For prepared statement
    prepared = await session.create_prepared("SELECT id, value FROM test")
    statement = prepared.bind(execution_profile='test_profile')
    # or
    statement.set_execution_profile('test_profile')
    await session.execute(statement)

    # For batch
    batch = acsylla.create_batch(execution_profile="test_profile")
    # or
    batch.set_execution_profile("test_profile")

    await session.close()

asyncio.run(main())
```
### Tracing

```python
import acsylla
import asyncio


async def print_tracing_result(session, tracing_id):
    print('*' * 10, tracing_id, '*' * 10)
    statement = acsylla.create_statement(
        "SELECT * FROM system_traces.sessions WHERE session_id = ?", 1)
    statement.bind(0, tracing_id)
    result = await session.execute(statement)
    for row in result:
        print("\n".join([f"\033[1m{k}:\033[0m{v}" for k, v in list(row)]))


async def tracing_example():
    cluster = acsylla.create_cluster(["localhost"])
    session = await cluster.create_session()

    # Statement tracing
    statement = acsylla.create_statement("SELECT release_version FROM system.local")
    statement.set_tracing(True)
    result = await session.execute(statement)
    await print_tracing_result(session, result.tracing_id)

    # Batch tracing
    batch_statement1 = acsylla.create_statement(
        "INSERT INTO acsylla.test (id, value) VALUES (1, 1)")
    batch_statement2 = acsylla.create_statement(
        "INSERT INTO acsylla.test (id, value) VALUES (2, 2)")
    batch = acsylla.create_batch_logged()
    batch.add_statement(batch_statement1)
    batch.add_statement(batch_statement2)
    batch.set_tracing(True)
    result = await session.execute_batch(batch)
    await print_tracing_result(session, result.tracing_id)

asyncio.run(tracing_example())
```

## Developing

For developing you must clone the repository and first compile the CPP Cassandra driver. Please follow the instructions for installing any dependency that you would need for compiling the driver.

NOTE: The driver depends on `libuv` and `openssl`. To install on Mac OS X, do `brew install libuv` and `brew install openssl` respectively. Additionally, you may need to export the openssl lib locations: `export LDFLAGS="-L/usr/local/opt/openssl/lib"` and `export CPPFLAGS="-I/usr/local/opt/openssl/include"`.

```shell
git clone git@github.com:acsylla/acsylla.git
make install-driver
```

Set up the environment and compile the package using the following commands:

```shell
python -m venv venv
source venv/bin/activate
make compile
make install-dev
```

And finally run the tests:

```shell
make cert
docker-compose up -d
make test
```