zeep-jwh
Failed to fetch description. HTTP Status Code: 404
zeepoptesting
Failed to fetch description. HTTP Status Code: 404
zeep-roboticia
A fast and modern Python SOAP client.
Highlights: compatible with Python 2.7, 3.5, 3.6, 3.7 and PyPy; built on top of lxml and requests; support for SOAP 1.1, SOAP 1.2 and HTTP bindings; support for WS-Addressing headers; support for WSSE (UserNameToken / x.509 signing); support for tornado async transport via gen.coroutine (Python 2.7+); support for asyncio via aiohttp (Python 3.5+); experimental support for XOP messages.
Please see the documentation at http://docs.python-zeep.org/ for more information.
Installation:
pip install zeep
Usage:
from zeep import Client
client = Client('tests/wsdl_files/example.rst')
client.service.ping()
To quickly inspect a WSDL file use:
python -m zeep <url-to-wsdl>
Please see the documentation at http://docs.python-zeep.org for more information.
Support: If you want to report a bug then please first read http://docs.python-zeep.org/en/master/reporting_bugs.html. Please only report bugs, and not support requests, to the GitHub issue tracker.
zeep.sms
Zeep Mobile offers sending and receiving SMS messages for free from any web app. With just a few lines of code, our API lets your web app communicate with your users over SMS. Zeep Mobile makes it easy to: manage your users' mobile content subscriptions; send and receive SMS content.
Why go with Zeep? We looked far and wide and couldn't find an easy way to get our apps to send and receive SMS content, so we made one! You're going to love it because: it's easy to implement; there are no volume restrictions; it's absolutely free!
Install:
$ easy_install zeep.sms
Using: To send free SMS using Python, visit http://zeepmobile.com and follow the Getting Started Guide (http://www.zeepmobile.com/developers/getting_started/). You will need to generate your API and secret keys. With those in hand you can send a text message to your users once they have subscribed, using this code:
>>> import zeep.sms
>>> connection = zeep.connect('<api-key>', '<secret-key>')
>>> connection.send_message('user-id', "Honey, I'm home!")
zeep-yandex
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us at [email protected]@yandex-team.ru
zeero
No description available on PyPI.
zeetils
No description available on PyPI.
zeetoo
zeetoo: a collection of various Python scripts created as a help in everyday work in Team II IChO PAS.
Contents: Getting Started; Running Scripts (Command Line Interface, Python API, Graphical User Interface); Description of Modules; Requirements; License & Disclaimer; Changelog.
Modules:
- analystex: converts plain text compound analyses to LaTeX
- backuper: simple automated backup tool for Windows
- confsearch: find conformers of a given molecule using RDKit
- fixgvmol: correct .mol files created with GaussView software
- getcdx: extract all ChemDraw files embedded in a .docx file
- gofproc: simple script for processing Gaussian output files
- sdf_to_gjf: save molecules from an .sdf file as separate .gjf files
Getting Started: To use this collection of scripts you will need a Python 3 interpreter. You can download an installer of the latest version from python.org (a shortcut to a direct download for Windows: Python 3.7.4 Windows x86 executable installer). The easiest way to get zeetoo up and running is to run pip install zeetoo in the command line (*). Alternatively, you can download this package as a zip file using the 'Clone or download' button on this site, unzip the package, and from the resulting directory run python setup.py install in the command line (*). And that's it, you're ready to go!
(*) On Windows you can reach the command line by right-clicking inside the directory while holding Shift and then choosing "Open PowerShell window here" or "Open command window here".
Running Scripts - Command Line Interface: All zeetoo functionality is available from the command line. After installation of the package, each module can be accessed with zeetoo [module_name] [parameters]. Run zeetoo --help to see available modules, or zeetoo [module_name] --help to see the help page for a specific module.
Python API: Modules contained in zeetoo may also be used directly from Python. This section will be supplemented with details on this topic soon.
Graphical User Interface: A simple graphical user interface (GUI) is available for the backuper script; please refer to the backuper section for details. GUIs for other modules will probably be available in the near future.
Description of Modules.
analystex: Parses a plain text file containing a list of analyses of chemical compounds and saves it in LaTeX format. The resulting LaTeX code requires the siunitx package and the chemmacros package with the 'spectroscopy' option. The input file must contain the following data entries in the specified order, one entry (type of analysis) per line:
- compound's ID
- compound's name
- yield (Yield: XX%, compound's form)
- 1H NMR (1H NMR ( MHz, <solvent_as_formula>) δ = <shift/range> (, H), comma separated)
- 13C NMR (13C NMR ( MHz, <solvent_as_formula>) δ = , comma separated)
- IR peaks (IR () cm-1, comma separated)
- HRMS (HRMS () for : found – , calculated – )
- specific rotation (α = (c = , solv. ))
- melting point (m.p. – ) (optional)
Data sets should be separated by an empty line.
backuper: A simple Python script for scheduling and running automated backups.
Essentially, it copies specified files and directories to a specified location, with regard to the date of last modification of both the source file and the existing copy:
- if the source file is newer than the backup version, the latter will be overridden;
- if both files have the same last modification time, the file will not be copied;
- if the backup version is newer, it will be renamed to "oldname_last-modification-time" and the source file will be copied, preserving both versions (see the sketch at the end of this section).
After creating a specification for a backup job (that is, specifying the backup destination and the files that should be copied; this information is stored in an .ini file), it may be run manually or scheduled. Scheduling is currently available only on Windows, as it uses the built-in Windows task scheduler. It is important to remember that this is not version control software: only the most recently copied version is stored. A minimal graphical user interface for this script is available (see below).
Graphical user interface for the backuper module: To start up the graphical user interface (GUI), run zeetoo backuper_gui in the command line. If you've downloaded the whole package manually, you may also double-click the start_gui.bat file. A window similar to the one below should appear. The elements of this window are, in order:
1. This field shows the path to the backup's main directory. Files will be copied there. You can change this field directly or by clicking 2.
2. Choose the backup destination directory using a graphical interface.
3. All files and directories that are meant to be backed up are listed here. It will be called 'source' from now on. For more details read 4-7.
4. Add a file or files to source. Files will be shown in 3 as a line without a slash character at the end. Each file will be copied to a directory with the same name as the directory it is located in; in the example shown above it would be 'x:\path_to\backup\destination\some_important\text_file.text'.
5. Add a directory to source. Directories will be shown in 3 as a line with a slash character at the end. All files (but not subdirectories!) present in this directory will be copied to a directory with the same name.
6. Add a directory tree to source. Trees will be shown in 3 as a line with slash and star characters at the end. The whole content of the chosen directory will be copied, including all files and subdirectories.
7. Remove the selected path from source.
8. All files and directories marked as ignored are shown here. Ignored files and directories won't be copied during backup, even if they are inside a source directory or tree, or listed as source.
9. Add a file or files to ignored.
10. Add a directory to ignored.
11. Remove the selected item from the list of ignored files and directories.
12. Set how often the backup should be run (once a day, once a week or once a month) and at what time.
13. Schedule the backup task according to the specified guidelines. WARNING: this will also automatically save the configuration file.
14. Remove a previously scheduled backup task.
15. Run the backup task now, according to the specified guidelines; the configuration does not need to be saved to a file first.
16. Load configuration from a specified file.
17. Save the configuration.
The configuration is stored in the [User]/AppData/Local/zeetoo/backuper/config.ini file. After scheduling a backup task this file should not be moved. It may be modified, though; the backup task will then follow the modified guidelines. Scheduling a new backup task, even using a different configuration file, will override the previous task, unless task_name in the configuration file is specifically changed.
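A minimal Python sketch of the backup decision rule described above, assuming plain file copies and mtime comparison; the function name and the timestamp suffix format are illustrative, not taken from zeetoo's source:

import shutil
from datetime import datetime
from pathlib import Path

def backup_file(source: Path, backup: Path) -> None:
    """Copy `source` to `backup` following the rule described above."""
    if not backup.exists():
        shutil.copy2(source, backup)  # no existing copy: just create it
        return
    src_mtime = source.stat().st_mtime
    bak_mtime = backup.stat().st_mtime
    if src_mtime > bak_mtime:
        shutil.copy2(source, backup)  # source is newer: override the copy
    elif src_mtime == bak_mtime:
        pass  # same last modification time: nothing to do
    else:
        # backup is newer: keep it under "oldname_last-modification-time",
        # then copy the source, preserving both versions
        stamp = datetime.fromtimestamp(bak_mtime).strftime("%Y%m%d%H%M%S")
        backup.rename(backup.with_name(f"{backup.stem}_{stamp}{backup.suffix}"))
        shutil.copy2(source, backup)

Note that shutil.copy2 preserves the modification time, which is what makes the equal-mtime check meaningful on subsequent runs.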
confsearch: Performs a conformational search on a set of given molecules. Takes a .mol file (or a number of them) as input and saves a list of generated conformers to a specified .sdf file. Some restrictions on this process may be given: a number of conformers to generate, a minimum RMSD value, a maximum energy difference, a maximum number of optimization cycles, and a set of constraints for force field optimization.
fixgvmol: .mol files created with GaussView (GV5 at least) lack some information, namely a mol version and an END line. Without it, some programs might not be able to read such files. This script adds these pieces of information to .mol files missing them.
getcdx: Extracts all embedded ChemDraw files from a .docx document and saves them in a separate directory (which may be specified by the user), using the in-text description of the schemes/drawings as file names. It may be specified whether the description of the scheme/drawing is located above or underneath it (the former is the default). Finally, it may be specified how long the filename should be.
gofproc: Extracts information about molecule energy and imaginary frequencies from a given set of Gaussian output files with a freq job performed. Extracted data may be written to the terminal (stdout) or to a specified .xlsx file (which must not be opened in other programs), either at the end of the file or appended to a row, based on the name of the file parsed. Calculations that did not converge are reported separately.
sdf_to_gjf: Writes molecules contained in an .sdf file to a set of .gjf files in accordance with the guidelines given by the user.
Requirements: the getcdx module requires the olefile package; the gofproc module requires the openpyxl package; the confsearch module requires the RDKit software. Please note that RDKit will not be installed automatically with this package. The recommended way to get the RDKit software is through the Anaconda Python distribution. Please refer to the RDKit documentation for more information.
License & Disclaimer: See the LICENSE.txt file for license rights and limitations (MIT).
Changelog.
v.0.1.4: added the analystex script; fixed the "run now" function in backuper's GUI.
v.0.1.3: fixed sdf_to_gjf ignoring the parameters "charge" and "multiplicity"; supplemented sdf_to_gjf default values and help message; fixed a typo in the sdf_to_gjf CLI ("sufix" -> "suffix"); enabled specifying coordinate precision in sdf_to_gjf; enhanced handling of link0 commands by sdf_to_gjf; removed filtering of explicitly provided non-.mol files in fixgvmol.
v.0.1.2: getcdx now changes characters forbidden in file names to "-" instead of raising an exception; start_gui.bat should now work regardless of its location.
v.0.1.1: fixed import errors when run as a module.
v.0.1.0: initial release.
zeev-test2
no long description given
zef
A data-oriented toolkit for graph data: versioned graphs + streams + query using Python + GraphQL. Docs | Blog | Chat | ZefHub.
Description: Zef is an open source, data-oriented toolkit for graph data. It combines the access speed and local development experience of an in-memory data structure with the power of a fully versioned, immutable database (and distributed persistence if needed, with ZefHub). Furthermore, Zef includes a library of composable functional operators, effects handling, and native GraphQL support. You can pick and choose what you need for your project.
If any of these apply to you, Zef might help: I need a graph database with fast query speeds and hassle-free infra; I need a graph data model that's more powerful than NetworkX but easier than Neo4j; I need to "time travel" and access past states easily; I like Datomic but prefer something open source that feels like working with local data structures; I would prefer querying and traversing directly in Python, rather than a query language (like Cypher or GSQL); I need a GraphQL API that's easy to spin up and close to my data model.
Features: a graph language you can use directly in Python code; fully versioned graphs; in-memory access speeds; free and real-time data persistence (via ZefHub); work with graphs like local data structures; no separate query language; no ORM; GraphQL API with low impedance mismatch to the data model; data streams and subscriptions.
Status: Zef is currently in Public Alpha. Private Alpha: testing Zef internally and with a closed group of users. Public Alpha: anyone can use Zef, but please be patient with very large graphs! Public Beta: stable enough for most non-enterprise use cases. Public: stable for all production use cases.
Installation: The platforms we currently support are 64-bit Linux and MacOS. The latest version can be installed via the PyPI repository using:
pip install zef
This will attempt to install a wheel if supported by your system, and compile from source otherwise. See INSTALL for more details if compiling from source. Check out our installation doc for more details about getting up and running once installed.
Using Zef: Here are some quick points to get going.
Check out our Quick Start and docs for more details. A quick note: in Zef, we overloaded the "|" pipe so users can chain together values, Zef operators (ZefOps), and functions in sequential, lazy, and executable pipelines where data flow is left to right.
💆 Get started 💆
from zef import *      # these imports unlock user-friendly syntax and powerful Zef operators (ZefOps)
from zef.ops import *
g = Graph()            # create an empty graph
🌱 Add some data 🌱
p1 = ET.Person | g | run                 # add an entity to the graph
(p1, RT.FirstName, "Yolandi") | g | run  # add "fields" via relation triples: (source, relation, target)
🐾 Traverse the graph 🐾
p1 | Out[RT.FirstName]      # one hop: step onto the relation
p1 | out_rel[RT.FirstName]  # two hops: step onto the target
⏳ Time travel ⌛
p1 | time_travel[-2]                                        # move reference frame back two time slices
p1 | time_travel[Time('2021 December 4 15:31:00 (+0100)')]  # move to a specific date and time
👐 Share with other users (via ZefHub) 👐
g | sync[True] | run          # save and sync all future changes on ZefHub
# ---------------- Python Session A (You) -----------------
g | uid | to_clipboard | run  # copy uid onto local clipboard
# ---------------- Python Session B (Friend) -----------------
graph_uid: str = '...'        # uid copied from Slack/WhatsApp/email/etc
g = Graph(graph_uid)
g | now | all[ET] | collect   # see all entities in the latest time slice
🚣 Choose your own adventure 🚣: Basic tutorial of Zef; Build a Wordle clone with Zef; Import data from CSV; Import data from NetworkX; Set up a GraphQL API; Use Zef graphs in NetworkX.
📌 A note on ZefHub 📌 Zef is designed so you can use it locally and drop it into any existing project. You have the option of syncing your graphs with ZefHub, a service that persists, syncs, and distributes graphs automatically (and the company behind Zef). ZefHub makes it possible to share graphs with other users and see changes live, by memory mapping across machines in real time! You can create a ZefHub account for free, which gives you full access to storing and sharing graphs forever. For full transparency, our long-term hope is that many users will get value from Zef or Zef + ZefHub for free, while ZefHub power users will pay a fee for added features and services.
Roadmap: We want to make it incredibly easy for developers to build fully distributed, reactive systems with consistent data and cross-language (Python, C++, Julia) support. If there's sufficient interest, we'd be happy to share a public board of items we're working on.
Contributing: Thank you for considering contributing to Zef! We know your time is valuable and your input makes Zef better for all current and future users. To optimize for feedback speed, please raise bugs or suggest features directly in our community chat: https://zef.chat. Please refer to our CONTRIBUTING file and CODE_OF_CONDUCT file for more details.
License: Zef is licensed under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Dependencies: The compiled libraries make use of the following packages: asio (https://github.com/chriskohlhoff/asio); JWT++ (https://github.com/Thalhammer/jwt-cpp); Curl (https://github.com/curl/curl); JSON (https://github.com/nlohmann/json); Parallel hashmap (https://github.com/greg7mdp/parallel-hashmap); Ranges-v3 (https://github.com/ericniebler/range-v3); Websocket++ (https://github.com/zaphoyd/websocketpp); Zstandard (https://github.com/facebook/zstd); pybind11 (https://github.com/pybind/pybind11); pybind_json (https://github.com/pybind/pybind11_json).
zefactor
Python Zefactor. zefactor is a library for renaming tokens in a project.
Zefactor features: find and replace tokens in a directory; detect similar tokens in alternate casings and replace them with the proper casing; pure Python.
Installation:
pip install zefactor
Guided Usage: Run zefactor without any arguments and it will provide a set of guided prompts to refactor a project.
$ zefactor
Getting Started: The guided mode can be skipped with the -y option. A simple example is below:
# Replace all text 'red' with 'blue'.
# This will automatically include replacing 'RED' with 'BLUE' and 'Red' with 'Blue'
$ zefactor -f red -r blue -y
# Replace all text 'red rocket' with 'blue ship'
# This will automatically include replacing 'RedRocket' with 'BlueShip', 'RED ROCKET' with 'BLUE SHIP' and many alternatives
$ zefactor -f red -f rocket -r blue -r ship
Usage: The Zefactor CLI helps refactor projects. It will find and replace text and all variations of casing of that text.
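The case-variant behavior described above can be pictured with a short sketch. This illustrates the general technique, not zefactor's actual implementation:

def case_variants(find: list, replace: list) -> dict:
    """Map common casings of a multi-word token to the same casing of its replacement."""
    return {
        # lower with spaces: 'red rocket' -> 'blue ship'
        " ".join(find): " ".join(replace),
        # UPPER with spaces: 'RED ROCKET' -> 'BLUE SHIP'
        " ".join(w.upper() for w in find): " ".join(w.upper() for w in replace),
        # CamelCase: 'RedRocket' -> 'BlueShip'
        "".join(w.capitalize() for w in find): "".join(w.capitalize() for w in replace),
        # snake_case: 'red_rocket' -> 'blue_ship'
        "_".join(find): "_".join(replace),
    }

print(case_variants(["red", "rocket"], ["blue", "ship"]))

Each generated variant is then applied as an ordinary find-and-replace pass over the directory.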
zeff
Zeff: an effective nuclear charge calculator.
Introduction: If you are a chemistry teacher or student, chances are that you are going to do Zeff and S calculations based on Slater's rules or on Clementi et al. screening constants, probably while teaching/learning periodic trends. This simple project automates these calculations, returning them as tables (Pandas DataFrames) so that one can easily plot or manipulate the data. Indeed, there are plot functions that can produce graphs, providing convenient visualization functionality for classes.
Installation: Just clone or download this repo. This is not a package (yet, maybe someday :-)).
Usage: Look at some examples in the tutorial, available as a Jupyter notebook.
Under the hood - Requirements: This project relies heavily on the mendeleev, pandas and matplotlib packages, so these must be installed. Check the requirements.txt file for requirements and feel free to use it to create a virtual environment.
Contributing: All contributions are welcome.
Issues: Feel free to submit issues regarding: recommendations; more examples for the tutorial; enhancement requests and new useful features; code bugs.
Pull requests: before starting to work on your pull request, please submit an issue first; fork the repo; clone the project to your own machine; commit changes to your own branch; push your work back up to your fork; submit a pull request so that your changes can be reviewed.
For full details on how to contribute, check out the Contributing Guide.
License: MIT, see LICENSE.
Citing: If you use Zeff in a scientific publication or in classes, please consider citing it as: F. L. S. Bustamante, Zeff - An effective nuclear charge (Zeff) and shielding (S) calculator and graphing tool, 2019 - available at: GitHub repository.
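For context, here is a self-contained sketch of the kind of calculation Zeff automates: Slater's rules for a 2s/2p electron in oxygen. This is textbook chemistry, not Zeff's actual API:

def slater_zeff_2p(Z: int, n1_electrons: int, n2_electrons: int) -> float:
    """Effective nuclear charge for a 2s/2p electron by Slater's rules:
    other electrons in the same (2s, 2p) group shield 0.35 each,
    1s electrons shield 0.85 each. S is the total shielding."""
    S = 0.35 * (n2_electrons - 1) + 0.85 * n1_electrons
    return Z - S

# Oxygen: Z = 8, configuration 1s2 2s2 2p4 -> 2 inner and 6 valence electrons
# S = 0.35*5 + 0.85*2 = 3.45, so Zeff = 8 - 3.45 = 4.55
print(slater_zeff_2p(8, n1_electrons=2, n2_electrons=6))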
zeffee
Failed to fetch description. HTTP Status Code: 404
zefir-analytics
Zefir Analytics: The Zefir Analytics module was created by the authors of PyZefir for relatively inexpensive processing or conversion of raw data into a user-friendly form, such as a report or a set of graphs. This module is also a set of computational methods used in the endpoints of the Zefir Backend repository.
Setup Development Environment. Install the repository from the global pip index:
pip install zefir-analytics
Make setup: Check if make is already installed: make --version. If not, install make: sudo apt install make.
Make stages: install a virtual environment and all dependencies: make install; run linter checks (black, pylama): make lint; run unit and fast integration tests (runs the lint stage before): make unit; run integration tests (runs the lint and unit stages before): make test; remove temporary directories such as .venv, .mypy_cache, .pytest_cache etc.: make clean.
Available methods in Zefir Engine objects. source_params: get_generation_sum, get_dump_energy_sum, get_load_sum, get_installed_capacity, get_generation_demand, get_fuel_usage, get_capex_opex, get_emission. aggregated_consumer_params: get_fractions, get_n_consumers, get_yearly_energy_usage, get_total_yearly_energy_usage, get_fractions. lbs_params: get_lbs_fraction, get_lbs_capacity. line_params: get_flow, get_transmission_fee.
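Going by the method listing above, usage presumably looks something like the sketch below; the import path, constructor arguments, and whether the groups are attributes are assumptions for illustration, not taken from the package's documentation:

# Hypothetical usage sketch: names mirror the method listing above, but the
# import path and constructor are assumptions.
from zefir_analytics import ZefirEngine  # assumed import path

engine = ZefirEngine(...)  # construction details depend on your PyZefir setup

generation = engine.source_params.get_generation_sum()
consumers = engine.aggregated_consumer_params.get_n_consumers()
capacity = engine.lbs_params.get_lbs_capacity()
flows = engine.line_params.get_flow()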
zefram
A convenient, pythonic way of interacting with data from the IZA Database of Zeolite Structures (http://www.iza-structure.org/databases/).
Dependencies: SQLAlchemy, numpy.
Installation: The simplest way to install zefram is with pip:
pip install zefram
Usage: The package exposes a simple API to access the database. The framework method accepts either a three-letter framework code such as AFI or MFI, or a list of such strings, returning a Framework object (or a list of objects). The example below also shows the accessible attributes of the Framework object:
>>> from zefram import framework
>>> afi = framework('AFI')
>>> sorted(list(afi.__dict__.keys()))
['_sa_instance_state', '_spacegroup_id', 'a', 'accessible_area', 'accessible_area_m2pg', 'accessible_volume', 'accessible_volume_pct', 'alpha', 'atoms', 'b', 'beta', 'c', 'cages', 'channel_dim', 'channels', 'cif', 'code', 'connections', 'framework_density', 'gamma', 'id', 'isdisordered', 'isinterrupted', 'junctions', 'lcd', 'maxdsd_a', 'maxdsd_b', 'maxdsd_c', 'maxdsi', 'name', 'occupiable_area', 'occupiable_area_m2pg', 'occupiable_volume', 'occupiable_volume_pct', 'pld', 'portals', 'rdls', 'sbu', 'specific_accessible_area', 'specific_occupiable_area', 'td10', 'topological_density', 'tpw_abs', 'tpw_pct', 'url_iza', 'url_zeomics']
Data (attribute | type | comment | data source):
a | float | a unit cell length in Angstroms | [1]
b | float | b unit cell length in Angstroms | [1]
c | float | c unit cell length in Angstroms | [1]
alpha | float | alpha unit cell angle in degrees | [1]
beta | float | beta unit cell angle in degrees | [1]
gamma | float | gamma unit cell angle in degrees | [1]
code | str | three letter framework code | [1]
name | str | name of the framework in english | [1]
atoms | int | number of atoms in the unit cell | [2]
portals | int | number of portals in the unit cell | [2]
cages | int | number of cages in the unit cell | [2]
channels | int | number of channels in the unit cell | [2]
junctions | int | number of junctions in the unit cell | [2]
connections | int | number of connections in the unit cell | [2]
tpv_abs | float | total pore volume in cm^3/g | [2]
tpv_rel | float | relative total pore volume in % | [2]
lcd | float | largest cavity diameter in Angstrom | [2]
pld | float | pore limiting diameter in Angstrom | [2]
accessible_area | float | accessible area in Angstrom^2 | [1]
accessible_area_m2pg | float | accessible area in m^2/g | [1]
accessible_volume | float | accessible volume in Angstrom^3 | [1]
accessible_volume_pct | float | accessible volume in % | [1]
channel_dim | int | channel dimensionality | [1]
cif | str | cif file contents | [1]
framework_density | float | number of T-atoms per 1000 Angstrom^3 | [1]
isinterrupted | bool | interrupted framework | [1]
isdisordered | bool | disordered framework | [1]
maxdsd_a | float | maximum diameter of a sphere that can diffuse along a | [1]
maxdsd_b | float | maximum diameter of a sphere that can diffuse along b | [1]
maxdsd_c | float | maximum diameter of a sphere that can diffuse along c | [1]
maxdsi | float | maximum diameter of a sphere that can be included | [1]
occupiable_area | float | occupiable area in Angstrom^2 | [1]
occupiable_area_m2pg | float | occupiable area in m^2/g | [1]
occupiable_volume | float | occupiable volume in Angstrom^3 | [1]
occupiable_volume_pct | float | occupiable volume in % | [1]
specific_accessible_area | float | accessible area per unit volume in m^2/cm^3 | [1]
specific_occupiable_area | float | occupiable area per unit volume in m^2/cm^3 | [1]
td10 | float | approximate topological density | [1]
topological_density | float | topological density | [1]
url_iza | str | link to the source [1] for this framework
url_zeomics | str | link to the source [2] for this framework
[1] IZA database of zeolite structures
[2] ZEOMICS database
License: The MIT License (MIT). Copyright (c) 2015 Lukasz Mentel.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
zegami-cli
zegami-cli: A command line interface for Zegami. Zegami is a visual data exploration tool that makes the analysis of large collections of image-rich information quick and simple. The Zegami CLI relies on a combination of yaml files and arguments. The first step is to create a collection.
Installation:
pip3 install zegami-cli[sql]
Commands.
Login: The login command prompts for username and password, which are then used to retrieve a long-lived API token for subsequent requests. The token is stored in a file in the current user's data directory. Once retrieved, all subsequent commands will use the stored token, unless it is specifically overridden with the --token option.
zeg login
Get a collection: Get the details of a collection. If the collection id is excluded then all collections will be listed.
zeg get collections [collection id] --project [Project Id]
Create a collection: Create a collection using a combined dataset and imageset config.
zeg create collections --project [Project Id] --config [path to configuration yaml]
The project id, or workspace id, can be found in the url of a collection or collection listing page. For example, for https://zegami.com/mycollections/66xtfqsk the workspace id is 66xtfqsk.
The following config properties are supported for file-based imagesets and datasets:
# The name of the collection
name: file based
description: an example collection with a file based imageset and dataset
# The type of data set. For now this needs to be set to 'file'. (optional)
dataset_type: file
# The type of image set. For now this needs to be set to 'file'.
imageset_type: file
# Config for the file data and image set types
file_config:
  # Whether to recursively scan any directories. (optional)
  recursive: True
  # If provided, the mime-type to use when uploading images. (optional)
  mime_type: image/jpeg
  # Path to the dataset file. (optional)
  path: path/to/file/mydata.csv
  # A collection of paths to image files. Paths can be to both images and directories
  paths:
    - an_image.jpg
    - a/directory/path
  # Name of the column in the dataset that contains the image name. (optional)
  dataset_column: image_name
If the dataset_column property is not provided, the backend will automatically select the column with the closest match. To create a collection with only images, the dataset_type and path properties can be omitted. When providing a mime_type property, all files in directories will be uploaded regardless of extension.
If you are creating a url-based imageset with a data file, use these properties. The dataset_column property is used to set the column where the url is stored. You will need to include the full image url, e.g. https://zegami.com/wp-content/uploads/2018/01/weatherall.svg
# The name of the collection
name: url based
# The description of the collection
description: an example collection with a file based dataset where images are to be downloaded from urls
# The type of image set. For now this needs to be set to 'url'
imageset_type: url
# Name of the column in the dataset that contains the image url. (optional)
dataset_column: image_name
# Url pattern - python format string where {} is the value from the dataset_column in the data file
url_template: https://example.com/images/{}?accesscode=abc3e20423423497
# Custom headers to add when fetching the image
image_fetch_headers:
  Accept: image/png
dataset_type: file
# Config for the file data set type
file_config:
  # Path to the dataset file. (optional)
  path: path/to/file/mydata.csv
If you are creating an imageset on Azure from a private azure bucket with a local file, do as follows:
# The name of the collection
name: azure bucket based
# The description of the collection
description: an example collection with a file based dataset where images are to be downloaded from an azure bucket
dataset_type: file
# Config for the file data set type
file_config:
  # Path to the dataset file. (optional)
  path: path/to/file/mydata.csv
# The type of image set.
imageset_type: azure_storage_container
# Name of the container
container_name: my_azure_blobs
# Name of the column in the dataset that contains the image url. (optional)
dataset_column: image_name
# Note that the storage account connection string should also be made available via environment variable AZURE_STORAGE_CONNECTION_STRING
If you are using SQL data, see below for config.
Create a collection with multiple image sources:
# The name of the collection
name: file based
description: an example collection with a file based imageset and dataset
collection_version: 2
# The type of data set. For now this needs to be set to 'file'.
dataset_type: file
file_config:
  # Path to the dataset file.
  path: path/to/file/mydata.csv
image_sources:
  # source from file based imageset
  - paths:
      - a/directory/path
    # source_name is a compulsory field. Each source's source_name needs to be unique.
    source_name: first_source
    # Name of the column in the dataset that contains the image name. (optional)
    dataset_column: path
    imageset_type: file
  # source from url based imageset
  - url_template: https://example.com/images/{}?accesscode=abc3e20423423497
    image_fetch_headers:
      Accept: image/png
    source_name: second_source
    imageset_type: url
Update a collection: coming soon.
Delete a collection:
zeg delete collections [collection id] --project [Project Id]
Publish a collection:
zeg publish collection [collection id] --project [Project Id] --config [path to configuration yaml]
Similarly to the workspace id, the collection id can be found in the url for a given collection. For instance, https://zegami.com/collections/public-5df0d8c40812cf0001e99945?pan=FILTERS_PANEL&view=grid&info=true points to a collection whose collection id is 5df0d8c40812cf0001e99945.
The config yaml file is used to specify additional configuration for the collection publish:
# The type of update. For now this needs to be set to 'publish'
update_type: publish
# Config for the publish update type
publish_config:
  # Flag to indicate if the collection should be published or unpublished
  publish: true
  # The id of the project to publish to
  destination_project: public
Get a data set:
zeg get dataset [dataset id] --project [Project Id]
Dataset ids can be found in the collection information, obtained by running:
zeg get collections <collection id> --project <project id>
From here upload_dataset_id can be obtained. This identifies the dataset that represents the data as it was uploaded, whereas dataset_id identifies the processed dataset delivered to the viewer.
Update a data set: Update an existing data set with new data. Note that when using this against a collection, the dataset id used should be the upload_dataset_id.
This is different from the imageset update below, which requires the dataset identifier known as dataset_id from the collection.
zeg update dataset [dataset id] --project [Project Id] --config [path to configuration yaml]
The config yaml file is used to specify additional configuration for the data set update. There are two supported dataset_types.
File: The file type is used to update a data set with a file. It can be set up to either specify the fully qualified path to a .csv, .tsv or .xlsx file to upload using the path property, or the directory property can be used to upload the latest file in a directory location.
# The type of data set. For now this needs to be set to 'file'
dataset_type: file
# Config for the file data set type
file_config:
  # Path to the dataset file
  path: path/to/file/mydata.csv
  # Or path to a directory that contains data files.
  # Only the latest file that matches the accepted extensions (.csv, .tsv, .xlsx)
  # will be uploaded. This is useful for creating collections based on
  # automated exports from a system, like log files.
  directory:
SQL: The sql type is used to update a data set based on an SQL query. It uses SQLAlchemy to connect to the database. See http://docs.sqlalchemy.org/en/latest/core/engines.html and https://www.connectionstrings.com/ for the correct connection string format.
# The type of data set. For this case it needs to be set to 'sql'
dataset_type: sql
# Config for the sql data set type
sql_config:
  # The connection string.
  connection:
  # SQL query
  query:
PostgreSQL - tested on Linux and Windows, up to Python v3.8.
Pre-requisites: standard requirements - code editor, pip package manager, Python 3.8.
Make sure the latest Zegami CLI is installed:
pip install zegami-cli[sql] --upgrade --no-cache-dir
Note: --no-cache-dir avoids some errors upon install.
Test the install with the login command, which prompts for username and password. This is then used to retrieve a long-lived API token which can be used for subsequent requests. The token is stored in a file in the current user's data directory. Once retrieved, all subsequent commands will use the stored token, unless it is specifically overridden with the --token option.
zeg login
Install pre-requirements for the PostgreSQL connection.
Psycopg2 - https://pypi.org/project/psycopg2/, http://initd.org/psycopg/
pip install python-psycopg2
libpq-dev was required for Linux, not Windows.
libpq-dev - https://pypi.org/project/libpq-dev/, https://github.com/ncbi/python-libpq-dev
sudo apt-get install libpq-dev
Once these are installed you will need to create a YAML file with the correct connection strings. Connection string example:
# The type of data set. For this case it needs to be set to 'sql'
dataset_type: sql
# Config for the sql data set type
sql_config:
  # The connection string.
  connection: "postgresql://postgres:myPassword@localhost:5432/postgres?sslmode=disable"
  # SQL query
  query: select * from XYZ
Note: "connection" and "query" must be indented beneath sql_config.
If you have already created a collection, you can run the update command as above, e.g.
zeg update dataset upload_dataset_id --project projectID --config root/psqlconstring.yaml
If successful, the following message will appear:
=========================================
update dataset with result:
-----------------------------------------
id: datasetID
name: Schema dataset for postgresql test
source:
  blob_id: blobID
  dataset_id: datasetID
  upload:
    name: zeg-datasetiop9cbtn.csv
=========================================
Useful links: https://www.npgsql.org/doc/connection-string-parameters.html; https://www.connectionstrings.com/postgresql/ (standard); https://docs.sqlalchemy.org/en/13/core/engines.html#postgresql (specifies pre-reqs for connection).
Delete a data set: coming soon.
zeg delete dataset [dataset id] --project [Project Id]
Get an image set: coming soon.
zeg get imageset [imageset id] --project [Project Id]
Update an image set: Update an image set with new images.
zeg update imageset [imageset id] --project [Project Id] --config [path to configuration yaml]
The config yaml file is used to specify additional configuration for the image set update. Note that an imageset can only be changed before images are added to it.
File imageset: The paths property is used to specify the location of images to upload and can include both images and directories.
# The type of image set. For now this needs to be set to 'file'
imageset_type: file
# Config for the file image set type
file_config:
  # A collection of paths. Paths can be to both images and directories
  paths:
    - an_image.jpg
    - a/directory/path
# Unique identifier of the collection
collection_id: 5ad3a99b75f3b30001732f36
# Unique identifier of the collection data set (get this from dataset_id)
dataset_id: 5ad3a99b75f3b30001732f36
# Name of the column in the dataset that contains the image name
dataset_column: image_name
# Only required if this imageset is from a multiple image sources collection
source_name: first_source
URL imageset: The dataset_column property is used to set the column where the url is stored. You will need to include the full image url, e.g. https://zegami.com/wp-content/uploads/2018/01/weatherall.svg
# The type of image set. For now this needs to be set to 'url'
imageset_type: url
# Unique identifier of the collection
collection_id: 5ad3a99b75f3b30001732f36
# Unique identifier of the collection data set
dataset_id: 5ad3a99b75f3b30001732f36
# Name of the column in the dataset that contains the image url
dataset_column: image_name
# Url pattern - python format string where {} is the name of the image (from the data file)
url_template: https://example.com/images/{}?accesscode=abc3e20423423497
# Optional set of headers to include with the requests to fetch each image,
# e.g. for auth or to specify mime type
image_fetch_headers:
  Accept: application/dicom
  Authorization: Bearer user:pass
Azure storage imageset:
# The type of image set.
imageset_type: azure_storage_container
# Name of the container
container_name: my_azure_blobs
# Unique identifier of the collection
collection_id: 5ad3a99b75f3b30001732f36
# Unique identifier of the collection data set
dataset_id: 5ad3a99b75f3b30001732f36
# Name of the column in the dataset that contains the image url
dataset_column: image_name
# Note that the storage account connection string should also be made available via environment variable AZURE_STORAGE_CONNECTION_STRING
Delete an image set: coming soon.
zeg delete imageset [imageset id] --project [Project Id]
Developer - Tests. Setup tests: pip install -r requirements/test.txt. Run tests: python3 -m unittest discover .
zegami-sdk
Zegami Python SDK: An SDK and general wrapper for the lower-level Zegami API for Python. This package provides higher-level collection interaction and data retrieval.
Getting started: Grab this repo, open a script, and load an instance of ZegamiClient into a variable:
from zegami_sdk.client import ZegamiClient
zc = ZegamiClient(username=USERNAME, password=PASSWORD)
Credentials: The client operates using a user token. By default, logging in once with a valid username/password will save the acquired token to your home directory as zegami.token. The next time you need to use ZegamiClient, you may call zc = ZegamiClient() with no arguments, and it will look for this stored token.
Example usage:
zc = ZegamiClient()
Workspaces: To see your available workspaces, use:
zc.show_workspaces()
You can then ask for a workspace by name, by ID, or just from a list:
all_workspaces = zc.workspaces
first_workspace = all_workspaces[0]
or:
zc.show_workspaces()
# Note the ID of a workspace
my_workspace = zc.get_workspace_by_id(id)
Collections:
my_workspace.show_collections()
# Note the name of a collection
coll = my_workspace.get_collection_by_name(name_of_collection)
You can get the metadata in a collection as a Pandas DataFrame using:
rows = coll.rows
This data can then be modified or augmented and added back to the collection using:
coll.replace_data(modified_rows)
You can get the images of a collection using:
first_10_img_urls = coll.get_image_urls(list(range(10)))
imgs = coll.download_image_batch(first_10_img_urls)
Sources: If a collection contains multiple image sources, these can be seen using:
coll.show_sources()
Many operations require specifying which image source should be used. This can be specified by index or name for most functions.
first_10_source2_img_urls = coll.get_image_urls(list(range(10)), source=2)
# To see the first of these:
coll.download_image(first_10_source2_img_urls[0])
Using with onprem zegami: To use the client with an onprem installation of zegami you have to set the home keyword argument when instantiating ZegamiClient.
zegami_config = {
    'username': <user>,
    'password': <password>,
    'home': <url of onprem zegami>,
    'allow_save_token': True,
}
zc = ZegamiClient(**zegami_config)
If your onprem installation has self-signed certificates you can disable SSL verification using the environment variable ALLOW_INSECURE_SSL before running the python:
export ALLOW_INSECURE_SSL=true
python myscript.py
or
ALLOW_INSECURE_SSL=true python myscript.py
WARNING! You should not need to set this when using the SDK for cloud zegami.
In development: This SDK is in active development. Features are actively being developed according to user feedback. Please share your suggestions, or fork this repository and feel free to raise a PR.
Developer conventions: Keeping the SDK easy and fluent to use, externally and internally, is crucial. If contributing PRs, some things to consider:
Relevant: MOST IMPORTANT - Zegami has concepts used internally in its data engine, like 'imageset' and 'dataset'. Strive to never require the user to know anything about these, or even see them. If the user needs an image, they should ask for an image from a concept they ARE expected to understand, like a 'collection' or a 'workspace'. Anything obscure should be hidden, for example: _get_imageset(), so that auto-suggestions of a class will always contain relevant and useful methods/attribs/properties.
Obvious: Avoid ambiguous parameters. Use the best-worded, lowest-level parameter types for functions/methods. Give them obvious names.
Any ambiguous or unobvious parameters MUST be described in detail in the docstring. Avoid parameters like 'target' or 'action', or describe them explicitly. If an instance is needed, describe how/where that instance should come from.
Exceptions: If you expect an RGB image, check that your input is an array, that its len(shape) == 3 and that shape[2] == 3, and throw an exception to clearly feed back to the user what went wrong. The message should help the user solve the problem for themselves.
Minimal: Do not ask for more information than is already obtainable. A source knows its parent collection, which knows how to get its own IDs and knows the client. A method never needs to reference a source, the owning collection, and the client all together. Moreover, these chains should have sensible assertions and checks built in, and potentially property/method-based shortcuts (with assertions).
Helpful: Use sensible defaults wherever possible for minimal effort when using the SDK. V1 collections typically use source=None, while V2 collections use source=0. This allows a user with an old/new (single source) collection to never even have to know what a source is when fetching images.
zegami-sdk-testrelease
Zegami Python SDK: An SDK and general wrapper for the lower-level Zegami API for Python. This package provides higher-level collection interaction and data retrieval.
Getting started: Grab this repo, open a script, and load an instance of ZegamiClient into a variable:
from zegami_sdk.client import ZegamiClient
zc = ZegamiClient(username, password)
Credentials: The client operates using a user token. By default, logging in once with a valid username/password will save the acquired token to your home directory as zegami.token. The next time you need to use ZegamiClient, you may call zc = ZegamiClient() with no arguments, and it will look for this stored token.
Example usage: Get the metadata and images associated with every dog of the 'beagle' breed in a collection of dogs:
zc = ZegamiClient()
Workspaces: To see your available workspaces, use:
zc.show_workspaces()
You can then ask for a workspace by name, by ID, or just from a list:
all_workspaces = zc.workspaces
first_workspace = all_workspaces[0]
or:
zc.show_workspaces()
# Note the ID of a workspace
my_workspace = zc.get_workspace_by_id(id)
Collections:
my_workspace.show_collections()
# Note the name of a collection
coll = my_workspace.get_collection_by_name(name_of_collection)
You can get the metadata in a collection as a Pandas DataFrame using:
rows = coll.rows
You can get the images of a collection using:
first_10_img_urls = coll.get_image_urls(list(range(10)))
imgs = coll.download_image_batch(first_10_img_urls)
If your collection supports the new multi-image-source functionality, you can see your available sources using:
coll.show_sources()
For source 2's (3rd in the 0-indexed list) images, you would use:
first_10_source3_img_urls = coll.get_image_urls(list(range(10)), source=2)
# To see the first of these:
coll.download_image(first_10_source3_img_urls[0])
Using with onprem zegami: To use the client with an onprem installation of zegami you have to set the home keyword argument when instantiating ZegamiClient.
zegami_config = {
    'username': <user>,
    'password': <password>,
    'home': <url of onprem zegami>,
    'allow_save_token': True,
}
zc = ZegamiClient(**zegami_config)
If your onprem installation has self-signed certificates you can disable SSL verification using the environment variable ALLOW_INSECURE_SSL before running the python:
export ALLOW_INSECURE_SSL=true
python myscript.py
or
ALLOW_INSECURE_SSL=true python myscript.py
WARNING! You should not need to set this when using the SDK for cloud zegami.
In development: This SDK is in active development; not all features are available yet. Creating/uploading to collections is not supported currently - check back soon!
Developer conventions: Keeping the SDK easy and fluent to use, externally and internally, is crucial. If contributing PRs, some things to consider:
Relevant: MOST IMPORTANT - Zegami has concepts used internally in its data engine, like 'imageset' and 'dataset'. Strive to never require the user to know anything about these, or even see them. If the user needs an image, they should ask for an image from a concept they ARE expected to understand, like a 'collection' or a 'workspace'. Anything obscure should be hidden, for example: _get_imageset(), so that auto-suggestions of a class will always contain relevant and useful methods/attribs/properties.
Obvious: Avoid ambiguous parameters. Use the best-worded, lowest-level parameter types for functions/methods. Give them obvious names. Any ambiguous or unobvious parameters MUST be described in detail in the docstring. Avoid parameters like 'target' or 'action', or describe them explicitly.
If an instance is needed, describe how/where that instance should come from.
assert: If you expect an RGB image, check that your input is an array, that its len(shape) == 3, and that shape[2] == 3. Use a proper message if this is not the case.
Minimal: Do not ask for more information than is already obtainable. A source knows its parent collection, which knows how to get its own IDs and knows the client. A method never needs to reference a source, the owning collection, and the client all together. Moreover, these chains should have sensible assertions and checks built in, and potentially property/method-based shortcuts (with assertions).
Helpful: Use sensible defaults wherever possible for minimal effort when using the SDK. V1 collections typically use source=None, while V2 collections use source=0. This allows a user with an old/new (single source) collection to never even have to know what a source is when fetching images.
zeigen
Zeigen - Show Water Networks.
Features: Zeigen finds networks of water in PDB structures. The PDB query is highly configurable through the zeigen.conf configuration file, which is placed in the config directory upon the first program run. The query results are placed in a TSV file, with global stats in a JSON file. Zeigen uses rcsbsearch to query the PDB. Currently the rcsbsearch package is broken, as it uses the obsolete v1 query; Zeigen therefore includes a copy of rcsbsearch which has been patched for v2 queries.
Requirements: Zeigen has been developed under Python 3.10 and tested on Python 3.9 and 3.10 on Linux. It works on MacOS, but is not tested there due to energy costs.
Installation: You can install Zeigen via pip from PyPI:
$ pip install zeigen
Usage: Please see the Command-line Reference for details.
Contributing: Contributions are very welcome. To learn more, see the Contributor Guide.
License: Distributed under the terms of the BSD 3-Clause license, Zeigen is free and open source software.
Issues: If you encounter any problems, please file an issue along with a detailed description.
Credits: Zeigen was written by Joel Berendzen. rcsbsearch was written by Spencer Bliven. This project was generated from @cjolowicz's Hypermodern Python Cookiecutter template.
zein
A Python package to set restrictions on visual content of all kinds, protect users from seeing harmful content, and ensure the safety of machine learning model inputs and outputs.
zeinab-handy
No description available on PyPI.
zeit
zeit
zeit3101helpers
No description available on PyPI.
zeitapi
No description available on PyPI.
zeit.deploynotify
This package collects the mechanics for sending notifications "version x of project y was just deployed to environment z", for example to set markers in observability systems like Grafana, post a message to a Slack channel, transition issues in Jira to the corresponding status, etc.
Usage: The package provides a "multi subcommand" CLI interface:
python -m zeit.deploynotify --environment=staging --project=example --version=1.2.3 \
  slack --channel=example --emoji=palm_tree
Typically this will be integrated as a Keptn Deployment Task, like this:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: deployment-notify
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: baseproject-vault
    kind: SecretStore
  data:
    - secretKey: SLACK_HOOK_TOKEN
      remoteRef:
        key: zon/v1/slack/hackbot
        property: HOOK_TOKEN
---
apiVersion: lifecycle.keptn.sh/v1alpha3
kind: KeptnTaskDefinition
metadata:
  name: notify
spec:
  container:
    name: task
    image: europe-west3-docker.pkg.dev/zeitonline-engineering/docker-zon/deploynotify:1.0.0
    envFrom:
      - secretRef:
          name: deployment-notify
    args:
      - "--environment=staging"
      - "slack"
      - "--channel=example"
      - "--emoji=palm_tree"
Changelog:
1.3.0 (2024-02-22): Update to keptn-0.10 context API (keptn)
1.2.3 (2024-01-17): Fix jira status change (jira)
1.2.2 (2024-01-16): Don't set jira status if status is already 'more done' (jira)
1.2.1 (2024-01-11): Detect empty postdeploy properly (postdeploy)
1.1.1 (2024-01-08): Quote changelog text correctly for slack (changelog)
1.1.0 (2024-01-08): ZO-4171: Implement posting the changelog diff to slack (changelog)
1.0.4 (2024-01-08): postdeploy: Retrieve changelog of the deployed version (postdeploy)
1.0.3 (2023-12-18): Fix jira changelog parsing (jira)
1.0.2 (2023-12-18): Fix bugsnag cli parsing (bugsnag)
1.0.1 (2023-12-18): Allow calling multiple tasks in a single invocation (chain)
1.0.0 (2023-12-13): Initial release (initial)
zeitdieb
Zeitdieb: Zeitdieb allows you to profile the time each line of your code takes.
pip install zeitdieb
Manual usage:
with StopWatch(additional, callables) as sw:
    your()
    code()
print(sw)
Alternatively, without using the context manager:
sw = StopWatch(additional, callables)
sw.start()
your()
code()
sw.finish()
print(sw)
Formatting: While you can just print the StopWatch object, you can also customize the output by using f-strings:
print(f"{sw:3b:0.3,0.1}")
The format spec looks like this: [width][flags]:[threshold][,threshold].
width specifies the width of the time column (e.g. 4 for an output like 2.01).
flags are single-letter flags influencing the output: b enables barplot mode, where instead of a numeric time output a vertical barplot will be printed.
thresholds specify where to start marking times as critical/warnings (red/yellow). The thresholds must be ordered (highest to lowest).
Integrations: Zeitdieb can optionally be integrated with Pyramid, Flask, or FastAPI. After you've done so, you can trigger tracing with the special header X-Zeitdieb.
Pyramid: Put this somewhere in your Pyramid settings:
zeitdieb.format = 20b
pyramid.tweens = ... zeitdieb.pyramid
Flask: For Flask or Flask-based frameworks, adjust your create_app() function:
def create_app():
    ...
    my_flask_app.config["ZEITDIEB_FORMAT"] = "7b:0.5"
    zeitdieb.flask(my_flask_app)
FastAPI: FastAPI can be configured by calling zeitdieb.fastapi() inside of create_app():
class Settings(...):
    ...
    zeitdieb_format: Optional[str] = "6b"

def create_app(...):
    ...
    zeitdieb.fastapi(app, settings)
Setting client headers: To trigger the tracing of functions, you need to set an X-Zeitdieb header.
curl:
$ curl https://.../ -H 'X-Zeitdieb: path.to.module:callable,path.to.othermodule:callable'
jsonrpclib:
jsonrpclib.ServerProxy(host, headers={"X-Zeitdieb": "path.to.module:callable,path.to.othermodule:callable"})
Acknowledgements: This project was created as a result of a learning day @ solute.
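Tying the manual usage and the format spec together, a minimal end-to-end sketch based only on the API shown above; the profiled function and threshold values are illustrative, and passing the function to trace as a StopWatch argument follows the "additional callables" signature shown in the manual-usage example:

import time
from zeitdieb import StopWatch

def work():
    time.sleep(0.2)   # slow line: should land above the 0.1s warning threshold
    sum(range(1000))  # fast line, for contrast

with StopWatch(work) as sw:  # pass the callables to trace, per the signature above
    work()

# 6-wide barplot column; mark lines above 0.3s critical, above 0.1s warning
print(f"{sw:6b:0.3,0.1}")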
zeitgeber
No description available on PyPI.
zeitgeist
No description available on PyPI.
zeitgitterd
zeitgitter - independent git timestamper.
Timestamping: Why? Being able to provide evidence that you had some piece of information at a given time, and that it has not changed since, is important in many aspects of personal, academic, or corporate life. It can help provide evidence: that you had some idea already at a given time; that you already had a piece of code; or that you knew about a document at a given time. Timestamping does not assure authorship of the idea, code, or document. It only provides evidence of its existence at a given point in time. Depending on the context, authorship might be implied, at least weakly.
zeitgitter for timestamping: zeitgitter consists of two components:
- a timestamping client, which can add a timestamp as a digital signature to an existing git repository. Existing git mechanisms can then be used to distribute these timestamps (stored in commits or tags) or keep them private;
- a timestamping server, which supports timestamping git repositories and stores its history of timestamped commits in a git repository as well. Anybody can operate such a timestamping server, but using an independent timestamper provides the strongest evidence, as collusion is less likely.
Publication of the timestamp history, as well as getting cross-timestamps from other independent timestampers on your timestamp history, both provide mechanisms to assure that timestamping has not been done retroactively ("backstamping").
The timestamping client is called git timestamp and allows issuing timestamped, signed tags or commits.
To simplify deployment, we provide a free timestamping server at https://gitta.zeitgitter.ch. It is able to provide several million timestamps per day. However, if you or your organization plan to issue more than a hundred timestamps per day, please consider installing and using your own timestamping server and have it cross-timestamped with other servers.
Setting up your own timestamping server: Having your own timestamping server provides several benefits: the number of timestamps you request, their commit IDs, and the times at which they are stamped remain your business alone; you can request as many timestamps as you like; and, if you like, you can provide a service to the community as well, by timestamping other servers in turn, which strengthens the overall trust of these timestamps.
There are currently two options for installation: running a Zeitgitter timestamper in Docker (recommended; only requires setting four variables), or a traditional install on a Linux server (more work).
General documentation: Timestamping: why and how?; Protocol description; Discussion of the use of (weak) cryptography.
Server documentation: Docker server (recommended); Native server (deprecated); How the server works; The server's state machine.
zeither
No description available on PyPI.
zeitig
A time tracker. The basic idea is to store all situation changes as a stream of events and create a report as an aggregation of these.
Usage:
Usage: z [OPTIONS] [GROUP] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  add     Apply tags and notes.
  break   Change to or start the `break` situation.
  remove  Remove tags and flush notes.
  report  Create a report of your events.
  work    Change to or start the `work` situation.
Example session: You may add a timestamp, as in the example, which is parsed for your timezone. You may abbreviate the commands, so the shortest way to track your time on a running project is just z w and z b.
> export ZEITIG_STORE=/tmp/zeitig; mkdir $ZEITIG_STORE
> z foobar work -t foo "2018-04-01 08:00:00"
> z break "2018-04-01 12:00:00"
> z w "2018-04-01 13:00:00"
> z b "2018-04-01 17:30:00"
> z
Actual time: 2018-05-04 23:09:01
Actual group: foobar of foobar
Last situation in foobar: Break started at 2018-04-01 17:30:00 since 797.65 hours
Store used: /tmp/zeitig/olli
Last event: groups/foobar/source/2018-04-01T15:30:00+00:00
> z report
Working times for foobar until Friday 04 May 2018
Week: 13
2018-04-01 08:00:00 - 12:00:00 - 4.00 - foo
2018-04-01 13:00:00 - 17:30:00 - 4.50
Total hours: 8.50
Internals: You may create a .zeitig directory somewhere in your current working directory path to use it as the store. Other defaults are ~/.config/zeitig and ~/.local/share/zeitig. For every user a separate directory is created, which contains the groups and the event sources:
.zeitig/
 |
 +- <user>
     |
     +- last ---+
     |          |
     |          +- groups
     |          |
     |          v
     +- <group>
     |   |
     |   +- source
     |   |   |
     |   |   +- <event UTC timestamp>
     |   |
     |   +- templates
     |   |   |
     |   |   +- <jinja template>
     |   |
     |   +- template_defaults.toml
     |   |
     |   +- template_syntax.toml
     |
     +- templates
     |   |
     |   +- <jinja template>
     |
     +- template_defaults.toml
     |
     +- template_syntax.toml
The events are stored as simple toml files.
Reports: Events are fully exposed to the reporting template. You can pipeline certain filters and aggregators to modify the event stream. Templates are rendered by jinja2.
You can modify the start and end tags by a special template_syntax.toml file.

An example latex template may look like this:

\documentclass{article}
\usepackage[a4paper, total={6in, 8in}]{geometry}
\usepackage{longtable,array,titling,booktabs}
\setlength{\parindent}{0pt}
\setlength{\parskip}{\baselineskip}
\title{\vspace{-13em}Timesheet\vspace{0em}}
\author{\vspace{-10em}}
\date{\vspace{-5em}}
% sans serif font
\renewcommand{\familydefault}{\sfdefault}
\begin{document}
\maketitle
\thispagestyle{empty}
% no page footer
\vspace{-5em}
\begin{longtable}{>{\raggedleft\arraybackslash}r >{\raggedright\arraybackslash}l}
\textbf{Client}: & We do something special \\
\textbf{Contractor}: & Oliver Berger \\
\textbf{Project number}: & 12-345-6789-0 \\
\end{longtable}
\begin{longtable}{>{\raggedright\arraybackslash}l >{\raggedright\arraybackslash}l >{\raggedleft\arraybackslash}r >{\raggedright\arraybackslash}l}
Start & End & Hours & Description \\
\BLOCK{for event in events.pipeline(
    report.source,
    events.filter_no_breaks,
    events.Summary.aggregate,
    events.DatetimeChange.aggregate
)-}
\BLOCK{if py.isinstance(event, events.DatetimeChange) and event.is_new_week}
\midrule
\BLOCK{endif-}
\BLOCK{if py.isinstance(event, events.Work)}
\VAR{event.local_start.to_datetime_string()} & \VAR{event.local_end.to_time_string()} & \VAR{'{0:.2f}'.format(event.period.total_hours())} & \BLOCK{if event.tags}\VAR{', '.join(event.tags)}\BLOCK{endif-} \\
\BLOCK{endif-}
\BLOCK{if py.isinstance(event, events.Summary)}
\midrule
\multicolumn{2}{l}{\textbf{Total hours}} & \textbf{\VAR{'{0:.2f}'.format(event.works.total_hours())}} & \\
\BLOCK{endif-}
\BLOCK{-endfor-}
\end{longtable}
\vspace{5em}
\begin{longtable}{>{\centering\arraybackslash}p{3.5cm}l >{\centering\arraybackslash}p{5.5cm}}
\cline{1-1}\cline{3-3}
Date & & Signature of client \\
\end{longtable}
\end{document}

Jinja syntax

Group jinja template syntax will be merged into user syntax:

[jinja_env]

[jinja_env.latex]
# define a latex jinja env
block_start_string = "\\BLOCK{"
block_end_string = "}"
variable_start_string = "\\VAR{"
variable_end_string = "}"
comment_start_string = "\\#{"
comment_end_string = "}"
line_statement_prefix = "%%"
line_comment_prefix = "%#"
trim_blocks = true
autoescape = false

[templates]
# map a template name to a jinja env
latex_template = "latex"

Jinja defaults

You may also define template defaults for a group, which will be merged into the user template defaults.
zeit.msal
Helper to authenticate against Microsoft Azure AD and store the resulting tokens for commandline applications.

Usage

Run interactively to store a refresh token in the cache:

$ msal-token --client-id=myclient --client-secret=mysecret \
    --cache-url=file:///tmp/msal.json login
Please visit https://login.microsoftonline.com/...
# Perform login via browser

Use in e.g. automated tests to retrieve an ID token from the cache (which automatically refreshes it if necessary):

def test_protected_web_ui():
    auth = zeit.msal.Authenticator(
        'myclient', 'mysecret', 'file:///tmp/msal.json')
    http = requests.Session()
    http.headers['Authorization'] = 'Bearer %s' % auth.get_id_token()
    r = http.get('https://example.zeit.de/')
    assert r.status_code == 200

Alternatively, retrieve the refresh token after interactive login, and use that in tests:

auth.login_with_refresh_token('myrefreshtoken')

zeit.msal changes

1.1.0 (2021-07-28)
- Add get_access_token method, make scopes configurable
- Implement redis cache

1.0.0 (2021-07-23)
- Initial release
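For illustration, here is a hedged sketch of the get_access_token method added in 1.1.0: only the Authenticator constructor and the method name come from the text above; the exact signature and the scope string are assumptions.

```python
import requests
import zeit.msal

# a minimal sketch, assuming get_access_token takes the configurable
# scopes mentioned in the 1.1.0 changelog entry (signature is an assumption)
auth = zeit.msal.Authenticator(
    'myclient', 'mysecret', 'file:///tmp/msal.json')
token = auth.get_access_token(['api://myclient/.default'])  # scope is illustrative

http = requests.Session()
http.headers['Authorization'] = 'Bearer %s' % token
```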
zeit.nightwatch
zeit.nightwatch

pytest helpers for http smoke tests

Making HTTP requests

zeit.nightwatch.Browser wraps a requests Session to provide some convenience features:

- Instantiate with a base url, and then only use paths: http = Browser('https://example.com'); http.get('/foo') will request https://example.com/foo
- A convenience http fixture is provided, which can be configured via the nightwatch_config fixture.
- Use call instead of get, because it's just that little bit shorter. (http('/foo') instead of http.get('/foo'))
- Fill and submit forms, powered by mechanicalsoup. (We've customized this a bit, so that responses are only parsed with beautifulsoup if a feature like forms or links is actually used.)
- Logs request and response headers, so pytest prints these on test failures, to help debugging.
- Use sso_login(username, password) to log into https://meine.zeit.de.

See source code for specific API details.

Example usage:

@pytest.fixture(scope='session')
def nightwatch_config():
    return dict(browser=dict(
        baseurl='https://example.com',
        sso_url='https://meine.zeit.de/anmelden',
    ))

def test_my_site(http):
    r = http.get('/something')
    assert r.status_code == 200

def test_login(http):
    http('/login')
    http.select_form()
    http.form['username'] = '[email protected]'
    http.form['password'] = 'secret'
    r = http.submit()
    assert '/home' in r.url

def test_meinezeit_redirects_to_konto_after_login(http):
    r = http.sso_login('[email protected]', 'secret')
    assert r.url == 'https://www.zeit.de/konto'

Examining HTML responses

nightwatch adds two helper methods to the requests.Response object:

- xpath(): parses the response with lxml.html and then calls xpath() on that document
- css(): converts the selector to xpath using cssselect and then calls xpath()

Example usage:

def test_error_page_contains_home_link(http):
    r = http('/nonexistent')
    assert r.status_code == 404
    assert r.css('a.home')

Controlling a browser with Selenium

zeit.nightwatch.WebDriverChrome inherits from selenium.webdriver.Chrome to provide some convenience features:

- Instantiate with a base url, and then only use paths: browser = WebDriverChrome('https://example.com'); browser.get('/foo')
- A convenience selenium fixture is provided, which can be configured via the nightwatch_config fixture.
- wait() wraps WebDriverWait and converts TimeoutException into an AssertionError
- Use sso_login(username, password) to log into https://meine.zeit.de

See source code for specific API details.

nightwatch also declares a pytest commandline option --selenium-visible to help toggling headless mode, and adds a selenium mark to all tests that use a selenium fixture, so you can (de)select them with pytest -m selenium (or -m 'not selenium').
Since you'll probably want to set a base url, you have to provide this fixture yourself.

Example usage:

@pytest.fixture(scope='session')
def nightwatch_config():
    return dict(selenium=dict(
        baseurl='https://example.com',
    ))

def test_js_based_video_player(selenium):
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    s = selenium
    s.get('/my-video')
    s.wait(EC.presence_of_element_located((By.CLASS_NAME, 'videoplayer')))

Advanced usecase: To intercept/modify browser requests with selenium-wire, install that package (e.g. pip install selenium-wire) and set driver_class=ProxiedWebDriverChrome in the nightwatch selenium config:

@pytest.fixture(scope='session')
def nightwatch_config():
    return dict(selenium=dict(
        baseurl='https://example.com',
        driver_class='ProxiedWebDriverChrome',
    ))

def add_authorization_header(request):
    request.headers['authorization'] = 'Bearer MYTOKEN'

def test_inject_authorization_header(selenium):
    s = selenium
    s.request_interceptor = add_authorization_header
    s.get('/protected-page')

Controlling a browser with playwright

As an alternative to Selenium (above) nightwatch also supports playwright; mostly by pulling in the pytest-playwright plugin, so you can use their fixtures, with some convenience features:

- Configure a base url, and then only use paths: page.goto('/foo')

Example usage:

@pytest.fixture(scope='session')
def nightwatch_config():
    return dict(selenium=dict(
        baseurl='https://example.com',
    ))

def test_playwright_works(page):
    page.goto('/something')

Running against different environments

To help with running the same tests against e.g. a staging and production environment, nightwatch declares a pytest commandline option --nightwatch-environment.

A pattern we found helpful is using a fixture to provide environment-specific settings, like this:

CONFIG_STAGING = {
    'base_url': 'https://staging.example.com',
    'username': 'staging_user',
    'password': 'secret',
}

CONFIG_PRODUCTION = {
    'base_url': 'https://www.example.com',
    'username': 'production_user',
    'password': 'secret2',
}

@pytest.fixture(scope='session')
def nightwatch_config(nightwatch_environment):
    config = globals()['CONFIG_%s' % nightwatch_environment.upper()]
    return dict(environment=nightwatch_environment, browser=config)

def test_some_integration_that_has_no_staging(http, nightwatch_config):
    if nightwatch_config['environment'] != 'production':
        pytest.skip('The xyz integration has no staging')
    r = http('/trigger-xyz')
    assert r.json()['message'] == 'OK'

Sending test results to prometheus

Like the medieval night watch people who made the rounds checking that doors were locked, our use case for this library is continuous black box high-level tests that check that main functional areas of our systems are working.

For this purpose, we want to integrate the test results with our monitoring system, which is based on Prometheus. We've taken inspiration from the pytest-prometheus plugin, and tweaked it a little to use a stable metric name, so we can write a generic alerting rule.

This uses the configured Pushgateway to record metrics like this (the environment label is populated from --nightwatch-environment, see above):

nightwatch_check{test="test_error_page_contains_home_link",environment="staging",job="website"}=1  # pass=1, fail=0

Clients should set the job name, e.g. like this:

def pytest_configure(config):
    config.option.prometheus_job_name = 'website'

This functionality is disabled by default, nightwatch declares a pytest commandline option --prometheus which has to be present to enable pushing the metrics.
There also are commandline options to override the pushgateway url etc., please see the source code for those details.

Sending test results to elasticsearch

We're running our tests as kubernetes pods, and their stdout/stderr output is captured and sent to elasticsearch. However the normal pytest output is meant for humans, but is not machine-readable. Thus we've implemented a JSON lines test report format that can be enabled with --json-report=filename or --json-report=- to directly send to stdout.

Here's an output example, formatted for readability (in reality, each test produces a single JSON line, since that's what our k8s log processor expects):

{
  "time": "2023-12-08T10:37:40.630617+00:00",
  "test_stage": "call",
  "test_class": "smoketest.test_api",
  "test_name": "test_example",
  "test_outcome": "passed",
  "system_log": "11:37:40 INFO [zeit.nightwatch.requests][MainThread] > POST http://example.com/something\n..."
}

(A short sketch for consuming this format follows the changelog below.)

zeit.nightwatch changes

1.7.1 (2023-12-13)
- Don't try to json report if no argument was given

1.7.0 (2023-12-08)
- Implement --json-report=- for line-based output

1.6.0 (2022-12-16)
- Support playwright

1.5.1 (2022-06-24)
- Use non-deprecated selenium API

1.5.0 (2022-03-25)
- Support selenium-wire in addition to selenium

1.4.2 (2022-02-21)
- ZO-712: Set referer explicitly during sso_login, required for csrf validation

1.4.1 (2021-10-27)
- Include tests & setup in tarball to support devpi test

1.4.0 (2021-10-26)
- Add patch to requests

1.3.3 (2021-04-01)
- Support contains instead of equals for find_link

1.3.2 (2021-02-18)
- Record skipped tests as passed to prometheus, not failed

1.3.1 (2021-02-17)
- Handle same metric name (and testname only as label) correctly

1.3.0 (2021-02-17)
- Allow to configure the test browsers via a config fixture

1.2.0 (2021-02-17)
- Add convenience nightwatch fixture and toplevel API
- Add first test & fix package setup

1.1.0 (2021-02-12)
- Include prometheus functionality here, to fix pushgateway bug and support sending the test name as a label.
- Declare namespace package properly

1.0.0 (2021-02-11)
- Initial release
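As referenced above, a minimal sketch for consuming the JSON lines report; "report.jsonl" is a hypothetical filename, and the field names match only the example record shown.

```python
# a minimal sketch: summarizing a --json-report output file;
# the fields accessed here come from the example record above
import json

with open("report.jsonl") as f:
    for line in f:
        record = json.loads(line)
        print(record["test_name"], record["test_outcome"])
```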
zeit.shipit
No description available on PyPI.
zeitsprung
zeitsprung

Note: zeitsprung.fm has moved to geschichte.fm, therefore this project is no longer maintained.

This package provides a scraper for www.zeitsprung.fm, a great history podcast. To get the metadata of all episodes from the website, simply start the scraper:

from zeitsprung.scraping import Scraper
s = Scraper('path/to/folder/for/database')
s.run()

The scraper then downloads all episode metadata and audio files. The metadata is written to the 'meta' table in the database. The audio files are converted to '.wav' files and saved separately to a folder, while a link to the file is stored in the 'audio' table in the database.

To access the data, create a SQLiteEngine:

from zeitsprung.database import SQLiteEngine
db = SQLiteEngine('path/to/folder/for/database/zeitsprung.db')

Query the meta data from the database:

db.query_all_meta()

And the audio file paths and meta data:

db.query_all_audio()

Now have fun with analysing the episodes of zeitsprung!

Features

- Scraper class to download the meta data and audio files of all episodes.
- Database class to setup and access the SQLite database containing the meta data of the episodes.

To Do

- Processing class to conduct speech recognition on the audio files and build an index for clustering the topics.
- Visualize up to date statistics.

References

https://www.zeitsprung.fm, check it out!

This package is licensed under MIT, see the LICENSE file for details.

History

0.1.1 (2021-10-02)
- Adjust URLs to GitHub account due to renaming @munterfinger to @munterfi.
- Note: zeitsprung.fm has moved to geschichte.fm, therefore this project no longer is maintained.

0.1.0 (2020-09-22)
- First release on PyPI.
- Scraper class to download the meta data and audio files of all episodes.
- Database class to setup and access the SQLite database containing the meta data of the episodes.
- Documentation using readthedocs: https://zeitsprung.readthedocs.io/en/latest/
- Github action for building and testing the package.
- Coverage tests using codecov.io.
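To make the analysis step concrete, a minimal sketch of loading the queried metadata into pandas; it assumes query_all_meta() returns rows that pandas can consume directly, which the README does not specify.

```python
# a minimal sketch, assuming query_all_meta() returns rows (e.g. tuples
# or dicts) that pandas can consume; column names are not documented
import pandas as pd
from zeitsprung.database import SQLiteEngine

db = SQLiteEngine('path/to/folder/for/database/zeitsprung.db')
meta = pd.DataFrame(db.query_all_meta())
print(meta.head())
```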
zeitzono
Zeitzono is an open source city-based timezone converter with a text-based user interface (TUI).

Zeitzono allows you to select various cities and then modify the time to see what the local time will be in each city.

EMAIL: [email protected]
WWW: https://zeitzono.org/
zeka
Install

pip install zeka

Usage

Zeka is an opinionated cli tool for creating and maintaining a zettelkasten note-taking system.

The tool encourages a flat directory structure of markdown files containing yaml frontmatter for metadata.

Examples

Open quick note:

zk new

Specify title:

zk new -t 'my-note'

Add tags:

zk new -a 'foo, bar, baz'

Sync:

zk sync
zeke
Zeke is a command-line tool for Zookeeper that is a pleasure to use.
zekeconv
UNKNOWN
zekrpack_reader
A Python script to read Zekr translation models
zelda
No description available on PyPI.
zeldarose
Zelda Rose

A straightforward trainer for transformer-based models.

Installation

Simply install with pipx:

pipx install zeldarose

Train MLM models

Here is a short example of training first a tokenizer, then a transformer MLM model:

TOKENIZERS_PARALLELISM=true zeldarose tokenizer --vocab-size 4096 --out-path local/tokenizer --model-name "my-muppet" tests/fixtures/raw.txt
zeldarose transformer --tokenizer local/tokenizer --pretrained-model flaubert/flaubert_small_cased --out-dir local/muppet --val-text tests/fixtures/raw.txt tests/fixtures/raw.txt

The .txt files are meant to be raw text files, with one sample (e.g. sentence) per line (a tiny sketch of producing such a file follows this entry).

There are other parameters (see zeldarose transformer --help for a comprehensive list); the one you are probably most interested in is --config, giving the path to a training config (for which we have examples/).

The parameters --pretrained-model, --tokenizer and --model-config are all fed directly to Huggingface's transformers and can be pretrained model names or local paths.

Distributed training

This is somewhat tricky, you have several options:

- If you are running in a SLURM cluster use --strategy ddp and invoke via srun
- You might want to preprocess your data first outside of the main compute allocation. The --profile option might be abused for that purpose, since it won't run a full training, but will run any data preprocessing you ask for. It might also be beneficial at this step to load a placeholder model such as RoBERTa-minuscule to avoid running out of memory, since the only thing that matters for this preprocessing is the tokenizer.
- Otherwise you have two options:
  - Run with --strategy ddp_spawn, which uses multiprocessing.spawn to start the process swarm (tested, but possibly slower and more limited, see pytorch-lightning doc)
  - Run with --strategy ddp and start with torch.distributed.launch with --use_env and --no_python (untested)

Other hints

- Data management relies on 🤗 datasets and uses their cache management system. To run in a clear environment, you might have to check the cache directory pointed to by the HF_DATASETS_CACHE environment variable.

Inspirations

- https://github.com/shoarora/lmtuners
- https://github.com/huggingface/transformers/blob/243e687be6cd701722cce050005a2181e78a08a8/examples/run_language_modeling.py
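As referenced above, a minimal sketch that writes the one-sample-per-line raw text files zeldarose expects; the sentences and the path are purely illustrative.

```python
# a minimal sketch: writing one training sample (e.g. sentence) per line,
# the raw .txt input format described above; contents are illustrative
samples = ["First training sentence.", "Second training sentence."]
with open("tests/fixtures/raw.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(samples) + "\n")
```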
zelenium
zelenium

New Selenium framework for Python with base pages and elements.

Installation

pip install zelenium

Usage

Zelenium offers several features that can be combined with classical selenium usage:

- Driver singleton configuration;
- BasePage with BaseElements;
- Suffix and formatting mechanisms for BaseElements.

It also should be useful for Appium testing.

zelenium configuration

To set up configuration for zelenium you could just use Config:

from selenium import webdriver
from zelenium import Config

config = Config.get_instance()
config.driver = webdriver.Chrome()

Because Config is a singleton, you cannot use it with two different webdrivers at one moment. But if you need that, you could use the private class:

from zelenium import Config
from zelenium.base.config import _Config

config1 = Config.get_instance()
config2 = _Config()

assert not (config1 is config2)  # No assertion

BasePage and BaseElement

What BasePage offers you:

- No need to pass a webdriver instance - it will be passed from configuration automatically
- Some predefined methods, which are useful in testing
- Suffix mechanism

Define new Page

Let's imagine that we have already set up a webdriver for Config, and are starting to create a new page:

from selenium.webdriver.common.by import By
from zelenium import BasePage

class LoginPage(BasePage):
    title = (By.CSS_SELECTOR, "[data-test='title']")
    username = (By.CSS_SELECTOR, "[data-test='username']")
    password = (By.CSS_SELECTOR, "[data-test='password']")
    submit = (By.CSS_SELECTOR, "[data-test='submit']")

def main():
    login_page = LoginPage()
    print(login_page.title().text)

main()

If we execute it after opening something in a browser, it will find the element and print the text inside of it.

How it works?

Well, BasePage has a metaclass that will go over all page class fields, and if a field is a tuple of two strings, it replaces it with a BaseElement. BaseElement itself has a magic __call__ method, which executes when you 'call' a class instance:

from zelenium import BE

elem = BE("by", "selector")
web_element = elem()
# Here you call the class instance and it will return
# a WebElement for you. Just a classic WebElement

Inherit pages

For example, you have several pages which have the same structure, but some different logic, for example:

from selenium.webdriver.common.by import By
from zelenium import BasePage

class LoginPage(BasePage):
    title = (By.CSS_SELECTOR, "[data-test='title']")
    username = (By.CSS_SELECTOR, "[data-test='username']")
    password = (By.CSS_SELECTOR, "[data-test='password']")
    submit = (By.CSS_SELECTOR, "[data-test='submit']")

    def login(self, username, password):
        self.username().send_keys(username)
        self.password().send_keys(password)
        self.submit().click()

class RegisterPage(LoginPage):
    full_name = (By.CSS_SELECTOR, "[data-test='full_name']")

    def register(self, full_name, username, password):
        self.full_name().send_keys(full_name)
        self.username().send_keys(username)
        self.password().send_keys(password)
        self.submit().click()

Using this, you have no need to redefine elements on different pages - you can just inherit them if they have the same locators (or quite the same).

Format elements

Sometimes you need to define a lot of elements with similar locators. Zelenium offers two ways to solve this.
First is BaseElement formatting:

from selenium.webdriver.common.by import By
from zelenium import BasePage, BE

class DevicesPage(BasePage):
    _cell = BE(By.CSS_SELECTOR, "[data-test='devicesPageCell_{}']")

    user = _cell.format("user")
    imei = _cell.format("imei")
    iccid = _cell.format("iccid")
    model = _cell.format("model")

The .format() method formats the locator as a string and returns a new instance of BaseElement.

Second mechanism is suffix:

from selenium.webdriver.common.by import By
from zelenium import BasePage

class DevicesPage(BasePage):
    __suffix = "devicesPageCell_"

    user = (By.CSS_SELECTOR, "[data-test='{s}_user']")
    imei = (By.CSS_SELECTOR, "[data-test='{s}_imei']")
    iccid = (By.CSS_SELECTOR, "[data-test='{s}_iccid']")
    model = (By.CSS_SELECTOR, "[data-test='{s}_model']")

The main differences of these two mechanisms are:

- Suffix is added to the locator automatically;
- Suffix can be inherited;
- Format can be used anywhere outside classes - you could format an element in some functions according to changes on a page;
- Format requires usage of the BaseElement class itself.

Example of suffix inheritance:

from selenium.webdriver.common.by import By
from zelenium import BasePage

class LoginPage(BasePage):
    __suffix = "loginPageForm_"

    title = (By.CSS_SELECTOR, "[data-test='{s}title']")
    username = (By.CSS_SELECTOR, "[data-test='{s}username']")
    password = (By.CSS_SELECTOR, "[data-test='{s}password']")
    submit = (By.CSS_SELECTOR, "[data-test='{s}submit']")

class RegisterPage(LoginPage):
    __suffix = "registerPageForm_"

    email = (By.CSS_SELECTOR, "[data-test='{s}email']")
    confirm = (By.CSS_SELECTOR, "[data-test='{s}confirm']")

class RenamedRegisterPage(RegisterPage):
    __suffix = "renamedRegisterPageForm_"

def main():
    log = LoginPage()
    reg = RegisterPage()
    ren = RenamedRegisterPage()
    print(log.title)
    print(log.username)
    print(log.password)
    print(log.submit)
    print(reg.title)
    print(reg.username)
    print(reg.password)
    print(reg.submit)
    print(reg.email)
    print(reg.confirm)
    print(ren.title)
    print(ren.username)
    print(ren.password)
    print(ren.submit)
    print(ren.email)
    print(ren.confirm)

if __name__ == '__main__':
    main()

This code will output:

Element [data-test='loginPageForm_title'] (css selector)
Element [data-test='loginPageForm_username'] (css selector)
Element [data-test='loginPageForm_password'] (css selector)
Element [data-test='loginPageForm_submit'] (css selector)
Element [data-test='registerPageForm_title'] (css selector)
Element [data-test='registerPageForm_username'] (css selector)
Element [data-test='registerPageForm_password'] (css selector)
Element [data-test='registerPageForm_submit'] (css selector)
Element [data-test='registerPageForm_email'] (css selector)
Element [data-test='registerPageForm_confirm'] (css selector)
Element [data-test='renamedRegisterPageForm_title'] (css selector)
Element [data-test='renamedRegisterPageForm_username'] (css selector)
Element [data-test='renamedRegisterPageForm_password'] (css selector)
Element [data-test='renamedRegisterPageForm_submit'] (css selector)
Element [data-test='renamedRegisterPageForm_email'] (css selector)
Element [data-test='renamedRegisterPageForm_confirm'] (css selector)
zelfred
Welcome to zelfred

Documentation

zelfred is a framework for building interactive applications similar to Alfred Workflow, but in Python and for the terminal. It is free, open source, and cross-platform.

You can view sample applications in the App Gallery.

Install

zelfred is released on PyPI, so all you need is to:

$ pip install zelfred

To upgrade to latest version:

$ pip install --upgrade zelfred
zeliapdf
No description available on PyPI.
zeliboba-deepspeed
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
zeliboba-deepspeed-2
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
zeligg
Failed to fetch description. HTTP Status Code: 404
zella
Zella AI Python Package

This is a Python package for accessing Zella APIs.

Installation

pip install zella

Initialize Zella AI using API Key

from zella import ZellaAI  # import path assumed from the package name

api_key = 'api_jad3uf93iaf92902lkdj2ldu092d3d'
zella_ai = ZellaAI(api_key)

Chat Completion

Use the Chat Completion API to get a response from llms:

# Create Request Parameters
user = "user_jskjf93o101"
model = {
    "platform": "openai",
    "name": "gpt-3.5-turbo"
}
query = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant to help design api structure"},
        {"role": "user", "content": "Hi There!"}
    ]
}
response = {
    "format": "json_object"
}

# Call API
response = zella_ai.chat.completions.create(user, model, query, response)

Streaming Chat Completion

Pass the stream parameter in response to get a streaming response from llms:

# Create Request Parameters
user = "user_jskjf93o101"
model = {
    "platform": "openai",
    "name": "gpt-3.5-turbo"
}
query = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant to help design api structure"},
        {"role": "user", "content": "Hi There!"}
    ]
}
response = {
    "format": "json_object",
    "stream": True
}

# Call API
stream = zella_ai.chat.completions.create(user, model, query, response)

# Iterate over stream
for chunk in stream:
    if chunk:
        print(chunk)

Complete Example Usage

api_key = 'api_jad3uf93iaf92902lkdj2ldu092d3d'
zella_ai = ZellaAI(api_key)

user = "user_jskjf93o101"
model = {
    "platform": "openai",
    "name": "gpt-3.5-turbo"
}
query = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant to help design api structure"},
        {"role": "user", "content": "Hi There!"}
    ]
}
response = {
    "format": "json_object"
}

response = zella_ai.chat.completions.create(user, model, query, response)
assert response.status.type == 'ok'
zellij
Zellij is an open source Python framework for HyperParameter Optimization (HPO) which was originally dedicated to Fractal Decomposition based algorithms [1][2]. It includes tools to define mixed search spaces, manage objective functions, and a few algorithms. To implement metaheuristics and other optimization methods, Zellij uses DEAP [3] for the Evolutionary Algorithms part and BoTorch [4] for Bayesian Optimization.

Zellij is defined as an easy to use and modular framework, based on the Python object oriented paradigm.

See documentation.

Install Zellij

Original version:

$ pip install zellij

Distributed Zellij

This version requires an MPI library, such as MPICH or Open MPI. It is based on mpi4py.

$ pip install zellij[mpi]

Users will then be able to use the MPI option of the Loss decorator:

@Loss(MPI=True)

Then the python script must be executed using mpiexec:

$ mpiexec -machinefile <path/to/hostfile> -n <number of processes> python3 <path/to/python/script>

Dependencies

Original version:
- Python>=3.6
- numpy>=1.21.4
- DEAP>=1.3.1
- botorch>=0.6.3.1
- gpytorch>=1.6.0
- pandas>=1.3.4
- enlighten>=1.10.2

MPI version:
- Python>=3.6
- numpy>=1.21.4
- DEAP>=1.3.1
- botorch>=0.6.3.1
- gpytorch>=1.6.0
- pandas>=1.3.4
- enlighten>=1.10.2
- mpi4py>=3.1.2

Contributors

Design:
- Thomas Firmin: [email protected]
- El-Ghazali Talbi: [email protected]

[1] Nakib, A., Ouchraa, S., Shvai, N., Souquet, L. & Talbi, E.-G. Deterministic metaheuristic based on fractal decomposition for large-scale optimization. Applied Soft Computing 61, 468–485 (2017).
[2] Demirhan, M., Özdamar, L., Helvacıoğlu, L. & Birbil, Ş. I. FRACTOP: A Geometric Partitioning Metaheuristic for Global Optimization. Journal of Global Optimization 14, 415–436 (1999).
[3] Félix-Antoine Fortin, François-Michel De Rainville, Marc-André Gardner, Marc Parizeau and Christian Gagné, "DEAP: Evolutionary Algorithms Made Easy", Journal of Machine Learning Research, vol. 13, pp. 2171-2175, jul 2012.
[4] M. Balandat, B. Karrer, D. R. Jiang, S. Daulton, B. Letham, A. G. Wilson, and E. Bakshy. BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization. Advances in Neural Information Processing Systems 33, 2020.
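To make the Loss decorator usage above concrete, a hedged sketch: only @Loss(MPI=True) itself appears in the description; the import path and the decorator's calling convention are assumptions.

```python
# a minimal sketch, assuming Loss wraps a plain objective function;
# the import path is an assumption, only @Loss(MPI=True) is shown above
from zellij.core import Loss

@Loss(MPI=True)
def himmelblau(x):
    # x is assumed to be a list of two floats drawn from the search space
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2
```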
zelock
No description available on PyPI.
zelos
Zelos

Zelos (Zeropoint Emulated Lightweight Operating System) is a python-based binary emulation platform. One use of zelos is to quickly assess the dynamic behavior of binaries via command-line or python scripts. All syscalls are emulated to isolate the target binary. Linux x86_64 (32- and 64-bit), ARM and MIPS binaries are supported. Unicorn provides CPU emulation.

Full documentation is available here.

Installation

Use the package manager pip to install zelos.

pip install zelos

Basic Usage

Command-line

To emulate a binary with default options:

$ zelos my_binary

To view the instructions that are being executed, add the --inst flag:

$ zelos --inst my_binary

You can print only the first time each instruction is executed, rather than every execution, using --fasttrace:

$ zelos --inst --fasttrace my_binary

By default, syscalls are emitted on stdout. To write syscalls to a file instead, use the --trace_file flag:

$ zelos --trace_file path/to/file my_binary

Specify any command line arguments after the binary name:

$ zelos my_binary arg1 arg2

Programmatic

import zelos

z = zelos.Zelos("my_binary")
z.start(timeout=3)

Plugins

Zelos supports first- and third-party plugins. Some notable plugins thus far:

- crashd: crash analyzer combining execution trace, dataflow and memory sanitization.
- overlay (ida plugin): highlights zelos execution trace in IDA with instruction-level comments added.
- angr integration: enables symbolic execution in zelos.
- zdbserver: remote control and debugging of emulated binaries.
- syscall limiter: demonstrates event hooking and provides syscall-based execution and termination options.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

Local Development Environment

First, create a new python virtual environment. This will ensure no package version conflicts arise:

$ python3 -m venv ~/.venv/zelos
$ source ~/.venv/zelos/bin/activate

Now clone the repository and change into the zelos directory:

(zelos) $ git clone [email protected]:zeropointdynamics/zelos.git
(zelos) $ cd zelos

Install an editable version of zelos into the virtual environment. This makes import zelos available, and any local changes to zelos will be effective immediately:

(zelos) $ pip install -e '.[dev]'

At this point, tests should pass and documentation should build:

(zelos) $ pytest
(zelos) $ cd docs
(zelos) $ make html

Built documentation is found in docs/_build/html/.

Install zelos pre-commit hooks to ensure code style compliance:

(zelos) $ pre-commit install

In addition to automatically running every commit, you can run them anytime with:

(zelos) $ pre-commit run --all-files

Windows Development:

Commands vary slightly on Windows:

C:\> python3 -m venv zelos_venv
C:\> zelos_venv\Scripts\activate.bat
(zelos) C:\> pip install -e .[dev]

License

AGPL v3

Changelog

All notable changes to this project will be documented in this file. The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[Version 0.2.0] - 2020-08-04

Added
- Plugins: Yarascan
- Introduction of Zelos Manipulation Language (ZML), used for specifying events on the command line and in scripts.
- New zml_hook function in api
- Ability to redirect input to stdin
- Hooks for internal memory reads, writes, and maps
- Linked to crashd plugin, containing separate plugins for heap memory guards, static analysis via IDA Pro, and dataflow using QEMU TCG

Changed
- Moved to different command line flags for specifying what degree of information (instructions or syscalls) is printed while running
- Better support for lists in command line arguments
- Flags can be passed to the emulated program via the command line
- Misc. bug fixes (thanks to seth1002)
- General improvements to syscalls

Removed
- Verbosity command line flag (now handled via other flags)

[Version 0.1.0] - 2020-05-29

Added
- Plugins: IDA overlays, remote debug server
- Additional plugin APIs

Changed
- Minor syscall emulation improvements
- Memory management overhaul

Removed
- N/A

[Version 0.0.1] - 2020-03-03

Added
- N/A

Changed
- Updated documentation

Removed
- N/A

[Version 0.0.0] - 2020-03-02

Initial public release.

Added
- Initial open source commit.

Changed
- N/A

Removed
- N/A

The Core Zelos Team

- Kevin Valakuzhy - Research Engineer, Developer
- Ryan C. Court - Research Engineer, Developer
- Kevin Z. Snow - Co-Founder, Developer

Special Thanks To

- Fabian Monrose - Co-Founder
- Ann Cox - DHS Program Manager
- Angelos Keromytis - DARPA Program Manager (Former)
- Dustin Fraze - DARPA Program Manager
- Suyup Kim - Intern
zeloscloud
Zelos Cloud Python API

Coming soon!
zelos-crashd
Zelos CrasHD Plugin

A plugin for Zelos to enhance crash triaging by performing dataflow & root cause analysis.

Optional Prerequisites

This plugin has an optional dependency on the graphviz package to render control flow graphs to png. The graphviz python package can be installed normally via pip install graphviz, but will also require Graphviz itself to be installed locally as well. Instructions for installing Graphviz locally can be found here.

If you do not wish to install the graphviz package or Graphviz, you can safely ignore this optional dependency and zelos-crashd will still work as intended, but control flow graphs will not be rendered to png.

Installation

Install from pypi:

$ pip install zelos-crashd

Or install directly from the repo:

$ git clone https://github.com/zeropointdynamics/zelos-crashd.git
$ cd zelos-crashd
$ pip install .

Alternatively, install an editable version for development:

$ git clone https://github.com/zeropointdynamics/zelos-crashd.git
$ cd zelos-crashd
$ pip install -e '.[dev]'

Related Resources

CrasHD Visualizer is a VS Code extension for visualizing the results & output of this plugin that features:
- Contextual source code highlighting
- Interactive graph of data flow
- Additional context & runtime information

CrasHD Examples is a collection of reproducible crashes that can be used with this plugin.

Usage

The following snippets use the example from examples-crashd/afl_training/vulnerable.c

After compiling the above example (vulnerable.c) you can emulate the binary using zelos:

$ zelos vulnerable < inputs/crashing_input

To gain more information on the crashing program, use the --taint and --taint_output flags in order to keep track of dataflow leading from the crash. When the --taint flag is used, Zelos will calculate the dataflow and taint information related to the crash. --taint_output terminal is used to specify that the output of --taint will be to stdout.

$ zelos --taint --taint_output terminal vulnerable < inputs/crashing_input

Changelog

All notable changes to this project will be documented in this file. The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[Version 0.0.2] - 2020-08-06

Remove graphviz as a required dependency, add the taint_output flag.

Added
- taint_output flag

Changed
- N/A

Removed
- Dependency on graphviz package

[Version 0.0.1] - 2020-08-05

Initial public release.

Added
- Initial open source commit.

Changed
- N/A

Removed
- N/A

Authors

- Ryan Court
- Kevin Z. Snow
- Kevin Valakuzhy
- Suyup Kim
zelos-demeter
Readme

Demeter is a backtest tool for DeFi. Please go to the website for a full description.

Links

- website: https://zelos-demeter.readthedocs.io/
- Medium: https://medium.com/zelos-research
- Pypi: https://pypi.org/project/zelos-demeter
- demeter-fetch: https://github.com/zelos-alpha/demeter-fetch
zelt
Zalando end-to-end load tester

A command-line tool for orchestrating the deployment of Locust in Kubernetes.

Use it in conjunction with Transformer to run large-scale end-to-end load testing of your website.

Prerequisites

- Python 3.6+

Installation

Install using pip:

pip install zelt

Usage

Example HAR files, locustfile, and manifests are included in the examples/ directory, try them out.

N.B. The cluster to deploy to is determined by your currently configured context. Ensure you are using the correct cluster before using Zelt.

Locustfile as input

Zelt can deploy Locust with a locustfile to a cluster:

zelt from-locustfile PATH_TO_LOCUSTFILE --manifests PATH_TO_MANIFESTS

HAR file(s) as input

Zelt can transform HAR file(s) into a locustfile and deploy it along with Locust to a cluster:

zelt from-har PATH_TO_HAR_FILES --manifests PATH_TO_MANIFESTS

N.B. This requires Transformer to be installed. For more information about Transformer, please refer to Transformer's documentation.

Rescale a deployment

Zelt can rescale the number of workers in a deployment it has made to a cluster:

zelt rescale NUMBER_OF_WORKERS --manifests PATH_TO_MANIFESTS

Delete a deployment

Zelt can delete deployments it has made from a cluster:

zelt delete --manifests PATH_TO_MANIFESTS

Run Locust locally

Zelt can also run Locust locally by providing the --local/-l flag to either the from-har or from-locustfile command e.g.:

zelt from-locustfile PATH_TO_LOCUSTFILE --local

Use S3 for locustfile storage

By default, Zelt uses a ConfigMap for storing the locustfile. ConfigMaps have a file-size limitation of ~2MB. If your locustfile is larger than this then you can use an S3 bucket for locustfile storage.

To do so, add the following parameters to your Zelt command:

- --storage s3: Switch to S3 storage
- --s3-bucket: The name of your S3 bucket
- --s3-key: The name of the file as stored in S3

N.B. Zelt will not create the S3 bucket for you.

N.B. Make sure to update your deployment manifest(s) to download the locustfile file from S3 instead of loading from the ConfigMap volume mount.

Use a configuration file for Zelt options

An alternative to specifying Zelt's options on the command-line is to use a configuration file, for example:

zelt from-har --config examples/config/config.yaml

N.B. The configuration file's keys are the same as the command-line option names but without the double dash (--); a hedged sketch of such a file follows at the end of this entry.

Documentation

Take a look at our documentation for more details.

Contributing

Please read CONTRIBUTING.md for details on our process for submitting pull requests to us, and please ensure you follow the CODE_OF_CONDUCT.md.

Versioning

We use SemVer for versioning.

Authors

- Brian Maher - @bmaher
- Oliwia Zaremba - @tortila
- Thibaut Le Page - @thilp

See also the list of contributors who participated in this project.

License

This project is licensed under the MIT License - see the LICENSE file for details
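As referenced above, a hedged sketch of what examples/config/config.yaml might contain; the key names mirror the documented CLI flags without the leading dashes, and all values are purely illustrative.

```yaml
# a hedged sketch of a Zelt config file; keys mirror the CLI options
# shown above (without the --), values are illustrative placeholders
manifests: examples/manifests
storage: s3
s3-bucket: my-bucket
s3-key: locustfile.py
```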
zelus
No description available on PyPI.
zelus-route-manager
zelus

Requirements

- python >= 3.8
- iproute2

Initialize build environment:

python -m pip install -U setuptools wheel build

Build python package:

python -m build .

Building docker image:

docker build -t markfarrell/zelus .

Running docker container:

docker build -t markfarrell/zelus . && \
  docker run --rm -it --name zelus \
    --volume $(pwd)/docker-data/etc/zelus:/etc/zelus \
    --cap-add NET_ADMIN -p 9123:9123 \
    markfarrell/zelus --interface eth0

Exec into container:

docker exec -it zelus /bin/sh

Test prometheus metrics:

curl http://localhost:9123/metrics

Testing

Lint:

tox -e lint
zema-emc-annotated
zema_emc_annotated

This software provides a convenient API to access the annotated ZeMA dataset about remaining useful life of an electro-mechanical cylinder on Zenodo. The code was written for Python 3.10.

Getting started

The INSTALL guide assists in installing the required packages. Afterwards please visit our example page.

Documentation

The documentation can be found on ReadTheDocs.

Disclaimer

This software is developed at Physikalisch-Technische Bundesanstalt (PTB). The software is made available "as is" free of cost. PTB assumes no responsibility whatsoever for its use by other parties, and makes no guarantees, expressed or implied, about its quality, reliability, safety, suitability or any other characteristic. In no event will PTB be liable for any direct, indirect or consequential damage arising in connection with the use of this software.

License

zema_emc_annotated is distributed under the MIT license.
zemail
zemail

Zen email library.
zemath
This project helps to do basic maths.

You can install the module using the below command in cmd:

pip install zemath
zembed
Python - ZEmbed API

zembed is a simple client for the ZEmbed api.

Installation

sudo pip install zembed

Using

import zembed
embed = zembed.API.fetch('http://rohitkhatri.com')

Contributing

https://github.com/rohitkhatri/zembed
zemberek-grpc
Zemberek gRPC

Latest version 0.16.0

Zemberek-NLP provides some of its functions via a remote procedure call framework called gRPC. gRPC is a high performance, open-source universal RPC framework. Once the Zemberek-NLP gRPC server is started, other applications can access remote services natively via automatically generated client libraries.

https://github.com/ahmetaa/zemberek-nlp

Start gRPC server with docker

You can use it directly on your computer or run it in a docker container.

https://hub.docker.com/r/ryts/zemberek-grpc

Run with docker:

docker run -d --rm -p 6789:6789 --name zemberek-grpc ryts/zemberek-grpc

Check logs:

docker logs -f zemberek-grpc

Install library and example

pip install zemberek-grpc

Python Client Example

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys

import grpc

import zemberek_grpc.language_id_pb2 as z_langid
import zemberek_grpc.language_id_pb2_grpc as z_langid_g
import zemberek_grpc.normalization_pb2 as z_normalization
import zemberek_grpc.normalization_pb2_grpc as z_normalization_g
import zemberek_grpc.preprocess_pb2 as z_preprocess
import zemberek_grpc.preprocess_pb2_grpc as z_preprocess_g
import zemberek_grpc.morphology_pb2 as z_morphology
import zemberek_grpc.morphology_pb2_grpc as z_morphology_g

channel = grpc.insecure_channel('localhost:6789')

langid_stub = z_langid_g.LanguageIdServiceStub(channel)
normalization_stub = z_normalization_g.NormalizationServiceStub(channel)
preprocess_stub = z_preprocess_g.PreprocessingServiceStub(channel)
morphology_stub = z_morphology_g.MorphologyServiceStub(channel)

def find_lang_id(i):
    response = langid_stub.Detect(z_langid.LanguageIdRequest(input=i))
    return response.langId

def tokenize(i):
    response = preprocess_stub.Tokenize(z_preprocess.TokenizationRequest(input=i))
    return response.tokens

def normalize(i):
    response = normalization_stub.Normalize(z_normalization.NormalizationRequest(input=i))
    return response

def analyze(i):
    response = morphology_stub.AnalyzeSentence(z_morphology.SentenceAnalysisRequest(input=i))
    return response

def fix_decode(text):
    """Pass decode."""
    if sys.version_info < (3, 0):
        return text.decode('utf-8')
    else:
        return text

def run():
    lang_detect_input = 'merhaba dünya'
    lang_id = find_lang_id(lang_detect_input)
    print("Language of [" + fix_decode(lang_detect_input) + "] is: " + lang_id)
    print("")
    tokenization_input = 'Merhaba dünya!'
    print('Tokens for input : ' + fix_decode(tokenization_input))
    tokens = tokenize(tokenization_input)
    for t in tokens:
        print(t.token + ':' + t.type)
    print("")
    normalization_input = 'Mrhaba dnya'
    print('Normalization result for input : ' + fix_decode(normalization_input))
    n_response = normalize(normalization_input)
    if n_response.normalized_input:
        print(n_response.normalized_input)
    else:
        print('Problem normalizing input : ' + n_response.error)
    print("")
    analysis_input = 'Kavanozun kapağını açamadım.'
    print('Analysis result for input : ' + fix_decode(analysis_input))
    analysis_result = analyze(analysis_input)
    for a in analysis_result.results:
        best = a.best
        lemmas = ""
        for l in best.lemmas:
            lemmas = lemmas + " " + l
        print("Word = " + a.token + ", Lemmas = " + lemmas +
              ", POS = [" + best.pos + "], Full Analysis = {" + best.analysis + "}")

if __name__ == '__main__':
    run()
zemberek-python
ZEMBEREK-PYTHON

Python implementation of the Natural Language Processing library for Turkish, zemberek-nlp. It is based on zemberek 0.17.1 and is completely written in Python, meaning there is no need to set up a Java development environment to run it.

Source Code

https://github.com/Loodos/zemberek-python

Dependencies

- antlr4-python3-runtime==4.8
- numpy>=1.19.0

Supported Modules

Currently, the following modules are supported.

- Core (Partially)
- TurkishMorphology (Partially)
  - Single Word Analysis
  - Diacritics Ignored Analysis
  - Word Generation
  - Sentence Analysis
  - Ambiguity Resolution
- Tokenization
  - Sentence Boundary Detection
  - Tokenization
- Normalization (Partially)
  - Spelling Suggestion
  - Noisy Text Normalization

Installation

You can install the package with pip:

pip install zemberek-python

Examples

Example usages can be found in examples.py

Notes

There are some minor changes in the code where the original contains some Java-specific functionality and data structures. We used Python equivalents as much as we could, but sometimes we needed to change them. And it affects the performance and accuracy a bit.

In the MultiLevelMphf class, in the original Java implementation, there are some integer multiplication operations which I tried to reimplement using vanilla Python 'int', but the results were not the same. Then I tried it with numpy.int32 and numpy.float32, since default Java int and float types are 4 bytes. The results were the same as Java; however, these operations often produced a RuntimeWarning as the multiplication caused overflow. In Java there were no overflow warnings whatsoever. I could not find a reasonable explanation for this situation, nor could I find a better way to implement it. So I suppressed overflow warnings for MultiLevelMphf. Therefore, please be aware that this is not a healthy behaviour, and you should be careful using this code.

Credits

This project is a Python port of zemberek-nlp.
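Since the README defers all usage to examples.py, here is a hedged sketch of single-word analysis; the class and method names mirror zemberek-nlp's Java API and should be treated as assumptions about this port.

```python
# a hedged sketch of single-word analysis; create_with_defaults/analyze
# mirror zemberek-nlp's Java API and are assumptions about this port
from zemberek import TurkishMorphology

morphology = TurkishMorphology.create_with_defaults()
analysis = morphology.analyze("kalemi")  # the word is illustrative
print(analysis)
```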
zemfrog
zemfrog

Zemfrog is a simple framework based on flask for building a REST API quickly, which focuses on building a customizable, flexible and manageable REST API!

This project is heavily inspired by FastAPI and Django Framework.

Notes

The project is still in BETA version, which means that all the APIs in it are still unstable. Please be careful if you want to use it in a production environment! thanks.

Why zemfrog?

Zemfrog is equipped with advanced features including:

- Solid application structure.
- Automatically generate REST API.
- Built-in JWT authentication.
- RBAC support.
- Automatically generate API documentation (swagger-ui).
- Background jobs support.
- Database migration based on application environment.
- And much more…

Donate & Support

Keep in mind that donations are very important to me, because currently I am working alone to develop this project. It takes a lot of time and energy. If this project is useful, please give me any support. I really appreciate it. And also you can donate your money via:

Links

- Homepage: https://github.com/zemfrog/zemfrog
- Documentation: https://zemfrog.readthedocs.io
- License: MIT

Credits

- Flask
- Cookie Cutter

History

1.0.0 (2020-09-03)
- First release on PyPI.

1.0.1 (2020-09-07)
- Automation create (CRUD) API
- Update template API
- Update zemfrog release information.

1.0.2 (2020-09-08)
- Update API structure

1.0.3 (2020-09-08)
- re-upload

1.0.4 (2020-09-09)
- fix manifest file

1.0.5 (2020-09-10)
- add command boilerplate
- add schema command

1.0.6 (2020-09-15)
- add jwt authentication
- refactor blueprint boilerplate
- add send async email
- fix celery

1.0.7 (2020-09-19)
- Fix: #8 flask-apispec integration.
- improve authentication.
- add default schema models.
- Fix: rest api boilerplate
- IMPROVE: Added a prompt if a schema model exists.
- IMPROVE: add zemfrog to requirements
- DOC: add README to project boilerplate

1.0.8 (2020-10-03)
- Fix: #12, #13, #14
- IMPROVE: import the orm model in the schema generator.
- General Update: update development status

1.0.9 (2020-10-05)
- Fix: #16, #14, #17
- NEW: add version option

1.2.0 (2020-10-19)
- NEW: add load urls
- NEW: add load middlewares
- NEW: middleware boilerplate.
- NEW: multiple apps support
- Fix minor bugs

1.2.1 (2020-10-27)
- New Feature: added prompt to manage the app.
- moved mail dir to templates/emails
- add api_doc & authenticate decorator.
- NEW: add swagger oauth2.
- NEW: add first_name & last_name column.
- IMPROVE: Support creating REST API descriptions via function documents.
- Refactor Code: Rename and add field validation.
- Code Change: update REST API structure.

1.2.2 (2020-10-28)
- Refactor generator
- New Feature: add error handler

1.2.3 (2020-11-13)
- Adding: current_db local proxy
- rename services directory to tasks

1.2.4 (2020-11-14)
- support multiple static files
- Add an endpoint to validate the password reset token
- fix #37

1.2.5 (2020-11-18)
- NEW: add extension, model, task generator
- Refactor Code: add model mixin
- add command user, role & permission
- FIX: auth logs
- New Feature: supports role-based access control

1.2.6 (2020-11-21)
- IMPROVE: commands to manage nested applications
- Added endpoint for checking token jwt
- Add an endpoint to retrieve one data from the model
- Add schema to limit results
- Added a handler for handling API errors

1.2.7 (2020-11-24)
- FIX: user checks in the test token endpoint
- NEW: support for creating your own app loader
- FIX: Make user roles optional
- FIX: #49

2.0.1 (2020-12-20)
- Refactoring app loaders
- IMPROVE: REST API, models & validators
- IMPROVE: added template checks
- IMPROVE: add password validator
- IMPROVE: Compatible with frontend nuxtjs
- NEW: add flask-cors extension

2.0.2 (2020-12-20)
- fix: missing flask-cors dependency

2.0.3 (2020-12-20)
- IMPROVE: clean up dependencies

3.0.1
(2020-12-20)
- add command secretkey
- Fix: varchar length
- Added db migration based on environment
- Stable release

4.0.1 (2021-03-04)
- IMPROVE: Move extensions to global
- NEW: add pre-commit tool
- IMPROVE: refactor json response
- Refactor Code: run pre-commit
- IMPROVE: Change 'SystemExit' to 'ValidationError'
- IMPROVE: Rename the api directory to apis
- NEW: add autoflake hook
- Changed the stable version status to BETA

4.0.2 (2021-03-05)
- FIX: response message in jwt & error handler boilerplate
- FIX: update zemfrog version in requirements.txt

4.0.3 (2021-03-17)
- Fix https://github.com/zemfrog/zemfrog/issues/87
- Add pre-commit to requirements-dev.txt

4.0.4 (2021-03-31)
- FIX: role & permission relation
- FIX: typo column name
- IMPROVE: split blueprint and task to global
- IMPROVE: split error handlers to global
- IMPROVE: set default blueprint to blank
- IMPROVE: Use schema from source rather than local proxy
- IMPROVE: Using the model name corresponding user input

4.0.5 (2021-04-01)
- FIX: Load the blueprint name
- FIX: unknown column
- NEW: added codecov workflow (testing)
- NEW: add default value to 'confirmed' column

5.0.1 (2021-04-10)
- Flask-smorest integration. Based on #63
- Refactor Code: added scaffolding
- FIX: use 'alt_response' instead of 'response' to wrap multiple responses.
- FIX: configuration to enable / disable OpenAPI
- IMPROVE: no longer supports to load main urls
- IMPROVE: Add a command description to the sub application
- IMPROVE: use 'subprocess.call' instead of 'os.system'
- Change the name of the password reset request template
- create pyup.io config file
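Zemfrog plugins configure themselves through list-valued settings; a hedged sketch of how a settings module might aggregate them, with every entry taken from the zemfrog-auth, zemfrog-test, zemfrog-theme, and zemfrog-quasar descriptions later in this listing (combining them in one module is purely illustrative).

```python
# a hedged sketch of a zemfrog settings module; each list-valued setting
# is shown individually by a plugin README below, combined here for illustration
BLUEPRINTS = ["zemfrog_auth.jwt"]      # from the zemfrog-auth entry
COMMANDS = ["zemfrog_test"]            # from the zemfrog-test entry
EXTENSIONS = ["zemfrog_theme"]         # from the zemfrog-theme entry
ZEMFROG_THEMES = ["zemfrog_quasar"]    # from the zemfrog-quasar entry
```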
zemfrog-auth
zemfrog-auth

Authentication for the zemfrog framework

Currently only supports JWT (JSON Web Token) authentication.

Features

- JWT Authentication Blueprint
- Event signal support for user information (login, register, etc)

Usage

Install the module:

pip install zemfrog-auth

Add jwt blueprints to your zemfrog application:

BLUEPRINTS = ["zemfrog_auth.jwt"]

Using event signals

In this section I will give an example of using the event signal using a blinker.

# Add this to wsgi.py
from zemfrog_auth.signals import on_user_logged_in

@on_user_logged_in.connect
def on_logged_in(user):
    print("Signal user logged in:", user)

For a list of available signals, you can see it here. For signal documentation you can visit here.
zemfrog-quasar
Zemfrog Quasar

Quasar Framework Integration

Features

- Support vue v2
- Added Vuex & Vue-Router
- Integrated with Quasar

Usage

Install this:

pip install zemfrog-quasar

Add this to the ZEMFROG_THEMES configuration:

ZEMFROG_THEMES = ["zemfrog_quasar"]

Quick Tutorial

In this theme, several jinja blocks are available, such as:

- meta - List of meta tags to include
- links - This is to be included in the head tag (link, title, etc)
- content - This is to be included in the main tag (div#q-app)
- js - The js script that will be included, after vue, vuex, vue-router & quasar
- vuex - Configuration passed to vuex
- vue_router - Configuration passed to vue-router
- vue - Configuration passed to vue

Layouts

Example of using layouts:

{% extends 'quasar/layout.html' %}

Links

An example of using the links block; below we add material icons. By default the theme doesn't come with an icon set, so you have to add it yourself.

{% block links %}
<link href="https://fonts.googleapis.com/css?family=Roboto:100,300,400,500,700,900|Material+Icons" rel="stylesheet" type="text/css">
{% endblock %}

Content

An example of using the content block:

{% block content %}
<q-toolbar class="text-primary">
  <q-btn flat round dense icon="menu"></q-btn>
  <q-toolbar-title>Toolbar</q-toolbar-title>
  <q-btn flat round dense icon="more_vert"></q-btn>
</q-toolbar>
{% endblock %}

Vue

Example of using vue:

{% block content %}
<q-toolbar class="text-primary">
  <q-btn flat round dense icon="menu" @click="active = !active"></q-btn>
  <q-toolbar-title>Is this active? <{ active }></q-toolbar-title>
  <q-btn flat round dense icon="more_vert"></q-btn>
</q-toolbar>
{% endblock %}

{% block vue %}
data() {
  return {
    active: false
  }
},
{% endblock %}

In the example above, there are several things to note:

- We add data to vue via the vue block
- We also change the vue delimiters to <{ and }>

Quasar

If you use this, you can't use self-closing tags. See here https://quasar.dev/start/umd#usage
zemfrog-resetdb
No description available on PyPI.
zemfrog-test
zemfrog-test

Zemfrog unit testing tools

Features

- Support automatically creating unit tests for APIs / blueprints
- Available fixtures:
  - client - This is to access the Client class to interact with the API
  - app_ctx - This is to enable the flask application context
  - req_ctx - This is to activate the flask request context
  - user - This is to generate confirmed random users

Warning: zemfrog test provides a finalizer to delete all users when the test session ends, so you need to create a dedicated database for testing.

Usage

Install this:

pip install zemfrog-test

And add it to the COMMANDS configuration in the zemfrog application:

COMMANDS = ["zemfrog_test"]

Now that you have the test command, here is a list of supported commands:

- init - Initialize the tests directory in the project directory.
- new - Create unit tests for the API or blueprint. (The names entered must match the APIS and BLUEPRINTS configurations. For example zemfrog_auth.jwt)
- run - To run unit tests.

It doesn't work with the pytest command, don't know why. :/
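A hedged sketch of what a test using these fixtures could look like; only the fixture names come from the feature list above, while the endpoint path, payload shape, and the user object's attributes are hypothetical.

```python
# a hedged sketch using the documented `client` and `user` fixtures;
# the /auth/login endpoint, payload, and user.username attribute are hypothetical
def test_login_returns_ok(client, user):
    response = client.post(
        "/auth/login",
        json={"username": user.username, "password": "password"},
    )
    assert response.status_code == 200
```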
zemfrog-theme
zemfrog-theme

Zemfrog theme - to register your own theme template! The main idea is to make theme templates that are easy to customize.

Usage

Install this:

pip install zemfrog-theme

Add this to the EXTENSIONS configuration:

EXTENSIONS = ["zemfrog_theme"]

And you can register your own theme via the ZEMFROG_THEMES configuration.

See the boilerplate for theme creation here: https://github.com/aprilahijriyan/zemfrog-theme-template

See sample themes here: https://github.com/zemfrog/zemfrog-quasar
zem-pysolarmanv5
Note: This project is a placeholder for unreleased versions of the upstream project. Please see https://pypi.org/project/pysolarmanv5/ for the upstream project.

pysolarmanv5

This is a Python module to interact with Solarman (IGEN-Tech) v5 based solar inverter data loggers. Modbus RTU frames can be encapsulated in the proprietary Solarman v5 protocol and requests sent to the data logger on port tcp/8899.

This module aims to simplify the Solarman v5 protocol, exposing interfaces similar to that of the uModbus library.

Details of the Solarman v5 protocol have been based on the excellent work of Inverter-Data-Logger by XtheOne and others.

Documentation

pysolarmanv5 documentation is available on Read the Docs.

The Solarman V5 protocol is documented here.

Supported Devices

A user contributed list of supported devices is available here.

If you are unsure if your device is supported, please use the solarman_scan utility to find compatible data logging sticks on your local network.

Please note that the Solis S3-WIFI-ST data logging stick is NOT supported. See GH issue #8 for further information.

Some Ethernet data logging sticks have native support for Modbus TCP and therefore do not require pysolarmanv5. See GH issue #5 for further information.

Dependencies

- pysolarmanv5 requires Python 3.8 or greater.
- pysolarmanv5 depends on uModbus.

Installation

To install the latest stable version of pysolarmanv5 from PyPi, run:

pip install pysolarmanv5

To install the latest development version from git, run:

pip install git+https://github.com/jmccrohan/pysolarmanv5.git

Projects using pysolarmanv5

- NosIreland/solismon3
- NosIreland/solismod
- jmccrohan/ha_pyscript_pysolarmanv5
- YodaDaCoda/hass-solarman-modbus
- schwatter/solarman_mqtt
- RonnyKempe/solismon
- toledobastos/solarman_battery_autocharge
- AndyTaylorTweet/solis2mqtt
- pixellos/codereinvented.automation.py
- cjgwhite/hass-solar
- imcfarla2003/solarconfig
- githubDante/deye-controller

Contributions

Contributions welcome. Please raise any Issues / Pull Requests via Github.

License

pysolarmanv5 is licensed under the MIT License. Copyright (c) 2022 Jonathan McCrohan
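A hedged usage sketch, modeled on the uModbus-style interface described above; the constructor arguments and method name follow the upstream project's documented style but should be treated as assumptions here, and the IP address, logger serial, and register address are placeholders.

```python
# a hedged sketch: reading input registers from a data logger on tcp/8899;
# address/serial/register are placeholders, names follow the upstream style
from pysolarmanv5 import PySolarmanV5

modbus = PySolarmanV5("192.168.1.50", 1234567890, port=8899, mb_slave_id=1)
print(modbus.read_input_registers(register_addr=33022, quantity=6))
```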
zems
Zabbix Extended Monitoring Scripts (and templates)
zen3geo
zen3geo

The 🌏 data science library you've been waiting for~

君の前前前世から僕は 君を探しはじめたよ
(Since your past life, I have been searching for you)

公案 (Kōan)

Geography is difficult, but easy it can also be
Deep Learning, you hope, has an answer to all
Too this, too that, where to though, where to?
Look out, sense within, and now you must know

Installation

To install the development version from GitHub, do:

pip install git+https://github.com/weiji14/zen3geo.git

Or the stable version from PyPI:

pip install zen3geo

If instead, conda-forge you desire:

mamba install --channel conda-forge zen3geo

Other instructions, see https://zen3geo.readthedocs.io/en/latest/#installation
zenai
No description available on PyPI.
zenaton
Easy Asynchronous Jobs Manager for Developers

Explore the docs » Website · Examples in Python · Tutorial in Python

Zenaton library for Python

Zenaton helps developers to easily run, monitor and orchestrate background jobs on your workers without managing a queuing system. In addition to this, a monitoring dashboard shows you task executions in real-time and helps you to handle errors.

The Zenaton library for Python lets you code and launch tasks using the Zenaton platform, as well as write workflows as code. You can sign up for an account on Zenaton and go through the tutorial in python.

Requirements

This package has been tested with Python 3.5.

Python Documentation

You can find all details on Zenaton's website.

Table of contents

- Zenaton library for Python
  - Requirements
  - Python Documentation
  - Table of contents
  - Getting started
    - Installation
      - Install the Zenaton Agent
      - Install the library
      - Framework integration
    - Quick start
      - Client Initialization
      - Executing a background job
      - Orchestrating background jobs
        - Using workflows
  - Getting help
    - Theoretical Examples
    - Real-life Examples
  - Contributing
  - License
  - Code of Conduct

Getting started

Installation

Install the Zenaton Agent

To install the Zenaton agent, run the following command:

curl https://install.zenaton.com/ | sh

Then, you need your agent to listen to your application. To do this, you need your Application ID and API Token. You can find both on your Zenaton account.

zenaton listen --app_id=YourApplicationId --api_token=YourApiToken --app_env=YourApplicationEnv --boot=boot.py

Install the library

To add the latest version of the library to your project, run the following command:

pip install zenaton

Framework integration

If you are using Django, please refer to our dedicated documentation to get started:

- Getting started with Django

Quick start

Client Initialization

To start, you need to initialize the client. To do this, you need your Application ID and API Token. You can find both on your Zenaton account.

Then, initialize your Zenaton client:

from zenaton.client import Client

Client(your_app_id, your_api_token, your_app_env)

Executing a background job

A background job in Zenaton is a class implementing the Zenaton.abstracts.task.Task interface.

Let's start by implementing a first task printing something, and returning a value:

import random

from zenaton.abstracts.task import Task
from zenaton.traits.zenatonable import Zenatonable

class HelloWorldTask(Task, Zenatonable):

    def handle(self):
        print('Hello World\n')
        return random.randint(0, 1)

Now, when you want to run this task as a background job, you need to do the following:

HelloWorldTask().dispatch()

That's all you need to get started. With this, you can run many background jobs. However, the real power of Zenaton is to be able to orchestrate these jobs. The next section will introduce you to job orchestration.

Orchestrating background jobs

Job orchestration is what allows you to write complex business workflows in a simple way. You can execute jobs sequentially, in parallel, conditionally based on the result of a previous job, and you can even use loops to repeat some tasks.

We wrote about some use-cases of job orchestration, you can take a look at these articles to see how people use job orchestration.

Using workflows

A workflow in Zenaton is a class implementing the Zenaton.abstracts.workflow.Workflow interface.

We will implement a very simple workflow: First, it will execute the HelloWorld task. The result of the first task will be used to make a condition using an if statement. When the returned value is greater than 0, we will execute a second task named FinalTask.
Otherwise, we won't do anything else.
One important thing to remember is that your workflow implementation must be idempotent. You can read more about that in our documentation.
The implementation looks like this:
    from tasks.hello_world_task import HelloWorldTask
    from tasks.final_task import FinalTask

    from zenaton.abstracts.workflow import Workflow
    from zenaton.traits.zenatonable import Zenatonable

    class MyFirstWorkflow(Workflow, Zenatonable):

        def handle(self):
            n = HelloWorldTask().execute()
            if n > 0:
                FinalTask().execute()
Now that your workflow is implemented, you can execute it by calling the dispatch method:
    MyFirstWorkflow().dispatch()
If you really want to run this example, you will need to implement the FinalTask task.
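A minimal FinalTask sketch, following the same Task interface shown for HelloWorldTask above (the body of handle is illustrative, not from the Zenaton docs):
    from zenaton.abstracts.task import Task
    from zenaton.traits.zenatonable import Zenatonable

    class FinalTask(Task, Zenatonable):

        def handle(self):
            # Runs only when HelloWorldTask returned a value greater than 0
            print('Final task done\n')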
There are many more features usable in workflows in order to get the orchestration done right. You can learn more in our documentation.
Getting help
Need help? Feel free to contact us by chat on Zenaton.
Found a bug? You can open a GitHub issue.
Theoretical Examples
Python examples repo
Real-life Examples
Triggering An Email After 3 Days of Cold Weather (Medium Article, Source Code)
Contributing
Bug reports and pull requests are welcome on GitHub here. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.
Testing
To test your changes before sending a pull request, first install the test requirements:
    pip install '.[test]'
Then run PyTest:
    pytest
License
The package is available as open source under the terms of the MIT License.
Code of Conduct
Everyone interacting in the zenaton-Python project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.
Changelog
[0.4.2] - 2019-10-03
Added
- Added custom_id argument for workflow schedule.
- Dispatch of tasks and workflows is now done using the API instead of a local agent.
- Pause, Resume and Kill workflows are now done using the API instead of a local agent.
- Send event to workflow is now done using the API instead of a local agent.
- Find workflow is now done using the API instead of a local agent.
[0.4.1] - 2019-09-25
Added
- Added an intent_id property when dispatching workflows and tasks, sending events to workflows, and pausing/resuming/killing workflows.
- Execution context for tasks and workflows.
- Optional on_error_retry_delay method handling task failures and specifying how many seconds to wait before retrying.
[0.4.0] - 2019-08-26
Added
- Added an intent_id property when dispatching workflows and tasks, sending events to workflows, and pausing/resuming/killing workflows.
- Added scheduling: schedule(cron)
[0.3.4] - 2019-07-01
Added
- Run tests in a continuous integration flow.
- No need for credentials when this lib is running in a Zenaton agent, except if dispatching a sub-job.
[0.3.3] - 2019-06-25
Fixed
- Fix a typo in client.py that prevented correct executions of versions.
[0.3.2] - 2019-06-21
Fixed
- Calling day_of_month on a wait task now waits for the next day having the requested day number, even if that means waiting for next month (i.e. calling Wait().day_of_month(31) on February, 2nd will wait for March, 31st).
- Fixed Wait task behavior in some edge cases.
- Encodes HTTP params before sending request.
Added
- Added event_data property when sending event.
[0.3.1] - 2019-04-26
Fixed
- Fixed MANIFEST.in file not including files required by setup.py.
[0.3.0] - 2019-03-25
Added
- Calling dispatch on tasks now allows to process tasks asynchronously.
Fixed
- Fixed Wait task behavior in some edge cases.
- Encodes HTTP params before sending request.
[0.2.5] - 2018/10/17
- Object Serialization (including circular structures)
[0.2.4] - 2018/09/26
- Enhanced WithDuration & WithTimestamp classes
[0.2.3] - 2018/09/21
- Minor enhancements (including the workflow find() method)
[0.2.2] - 2018/09/19
- New version scheme management
[0.2.1] - 2018/09/17
- Reorganized modules
[0.2.0] - 2018/09/14
- Full rewriting of the package
zenbitlib
This package is for QML study. Kindly contact the author for more info.
zenbot
No description available on PyPI.
zenbu
A setup-agnostic cascading theme engine. Uses Jinja2 for templates and YAML for variable definition.
The above gif was brought to you by wzb-utils.
Installation
    pip install zenbu
or just move zenbu.py to somewhere in your $PATH. If you do the latter, you must install the dependencies in the following section manually.
If you are running Arch Linux, you can also use the AUR package zenbu-git (AUR).
If you are running Gentoo:
    layman -o https://raw.githubusercontent.com/azahi/ricerlay/master/overlay.xml -f -a ricerlay
    layman -s ricerlay
    emerge app-misc/zenbu
Dependencies
Python (2 or 3)
The below are Python libraries that should be installed via pip. Alternatively, if you did pip install zenbu, these should have been automatically installed.
- argcomplete
- colorlog
- Jinja2
- PyYAML
- termcolor
- watchdog
Tab completion
    sudo activate-global-python-argcomplete
If you installed via pip, you may need to run the following before autocompletion works:
    grep 'PYTHON_ARGCOMPLETE_OK' "$(which zenbu)" &>/dev/null || sudo sed -i "1a # PYTHON_ARGCOMPLETE_OK" "$(which zenbu)"
Usage
Check the example folder for some sample usage!
For a more detailed explanation, check out the wiki homepage.
For common issues, check the common gotchas wiki page.
For some neat tools (including automatic desktop reloads), check the tools wiki page.
    usage: zenbu [-h] [-l] [-t TEMPLATE_DIR] [-d DEST_DIR] [-s VAR_SET_DIR]
                 [-f FILTERS_FILE] [-i IGNORES_FILE] [-e] [-w]
                 [--watch-command WATCH_COMMAND] [--watch-dirs WATCH_DIRS]
                 [--diff] [--dry]
                 [variable_files [variable_files ...]]

    A Jinja2 + YAML based config templater.

    Searches for an optional yaml file with a variable mapping in
    ~/.config/zenbu/defaults.yaml,

    an optional python file with filters in (by default)
    ~/.config/zenbu/filters.py,

    an optional yaml file with an ignore scalar of regexes in (by default)
    ~/.config/zenbu/ignores.yaml,

    and uses the Jinja2 templates in (by default)
    ~/.config/zenbu/templates/

    to render into your home directory (by default).

    Additional variable files can be applied by supplying them as arguments,
    in order of application. They can either be paths or, if located in
    (by default) ~/.config/zenbu/variable_sets/, extension-less filenames.

    Environment variable support is available; simply run with the `-e` flag
    and put the name of the variable in Jinja2 brackets.

    The default Jinja2 globals and filters are available.

    Order of precedence is: last YAML variable defined > first YAML variable
    defined > environment variables.

    Variables are shallowly resolved once. Thus, for example you may have the
    following in your defaults.yaml for convenience:

    n_primary: "{{ colors[colors.primary].normal }}"

    Autocomplete support available, but only for the default variable set
    directory.

    A file watcher is available via the -w flag. Whenever a variable file in
    use, the filters file, the ignores file, or a template file changes, the
    templates are rendered if there are any differences. This can be
    overridden with a custom list of directories via the --watch-dirs flag.

    Diffs between the current destination files and template renderings are
    available via the --diff flag.

    For help on designing templates, refer to
    http://jinja.pocoo.org/docs/dev/templates/

    For help on creating filters, refer to
    http://jinja.pocoo.org/docs/dev/api/#custom-filters

    positional arguments:
      variable_files        additional variable files

    optional arguments:
      -h, --help            show this help message and exit
      -l                    list variable sets.
      -t TEMPLATE_DIR       template directory. Default: /Users/echan/.config/zenbu/templates
      -d DEST_DIR           destination directory. Default: /Users/echan
      -s VAR_SET_DIR        variable set directory. Default: /Users/echan/.config/zenbu/variable_sets
      -f FILTERS_FILE       filters file. Default: /Users/echan/.config/zenbu/filters.py
      -i IGNORES_FILE       ignores file. Default: /Users/echan/.config/zenbu/ignores.yaml
      -e                    whether or not to use environment variables. Default: don't use environment variables
      -w                    start file watcher.
      --watch-command WATCH_COMMAND
                            what to execute when a change occurs. Default: Nothing
      --watch-dirs WATCH_DIRS
                            override what directories to watch, colon-separated. Default: Nothing
      --diff                show diff between template renderings and current destination files
      --dry                 do a dry run
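To make the cascade concrete, here is a small hypothetical setup (file names and values are invented for illustration; the directories and CLI behavior follow the help text above):
    # ~/.config/zenbu/variable_sets/solarized.yaml
    background: "#002b36"
    foreground: "#839496"

    # ~/.config/zenbu/templates/.Xresources
    *background: {{ background }}
    *foreground: {{ foreground }}
Running zenbu solarized merges defaults.yaml with the solarized variable set (later files win) and renders .Xresources into the destination directory; zenbu -w solarized keeps re-rendering whenever a template or variable file changes.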
Zenbu in the wild
What happened to whizkers?
This project may seem awfully similar to whizkers; in fact, this is a fork of whizkers which swaps the Mustache backend out with Jinja2. I'm keeping whizkers around for compatibility reasons. So what are the reasons for switching?
- Comprehensive documentation: See the Jinja2 Template Designer Documentation.
- Better logic: Everything from if/else to macros. I originally praised Mustache for its logic-less philosophy, but then I realized that there would be no place to put logic other than the variable sets, which is a nightmare.
- Expressions: You can now do {{ ':bold' if use_bold else '' }}. You can even do {{ colors[colors.primary]['normal'] }}, which has led to the deprecation of the {` ... `} eval syntax.
- Filters: You can now do {{ colors.black.bold | to_rgb }}. A lot better than Mustache's syntax.
- Better whitespace control: This means increased readability.
To help ease the transition to zenbu, there are some tips under the migration wiki page.
Thanks to
- https://gist.github.com/coleifer/33484bff21c34644dae1
- http://jinja.pocoo.org/
- http://pyyaml.org/
- fullsalvo for ideas, opinions, the readme gif, contributing to documentation, shilling, and overall being a good guy
zenbu-fftw
ZeNBu
Ze(ldovich calculations for) N-B(ody Em)u(lators)
A package to compute all the Zeldovich contributions to the power spectrum of a tracer with quadratic bias, including beyond-one-loop terms.
For further details on the calculations see Kokron et al. (2022) and DeRose et al. (2022) for the real and redshift-space calculations, respectively.
Requires pyFFTW.
Based on the velocileptors code.
Can be installed via pip:
    pip install git+https://github.com/sfschen/ZeNBu
zencache
zencachePure memory cache powered by orpc.Installpip install zencacheExample server config: zencached-config.ymldaemon: true pidfile: zencached.pid loglevel: INFO logfile: zencached.log server: listen: 0.0.0.0 port: 6779 backlog: 8192 buffer_size: 65536 rfile_buffer_size: 65536 wfile_buffer_size: 65536 max_request_size: 4194304 authentication: enable: true users: app01: spnPF3HzY975GJYC app02: ZWRVfHrK8QkQoOnQ app03: xuFTlTy9i6KCfncp zencache: ttl-scanner-worker-interval: 60 ttl-scanner-manager-interval: 60expire optionsNONE: Default option, always set expiry.NX: Set expiry only when the key has no expiry.XX: Set expiry only when the key has an existing expiry.GT: Set expiry only when the new expiry is greater than current one.LT: Set expiry only when the new expiry is less than current one.Example client usagefrom orpc_client import OrpcConnectionPool zencached_client_pool = OrpcConnectionPool(10, kwargs={ "host": "127.0.0.1", "port": 6779, "username": "app01", "password": "spnPF3HzY975GJYC", "login_event": "zencache.login", "auto_login": True, }) with zencached_client_pool.get_session() as session: session.zencache.set('a', 'a') assert session.zencache.get('a') == 'a'Releasesv0.1.4Force to upgrade orpc version.Doc update.v0.1.3Add gevent patch all.Force item key to str format.v0.1.0First release.
zencad
ZenCad
CAD system for righteous zen programmers
What is it?
ZenCad is a system for using the OCE geometry core in openscad's script style. So, it's the openscad idea, the python language and the opencascade power in one.
Manual and Information
Manual: here.
Articles: habr: "Система скриптового 3д моделирования ZenCad" (ZenCad, a scripted 3D modeling system; in Russian)
Community chat (Telegram): https://t.me/zencad
Installation
Common: ZenCad needs pythonocc and the opencascade core (OCCT). After the first launch (run the zencad or python3 -m zencad command), a library installation utility will start. You can use it to install pythonocc and OCCT. You can also install the libraries manually.
    apt install qt5-default
    python3 -m pip install zencad[gui]
    zencad
    # On first launch, ZenCad will ask you to download the required libraries.
    # After completing the necessary operations, close the installation utility and run the program again.
    zencad
Installation without the graphical part:
Install zencad as a library without the gui part:
    python3 -m pip install zencad
    python3 -m zencad --install-occt-force
    python3 -m zencad --install-pythonocc-force
For Windows:
The Windows version of ZenCad needs vcredist (Microsoft Redistributable Package). Please install vcredist 2015 for Python 3.7, and also vcredist 2019 for Python 3.8 and later.
Standalone Distribution
ZenCad has a standalone version for Windows. A Windows prerelease version is in releases.
Source code
Main project repo: https://github.com/mirmik/zencad
Related repos: https://github.com/mirmik/zenframe, https://github.com/mirmik/evalcache
HelloWorld
    #!/usr/bin/env python3
    # coding: utf-8
    from zencad import *

    model = box(200, center=True) - sphere(120) + sphere(60)
    display(model)
    show()
Result:
zencelium
Zencelium: Personal Zentropi Instance Server
Free software: BSD 3-Clause License
Installation
    pip install zencelium
You can also install the in-development version with:
    pip install https://github.com/zentropi/python-zencelium/archive/master.zip
Documentation
https://zencelium.readthedocs.io/
Development
To run all the tests run:
    tox
Note, to combine the coverage data from all the tox environments run:
Windows:
    set PYTEST_ADDOPTS=--cov-append
    tox
Other:
    PYTEST_ADDOPTS=--cov-append tox
Changelog
2020.0.0 (2020-03-06)
First release on PyPI.
zenchi
zenchi is a python3 application that communicates with the AniDB UDP API. It provides an interface to convert raw response strings into python objects. It does very little by itself and its only intention is to parse data for other applications to use.
Currently, only Data commands are supported.
Installing
    pip install -U zenchi
Usage
Fairly straightforward:
>>> import zenchi
>>> zenchi.create_socket(anidb_server='api.anidb.net', anidb_port=9000)
<socket.socket ...>
>>> zenchi.ping(nat=1)
({'port': 25065}, 300)
Every command response is a tuple (data, code). data is a dictionary of variable keys containing the parsed response, and code is the response code.
Environment variables
ZENCHI_CLIENTNAME and ZENCHI_CLIENTVERSION should be replaced by your own keys generated at the AniDB site (no guarantee these values are valid at the time of your reading!):
    ZENCHI_CLIENTNAME=devel
    ZENCHI_CLIENTVERSION=1
    ANIDB_SERVER=api.anidb.net
    ANIDB_PORT=9000
    ANIDB_USERNAME=xXGodKillerXx
    ANIDB_PASSWORD=hunter2
If these values are set, the socket is created automatically and it's much simpler. You can skip the call to create_socket entirely and just call the commands:
>>> import zenchi
>>> zenchi.auth()
({'session': 'ELahj'}, 200)
>>> zenchi.character(1)
(..., 235)
Anime masks
The ANIME command receives a mask as a parameter to filter the anime data. zenchi provides an easy way to create these masks with the module zenchi.mappings.anime.mask.
>>> import zenchi.mappings.anime.mask as amask
>>> zenchi.anime(amask.aid | amask.romaji_name | amask.english_name | amask.short_name | amask.year, aid=3433)
({'aid': 3433, 'english_name': 'Mushi-Shi', 'romaji_name': 'Mushishi', 'short_name': ['Mushi'], 'updated_at': datetime.datetime(2019, 11, 10, 19, 55, 18, 1000), 'year': '2005-2006'}, 230)
Cache
zenchi uses a very basic optional MongoDB database as cache, named anidb_cache. It uses the environment variable MONGODB_URI to check the connection string. If the variable is not set, a warning will be issued and all cache usage will be ignored (highly unadvised, as per AniDB specifications).
Any operation that uses the cache has a use_cache parameter that defaults to True. You can set this to False to skip the cache for that specific command (for example, when you want to update the cached data). All cached data also returns an updated_at key (see the example above), which is the last time that data was updated in the database.
If you don't want to use anidb_cache or MONGODB_URI, manually call zenchi.cache.setup with the appropriate values before sending requests to the API.
Features
It's actually fairly simple to add new commands to zenchi, and I just wrote what I personally intend to use. Feel free to send PRs or request something in the issues.
License
This project is under the MIT License.
For data collection and usage, make sure to read AniDB Policies.
zencode
zencode
z3-assisted x86 shellcode encoder
Currently it's a rather unpolished implementation with plenty of room for improvement.
ToDo
In no particular order:
- alphanumeric encoding
- linting/style
- print resulting bytes (ready for usage) along with size, etc
- tests
- x86_64 support
0.0.1
Initial public release.
zencoder
UNKNOWN
zen_common
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
zen_common3
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
zen_common_py3
This is a security placeholder package. If you want to claim this name for legitimate purposes, please contact us [email protected]@yandex-team.ru
zenconf
Zenconf is an unopinionated config management system. You put dicts in, it recursively merges them, then returns a single dict with values from later dicts overwriting values from earlier dicts.
A default implementation shows how to pull values from a dictionary of defaults, a config file, environment variables and command line parameters. Simply create your own custom config class and add values from whatever data sources you like in your chosen order of precedence. Then, request the merged dictionary and use it throughout the rest of your code.
Features
- Simple. Just add dicts (or OrderedDicts) from wherever you like, and get the merged dictionary back.
- No constraints on using a particular config file system, arg parser, etc.
- Key names can be normalised by applying a function to keys recursively to make for easier comparison between keys from different data sources (e.g. from environment variables or yaml files, where one uses underscores to separate words, the other hyphens). By default keys will be converted to lowercase, have hyphens converted to underscores and then have leading underscores removed.
- Support for filtering by, and stripping, an app-specific prefix from keys (configurable per data source). This means that if your app is called MYAPP, only environment variables with a prefix of MYAPP_ could be added, e.g. MYAPP_LOG_LEVEL=debug could override a command line argument --log-level.
Installation
Clone the repo then run:
    ./setup.py install
Or, install from pypi with:
    pip install zenconf
Run tests with:
    ./setup.py test
Usage
Usage is simple. Just instantiate an instance of zenconf.MergedConfig:
- The appname parameter can be used to namespace keys (such as environment variables).
- The dict_boundary parameter specifies a string that indicates that we should look up the next string fragment in a subdictionary. E.g. using a default of '__', the string LOG__LEVEL refers to config['log']['level'].
Next, add dicts via the add() method. To get the merged dict, just call get_merged_config().
See comments in the code for more information about parameters. Also see the example in the examples directory for one way to use the class, and the sketch below.
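A minimal sketch of the flow described above (the exact constructor keyword is an assumption based on the description of the "appname parameter"; check the examples directory for the real signature):
    from zenconf import MergedConfig

    config = MergedConfig(app_name='myapp')   # assumed keyword name for the appname parameter
    config.add({'log': {'level': 'info'}})    # defaults: added first, so lowest precedence
    config.add({'LOG__LEVEL': 'debug'})       # e.g. from os.environ after MYAPP_ is stripped; '__' descends into subdicts
    merged = config.get_merged_config()
    print(merged['log']['level'])             # later dicts win: 'debug'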
zenconfig
zen-config
Simple configuration loader for python.
Compared to other solutions, the goal is to bring:
- simple usage for simple use cases
- multiple format support
- use objects rather than plain dict to interact with the config
- optionally use the power of pydantic for validation
Simple usage
If you don't want to configure much, pass the config path through the env variable CONFIG, and simply use:
    from dataclasses import dataclass
    from zenconfig import Config

    @dataclass
    class MyConfig(Config):
        some_key: str
        some_optional_key: bool = False

    cfg = MyConfig(some_key="hello")
    cfg.save()
    ...
    cfg = MyConfig.load()
    cfg.some_optional_key = True
    cfg.save()
    ...
    cfg.clear()
Config file loading
When creating your config, you can specify at least one of those two attributes:
- ENV_PATH: the environment variable name containing the path to the config file, defaults to CONFIG
- PATH: directly the config path
💡 When supplying both, if the env var is not set, it will use PATH.
User constructs will be expanded. If the file does not exist it will be created. You can specify the file mode via Config.FILE_MODE.
The config can be loaded from multiple files, see fnmatch for syntax. Note that you will not be able to save if not handling exactly one file.
Read only
If you do not want to be able to modify the config from your code, you can use ReadOnlyConfig.
Supported formats
Currently, those formats are supported:
- JSON
- YAML - requires the yaml extra
- TOML - requires the toml extra
The format is automatically inferred from the config file extension. When loading from multiple files, files can be of multiple formats.
Other formats can be added by subclassing Format: Config.register_format(MyFormat(...), ".ext1", ".ext2").
💡 You can re-register a format to change dumping options.
Supported schemas
Currently, those schemas are supported:
- plain dict
- dataclasses
- pydantic models - requires the pydantic extra
- attrs - requires the attrs extra
The schema is automatically inferred from the config class.
Other schemas can be added by subclassing Schema: Config.register_schema(MySchema(...)).
You can also force the schema by directly overriding the SCHEMA class attribute on your config. This can be used to disable auto selection, or to pass arguments to the schema instance.
⚠️ When using pydantic, you have to supply the ClassVar type annotations to all class variables you override, otherwise pydantic will treat those as its own fields and complain.
Conversions
For all schemas and formats, common built-in types are handled when dumping.
⚠️ Keep in mind that only attrs and pydantic support casting when loading the config.
You can add custom encoders with Config.ENCODERS. For pydantic, stick with the standard way of doing it.
Contributing
See contributing guide.
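If you'd rather pin the location in code than use the CONFIG env var, a sketch using the documented PATH attribute could look like this (the path and field are illustrative; the .yaml extension assumes the yaml extra is installed):
    from dataclasses import dataclass
    from typing import ClassVar
    from zenconfig import Config

    @dataclass
    class MyConfig(Config):
        PATH: ClassVar[str] = "~/.config/myapp/config.yaml"  # user constructs like ~ are expanded
        some_key: str = "hello"

    cfg = MyConfig.load()  # per the docs, the file is created if it does not exist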
zen-core
No description available on PyPI.
zencore-etcd3
zencore-etcd3
A copy of etcd3.
About the original project
This project is a copy of the original etcd3 project.
The original project's PyPI page: https://pypi.org/project/etcd3/
The original project's GitHub page: https://github.com/kragniz/python-etcd3/
For documentation, code and issue discussion, please use the original project's links.
About this project
The latest version installable with pip install etcd3 is 0.12.0, but it does not work properly. The third-party packages it depends on have been upgraded, and the code installed via pip install etcd3 is incompatible with those latest packages. Testing shows that installing from source does work with these latest third-party packages. For unknown reasons, the original maintainer has not packaged the latest source code into a new release on pip.
This project packages the latest source code and publishes it to PyPI, for convenient use in my own projects.
Versioning
Based on the original project's version, with the date of the last commit of the latest source code appended.
Version history
v0.12.0.20220817
Project created.
======= History
0.1.0 (2016-09-30)
First release on PyPI.
zencoreipinfo
zencoreipinfo
Get the outgoing ip from https://zencore.cn/ipinfo.
Install
    pip install zencoreipinfo
Usage in Cli
    test@test zencoreipinfo % ipinfo --help
    Usage: ipinfo [OPTIONS]

    Options:
      --url TEXT   Server url. Can apply multiple times.
      -q, --quiet  Don't show error information.
      --help       Show this message and exit.
    test@test zencoreipinfo % ipinfo
    aaa.xxx.yy.zz
By default, the program will use the default ipinfo server https://zencore.cn/ipinfo.
We ship the cli program under the name ipinfo, with zencoreipinfo as its alias.
Usage in Script
    In [1]: from zencoreipinfo import get_outgoing_ip
    In [2]: ip = get_outgoing_ip()
    In [3]: print(ip)
    aaa.xxx.yy.zz
How to implement your own ipinfo server?
You need a linux server with a public ip address, and you need to install the nginx service on the linux server. Add the ipinfo location into your site:
    # your other configs
    location /ipinfo {
        add_header Content-Type text/plain;
        return 200 $remote_addr;
    }
    # your other configs
After you set up your ipinfo server, run the ipinfo command with the --url parameter:
    test@test zencoreipinfo % ipinfo --url https://you.site.domain.cn/ipinfo
    aaa.xxx.yy.zz
About the ipinfo server of zencore.cn
We do not promise continuity or reliability of the service.
Release
0.1.0
First release.
zencore-json2csv
python-json2csv
Convert json array data to csv.
Note: zencore-json2csv was renamed to python-json2csv.
Install
    pip install python-json2csv
Usage
    E:\>json2csv --help
    Usage: json2csv [OPTIONS]

    Options:
      -f, --file FILENAME     Input file name, use - for stdin.
      --file-encoding TEXT    Input file encoding.
      -o, --output FILENAME   Output file name, use - for stdout.
      --output-encoding TEXT  Output file encoding.
      -k, --keys TEXT         Output field names. Comma separated string list.
      -p, --path TEXT         Path of the data.
      --help                  Show this message and exit.
Examples
Example 1
INPUT:
    [
        [1,2,3],
        [2,3,4]
    ]
OUTPUT:
    1,2,3
    2,3,4
COMMAND:
    cat input.txt | json2csv -o output.txt
Example 2
INPUT:
    [
        {"f1": 11, "f2": 12, "f3": 13},
        {"f1": 21, "f3": 23, "f2": 22}
    ]
OUTPUT:
    11,12,13
    21,22,23
COMMAND:
    cat input.txt | json2csv -o output.txt -k f1,f2,f3
Example 3
INPUT:
    {
        "data": {
            "list": [
                [1,2,3],
                [2,3,4]
            ]
        }
    }
OUTPUT:
    1,2,3
    2,3,4
COMMAND:
    cat input.txt | json2csv -o output.txt -p data.list
Releases
v0.2.1 2022-01-10
- Fix license file missing problem.
v0.2.0 2019-11-05
- Rename from zencore-json2csv to python-json2csv.
- Fix console application install method in setup.py.
v0.1.1 2018-02-27
- Simple json2csv console utils released.
zen-corpora
Zen-corpora
Description
Zen-corpora provides two main functionalities:
- A memory efficient way to store unique sentences in a corpus.
- Beam text search with an RNN model in PyTorch.
Installation
This module requires Python 3.7+. Please install it by running:
    pip install zen-corpora
Why Zen-corpora?
Think about how Python stores the corpus below:
    corpus = [['I', 'have', 'a', 'pen'],
              ['I', 'have', 'a', 'dog'],
              ['I', 'have', 'a', 'cat'],
              ['I', 'have', 'a', 'tie']]
It stores each sentence separately, but it's wasting memory by storing "I have a " 4 times.
Zen-corpora solves this problem by storing sentences in a corpus-level trie. For example, the corpus above will be stored as
    |--I--have--a
                |--pen
                |--dog
                |--cat
                |--tie
In this way, we can save lots of memory space and sentence search can be a lot faster!
Zen-corpora provides a Python API to easily construct and interact with a corpus trie. See the following example:
>>> import zencorpora
>>> from zencorpora.corpustrie import CorpusTrie
>>> corpus = [['I', 'have', 'a', 'pen'],
...           ['I', 'have', 'a', 'dog'],
...           ['I', 'have', 'a', 'cat'],
...           ['I', 'have', 'a', 'tie']]
>>> trie = CorpusTrie(corpus=corpus)
>>> print(len(trie))
7
>>> print(['I', 'have', 'a', 'pen'] in trie)
True
>>> print(['I', 'have', 'a', 'sen'] in trie)
False
>>> trie.insert(['I', 'have', 'a', 'book'])
>>> print(['I', 'have', 'a', 'book'] in trie)
True
>>> print(trie.remove(['I', 'have', 'a', 'book']))
1
>>> print(['I', 'have', 'a', 'book'] in trie)
False
>>> print(trie.remove(['I', 'have', 'a', 'caw']))
-1
>>> print(trie.make_list())
[['i', 'have', 'a', 'pen'], ['i', 'have', 'a', 'dog'], ['i', 'have', 'a', 'cat'], ['i', 'have', 'a', 'tie']]
Left-to-Right Beam Text Search
As shown in the SmartReply paper by Kannan et al. (2016), a corpus trie can be used to perform left-to-right beam search using an RNN model. A model encodes input text, then it computes the probability of each pre-defined sentence in the searching space given the encoded input. However, this process is exhaustive. What if we have 1 million sentences in the search space? Without beam search, an RNN model processes 1 million sentences. Thus, the authors used the corpus trie to perform a beam search over their pre-defined sentences. The idea is simple: it starts the search from the root of the trie, then it only retains the beam width number of probable sentences at each level.
Zen-corpora provides a class to enable beam search. See the example below.
>>> import torch.nn as nn
>>> import torch
>>> import os
>>> from zencorpora import SearchSpace
>>> corpus_path = os.path.join('data', 'search_space.csv')
>>> data = ...  # assume data contains torchtext Field, encoder and decoder
>>> space = SearchSpace(
...     src_field=data.input_field,
...     trg_field=data.output_field,
...     encoder=data.model.encoder,
...     decoder=data.model.decoder,
...     corpus_path=corpus_path,
...     hide_progress=False,  # set hide_progress=True to hide the progress bar
...     score_function=nn.functional.log_softmax,
...     device=torch.device('cpu'),
... )
Construct CorpusTrie: 100%|...|34105/34105 [00:01<00:00, 21732.69sentence/s]
>>> src = ['this', 'is', 'test']
>>> result = space.beam_search(src, 2)
>>> print(len(result))
2
>>> print(result)
[('is this test?', 1.0), ('this is test!', 1.0)]
>>> result = space.beam_search(src, 100)
>>> print(len(result))
100
License
This project is licensed under Apache 2.0.
zencrc
ZenCRC
A command-line tool for CRC32 stuff.
Installation
This program is packaged as a python package using setuptools and can be installed using pip or pipsi. For extended testing, running in a virtualenv might be a good idea.
In the package directory, run:
    $ pipsi install .
or:
    $ pip install .
pipsi is a great alternative to regular pip here, as it installs each package you install in an isolated area. And it doesn't require sudo or Admin access to work its magic. More detailed functionality can be found at the pipsi github repo.
Usage
This section will explain all the functions and options available in ZenCRC:
Basic help:
    $ zencrc --help
A more concise version of this help can be found by using the --help or -h option.
Append Mode
    $ zencrc -a {file}
You can append a CRC32 checksum to a filename by using the --append or -a option. Takes a positional argument {file} or {files} at the end of the command. The CRC will be appended to the end of the filename in the following format:
    filename.ext --> filename [CRC].ext
So, therefore:
    $ zencrc -a [LNS]Gin no Saji [720p-BD-AAC].mkv
will return:
    [LNS]Gin no Saji [720p-BD-AAC] [72A89BC1].mkv
Currently no functionality exists to change the format in which the CRC is appended, but such functionality will be added in v0.9.
Verify Mode
    $ zencrc -v {file}
You can verify a CRC32 checksum in a filename by using the --verify or -v option. Takes a positional argument {file} or {files} at the end of the command. This will calculate the CRC32 checksum of a file, check it against the CRC in the filename of said file, and output the status of that file and the CRC that the program calculated. If the filename does not contain a CRC, the program will still calculate and output the CRC32 of that file. Currently no functionality exists to only check files with a CRC32 in their name (except some convoluted, yet clever, regex) but such functionality may be added in future versions.
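For instance, verifying the file produced by the Append Mode example above would be (the command follows the -v usage described; the printed status format is not documented here):
    $ zencrc -v [LNS]Gin no Saji [720p-BD-AAC] [72A89BC1].mkv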
zenda
No description available on PyPI.
zen-dash
What is Zen Dash?
Zen Dash, a python package, simplifies the process of building analytics/BI dashboards while providing enterprise-level scalability, stability, and performance. You can use FastAPI or Flask to host the application.
How to run the demo
Learn more about how to run the demo here: Link
How to create a zen dash
Learn more about how to create a zen dash: Link
How to Contribute
Please visit this page to learn how to contribute: Link
Why did I build Zen Dash?
There are many dashboarding solutions, like shiny (R, python), Streamlit, and others. I have used all of these solutions, and I enjoy building with them. However, all of these tools and libraries lack one vital point: they are not enterprise-ready. They are fragile and not scalable. So, whenever I built analytics/BI dashboards, even after many bandages, the dashboards were delicate and unstable.
Before explaining the problem and its reasons, I'd like to describe the issues we are trying to solve.
- Some of our dashboards need to be used by more than 200 people simultaneously and require a sub-second response to be practical. They are used for more than 8 hours straight for work. The dashboard should not crash during usage.
- More than a hundred field team members, spread across multiple countries, use our other dashboard. Their internet speed is not as good as in the office, so the analytics dashboard needs to be anti-fragile. They should be able to share their data with our customers on the ground without worrying about the dashboard crashing or refreshing data.
- The third group of dashboards is far more complex and sensitive, since we integrated it into one of our company production environments. This dashboard needs better code isolation for testing before deployment, so we can be sure it is not breaking the company production environment.
So, I invested time in zen-dash, which addresses these problems. Before jumping into zen-dash, I want to show you what we have done to make anti-fragile systems using other analytics solutions, and explain why it keeps failing. You might want to try zen dash if you recognize the pains in these stages.
- Building plumber or fastapi/flask services to offload computation. Good thing: the application becomes somewhat better at responding. Bad thing: now code is in multiple locations, which makes it difficult to onboard and debug. If we use shiny with python, it is difficult for some team members to understand what they are doing.
- Rather than dashboards, we started to provide more reports. However, this in turn created more report-building and maintenance work.
- Building simpler dashboards with limited functionality or exploration. We also get pushback on this type of dashboard, because people want to know more than what a limited-functionality dashboard returns.
- Using Tableau to deliver data to field or high-requirements teams. Tableau is not a perfect solution, either. It limits what analytics tools we can use.
If you see these signs in your analytics dashboarding solutions, you face a similar and painful situation.
Why are other tools creating such issues?
Let me explain in technical detail. Here I am focusing specifically on Shiny and Streamlit, because both face similar situations; after all, they use the same architecture and software design.
In a word, it is due to incompatible architecture and software design. They face two main issues:
- WebSockets
- Lack of separation between UI and backend (specifically with shiny)
A WebSocket is a great tool: you connect the front end and back end once and can send as much data as you want between them. Chat systems use WebSockets. WebSockets simplify your communication architecture compared to REST APIs, and they are an effective tool for communicating real-time data with lower overhead: the connection to the backend server stays open, so new data transfers don't require creating a new connection and paying the overhead related to it.
However, this constant connection makes for a delicate system. It can break for any reason. So, when a shiny app's WebSocket crashes, it grays out the screen, and you must start again from scratch. You will lose all the filter selections you have made. I believe this was done to reduce overhead and code complexity on the shiny side, but because of it, shiny is not an enterprise-grade solution. These design choices are behind problems 1 and 2.
Another issue with the WebSocket is that it creates an artificial choke point at the backend level, since R is single-threaded with poor async support. In R, code usually runs sequentially. You have to use an API pool to redistribute the load onto other machines. However, R chokes on pushing data even when an API pool is used. So if we offload work to other services using a REST API pool, everything has to wait until all processes are finished. Even though python supports threading, there is additional processing where threading is not a good option; the best option is multiprocessing, and to do that you need a different type of software design. I have regularly seen situations where shiny render times are well over 1 minute when there are too many things to render on the page. These limitations are behind problems 2 and 3.
If a WebSocket is used, only one machine/service is selected to respond. So a single service needs to process all requests and respond to them. Because of this, we had to adopt workaround 1 above to compensate. This design is behind problem 1.
The second biggest issue with shiny is the lack of separation between the UI and the backend. This design is excellent for rapid development, but if misused, it can slow down your application. It creates problems precisely when you want to render large data, for example a table. Rather than sending the data for the table, the shiny backend converts it to HTML and pushes the HTML as a string through the WebSocket. This becomes cumbersome very quickly, because sending just the data would be light, while sending HTML as a string adds quite a bit of load. In addition, you have to use eval or similar code to evaluate the code base, which adds a security vulnerability and other processing overhead. It also slows down the application.
How much of a difference does zen dash make?
- A dashboard that regularly took more than 60 seconds to load now responds with data within ten seconds or fewer.
- Our dashboards have become more stable. Whether the internet connection is slow or fast no longer impacts the entire application.
- A dashboard can run for days at a time without stability issues.
How do we achieve it? After understanding the limitations of the WebSocket, I decided to remove the WebSocket for communicating information.
Instead, zen dash sends data using traditional HTTP requests.
The UI is prebuilt in angular and compiled; you specify what to render using angular flex and material design.
Docs
https://zen-reportz.github.io/zen_dash/index.html
zendatastorage
A Python package providing useful and important features to process data from .zenf files and to parse them to local python types.
Simple Use
Read a Var from a .zenf file
Test.zenf file:
    ["TestVar"] = Hello;
Main.py file:
    from zendatastorage import Interpret

    with open("Test.zenf", "r") as f:
        VarDict = Interpret(f)

    print(VarDict["TestVar"])
Output:
    Hello
zender
Renderer