zmgtest1
UNKNOWN
zmha-py
Zm-py

A loose python wrapper around the ZoneMinder API. As time goes on, additional functionality will be added to this API client.

Acknowledgments

Not to be confused with ZoneMinder's Pythonic wrapper pyzm, this zm-py project (with a hyphen) is tailored for the Home Assistant ZoneMinder Integration.

zm-py is based on code that was originally part of Home Assistant. Historical sources and authorship information is available as part of the Home Assistant project: ZoneMinder Platform, ZoneMinder Camera, ZoneMinder Sensor, ZoneMinder Switch.

Installation

PyPI:

pip install zm-py

Usage

from zoneminder.zm import ZoneMinder

SERVER_HOST = "{{host}}:{{port}}"
USER = "{{user}}"
PASS = "{{pass}}"
SERVER_PATH = "{{path}}"
zm_client = ZoneMinder(server_host=SERVER_HOST, server_path=SERVER_PATH,
                       username=USER, password=PASS, verify_ssl=False)

# ZoneMinder authentication
zm_client.login()

# Get all monitors
monitors = zm_client.get_monitors()
for monitor in monitors:
    print(monitor)
>>> Monitor(id='monitor_id', name='monitor_name', controllable='is_controllable')

# Move camera to the right
controllable_monitors = [m for m in monitors if m.controllable]
for monitor in controllable_monitors:
    zm_client.move_monitor(monitor, "right")
zmicroservices
No description available on PyPI.
zmipc
zmipc

Description

A zero-copy, memory-sharing based IPC which intends to be handy in some cases where socket-based communication does not work well.

Getting Started

The usage of zmipc intends to be straightforward. Here is an example:

from zmipc import ZMClient

sender = ZMClient()
receiver = ZMClient()
sender.add_publication(topic='test')
receiver.add_subscription(topic='test')
sender.publish(topic='test', msg='Hello World!')
print(receiver.receive(topic='test'))
zmkx
zmkx-sdk

A Python implementation of zmkx.app, containing a library for further development and a simple CLI client.

Installation

Requires Python 3.8 or later.

pip3 install -U zmkx

Quick start

This repository provides a minimal examples/set_image.py demonstrating the image-upload feature:

python3 examples/set_image.py your_image.jpg

Command line

This repository implements a command-line tool, zmkx, to operate the device. The command format is:

zmkx [-s SERIAL] command ...

See zmkx -h for the full command reference.

List devices

$ zmkx list
* HelloWord HW-75 Dynamic (serial number: 34314704001A002B)
* HelloWord HW-75 Keyboard (serial number: 55895648066BFF53)

Monitor motor status

$ zmkx knob --monitor
Control mode: angle | current angle: 23.7° | current speed: -0.01 rad/s | target angle: 30.6° | target speed: 1.76 rad/s | target voltage: 0.035 V

Set image

$ zmkx eink --set image.jpg --dither

Links

zmkx.app
ZMK for HW-75
peng-zhihui/HelloWord-Keyboard

License

MIT License
zml
Features:
- zero markup templates
- clean syntax
- extensible
- components
- namespaces
- lean code
- rendering of REST data
- transclusions
- integration of externally defined objects (IPFS)
- sustainable linked data by using immutable objects (decentralized storage)

This version requires Python 3 or later.
z-ml-utils
UNKNOWN
zmon-cli
ZMON CLI

Command line client for the Zalando Monitoring solution (ZMON).

Installation

Requires Python 3.4+

$ sudo pip3 install --upgrade zmon-cli

Example

Creating or updating a single check definition from its YAML file:

$ zmon check-definitions update examples/check-definitions/zmon-stale-active-alerts.yaml

Release

- Update zmon_cli/__init__.py in a PR with new version and merge
- Approve Release step manually in the CDP pipeline
zmongo
A wrapped MongoDB connector.

Parameters: parameter name | data type | default | description

Changelog: release date | version | notes
19-01-10 | 0.1.0 | First version released

This project is intended for learning and exchange only; commercial use is prohibited.
zmongo-filter
Mongo Filter

Filters multiple top-level objects of Embedded type and one of ReferenceField type, from a dict, by prefixing the search field with the model name: modelA_search_field.

Search fields may be of serializable types, and the filter can recognize bool values sent as str. If you send the field "active": "true", the query is executed as "active": True.

Parameters:
- principal_models: the model containing the references and embedded documents
- refence_models: the ReferenceField model
- params: dict with the keys to filter by

Example:

model_a:
{ "id": 1, "name": "abc", "nid": "12323", "addres": EmbeddedDocumentField(model_b), "nid_type": ReferenceField(model_c, dbref=True) }

model_b:
[ { "id": 1, "name": "cll qwer", "description": "" }, { "id": 2, "name": "cll abc", "description": "" } ]

model_c:
[ { "id": 1, "name": "C.C", "description": "" }, { "id": 2, "name": "C.E", "description": "" } ]

Search parameters:
{ "id": 1, "model_b_name": "cll abc", "model_c_name": "C.C" }

QuerySet:
zmongo.queryset(model_a, model_c, Params)

Response:
{ "id": 1, "name": "", "addres": [addres[0]], "nid_type": (object) }

Installation

If you're running python3 on most systems, you can install the package with the following command:

pip3 install zmongo-filter

Usage

zmongo.queryset(model_a, model_c, Params)
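The str-to-bool coercion the description mentions ("active": "true" becomes "active": True) can be sketched as follows. This is a minimal illustration of the behavior, not zmongo-filter's actual internals; the helper name is hypothetical.

```python
# Sketch of the str-to-bool coercion described above: "true"/"false"
# strings in the incoming params become real booleans in the query.
# Hypothetical helper, not zmongo-filter's actual code.
def coerce_bools(params: dict) -> dict:
    """Return a copy of params with "true"/"false" strings turned into bools."""
    out = {}
    for key, value in params.items():
        if isinstance(value, str) and value.lower() in ("true", "false"):
            out[key] = value.lower() == "true"
        else:
            out[key] = value
    return out

print(coerce_bools({"active": "true", "model_b_name": "cll abc"}))
# prints {'active': True, 'model_b_name': 'cll abc'}
```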
zmon-worker
ZMON's Python worker is doing the heavy lifting of executing tasks against entities, and evaluating all alerts assigned to a check. Tasks are picked up from Redis, and the resulting check values plus alert state changes are written back to Redis.

Local Development

Start Redis on localhost:6379:

$ docker run -p 6379:6379 -it redis

Install the required development libraries:

$ sudo apt-get install build-essential python2.7-dev libpq-dev libldap2-dev libsasl2-dev libsnappy-dev
$ sudo pip2 install -r requirements.txt

Start the ZMON worker process:

$ python2 -m zmon_worker_monitor

You can query the worker monitor via RPC:

$ python2 -m zmon_worker_monitor.rpc_client http://localhost:23500/zmon_rpc list_stats

Running Unit Tests

$ sudo pip2 install -r test_requirements.txt
$ python2 setup.py test

Building the Docker Image

$ docker build -t zmon-worker .
$ docker run -it zmon-worker

Running the Docker image

The Docker image supports many configuration options via environment variables. Configuration options are explained in the ZMON Documentation.
zmop
No description available on PyPI.
zmote
Unofficial zmote.io interface

This module serves as a Python interface for the zmote.io IoT gadget: it's basically a USB-powered, WiFi-connected IR blaster. The module was written using the zmote.io API documentation (http://www.zmote.io/apis) and tested against two real devices.

#### Overview

This module supports the discovery of devices via multicast and interacting with devices via HTTP or TCP; in all instances communication is directly with the device (and not via the zmote.io cloud application).

#### To install for use standalone/in your project

pip install zmote

##### To passively discover all devices on your network until timeout (30 seconds)

python -m zmote.discoverer

##### To actively discover two devices on your local network

python -m zmote.discoverer -l 2 -a

##### To passively discover a particular device on your local network (e.g. in case of DHCP)

python -m zmote.discoverer -u CI001f1234

##### To put a device into learn mode via TCP

python -m zmote.connector -t tcp -d 192.168.1.1 -c learn

##### To tell a device to send an IR signal via HTTP

python -m zmote.connector -t http -d 192.168.1.1 -c send -p 1:1,0,36000,1,1,32,32,64,32,32,64,32,3264

#### To install for further development

Prerequisites:

* virtualenvwrapper (https://virtualenvwrapper.readthedocs.io/en/latest/)

#### Clone the repo

git clone https://github.com/initialed85/zmote
cd zmote

#### Build the virtualenv

mkvirtualenv zmote
pip install -r requirements-dev.txt

#### Run the tests

py.test -v
zmp
zmp: Zoe's Modules
zmpe
zabbix-monitoring-programs-execution

The program controls the execution of any programs, scripts or OS commands and sends the execution result to Zabbix; in case of an execution error, it can additionally notify via Telegram.

NOTE: Any program, script or OS command controlled by zm.py I will call a "process" below.

Work logic

- Logging is done to stdout.
- All zm.py settings are configured through environment variables.
- Telegram notifications can be turned off with ZM_TELEGRAM_NOTIF=False. In this case, you will only receive alerts from Zabbix, in which you can also set up Telegram alerts, but zm.py has more informative alerts.
- Sending data to Zabbix can be turned off with ZM_ZABBIX_SEND=False. In this case, you will only receive alerts in Telegram.
- Sending the process execution time to Zabbix can be turned off with ZM_ZABBIX_SEND_TIME=False.
- Only error messages are sent to Telegram. Messages about the successful completion of the process are not sent to Telegram (so that there is no flood).
- In case of successful completion of the process, the process execution time and the successful result are sent to Zabbix. The value of the successful result is set by ZM_ZABBIX_OK.
- In case of a process execution error, execution time = 0 and the unsuccessful result are sent to Zabbix. The value of the unsuccessful result is set by ZM_ZABBIX_NOT_OK.
- You can run zm.py in a Docker container.

Settings

ENV | Default | Description
ZM_DEBUG | False | Enable DEBUG mode? (True or False).
HOSTNAME | Unknown | For Telegram messages, to see which host the message is from. In Linux, such a variable is usually already set.

Zabbix settings

ENV | Default | Description
ZM_ZABBIX_SEND | True | Should the app send data to Zabbix? (True or False).
ZM_ZABBIX_SEND_TIME | True | Should the app send execution time to Zabbix? (True or False).
ZM_ZABBIX_OK | 0 | OK value for Zabbix.
ZM_ZABBIX_NOT_OK | 1 | Not OK value for Zabbix.
ZM_ZABBIX_IP | None | Zabbix server IP address.
ZM_ZABBIX_HOST_NAME | None | Zabbix "Host name": how the host is named in Zabbix (see picture after table).
ZM_ZABBIX_ITEM_NAME | None | How the trapper item key is named in Zabbix.
ZM_ZABBIX_ITEM_TIME_NAME | None | How the trapper item key for execution time is named in Zabbix.

Telegram settings

ENV | Default | Description
ZM_TELEGRAM_NOTIF | True | Should the app send Telegram alerts, or log messages only to stdout? (True or False).
ZM_TELEGRAM_TIMEOUT | 10 | Telegram connection timeout.
ZM_TELEGRAM_BOT_TOKEN | None | Telegram bot token. It usually looks like 1470616475:AAHFSvznxxLTDedQBSiRVrYVP49ixkghpRT. You need to create a bot in Telegram using BotFather, where you can also get the bot token.
ZM_TELEGRAM_CHAT | None | Telegram chat (ID) to which the bot will send messages. If this is a private chat, the ID usually looks like a positive number. If it is a channel or group, the ID is a negative number.

NOTE: You can see the ZM_ZABBIX_HOST_NAME parameter here.

Install and run

Install Python3 (Python Download).

Customize Zabbix

In this example, ZM_ZABBIX_ITEM_NAME will be called docker-rmi-sh and ZM_ZABBIX_ITEM_TIME_NAME docker-rmi-sh-time. This name is written in the Key field.

Create trapper items ZM_ZABBIX_ITEM_NAME and, if you need it, ZM_ZABBIX_ITEM_TIME_NAME. Create a trigger for ZM_ZABBIX_ITEM_NAME with this expression:

{172.26.12.168:docker-rmi-sh.last()}=1 or {172.26.12.168:docker-rmi-sh.nodata(25h)}<>0

The trigger fires when there was an error while executing the process, or when the process has not run for more than 25 hours.

You can see graphs for the items: menu Monitoring - Latest data - Filter. In Hosts choose the desired host; there is a Graph link in the item line. Or you can create your own graphs.

Settings

You must set the environment variables on the computer where zm.py will run, under the account under which zm.py will run. There are many ways to define environment variables.

Run

In this example, I write all the necessary variables in the file .bash_profile:

export ZM_ZABBIX_IP="172.26.12.86"
export ZM_ZABBIX_HOST_NAME="172.26.12.168"
export ZM_ZABBIX_ITEM_NAME="docker-rmi-sh"
export ZM_ZABBIX_ITEM_TIME_NAME="docker-rmi-sh-time"
export ZM_TELEGRAM_BOT_TOKEN="1470616475:AAHFSvznxxLTDedQBSiRVrYVP49ixkghpRT"
export ZM_TELEGRAM_CHAT="123456789"

1) As a script:

mkdir /usr/share/zabbix-monitoring-programs-execution
cd /usr/share/zabbix-monitoring-programs-execution
git clone https://github.com/MinistrBob/zabbix-monitoring-programs-execution.git .
pip3 install -r requirements.txt
python3 /usr/share/zabbix-monitoring-programs-execution/zm.py <process>

2) As a cronjob (or if you use sudo -s or su):

If you use a cronjob, or if you use sudo -s or su, you will need the source command:

MAILTO=""
0 3 * * * source /home/user/.bash_profile; python3 /usr/share/zabbix-monitoring-programs-execution/zm.py /usr/share/local/docker-rmi.sh 2>&1

For developers

Get and install requirements (requirements.txt):

c:\MyGit\zabbix-monitoring-programs-execution\venv\Scripts\pip.exe freeze | Out-File -Encoding UTF8 c:\MyGit\zabbix-monitoring-programs-execution\requirements.txt
pip install -r c:\MyGit\zabbix-monitoring-programs-execution\requirements.txt

Publish the package on pypi.org:

python setup.py sdist
twine upload dist/*

Telegram sendMessage

https://api.telegram.org/bot{token}/sendMessage

Example message (HTML):

MESSAGE: ❌ Test <b>bold</b>, <strong>bold</strong> <i>italic</i>, <em>italic</em> <a href="URL">inline URL</a> <code>inline fixed-width code</code> <pre>pre-formatted fixed-width code block</pre>
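The environment-variable configuration described in the settings tables above could be read like this. This is a sketch only, with the defaults taken from the tables; the helper names are hypothetical and zm.py's real code may differ.

```python
# Sketch of reading the ZM_* settings from environment variables,
# using the defaults listed in the settings tables above.
# Hypothetical helpers, not zm.py's actual code.
import os

def env_bool(name: str, default: bool) -> bool:
    """Parse a True/False environment variable (case-insensitive)."""
    return os.environ.get(name, str(default)).strip().lower() == "true"

def load_settings() -> dict:
    return {
        "debug": env_bool("ZM_DEBUG", False),
        "hostname": os.environ.get("HOSTNAME", "Unknown"),
        "zabbix_send": env_bool("ZM_ZABBIX_SEND", True),
        "zabbix_send_time": env_bool("ZM_ZABBIX_SEND_TIME", True),
        "zabbix_ok": os.environ.get("ZM_ZABBIX_OK", "0"),
        "zabbix_not_ok": os.environ.get("ZM_ZABBIX_NOT_OK", "1"),
        "telegram_notif": env_bool("ZM_TELEGRAM_NOTIF", True),
        "telegram_timeout": int(os.environ.get("ZM_TELEGRAM_TIMEOUT", "10")),
    }
```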
zm-py
Zm-py

A loose python wrapper around the ZoneMinder API. As time goes on, additional functionality will be added to this API client.

Acknowledgments

Not to be confused with ZoneMinder's Pythonic wrapper pyzm, this zm-py project (with a hyphen) is tailored for the Home Assistant ZoneMinder Integration.

zm-py is based on code that was originally part of Home Assistant. Historical sources and authorship information is available as part of the Home Assistant project: ZoneMinder Platform, ZoneMinder Camera, ZoneMinder Sensor, ZoneMinder Switch.

Installation

PyPI:

pip install zm-py

Usage

from zoneminder.zm import ZoneMinder

SERVER_HOST = "{{host}}:{{port}}"
USER = "{{user}}"
PASS = "{{pass}}"
SERVER_PATH = "{{path}}"
zm_client = ZoneMinder(server_host=SERVER_HOST, server_path=SERVER_PATH,
                       username=USER, password=PASS, verify_ssl=False)

# ZoneMinder authentication
zm_client.login()

# Get all monitors
monitors = zm_client.get_monitors()
for monitor in monitors:
    print(monitor)
>>> Monitor(id='monitor_id', name='monitor_name', controllable='is_controllable')

# Move camera to the right
controllable_monitors = [m for m in monitors if m.controllable]
for monitor in controllable_monitors:
    zm_client.move_monitor(monitor, "right")
zmq
PyZMQ provides Python bindings for libzmq.
zmq-ai-client-python
A ZMQ client interface for llama server
zmqbus
A bus implementation for Python using ZeroMQ

zmqbus is a package for Python that allows communication between collaborating processes in a bus fashion on the same machine. The bus allows connected parties to subscribe and publish messages, with the ability to direct messages to specific connections. It also supports request/response messages for remote procedure calls (RPC), where connections can send requests and receive responses from well-known addresses registered in the bus.

IMPORTANT: zmqbus is still beta software. Use in production is strongly discouraged.

Requirements

- Python 3.7 or above
- pyzmq

Installation

You can install zmqbus using pip:

pip install zmqbus

Usage

See demo.py for usage examples.

License

See the LICENSE.txt file for details.

Bugs? Criticism? Suggestions?

Go to https://github.com/flaviovs/zmqbus
zmqc
UNKNOWN
zmq_cache
Look aside cache implemented based on ZeroMQ
zmq_cache_client
ZeroMQ look aside cache Python client
zmqcli
zmqcli

zmqcli is a small but powerful command-line interface to ØMQ written in Python 3. It allows you to create a socket of a given type, bind or connect it to multiple addresses, set options on it, and receive or send messages over it using standard I/O, in the shell or in scripts. It's useful for debugging and experimenting with most possible network topologies.

Installation

pip install zmqcli

Usage

zmqcli [-h] [-v] [-0] [-r | -w] (-b | -c) SOCK_TYPE [-o SOCK_OPT=VALUE...] address [address ...]

Executing the command as a module is also supported:

python -m zmqcli [-h] [-v] [-0] [-r | -w] (-b | -c) SOCK_TYPE [-o SOCK_OPT=VALUE...] address [address ...]

Mode

Whether to read from or write to the socket. For PUB/SUB sockets, this option is invalid since the behavior will always be write and read respectively. For REQ/REP sockets, zmqcli will alternate between reading and writing as part of the request/response cycle.

-r, --read
Read messages from the socket onto stdout.

-w, --write
Write messages from stdin to the socket.

Behavior

-b, --bind
Bind to the specified address(es).

-c, --connect
Connect to the specified address(es).

Socket Parameters

SOCK_TYPE
Which type of socket to create. Must be one of `PUSH`, `PULL`, `PUB`, `SUB`, `REQ`, `REP` or `PAIR`. See `man zmq_socket` for an explanation of the different types. `DEALER` and `ROUTER` sockets are currently unsupported.

-o SOCK_OPT=VALUE, --option SOCK_OPT=VALUE
Socket option names and values to set on the created socket. Consult `man zmq_setsockopt` for a comprehensive list of options. Note that you can safely omit the `ZMQ_` prefix from the option name. If the created socket is of type `SUB`, and no `SUBSCRIBE` options are given, the socket will automatically be subscribed to everything.

address
One or more addresses to bind/connect to. Must be in full ZMQ format (e.g. `tcp://<host>:<port>`)

Examples

zmqcli -rc SUB 'tcp://127.0.0.1:5000'

Subscribe to tcp://127.0.0.1:5000, reading messages from it and printing them to the console. This will subscribe to all messages by default (you don't need to set an empty SUBSCRIBE option). Alternatively:

zmqcli -rc SUB -o SUBSCRIBE='com.organization.' 'tcp://127.0.0.1:5000'

This will subscribe to all messages starting with com.organization..

ls | zmqcli -wb PUSH 'tcp://*:4000'

Send the name of every file in the current directory as a message from a PUSH socket bound to port 4000 on all interfaces. Don't forget to quote the address to avoid glob expansion.

zmqcli -rc PULL 'tcp://127.0.0.1:5202' | tee $TTY | zmqcli -wc PUSH 'tcp://127.0.0.1:5404'

Read messages coming from a PUSH socket bound to port 5202 (note that we're connecting with a PULL socket), echo them to the active console, and forward them to a PULL socket bound to port 5404 (so we're connecting with a PUSH).

zmqcli -n 10 -0rb PULL 'tcp://*:4123' | xargs -0 grep 'pattern'

Bind to a PULL socket on port 4123, receive 10 messages from the socket (with each message representing a filename), and grep the files for 'pattern'. The -0 option means messages will be NULL-delimited rather than separated by newlines, so that filenames with spaces in them are not considered two separate arguments by xargs.

echo "hello" | zmqcli -c REQ 'tcp://127.0.0.1:4000'

Send the string hello through a REQ socket connected to localhost on port 4000, print whatever you get back, and finish. In this way, REQ sockets can be used for a rudimentary form of RPC in shell scripts.

coproc zmqcli -b REP 'tcp://*:4000'
tr -u '[a-z]' '[A-Z]' <&p >&p &
echo "hello" | zmqcli -c REQ 'tcp://127.0.0.1:4000'

First, start a REP socket listening on port 4000. The coproc shell command runs this as a shell coprocess, which allows us to run the next line, tr. This will read its input from the REP socket's output, translate all lowercase characters to uppercase, and send them back to the REP socket's input. This, again, is run in the background. Finally, connect a REQ socket to that REP socket and send the string hello through it: you should just see the string HELLO printed on stdout.

Pingy exchange example

Sending ping lines from the ping command -> PUSH -> PULL -> PUB -> SUB -> stdout.

Source of ping records:

ping google.com | zmqcli -w -c PUSH tcp://127.0.0.1:5001

Broker:

zmqcli -r -b PULL tcp://*:5001 | zmqcli -w -b PUB tcp://*:5002

Consumer:

python -m zmqcli -r -c SUB tcp://127.0.0.1:5002

The consumer can be run in multiple instances, always reporting the same records.

Credits

Based on the work of Zachary Voase, the author of the original zmqc tool.

(Un)license

This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.

In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to http://unlicense.org/
zmqcs
zmqcs

zmqcs is a python package that implements a client-server infrastructure based on the ZeroMQ library. To do so, it fixes some properties. There are two types of concurrent communication:

- Request - response: Used to send commands from the client to the server. The server always answers when the command has finished. It is up to the developer to let the answer return before ending the command, in case the command takes long to execute.
- Pub - Sub: The server has the ability to send packages with data to the client. The client has to subscribe to the topics it wants to receive data on, and define a callback to be executed every time data is received for that topic.

A fully detailed example of how to use the library can be found at https://github.com/IFAEControl/zmqCSExample

All the packages are JSON formatted.
zmqdecorators
Decorators for PyZMQ to make it almost as easy to use as DBUS (requires Bonjour for discovery magic).

Forked from https://github.com/HelsinkiHacklab/reactor

## Requirements

sudo apt-get install python python-pip python-zmq python-tornado libavahi-compat-libdnssd1
sudo pip install pybonjour || sudo pip install --allow-external pybonjour --allow-unverified pybonjour pybonjour

Though pybonjour will be installed by pip automagically if you install this package with pip. Remember to enable global site packages for the ZMQ bindings if using virtualenv.

Pro tip for those wishing to work on the code: http://guide.python-distribute.org/pip.html#installing-from-a-vcs (short version: check out, then pip install -e ./)
zmqdump
UNKNOWN
zmqf
UNKNOWN
zmqfan
# Helper Code for ZMQ Usage

Primary purpose is shuffling around JSON.

## zmqsub

This module is for decoding & encoding JSON and receiving & sending it over sockets. It's a little bit dodgy at the moment, needs some shine & polish. Particularly, it needs better error handling and a more flexible API. Not yet 1.0, so the API could change. If you need stability, use a git commit id or an exact version number in PyPI.

### zmqsub unit tests

cd zmqfan
nosetests -vv tests/
zmqfirewall
No description available on PyPI.
zmqflp
zmqflp

Improvements to the Freelance protocol-based ZeroMQ server/client (Python). The client and server talk using cbor2, so the API accepts dictionaries as input.

To create a zmqflp server:

# create the server object (it runs in an asyncio zmq context)
self.server = zmqflp_server.ZMQFLPServer(self.config.identity, self.config.zmq_port)

# use the following code to process messages received by the server and send them back
async def process_messages(self):
    (serialized_request, orig_headers) = await self.server.receive()
    if serialized_request == 'EXIT':
        await self.server.send(orig_headers, 'exiting')
        return False
    elif serialized_request != "PING":
        try:
            request = serialized_request
            response = self.process_request(request)
            await self.server.send(orig_headers, response)
            return True
        except Exception as e:
            logging.exception(e)
            return False
    return True

To create a client without using a context manager:

# create the client object (this does NOT run in an asyncio context)
self.client = zmqflp_client.ZMQFLPClient(self.config.list_of_servers)

# to send and receive with the client
msg_to_send = {'message': 'hello!', 'other-key': 'stuff goes here'}
status = self.client.send_and_receive(msg_to_send)

To create a client using a context manager (for example, to run on AWS Lambda):

# create the client object (this does NOT run in an asyncio context)
with zmqflp_client.ZMQFLPClient(self.config.list_of_servers) as client:
    # to send and receive with the client
    msg_to_send = {'message': 'hello!', 'other-key': 'stuff goes here'}
    status = client.send_and_receive(msg_to_send)
zmq_helpers
UNKNOWN
zmq_legos
A collection of useful ZeroMQ legos: building blocks for larger applications.

MDP

Asyncio implementation using ZeroMQ of the Majordomo Protocol (https://rfc.zeromq.org/spec:7/MDP/). Currently on my laptop it can serve around 4K msg/second. With a few tweaks:

- can control eager scheduling of messages to workers (useful for messages that take unequal time)
- READY additionally returns a JSON dict of worker configuration
zmq-lightweight-messaging
No description available on PyPI.
zmq-message-patterns
Library to quickly build ZeroMQ based Python applications.

Introduction

Library to make writing applications using ZeroMQ message patterns through PyZMQ easier.

TODO: explain ZMessage and ZNode classes.

Pipeline

A ventilator sends jobs to multiple worker processes, which send the results to a sink.

Channels:
- ventilator -> worker: jobs for the workers
- ventilator -> sink: IDs of jobs sent to workers, so the sink knows when all jobs have completed
- worker -> sink: results
- sink -> worker: the sink sends a shutdown command when finished

Diagram (flattened in this extract): the ventilator fans jobs out to the workers, and both the ventilator and the workers feed into the sink.

A fully functional example is in the examples directory (examples/pipeline_example.py).
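The four channels listed above can be sketched with standard-library threads and queues. This is an illustration of the pattern only, not zmq-message-patterns' ZMessage/ZNode API: threads and queues stand in for worker processes and ZeroMQ sockets, and all names here are made up for the sketch.

```python
# Sketch of the ventilator -> workers -> sink pipeline described above.
# NOT zmq-message-patterns' API; threads/queues stand in for processes/sockets.
import queue
import threading

def worker(jobs: "queue.Queue", results: "queue.Queue") -> None:
    while True:
        job = jobs.get()
        if job is None:                    # sink -> worker: shutdown command
            break
        results.put((job, job * job))      # worker -> sink: the result

def run_pipeline(n_jobs: int = 10, n_workers: int = 3) -> dict:
    jobs, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(jobs, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    sent_ids = list(range(n_jobs))         # ventilator -> sink: announced job IDs
    for i in sent_ids:
        jobs.put(i)                        # ventilator -> worker: the jobs
    # sink: collect exactly as many results as job IDs were announced
    collected = dict(results.get() for _ in sent_ids)
    for _ in threads:
        jobs.put(None)                     # sink -> worker: shutdown
    for t in threads:
        t.join()
    return collected

print(run_pipeline(5, 2))
```

The "announced job IDs" list plays the ventilator -> sink role: the sink stops collecting once it has one result per announced ID, then shuts the workers down.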
zmqmin
Simplified python interface for ZMQ.
zmqnumpy
The zmqnumpy module implements a series of functions used to exchange numpy ndarrays between zeromq (http://www.zeromq.org) sockets. Serialization of numpy arrays happens using the numpy.ndarray.tostring method, which preserves portability to standard C binary format, enabling data exchange with different programming languages. A very simple protocol is defined in order to exchange array data; the multipart messages are composed of:

- an identifier string name
- the numpy array element type (dtype) in its string representation
- the numpy array shape encoded as a binary numpy.int32 array
- the array data encoded as a string using numpy.ndarray.tostring()

This protocol guarantees that a numpy array can be carried around and reconstructed uniquely, without errors, on both ends of a connected pair, enabling an efficient interchange of data between processes and nodes.

@author: Marco Bartolini
@contact: [email protected]
@version: 0.1
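The four-part message format described above can be sketched with a pair of helpers. These are hypothetical names, not zmqnumpy's actual API; ndarray.tobytes() is used as the modern spelling of the tostring() the description mentions.

```python
# Hypothetical pack/unpack helpers illustrating the four-part message
# format described above: [name, dtype string, int32 shape, raw data].
# Not zmqnumpy's actual function names.
import numpy as np

def pack(name: str, array: np.ndarray) -> list:
    """Build the multipart message for an array."""
    return [
        name.encode(),
        str(array.dtype).encode(),
        np.asarray(array.shape, dtype=np.int32).tobytes(),
        array.tobytes(),
    ]

def unpack(parts: list):
    """Reconstruct (name, array) from the four message parts."""
    name = parts[0].decode()
    dtype = np.dtype(parts[1].decode())
    shape = np.frombuffer(parts[2], dtype=np.int32)
    data = np.frombuffer(parts[3], dtype=dtype).reshape(shape)
    return name, data
```

With pyzmq, the list returned by pack() could be sent with socket.send_multipart() and read back with recv_multipart(), reconstructing the array identically on the other end.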
zmq_object_exchanger
UNKNOWN
zmq-ops
1. Introduction

zmq_ops is a component that the Avatar training framework depends on. By integrating it into TensorFlow, TensorFlow can support online real-time learning and training. Its main features are:

- conforms to the TensorFlow I/O interface standard and can be integrated with TensorFlow
- provides the PUSH-PULL pattern for one-way data transfer, and also supports the REQ-ROUTER pattern for two-way data transfer

2. Installation

2.1 Install dependencies

conda install zeromq
conda install tensorflow

2.2 Install from source

# set the conda environment path before building
export CONDA_ENV_PATH=/path/to/conda/env
cd zmq_ops
python setup.py install

2.3 Binary installation

pip install zmq-ops

3. Usage

3.1 ZmqReader

The zmq reader mainly provides the PULL end of ZMQ's PUSH-PULL pattern. It provides 3 ops:

- zmq_reader_init(end_point, hwm): initialize the zmq reader
- zmq_reader_next(resource, types, shapes): read the next batch of data
- zmq_reader_readable(resource): check whether the zmq reader is readable

3.2 ZmqServer

The zmq server mainly provides the ROUTER end of ZMQ's REQ-ROUTER pattern. It provides 3 ops:

- zmq_server_init(end_point, hwm): initialize the zmq server
- zmq_server_recv_all(resource, types, shapes, min_cnt, max_cnt): read as much data as possible from the zmq server (at least min_cnt and at most max_cnt records), assemble the data into a batch, and return the client_id and tensors
- zmq_server_send_all(resource, client_id, tensors): send tensors to the different clients according to client_id

For concrete usage examples, see the zmq_reader_test.py and zmq_server_test.py files.
zmq-plugin
UNKNOWN
zmqpubsub
# zmqpubsubdemo

Python publisher & subscriber to send JSON payloads over ZMQ.

## Installation

1. Install pip3:

sudo apt-get update
sudo apt-get -y install python3-pip

2. Install pyzmq:

sudo pip3 install pyzmq

### Create a Publisher:

zmqpubsub.Publisher(ip='x.x.x.x', port='yyyy')

### Create a Subscriber:

zmqpubsub.Subscriber(ip='x.x.x.x', port='yyyy')

Change Log

0.0.1 (05/05/2022): First Release
0.0.2 (20/07/2022): Added the unsubscribe option.
0.0.3 (20/07/2022): Implemented try-catch statement.
0.0.8 (21/10/2022): Fixed an issue so that it is now possible to subscribe to many topics on the same subscriber.
zmqpy
UNKNOWN
zmq_py
No description available on PyPI.
zmqrpc
UNKNOWN
zmq_rpc
zmq_rpc

0mq RPC calls implemented with pyzmq; part of the zmq_rpc project.

Installing

Install from pip:

$ pip install zmq_rpc

Playing around

Run the demo server:

python zmq_rpc/server.py

Run the demo client:

python zmq_rpc/client.py

Watch them talk! =)

Contributors

In order of appearance in the project:

- Houzéfa Abbasbhay <[email protected]>
- Jérémie Gavrel <[email protected]>

Changelog

1.0.1
- Update the demo server so our demo actually works.
- Doc updates.

0.4
- Packaging tweaks following pypi changes (no multiple source packs).

0.3
- Fix imports.
- Dependencies: move to msgpack instead of msgpack-python.

0.2
- Take logging set-up out of the lib itself.

0.1
- Initial version.
zmq-service-tools
Python library with several tools for zmq services. With this library a server with multiple local or distributed workers can be created very easy.Features:local and / or distributed workers and serverssecure authentication with private / public keysautomatic worker startupdynamic add or remove workerseasy worker configuration for new tasksUsageImports:importservice_toolsCreate Certificates:Options:–path=<path_to_certificates>: path where the certificates are generated–users=<user>: optional; specify for which users a certificate is generated; available options are: all, server, client; default is client–overwrite=<overwrite>: optional; if True overwrite existing directory. default is Falsecreate server certificate:python3generate_certificates.py--path=<path_to_certificates>users=server--overwrite=<overwrite>create user certificate:python3generate_certificates.py--path=<path_to_certificates>users=user--overwrite=<overwrite>create server and user certificate:python3generate_certificates.py--path=<path_to_certificates>users=all--overwrite=<overwrite>ServerA server handles all client requests. The server deals the requests to connected workers which process the request. 
Several workers can be connected to the server so that requests are computed parallel on all workers.Server configA server is generated with a config file.fromservice_toolsimportServerserver_config_path=r'resources\server_config.ini'# create a new server objectserver=Server(config_path=server_config_path)# start the serverserver.start()In the config file are the following informations:id: uuid of the servername: name of the serverip: ip address of the server; Example 'localhost', '127.0.0.1'port: port of the server; Example: '8006', '8007' If no port is specified a free port between 6001 and 6050 is automatically chosen and written in the config file.backend_port: backend_port of the server; Example: '9006', '9007' If no port is specified a free port between 9001 and 9050 is automatically chosen and written in the config file.public_keys_dir: path to public keyssecret_keys_dir: path to secret keysnum_workers: number of workersauto_start: Bool if workers start automatically when server is startedworker_config_paths: list with paths to the worker config files. 
If only one worker_config_paths is defined but multiple wokers, this worker_config is copied and a new id for each worker is generated and written in the configworker_script_path: path to script which is executed to start a worker; see also: Python script for automatic worker startlog_dir: directory where logs are createdlogging_mode: logging mode: 'DEBUG' 'INFO' 'WARN' 'ERROR'; see python loggingExample for server config:[main]id=71c4810e-d0f7-4f06-aa50-2fa8b8cc4be2name=test serverip=localhostport=8006backend_port=9028public_keys_dir=resources\public_keyssecret_keys_dir=resources\private_keys[workers]num_workers=4auto_start=Falseworker_config_paths=["resources\worker1_config.ini", "resources\worker2_config.ini"]worker_script_path=resources\test_start_worker.py[logging]log_dir=resources\logging_dirlogging_mode=DEBUGThe server config file can also be generated by a method:fromservice_tools.serverimportServerConfig,Servernew_server_config=ServerConfig.create_new(config_path=r'resources\config.ini',name='test server',ip='localhost',port=8006,public_keys_dir=r'resources\public_keys',secret_keys_dir=r'resources\private_keys',log_dir=r'resources\logging_dir',logging_mode='INFO',num_workers=4,auto_start=False,worker_config_paths=[r'resources\worker1_config.ini'],worker_script_path=r'resources\test_start_worker.py')# start the server with the new created server configserver=Server(config_path=new_server_config.config_path)server.start()WorkerWorkers, like servers, are create with a config file:fromservice_toolsimportWorkerworker_config_path=r'resources\worker_config.ini'# create a new workerworker=Worker(config_path=server_config_path)# start the workerworker.start()In the config file are the following information:id: uuid of the workername: name of the workerip: ip address of the worker; Example 'localhost', '127.0.0.1'port: port of the worker; Example: '9006', '9007'public_keys_dir: path to public keys; optionalsecret_keys_dir: path to secret keys; optionalpython_path: 
path to the Python executable with which the worker should be started; optional
log_dir: directory where logs are created
logging_mode: logging mode: 'DEBUG', 'INFO', 'WARN', 'ERROR'; see Python logging

Example of a worker config:

[main]
id = ee8ac862-ac8d-45eb-a056-effa21c00889
name = test worker
ip = localhost
port = 9028
public_keys_dir = None
secret_keys_dir = None
python_path =

[logging]
log_dir = resources\logging_dir
logging_mode = INFO

The worker config file can also be generated by a method:

from service_tools.worker import WorkerConfig, Worker

new_worker_config = WorkerConfig.create_new(
    config_path=r'resources\worker1_config.ini',
    name='test worker',
    ip=None,
    port=None,
    public_keys_dir=None,
    secret_keys_dir=None,
    log_dir=r'resources\logging_dir',
    logging_mode='INFO')

# start the worker with the newly created worker config
worker = Worker(config_path=new_worker_config.config_path)
worker.start()

Worker functionality

Workers have no functionality by default. To add functionality, create a class that inherits from the Worker class. There are many ways to add functionality.

One way is to create a Message class:

class Message(object):
    def __init__(self, *args, **kwargs):
        self.method = kwargs.get('method', None)
        self.args = kwargs.get('args', list())
        self.kwargs = kwargs.get('kwargs', dict())

The client then sends a message with the method specified as a string.
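The Message class shown here can be exercised in isolation: a worker looks up the named method on itself and calls it with the supplied arguments. The following is a minimal stand-alone sketch with no networking; EchoWorker is a hypothetical stand-in and not part of service_tools, whose workers receive the Message over a ZMQ socket instead of a direct call.

```python
# Stand-alone sketch of dispatching a Message to a worker method by name.
class Message(object):
    def __init__(self, *args, **kwargs):
        self.method = kwargs.get('method', None)
        self.args = kwargs.get('args', list())
        self.kwargs = kwargs.get('kwargs', dict())


class EchoWorker(object):
    def return_non_sense(self, *args, **kwargs):
        return 'non sense'

    def handle(self, message):
        # look up the requested method on this instance and call it
        method = getattr(self, message.method)
        return method(*message.args, **message.kwargs)


result = EchoWorker().handle(Message(method='return_non_sense'))
print(result)  # non sense
```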
The worker receives the message and selects the method to execute with:

method = getattr(self, message.method)

This method is then executed with the args and kwargs in the message, and the return value is returned to the client:

return method(*message.args, **message.kwargs)

Example: create a worker with the functionality 'return_non_sense', which returns 'non sense'.

Worker:

from service_tools import Worker

class ExtendedWorker(Worker):
    def __init__(self, *args, **kwargs):
        Worker.__init__(self, *args, **kwargs)

    def return_non_sense(self, *args, **kwargs):
        return 'non sense'

Client:

import zmq
import zmq.auth

class Message(object):
    def __init__(self, *args, **kwargs):
        self.method = kwargs.get('method', None)
        self.args = kwargs.get('args', list())
        self.kwargs = kwargs.get('kwargs', dict())

ctx = zmq.Context.instance()
client = ctx.socket(zmq.REQ)

client_public, client_secret = zmq.auth.load_certificate(
    r'resources\private_keys\5520471c-66c1-4605-80dc-a9ff84d959da.key_secret')
server_public, _ = zmq.auth.load_certificate(r'resources\public_keys\server.key')

client.curve_secretkey = client_secret
client.curve_publickey = client_public
client.curve_serverkey = server_public

client.connect('tcp://localhost:8006')

message = Message(method='return_non_sense')
client.send_pyobj(message)
return_value = client.recv_pyobj()

Python script for automatic worker start

Here is a script which starts a worker when executed. The path to the config file is given by the --config_file argument.

from configparser import ConfigParser
import argparse
import os
import uuid

# Import a worker.
# In a real case, this would be a custom worker with additional functionality.
from src.service_tools.worker import Worker
# Import of Message is necessary!
from src.service_tools.message import Message

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--config_file', required=True, help="worker config file", type=str)
    args = parser.parse_args()
    config_file = args.config_file

    print(f'reading config file: {config_file}')
    if not os.path.isfile(config_file):
        raise FileNotFoundError(f'{config_file} does not exist')

    config = ConfigParser()
    config.read(config_file)

    try:
        name = config.get('main', 'name', fallback=None)
    except Exception:
        name = ''
    try:
        id = uuid.UUID(config.get('main', 'id', fallback=None))
    except Exception:
        id = ''
    try:
        ip = config.get('main', 'ip', fallback=None)
    except Exception:
        print(f'ip in {config_file} does not exist. Assuming localhost...')
        ip = 'localhost'

    print(f'starting worker:\nname: {name}\nid: {id}\nip: {ip}')

    new_worker = Worker(config_path=config_file)
    new_worker.start()

Requirements

Python 3.7+.

Windows Support

Summary: On Windows, use py instead of python3 for many of the examples in this documentation.

This package fully supports Windows, along with Linux and macOS, but Python is typically installed differently on Windows. Windows users typically access Python through the py launcher rather than a python3 link in their PATH. Within a virtual environment, all platforms operate the same and use a python link to access the Python version used in that virtual environment.

Dependencies

Dependencies are defined in:

requirements.in
requirements.txt
dev-requirements.in
dev-requirements.txt

Virtual Environments

It is best practice during development to create an isolated Python virtual environment using the venv standard library module.
This will keep dependent Python packages from interfering with other Python projects on your system.

On *Nix:

$ python3 -m venv venv
$ source venv/bin/activate

On Windows cmd:

> py -m venv venv
> venv\Scripts\activate.bat

Once activated, it is good practice to update core packaging tools (pip, setuptools, and wheel) to the latest versions.

(venv) $ python -m pip install --upgrade pip setuptools wheel

Packaging

This project is designed as a Python package, meaning that it can be bundled up and redistributed as a single compressed file.

Packaging is configured by:

pyproject.toml
setup.py
MANIFEST.in

To package the project as both a source distribution and a wheel:

(venv) $ python setup.py sdist bdist_wheel

This will generate dist/fact-1.0.0.tar.gz and dist/fact-1.0.0-py3-none-any.whl. Read more about the advantages of wheels to understand why generating wheel distributions is important.

Upload Distributions to PyPI

Source and wheel redistributable packages can be uploaded to PyPI or installed directly from the filesystem using pip.

To upload to PyPI:

(venv) $ python -m pip install twine
(venv) $ twine upload dist/*

Testing

Automated testing is performed using tox. tox will automatically create virtual environments based on tox.ini for unit testing, PEP8 style guide checking, and documentation generation.

# Run all environments.
# To only run a single environment, specify it like: -e pep8
# Note: tox is installed into the virtual environment automatically by the pip-sync command above.
(venv) $ tox

Unit Testing

Unit testing is performed with pytest. pytest has become the de facto Python unit testing framework. Some key advantages over the built-in unittest module are:

Significantly less boilerplate needed for tests.
PEP8 compliant names (e.g. pytest.raises() instead of self.assertRaises()).
Vibrant ecosystem of plugins.

pytest will automatically discover and run tests by recursively searching for folders and .py files prefixed with test for any functions prefixed by test.

The tests folder is created as a Python package (i.e.
there is an __init__.py file within it) because this helps pytest uniquely namespace the test files. Without this, two test files cannot be named the same, even if they are in different sub-directories.

Code coverage is provided by the pytest-cov plugin.

When running a unit test tox environment (e.g. tox, tox -e py37, etc.), a data file (e.g. .coverage.py37) containing the coverage data is generated. This file is not readable on its own, but when the coverage tox environment is run (e.g. tox or tox -e coverage), coverage from all unit test environments is combined into a single data file and an HTML report is generated in the htmlcov folder showing each source file and which lines were executed during unit testing. Open htmlcov/index.html in a web browser to view the report. Code coverage reports help identify areas of the project that are currently not tested.

Code coverage is configured in pyproject.toml.

To pass arguments to pytest through tox:

(venv) $ tox -e py37 -- -k invalid_factorial

Code Style Checking

PEP8 is the universally accepted style guide for Python code. PEP8 code compliance is verified using flake8. flake8 is configured in the [flake8] section of tox.ini. Extra flake8 plugins are also included:

pep8-naming: Ensure functions, classes, and variables are named with correct casing.

Automated Code Formatting

Code is automatically formatted using black. Imports are automatically sorted and grouped using isort.

These tools are configured by:

pyproject.toml

To automatically format code, run:

(venv) $ tox -e fmt

To verify code has been formatted, such as in a CI job:

(venv) $ tox -e fmt-check

Generated Documentation

Documentation that includes the README.rst and the Python project modules is automatically generated using a Sphinx tox environment. Sphinx is a documentation generation tool that is the de facto tool for Python documentation. Sphinx uses the RST markup language.

This project uses the napoleon plugin for Sphinx, which renders Google-style docstrings.
Google-style docstrings provide a good mix of easy-to-read docstrings in code as well as nicely-rendered output.

"""Computes the factorial through a recursive algorithm.

Args:
    n: A positive input value.

Raises:
    InvalidFactorialError: If n is less than 0.

Returns:
    Computed factorial.
"""

The Sphinx project is configured in docs/conf.py.

Build the docs using the docs tox environment (e.g. tox or tox -e docs). Once built, open docs/_build/index.html in a web browser.

Generate a New Sphinx Project

To generate the Sphinx project shown in this project:

# Note: Sphinx is installed into the virtual environment automatically by the pip-sync command
# above.
(venv) $ mkdir docs
(venv) $ cd docs
(venv) $ sphinx-quickstart --no-makefile --no-batchfile --extensions sphinx.ext.napoleon
# When prompted, select all defaults.

Modify conf.py appropriately:

# Add the project's Python package to the path so that autodoc can find it.
import os
import sys
sys.path.insert(0, os.path.abspath('../src'))

...

html_theme_options = {
    # Override the default alabaster line wrap, which wraps tightly at 940px.
    'page_width': 'auto',
}

Modify index.rst appropriately:

.. include:: ../README.rst

apidoc/modules.rst

Project Structure

Traditionally, Python projects place the source for their packages in the root of the project structure, like:

fact
├── fact
│   ├── __init__.py
│   ├── cli.py
│   └── lib.py
├── tests
│   ├── __init__.py
│   └── test_fact.py
├── tox.ini
└── setup.py

However, this structure is known to have bad interactions with pytest and tox, two standard tools maintaining Python projects. The fundamental issue is that tox creates an isolated virtual environment for testing. By installing the distribution into the virtual environment, tox ensures that the tests pass even after the distribution has been packaged and installed, thereby catching any errors in packaging and installation scripts, which are common.
Having the Python packages in the project root subverts this isolation for two reasons:

Calling python in the project root (for example, python -m pytest tests/) causes Python to add the current working directory (the project root) to sys.path, which Python uses to find modules. Because the source package fact is in the project root, it shadows the fact package installed in the tox environment.

Calling pytest directly anywhere that it can find the tests will also add the project root to sys.path if the tests folder is a Python package (that is, it contains an __init__.py file). pytest adds all folders containing packages to sys.path because it imports the tests like regular Python modules.

In order to properly test the project, the source packages must not be on the Python path. To prevent this, there are three possible solutions:

Remove the __init__.py file from tests and run pytest directly as a tox command.
Remove the __init__.py file from tests and change the working directory of python -m pytest to tests.
Move the source packages to a dedicated src folder.

The dedicated src directory is the recommended solution by pytest when using tox and the solution this blueprint promotes, because it is the least brittle even though it deviates from the traditional Python project structure. It results in a directory structure like:

fact
├── src
│   └── fact
│       ├── __init__.py
│       ├── cli.py
│       └── lib.py
├── tests
│   ├── __init__.py
│   └── test_fact.py
├── tox.ini
└── setup.py

Type Hinting

Type hinting allows developers to include optional static typing information in Python source code. This allows static analyzers such as PyCharm, mypy, or pytype to check that functions are used with the correct types before runtime.

For PyCharm in particular, the IDE is able to provide much richer auto-completion, refactoring, and type checking while the user types, resulting in increased productivity and correctness.

This project uses the type hinting syntax introduced in Python 3:

def factorial(n: int) -> int:

Type checking is performed by mypy via tox -e mypy.
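A fuller version of the factorial signature shown here can combine the type hints with the Google-style docstring from the documentation section. The body below is an illustrative sketch, not necessarily the project's actual implementation; InvalidFactorialError is the exception named in that docstring.

```python
class InvalidFactorialError(Exception):
    """Raised when the factorial input is negative."""


def factorial(n: int) -> int:
    """Computes the factorial through a recursive algorithm.

    Args:
        n: A positive input value.

    Raises:
        InvalidFactorialError: If n is less than 0.

    Returns:
        Computed factorial.
    """
    if n < 0:
        raise InvalidFactorialError(n)
    return 1 if n < 2 else n * factorial(n - 1)


print(factorial(5))  # 120
```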
mypy is configured in setup.cfg.

Licensing

Licensing for the project is defined in:

LICENSE.txt
setup.py

This project uses a common permissive license, the MIT license.

You may also want to list the licenses of all of the packages that your Python project depends on. To automatically list the licenses for all dependencies in requirements.txt (and their transitive dependencies) using pip-licenses:

(venv) $ tox -e licenses
...
 Name        Version  License
 colorama    0.4.3    BSD License
 exitstatus  1.3.0    MIT License

PyCharm Configuration

To configure PyCharm 2018.3 and newer to align to the code style used in this project:

Settings | Search "Hard wrap at"
  Editor | Code Style | General | Hard wrap at: 99

Settings | Search "Optimize Imports"
  Editor | Code Style | Python | Imports
    ☑ Sort import statements
    ☑ Sort imported names in "from" imports
    ☐ Sort plain and "from" imports separately within a group
    ☐ Sort case-insensitively
  Structure of "from" imports
    ◎ Leave as is
    ◉ Join imports with the same source
    ◎ Always split imports

Settings | Search "Docstrings"
  Tools | Python Integrated Tools | Docstrings | Docstring Format: Google

Settings | Search "Force parentheses"
  Editor | Code Style | Python | Wrapping and Braces | "From" Import Statements
    ☑ Force parentheses if multiline

Integrate Code Formatters

To integrate automatic code formatters into PyCharm, reference the following instructions:

black integration: The File Watchers method (step 3) is recommended. This will run black on every save.
isort integration: The File Watchers method (option 1) is recommended. This will run isort on every save.
zmq-ses-communications
No description available on PyPI.
zmqTools
Python uses message-queue technology for cross-platform, cross-process message passing. Here we mainly use zmq to pass the client's financial trading data to the server, where the server connects to the broker's API to execute the trades. The client-side package is built on the DEALER pattern, while the server receives with the asynchronous ROUTER pattern, so that messages can be received smoothly without blocking.
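The DEALER/ROUTER split described here can be sketched directly with pyzmq. This is a hedged illustration only — the address and the order payload are made up, and it is not zmqTools' actual API.

```python
# Minimal DEALER (client) / ROUTER (server) round trip with pyzmq.
# The ROUTER side receives [identity, payload] frames, so it can serve
# many DEALER clients asynchronously without blocking on any one of them.
import zmq

ctx = zmq.Context.instance()

# server side: bind a ROUTER socket
router = ctx.socket(zmq.ROUTER)
router.bind("tcp://127.0.0.1:5555")

# client side: connect a DEALER socket and send an order
dealer = ctx.socket(zmq.DEALER)
dealer.connect("tcp://127.0.0.1:5555")
dealer.send_json({"symbol": "2330.TW", "side": "buy", "qty": 1})

# the ROUTER receives the client's identity frame plus the payload
identity, payload = router.recv_multipart()
print(payload)

dealer.close()
router.close()
ctx.term()
```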
zmq-tubes
ZMQ Tubes

ZMQ Tubes is a managing system for ZMQ communication. It can manage many ZMQ sockets through one interface. The whole system is hierarchical, based on topics (look at MQTT topics).

Classes

TubeMessage - This class represents a request/response message. Some types of tubes require a response in this format.
Tube - This class wraps a ZMQ socket. It represents a connection between client and server.
TubeMonitor - This class can sniff the ZMQ Tube communication.
TubeNode - This represents an application interface for communication via tubes.

Asyncio / Threading

The library supports both methods (asyncio from Python 3.7).

from zmq_tubes import TubeNode, Tube           # Asyncio classes
from zmq_tubes.threads import TubeNode, Tube   # Threads classes

Usage:

Node definitions in yml file

We can define all tubes for one TubeNode in a yml file. The next examples require installing the packages PyYAML, pyzmq and zmq_tubes.

Client service (asyncio example)

# client.yml
tubes:
  - name: Client REQ
    addr: ipc:///tmp/req.pipe
    tube_type: REQ
    topics:
      - foo/bar
  - name: Client PUB
    addr: ipc:///tmp/pub.pipe
    tube_type: PUB
    topics:
      - foo/pub/#

# client.py
import asyncio
import yaml
from zmq_tubes import TubeNode, TubeMessage

async def run():
    with open('client.yml', 'r+') as fd:
        schema = yaml.safe_load(fd)
    node = TubeNode(schema=schema)
    async with node:
        print(await node.request('foo/bar', 'message 1'))
        await node.publish('foo/pub/test', 'message 2')

if __name__ == "__main__":
    asyncio.run(run())

> python client.py
topic: foo/bar, payload: response

Server service (threads example)

# server.yml
tubes:
  - name: server ROUTER
    addr: ipc:///tmp/req.pipe
    tube_type: ROUTER
    server: True
    topics:
      - foo/bar
  - name: server SUB
    addr: ipc:///tmp/pub.pipe
    tube_type: SUB
    server: True
    topics:
      - foo/pub/#

#
server.py
import yaml
from zmq_tubes.threads import TubeNode, TubeMessage

def handler(request: TubeMessage):
    print(request.payload)
    if request.tube.tube_type_name == 'ROUTER':
        return request.create_response('response')

def run():
    with open('server.yml', 'r+') as fd:
        schema = yaml.safe_load(fd)
    node = TubeNode(schema=schema)
    node.register_handler('foo/#', handler)
    with node:
        node.start().join()

if __name__ == "__main__":
    run()

> python server.py
message 1
message 2

YAML definition

The yaml file starts with a root element tubes, which contains a list of all our tube definitions.

name - string - name of the tube.
addr - string - connection or bind address in the format transport://address (see more: http://api.zeromq.org/2-1:zmq-connect)
server - bool - is this tube server side (bind to addr) or client side (connect to addr)
tube_type - string - type of this tube (see more: https://zguide.zeromq.org/docs/chapter2/#Messaging-Patterns)
identity - string - (optional) we can set up a custom tube identity
utf8_decoding - bool - (default = True); if this is True, the payload is automatically UTF-8 decoded.
sockopts - dict - (optional) we can set up sockopts for this tube (see more: http://api.zeromq.org/4-2:zmq-setsockopt)
monitor - string - (optional) bind address of the tube monitor (see more: Debugging / Monitoring)

Request / Response

This is a simple scenario; the server processes the requests serially.

Server:

from zmq_tubes import Tube, TubeNode, TubeMessage

async def handler(request: TubeMessage):
    print(request.payload)
    return 'answer'  # or return request.create_response('response')

tube = Tube(
    name='Server',
    addr='ipc:///tmp/req_resp.pipe',
    server=True,
    tube_type='REP')

node = TubeNode()
node.register_tube(tube, 'test/#')
node.register_handler('test/#', handler)
await node.start()
# output: 'question'

Client:

from zmq_tubes import Tube, TubeNode

tube = Tube(
    name='Client',
    addr='ipc:///tmp/req_resp.pipe',
    tube_type='REQ')

node = TubeNode()
node.register_tube(tube, 'test/#')
response = await node.request('test/xxx', 'question')
print(response.payload)
# output: 'answer'

The method request accepts the optional
parameterutf8_decoding. When we set this parameter toFalsein previous example, the returned payload is not automatically decoded, we get bytes.Subscribe / PublisherServer:fromzmq_tubesimportTube,TubeNode,TubeMessageasyncdefhandler(request:TubeMessage):print(request.payload)tube=Tube(name='Server',addr='ipc:///tmp/sub_pub.pipe',server=True,tube_type='SUB')node=TubeNode()node.register_tube(tube,'test/#')node.register_handler('test/#',handler)awaitnode.start()# output: 'message'Client:fromzmq_tubesimportTube,TubeNodetube=Tube(name='Client',addr='ipc:///tmp/sub_pub.pipe',tube_type='PUB')# In the case of publishing, the first message is very often# lost. The workaround is to connect the tube manually as soon as possible.tube.connect()node=TubeNode()node.register_tube(tube,'test/#')node.publish('test/xxx','message')Request / RouterThe server is asynchronous. It means it is able to process more requests at the same time.Server:importasynciofromzmq_tubesimportTube,TubeNode,TubeMessageasyncdefhandler(request:TubeMessage):print(request.payload)ifrequest.payload=='wait':awaitasyncio.sleep(10)returnrequest.create_response(request.payload)tube=Tube(name='Server',addr='ipc:///tmp/req_router.pipe',server=True,tube_type='ROUTER')node=TubeNode()node.register_tube(tube,'test/#')node.register_handler('test/#',handler)awaitnode.start()# output: 'wait'# output: 'message'Client:importasynciofromzmq_tubesimportTube,TubeNodetube=Tube(name='Client',addr='ipc:///tmp/req_router.pipe',tube_type='REQ')asyncdeftask(node,text):print(awaitnode.request('test/xxx',text))node=TubeNode()node.register_tube(tube,'test/#')asyncio.create_task(task(node,'wait'))asyncio.create_task(task(node,'message'))# output: 'message'# output: 'wait'Dealer / ResponseThe client is asynchronous. 
It means it is able to send more requests at the same time.

Server:

from zmq_tubes import Tube, TubeNode, TubeMessage

async def handler(request: TubeMessage):
    print(request.payload)
    return 'response'  # or return request.create_response('response')

tube = Tube(
    name='Server',
    addr='ipc:///tmp/dealer_resp.pipe',
    server=True,
    tube_type='REP')

node = TubeNode()
node.register_tube(tube, 'test/#')
node.register_handler('test/#', handler)
await node.start()
# output: 'message'

Client:

from zmq_tubes import Tube, TubeNode, TubeMessage

tube = Tube(
    name='Client',
    addr='ipc:///tmp/dealer_resp.pipe',
    tube_type='DEALER')

async def handler(response: TubeMessage):
    print(response.payload)

node = TubeNode()
node.register_tube(tube, 'test/#')
node.register_handler('test/#', handler)
await node.send('test/xxx', 'message')
# output: 'response'

Dealer / Router

The client and server are asynchronous. It means they are able to send and process more requests/responses at the same time.

Server:

import asyncio
from zmq_tubes import Tube, TubeNode, TubeMessage

async def handler(request: TubeMessage):
    print(request.payload)
    if request.payload == 'wait':
        await asyncio.sleep(10)
    return request.create_response(request.payload)

tube = Tube(
    name='Server',
    addr='ipc:///tmp/dealer_router.pipe',
    server=True,
    tube_type='ROUTER')

node = TubeNode()
node.register_tube(tube, 'test/#')
node.register_handler('test/#', handler)
await node.start()
# output: 'wait'
# output: 'message'

Client:

from zmq_tubes import Tube, TubeNode, TubeMessage

tube = Tube(
    name='Client',
    addr='ipc:///tmp/dealer_router.pipe',
    tube_type='DEALER')

async def handler(response: TubeMessage):
    print(response.payload)

node = TubeNode()
node.register_tube(tube, 'test/#')
node.register_handler('test/#', handler)
await node.send('test/xxx', 'wait')
await node.send('test/xxx', 'message')
# output: 'message'
# output: 'wait'

Dealer / Dealer

The client and server are asynchronous.
It means they are able to send and process more requests/responses at the same time.

Server:

from zmq_tubes import Tube, TubeNode, TubeMessage

tube = Tube(
    name='Server',
    addr='ipc:///tmp/dealer_dealer.pipe',
    server=True,
    tube_type='DEALER')

async def handler(response: TubeMessage):
    print(response.payload)

node = TubeNode()
node.register_tube(tube, 'test/#')
node.register_handler('test/#', handler)
await node.send('test/xxx', 'message from server')
# output: 'message from client'

Client:

from zmq_tubes import Tube, TubeNode, TubeMessage

tube = Tube(
    name='Client',
    addr='ipc:///tmp/dealer_dealer.pipe',
    tube_type='DEALER')

async def handler(response: TubeMessage):
    print(response.payload)

node = TubeNode()
node.register_tube(tube, 'test/#')
node.register_handler('test/#', handler)
await node.send('test/xxx', 'message from client')
# output: 'message from server'

Debugging / Monitoring

We can assign a monitor socket to our zmq tubes. Through this monitor socket, we can sniff the zmq communication or get a zmq tube configuration.

tubes:
  - name: ServerRouter
    addr: ipc:///tmp/router.pipe
    monitor: ipc:///tmp/test.monitor
    tube_type: ROUTER
    server: yes
    topics:
      - foo/#

This is an example of a yaml definition. We can use the same monitor socket for more tubes in the same TubeNode. When we add the monitor attribute to our tube definition, the application automatically creates a new socket monitor: /tmp/test.monitor. Your application works as the server side. The logs are sent to the socket only while the monitoring tool is running.

Monitoring tool

After enabling monitoring in the application, we can use the monitoring tool to sniff.

# get the server tube configuration
> zmqtube-monitor get_schema ipc:///tmp/display.monitor
tubes:
  - addr: ipc:///tmp/router.pipe
    monitor: ipc:///tmp/test.monitor
    name: ServerRouter
    server: 'yes'
    tube_type: ROUTER

# log the tube communication.
Logs will be saved to dump.rec as well.
> zmqtube-monitor logs -d ./dump.rec ipc:///tmp/display.monitor
 0.28026580810546875 ServerRouter < foo/test Request
 0.0901789665222168 ServerRouter > foo/test Response

# The format of the output:
# <relative time> <tube name> <direction> <topic> <message>

Simulation of the client side

When we have a dump file (e.g. dump.rec), we can simulate the communication with our app. The first step is to prepare the mock client schema file. For this, we can get the tube node configuration from our application and then edit it.

> zmqtube-monitor get_schema ipc:///tmp/display.monitor > mock_schema.yaml
> vim mock_schema.yaml
...
# Now, we have to update the file mock_schema.yaml.
# We change the configuration to the mock client configuration.
# The names of the tubes must be the same as in your app.
# We can remove the monitoring attribute and change the server and
# tube_type attributes. In this mock file, the topics are not
# required; they are ignored.
> cat mock_schema.yaml
tubes:
  - addr: ipc:///tmp/router.pipe
    name: ServerRouter
    tube_type: REQ

Now, we can start the simulation of the client communication.

> zmqtube-monitor simulate mock_schema.yaml dump.rec

If the response of our app is not the same as the tool expects (the response saved in the dump file), the monitoring tool warns us.

We can modify the speed of the simulation with the parameter --speed. By default, the simulation runs at the same speed as the original communication (parameter --speed=1).

Speed | description
0     | no blocking simulation
0.5   | twice faster than original
1     | original speed
2     | twice slower than original

Example of a programmatic declaration of the
monitoring.

import zmq
from zmq_tubes.threads import Tube, TubeNode, TubeMessage, TubeMonitor

def handler(request: TubeMessage):
    print(request.payload)
    return request.create_response('response')

resp_tube = Tube(
    name='REP',
    addr='ipc:///tmp/rep.pipe',
    server='yes',
    tube_type=zmq.REP)

req_tube = Tube(
    name='REQ',
    addr='ipc:///tmp/rep.pipe',
    tube_type=zmq.REQ)

node = TubeNode()
node.register_tube(resp_tube, "foo/#")
node.register_tube(req_tube, "foo/#")
node.register_handler("foo/#", handler)
node.register_monitor(resp_tube, TubeMonitor(addr='ipc:///tmp/test.monitor'))

with node:
    print(node.request('foo/xxx', 'message 2'))
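The foo/# and test/# patterns used throughout these examples follow MQTT-style topic hierarchies. As an illustration of how such wildcard matching can work, here is a stand-alone sketch; this is not zmq_tubes' internal matcher.

```python
# MQTT-style topic matching: '#' matches any remaining levels,
# '+' matches exactly one level. Illustrative only.
def topic_matches(pattern: str, topic: str) -> bool:
    p_parts = pattern.split('/')
    t_parts = topic.split('/')
    for i, part in enumerate(p_parts):
        if part == '#':
            return True          # multi-level wildcard: match the rest
        if i >= len(t_parts):
            return False         # topic is shorter than the pattern
        if part != '+' and part != t_parts[i]:
            return False         # literal level mismatch
    return len(p_parts) == len(t_parts)


print(topic_matches('foo/#', 'foo/pub/test'))  # True
print(topic_matches('test/#', 'test/xxx'))     # True
print(topic_matches('foo/#', 'bar/xxx'))       # False
```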
zmsai
A command line utility for topic discovery and doc-linking within the Zettelkasten using AI approaches.

Installation

Install zmsai by executing the following command -

$ pip3 install zmsai

Test Run

Test run using dummy docs (see ./custom) -

$ zmsai test

Usage

To learn n topics in your Zettelkasten at /path/to/your/zettelkasten/ -

$ zmsai run -t n -p "/path/to/your/zettelkasten/"

This will create a metadata file meta.zms storing all the distributions exhibited by the documents in your Zettelkasten.

[Running the model] it may take a while. Hang tight!
[Data stored] ... kb

You can delete your metadata file by executing -

$ zmsai delete

These learnt distributions can be printed using zmsai display. You can pass an additional argument -w, the number of top most occurring words that you want to print from the distributions involving words.

To display the doc-topic distribution -

$ zmsai display -d dt

To display the topic-word distribution -

$ zmsai display -w n -d tw

To display the doc-word distribution -

$ zmsai display -w n -d dw

To display the vocabulary -

$ zmsai display -w n -d voc

To display all distributions at once -

$ zmsai display -w n -d all

or simply

$ zmsai display

This will take the default value of 5 for the nwords argument.

Troubleshooting

If you get a ModuleNotFoundError: No module named 'sklearn' error with display, try installing scikit-learn manually.

$ sudo pip3 install -U scikit-learn

Alternatively, if you're on Ubuntu, try executing the following command -

$ zmsai fix-ubuntu

Feel free to raise an issue if you feel stuck.

Manual

usage: zmsai [-h] [--path [PATH]] [--topics [TOPICS]] [--nwords [NWORDS]] [--distro [DISTRO]] [task]

positional arguments:
  task                  Provide task to perform
                        [default: 'run']
                        [values: 'run', 'delete', 'display', 'man', 'test', 'fix-ubuntu']

optional arguments:
  -h, --help            show this help message and exit
  --path [PATH], -p [PATH]
                        Provide directory of text files.
                        [with: 'run'] [default: './custom']
  --topics [TOPICS], -t [TOPICS]
                        How many topics do you expect?
                        [with: 'run'] [default: 'number of
docs']
  --nwords [NWORDS], -w [NWORDS]
                        How many words per topic/doc do you want to display?
                        [with: 'display'] [default: 5]
  --distro [DISTRO], -d [DISTRO]
                        What distributions do you want to display?
                        [with: 'display'] [default: all]
                        [values: 'dt', 'tw', 'dw', 'voc', 'all']

Dependency Graph

attrs==20.2.0
 - pytest==6.1.1 [requires: attrs>=17.4.0]
iniconfig==1.1.1
 - pytest==6.1.1 [requires: iniconfig]
joblib==0.17.0
 - scikit-learn==0.23.2 [requires: joblib>=0.11]
 - sklearn==0.0 [requires: scikit-learn]
numpy==1.19.2
 - scikit-learn==0.23.2 [requires: numpy>=1.13.3]
 - sklearn==0.0 [requires: scikit-learn]
 - scipy==1.5.3 [requires: numpy>=1.14.5]
 - scikit-learn==0.23.2 [requires: scipy>=0.19.1]
 - sklearn==0.0 [requires: scikit-learn]
pip==20.1.1
pluggy==0.13.1
 - pytest==6.1.1 [requires: pluggy>=0.12,<1.0]
py==1.9.0
 - pytest==6.1.1 [requires: py>=1.8.2]
pyparsing==2.4.7
 - packaging==20.4 [requires: pyparsing>=2.0.2]
 - pytest==6.1.1 [requires: packaging]
setuptools==46.4.0
six==1.15.0
 - packaging==20.4 [requires: six]
 - pytest==6.1.1 [requires: packaging]
threadpoolctl==2.1.0
 - scikit-learn==0.23.2 [requires: threadpoolctl>=2.0.0]
 - sklearn==0.0 [requires: scikit-learn]
toml==0.10.1
 - pytest==6.1.1 [requires: toml]
wheel==0.34.2

Contribution

Contributions are welcome.

License

GNU General Public License v3 (GPLv3)
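Two of the distributions zmsai displays — the vocabulary and the doc-word distribution — can be illustrated with a tiny stdlib-only toy. zmsai itself uses scikit-learn; the file names and texts below are made up for the sketch.

```python
# Toy computation of a vocabulary and a doc-word distribution:
# for each document, the relative frequency of each word.
from collections import Counter

docs = {
    'note1.md': 'zettelkasten links topics',
    'note2.md': 'topics and words and topics',
}

# vocabulary: all distinct words across the documents
vocabulary = sorted({word for text in docs.values() for word in text.split()})

# doc-word distribution: relative word frequencies per document
doc_word = {}
for name, text in docs.items():
    counts = Counter(text.split())
    total = sum(counts.values())
    doc_word[name] = {word: counts[word] / total for word in counts}

print(vocabulary)
print(doc_word['note2.md']['topics'])  # 0.4
```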
zmsgcentre
A message centre that avoids strong coupling in your code and supports multithreading.

What is zmsgcentre

zmsgcentre is a message centre: modules and functions throughout a program no longer need to call each other directly; instead, calls are forwarded through the message centre, which greatly reduces code coupling.

Why reduce code coupling

When developing a large program, strong coupling is hard to avoid, and it is a dangerous design. Suppose that while maintaining your program you change a function's name or parameters; then all code calling that function must be changed at the same time. If there are only a few call sites, the change is simple.

Q: What if the function is referenced in dozens, or even hundreds or thousands, of places?
A: If you have that much spare time, go ahead and change them one by one.
Q: What if you forget one or two places?
A: Just use Ctrl+F and replace globally.
Q: What if the name is reused elsewhere, e.g. a function named 'AA', another named 'AAX', and a string 'AABC' somewhere? Still calm?
A: Regular-expression replace...
Q: Will the new function name or parameters conflict with other functions? Any single code change costs over ten times the effort to re-evaluate the robustness of the whole program: will the modules conflict after the change, and how do you debug the hidden bugs?
A: emmmmm.....

How to use zmsgcentre

To use zmsgcentre you need to understand the sender role, the receiver role and the msg_tag:

- A sender can be thought of as a broadcast radio station.
- The msg_tag is the station's frequency.
- A receiver is the radio that receives the station's messages.

Advantages

- One broadcast can be received by multiple receivers.
- Thread safe.
- Supports tag-level thread locks.
- Receivers can be created and destroyed, and message tags destroyed, inside a receiver callback.
- Easy to understand; no need for the complex topic/consumer concepts of a message subscription system.
- Easy to use: senders and receivers are defined in a single line of code without hurting readability, keeping the code logic simple.
- Open source and compact; the whole module is only 3.5 kB.

A simple example

import zmsgcentre  # import the zmsgcentre module

def receiver_func(a):
    print(a)

# create a receiver for the message tag 'test_tag'
zmsgcentre.create_receiver('test_tag', receiver_func)

# broadcast data to 'test_tag'
zmsgcentre.send('test_tag', 'broadcast data')

A multi-module example

manager.py

import zmsgcentre
import test_A
import test_B

def send(data):
    return zmsgcentre.send('test_tag', data)  # broadcast to 'test_tag'

if __name__ == '__main__':
    result = send('message from the main module')
    # send returns a list containing the return values of all receivers
    # (unordered); with no receivers it returns an empty list
    print(result)

test_A.py

import zmsgcentre

def receiver_func_a(data):
    print(data)
    return 'test_A received it'

# create a receiver: specify the message tag and bind the callback
zmsgcentre.create_receiver('test_tag', receiver_func_a)

test_B.py

import zmsgcentre

def receiver_func_b(data):
    print(data)
    return 'test_B received it'

# create a receiver: specify the message tag and bind the callback
zmsgcentre.create_receiver('test_tag', receiver_func_b)

Simplifying the code with decorators

manager.py

import zmsgcentre
import test_A
import test_B

@zmsgcentre.sender('test_tag')  # create a sender: specify the tag and bind the send entry point
def send(data):
    pass  # no code needed here; it would not be executed anyway

if __name__ == '__main__':
    result = send('message from the main module')
    print(result)

test_A.py

import zmsgcentre

@zmsgcentre.receiver('test_tag')  # create a receiver: specify the tag and bind the callback
def receiver_func_a(data):
    print(data)
    return 'test_A received it'

test_B.py

import zmsgcentre

@zmsgcentre.receiver('test_tag')  # create a receiver: specify the tag and bind the callback
def receiver_func_b(data):
    print(data)
    return 'test_B received it'

Note: even when a receiver is defined with the decorator, it can still be called directly; as above, receiver_func_b can be called without going through the message centre.

Receiver priority

import zmsgcentre

# priority can be understood as the receiver's distance from the sender:
# lower numbers receive the message earlier, and receivers with equal
# numbers receive it in random order.
@zmsgcentre.receiver('test_tag', priority=999)
def receiver_func(a):
    print(a)

Changelog

Release date | Version | Notes
19-01-29 | 2.0.1 | The sender gained a stop_send_flag parameter; if a receiver returns this flag (compared internally with "is"), sending stops and True is returned.
19-01-10 | 2.0.0 | Added receiver priority. Improved the logic for creating and destroying receivers inside receiver callbacks; users no longer need to decide whether to force a thread for creation/destruction. Improved multithread safety. Renamed all functions.
18-10-05 | 1.0.3 | The last release of the v1 line; it will no longer be updated, please use the latest version.

This project is for learning and exchange only; commercial use is prohibited.
zmtestlib
No description available on PyPI.
zmt-geometric
No description available on PyPI.
zmtmkutils
py-zmtmk-utilsSimple utilities.
zmtools
zmtoolsA conglomeration of functions reused in my programs; maybe they can help you too. The docstrings should explain all you need to know.
zmung
Documentation:https://github.com/ascidev/asciicolor
zmux
zmuxA tmux parameterizerZmux provides a simple way to parameterize commands across several tmux panes. A simple example:$ zmux launch "ls {directory}" Supply up to 6 values for directory: ., zmux 💅 Creating tmux layout 🚀 Sending command to pane 1/2 🚀 Sending command to pane 2/2% ls . LICENSE setup.py zmux README.md tests zmux.egg-info ______________________________________________________ % ls zmux __init__.py __main__.py __pycache__ cli.pyInstallationInstallation usingpipxis recommended:pipx install zmuxOr usingpip:pip install zmuxUsageFor help, run:zmux --helpYou can also use:python -m zmux --helpDevelopmentTo contribute to this tool, first checkout the code. Then create a new virtual environment:cd zmux python -m venv venv source venv/bin/activateNow install the dependencies and test dependencies:pip install -e '.[test]'To run the tests:pytest
zmxtools
ZmxToolsA toolkit to read Zemax files.Currently, this is limited to unpacking ZAR archives. For further processing of the archive's contents, e.g. ZMX or AGF glass files, please check thelist of related softwarebelow.FeaturesUnpack a Zemax OpticStudio® Archive ZAR file using theunzarcommand.Repack a ZAR file as a standard zip file using theunzar -zcommand.Use as a pure Python 3 library.Fully typed with annotations and checked with mypy,PEP561 compatibleInstallationPrerequisitesPython 3.8 or higherpip, the Python package managerTo installzmxtools, just run the following command in a command shell:pipinstallzmxtoolsThezmxtoolslibrary will color-code test output when thecoloredlogspackage is installed. You can optionally install it withpipinstallcoloredlogsUsageThis package can be used directly from a terminal shell or from your own Python code. Example files can be found on manufacturer's sites such asThorlabs Inc.Command line shellThe commandunzaris added to the path upon installation. It permits the extraction of the zar-file to a sub-directory as well as its conversion to a standard zip-file. For example, extracting to the sub-directorymylensis done usingunzar mylens.zarRepacking the same zar-archive as a standard zip-archivemylens.zipis done with:unzar mylens.zar -zMultiple input files and an alternative output directory can be specified:unzar -i *.zar -o some/where/else/Find out more information and alternative options using:unzar -hAs a Python libraryExtraction and repacking can be done programmatically as follows:fromzmxtoolsimportzarzar.extract('mylens.zar')zar.repack('mylens.zar')zar.read('mylens.zar')Pythonpathlib.Pathobjects can be used instead of strings.Online documentationThe latest version of the API Documentation is published onhttps://zmxtools.readthedocs.io/. 
The documentation is generated automatically in thedocs/ directoryfrom the source code.Contributing to the source codeThe complete source code can be found ongithub: https://github.com/tttom/zmxtools. Check outContributingfor details.LicenseThis code is distributed under theagpl3: GNU Affero General Public LicenseCreditsWouter Vermaelenfor decoding the ZAR header and finding LZW compressed contents.Bertrand Bordagefor sharing thisgist.This project was generated withwemake-python-package. Current template version is:cfbc9ea21c725ba5b14c33c1f52d886cfde94416. See what isupdatedsince then.Related SoftwareOptical ToolKitreads Zemax .zmx files.RayTracingreads Zemax .zmx files.Zemax Glassreads Zemax .agf files.RayOpticsreads Zemax .zmx and CODE-V .seq files.RayOptreads Zemax .zmx as well as OSLO files.OpticsPydoes not read Zemax .zmx files but reads CODE-V .seq files and glass information from data downloaded fromhttps://www.refractiveindex.info/.OpticalGlassreads glass manufacturer Excel sheets.
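Since `unzar -z` emits an ordinary zip archive, its output can be inspected with Python's standard `zipfile` module. The sketch below only illustrates that output format, not zmxtools' internals; the member name `mylens.zmx` is made up for the example:

```python
import io
import zipfile

# Build a zip archive in memory, standing in for a repacked mylens.zip.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", compression=zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("mylens.zmx", "! hypothetical lens description\n")

# Any standard tool can now list and extract the members.
with zipfile.ZipFile(buffer) as archive:
    print(archive.namelist())  # ['mylens.zmx']
```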
zmysql
zmysqlZen mysql library.
zmz
No description available on PyPI.
zmzget
This tool is used to fetch update info from zimuzu. About TV serials. Any Problem/PR is welcomed. Mail:[email protected]
zmzmdr
UNKNOWN
zn
znZincDevelopment EnvironmentSetupFollow these steps to create a development environment for Zinc:cd ~/projects git clone [email protected]:blinkdog/zn.git cd zn python3.7 -m venv ./env source env/bin/activate pip install --upgrade pip pip install -r requirements.txtMaintenanceIf you install a new package usingpip installthen update therequirements.txtfile with the following command:pip freeze --all >requirements.txtWorkingThe helper scriptsnakedefines some common project tasks:Try one of the following tasks: snake clean # Remove build cruft snake coverage # Perform coverage analysis snake dist # Create a distribution tarball and wheel snake lint # Run static analysis tools snake publish # Publish the module to Test PyPI snake rebuild # Test and lint the module snake test # Test the moduleThe taskrebuilddoesn't really build (no need to compile Python), but it does run the unit tests and lint the project.Version BumpingIf you need to increase the version number of the project, don't forget to edit the following:CHANGELOG.md setup.py
znail
No description available on PyPI.
znake
Znake is a build system for Python projects.
znanija
UNKNOWN
znbdownload
Upload media files to S3 and add support for private files.FeaturesInstalling and Uninstalling PackagesInstalling in editable mode from a local directory.$pipinstall-e/path/to/znbdownload/You can remove the -e to install the package in the corresponding Python path, for example: /env/lib/python3.7/site-packages/znbdownload.List installed packages and uninstall.$piplist$pipuninstallznbdownloadInstalling from git using https.$pipinstallgit+https://github.com/requests/requests.git#egg=requests$pipinstallgit+https://github.com/alexisbellido/znbdownload.git#egg=znbdownloadThis package could be added to a pip requirements.txt file from its git repository or source directory.git+https://github.com/alexisbellido/znbdownload.git#egg=znbdownload-e/path-to/znbdownload/or from PyPI, in this case passing a specific version.znbdownload==0.2ZnbDownload will require, and install if necessary, Django, boto3 and django-storages.Updating Django SettingsAdd the following to INSTALLED_APPS'znbdownload.apps.ZnbDownloadConfig'Make sure these two are also installed.'storages''django.contrib.staticfiles'Amazon S3Some notes to use S3 for storing Django files.Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.More onS3 access permissions.Option 1 (preferred): Resource-based policy.A bucket configured to allow public read access and full control by an IAM user that will be used from Django.Create an IAM user.
Write down the arn and user credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).Don’t worry about adding a user policy as you will be using a bucket policy to refer to this user by its arn.Create an S3 bucket at url-of-s3-bucket.Assign it the following CORS configuration in the permissions tab.<CORSConfigurationxmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedOrigin>*</AllowedOrigin><AllowedMethod>GET</AllowedMethod><MaxAgeSeconds>3000</MaxAgeSeconds><AllowedHeader>Authorization</AllowedHeader></CORSRule></CORSConfiguration>Go to permissions, public access settings for the bucket and set these options to false or you won’t be able to use * as Principal in the bucket policy:BlocknewpublicACLsanduploadingpublicobjects(Recommended)RemovepublicaccessgrantedthroughpublicACLs(Recommended)Blocknewpublicbucketpolicies(Recommended)Blockpublicandcross-accountaccessifbuckethaspublicpolicies(Recommended)and the following bucket policy (use the corresponding arn for the bucket and for the IAM user that will have full control).{"Version":"2012-10-17","Id":"name-of-bucket","Statement":[{"Sid":"PublicReadForGetBucketObjects","Effect":"Allow","Principal":"*","Action":"s3:GetObject","Resource":"arn:aws:s3:::name-of-bucket/*"},{"Sid":"FullControlForBucketObjects","Effect":"Allow","Principal":{"AWS":"arn:aws:iam::364908532015:user/name-of-user"},"Action":"s3:*","Resource":["arn:aws:s3:::name-of-bucket","arn:aws:s3:::name-of-bucket/*"]}]}Option 2: user policy.A user configured to control a specific bucket.Create an S3 bucket at url-of-s3-bucket.Assign it the following CORS configuration in the permissions tab.<?xmlversion="1.0"encoding="UTF-8"?><CORSConfigurationxmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedOrigin>*</AllowedOrigin><AllowedMethod>GET</AllowedMethod><MaxAgeSeconds>3000</MaxAgeSeconds><AllowedHeader>Authorization</AllowedHeader></CORSRule></CORSConfiguration>Create a user in IAM and assign it to this
policy.{"Version":"2012-10-17","Statement":[{"Sid":"Stmt1394043345000","Effect":"Allow","Action":["s3:*"],"Resource":["arn:aws:s3:::url-of-s3-bucket/*"]}]}Then create the user credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to connect from Django.
znbstatic
Custom Django storage backend.FeaturesStorage of assets managed by collectstatic on Amazon Web Services S3.Versioning using a variable from Django’s settings (https://example.com/static/css/styles.css?v=1.2)Installing and Uninstalling PackagesInstalling in editable mode from a local directory.$pipinstall-e/path/to/znbstatic/You can remove the -e to install the package in the corresponding Python path, for example: /env/lib/python3.7/site-packages/znbstatic.List installed packages and uninstall.$piplist$pipuninstallznbstaticInstalling from git using https.$pipinstallgit+https://github.com/requests/requests.git#egg=requests$pipinstallgit+https://github.com/alexisbellido/znbstatic.git#egg=znbstaticThis package could be added to a pip requirements.txt file from its git repository or source directory.git+https://github.com/alexisbellido/znbstatic.git#egg=znbstatic-e/path-to/znbstatic/or from PyPI, in this case passing a specific version.znbstatic==0.2Znbstatic will require, and install if necessary, Django, boto3 and django-storages.Updating Django SettingsAdd the following to INSTALLED_APPS'znbstatic.apps.ZnbStaticConfig'Make sure these two are also installed.'storages''django.contrib.staticfiles'Add the znbstatic.context_processors.static_urls context processor to the corresponding template engine.Update or insert the following attributes.# if hosting the static files locally. # STATICFILES_STORAGE = ‘znbstatic.storage.VersionedStaticFilesStorage’ # STATIC_URL = ‘/static/’# use the following if using Amazon S3 STATICFILES_STORAGE = ‘znbstatic.storage.VersionedS3StaticFilesStorage’AWS_ACCESS_KEY_ID = ‘your-access-key-id’ AWS_SECRET_ACCESS_KEY = ‘your-secret-access-key’AWS_STORAGE_STATIC_BUCKET_NAME = ‘static.example.com’# where is this used?
AWS_S3_HOST = ‘s3.amazonaws.com’S3_USE_SIGV4 = True AWS_QUERYSTRING_AUTH = False AWS_DEFAULT_ACL = ‘public-read’ STATIC_URL = ‘https://s3.amazonaws.com/%s/’ % AWS_STORAGE_STATIC_BUCKET_NAMEZNBSTATIC_VERSION = ‘0.1’Amazon S3Some notes to use S3 for storing Django files.Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.More onS3 access permissions.Option 1 (preferred): Resource-based policy.A bucket configured to allow public read access and full control by an IAM user that will be used from Django.Create an IAM user. Write down the arn and user credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).Don’t worry about adding a user policy as you will be using a bucket policy to refer to this user by its arn.Create an S3 bucket at url-of-s3-bucket.Assign it the following CORS configuration in the permissions tab.<CORSConfigurationxmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedOrigin>*</AllowedOrigin><AllowedMethod>GET</AllowedMethod><MaxAgeSeconds>3000</MaxAgeSeconds><AllowedHeader>Authorization</AllowedHeader></CORSRule></CORSConfiguration>Go to permissions, public access settings for the bucket and set these options to false or you won’t be able to use * as Principal in the bucket policy:BlocknewpublicACLsanduploadingpublicobjects(Recommended)RemovepublicaccessgrantedthroughpublicACLs(Recommended)Blocknewpublicbucketpolicies(Recommended)Blockpublicandcross-accountaccessifbuckethaspublicpolicies(Recommended)and the following bucket policy (use the corresponding arn for the bucket and for the IAM user that will have full
control).{"Version":"2012-10-17","Id":"name-of-bucket","Statement":[{"Sid":"PublicReadForGetBucketObjects","Effect":"Allow","Principal":"*","Action":"s3:GetObject","Resource":"arn:aws:s3:::name-of-bucket/*"},{"Sid":"FullControlForBucketObjects","Effect":"Allow","Principal":{"AWS":"arn:aws:iam::364908532015:user/name-of-user"},"Action":"s3:*","Resource":["arn:aws:s3:::name-of-bucket","arn:aws:s3:::name-of-bucket/*"]}]}Option 2: user policy.A user configured to control a specific bucket.Create an S3 bucket at url-of-s3-bucket.Assign it the following CORS configuration in the permissions tab.<?xmlversion="1.0"encoding="UTF-8"?><CORSConfigurationxmlns="http://s3.amazonaws.com/doc/2006-03-01/"><CORSRule><AllowedOrigin>*</AllowedOrigin><AllowedMethod>GET</AllowedMethod><MaxAgeSeconds>3000</MaxAgeSeconds><AllowedHeader>Authorization</AllowedHeader></CORSRule></CORSConfiguration>Create a user in IAM and assign it to this policy.{"Version":"2012-10-17","Statement":[{"Sid":"Stmt1394043345000","Effect":"Allow","Action":["s3:*"],"Resource":["arn:aws:s3:::url-of-s3-bucket/*"]}]}Then create the user credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) to connect from Django.
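The cache-busting behind ZNBSTATIC_VERSION boils down to appending a version query parameter to each static URL, as in the `?v=1.2` example above. A minimal sketch of that idea; the helper name `versioned_url` is ours, not part of znbstatic's API:

```python
def versioned_url(path: str, version: str) -> str:
    """Append the deployment version so browsers re-fetch changed assets."""
    separator = "&" if "?" in path else "?"
    return f"{path}{separator}v={version}"

print(versioned_url("https://example.com/static/css/styles.css", "1.2"))
# https://example.com/static/css/styles.css?v=1.2
```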
znck
UNKNOWN
znc-web-logs
No description available on PyPI.
zndraw
ZnDrawInstall viapip install zndraw. If you have pywebview installed (pip install pywebview), ZnDraw will open in a dedicated window.[!IMPORTANT] ZnDraw has undergone a major change with version 0.3.0. The current version is not fully compatible with Windows. We are investigating solutions to make it work again. Furthermore, if you find ZnDraw 0.3.0 slower at times, this issue will also be mitigated in future releases.CLIYou can use ZnDraw to view a file using the CLIzndraw traj.xyz. Supported file formats include everything thatase.iocan read and additionallyh5files in the H5MD standard.If you want to view the frames while they are added to the scene you can usezndraw -mp traj.xyz. Seezndraw --helpfor more CLI options.PythonZnDraw provides a Python interface. Thezndraw.ZnDrawobject offersappend,extendas well as assignment operations. More information is available in the example notebook.fromzndrawimportZnDrawimportasevis=ZnDraw()vis.socket.sleep(2)# give it some time to fully connectvis[0]=ase.Atoms("H2O",positions=[[0.75,-0.75,0],[0.75,0.75,0],[0,0,0]])ZnDraw also provides an interface to the Pythonlogginglibrary, including support for formatters and different logging levels.importlogginglog=logging.getLogger(__name__)log.addHandler(vis.get_logging_handler())log.critical("Critical Message")ModifierYou can register modifiers to change the scene via the interactions menu.importtypingastfromzndrawimportZnDrawfromzndraw.modifyimportUpdateSceneimportasefromase.buildimportmoleculevis=ZnDraw()classMyModifier(UpdateScene):discriminator:t.Literal["MyModifier"]="MyModifier"defrun(self,vis:ZnDraw,**kwargs)->None:vis.append(molecule("H2O"))vis.register_modifier(MyModifier,default=True,run_kwargs={})User InterfaceDevelopmentZnDraw is developed usinghttps://python-poetry.org/. Furthermore, the javascript packages have to be installed usinghttps://www.npmjs.com/.cdzndraw/static/ npminstall
znester
UNKNOWN
znester1026
No description available on PyPI.
znetwork
znetworkZen network library.
znflow
ZnFlowTheZnFlowpackage provides a basic structure for building computational graphs based on functions or classes. It is designed as a lightweight abstraction layer tolearn graph computing.build your own packages on top of it.InstallationpipinstallznflowUsageConnecting FunctionsWith ZnFlow you can connect functions to each other by using the@nodifydecorator. Inside theznflow.DiGraphthe decorator will return aFunctionFutureobject that can be used to connect the function to other nodes. TheFunctionFutureobject will also be used to retrieve the result of the function. Outside theznflow.DiGraphthe function behaves as a normal [email protected]_mean(x,y):return(x+y)/2print(compute_mean(2,8))# >>> 5withznflow.DiGraph()asgraph:mean=compute_mean(2,8)graph.run()print(mean.result)# >>> 5withznflow.DiGraph()asgraph:n1=compute_mean(2,8)n2=compute_mean(13,7)n3=compute_mean(n1,n2)graph.run()print(n3.result)# >>> 7.5Connecting ClassesIt is also possible to connect classes. They can be connected either directly or via class attributes. This is possible by returningznflow.Connectionsinside theznflow.DiGraphcontext manager. Outside theznflow.DiGraphthe class behaves as a normal class.In the following example we use a dataclass, but it works with all Python classes that inherit fromznflow.Node.importznflowimportdataclasses@znflow.nodifydefcompute_mean(x,y):return(x+y)/[email protected](znflow.Node):x:floaty:floatresults:float=Nonedefrun(self):self.results=(self.x+self.y)/2withznflow.DiGraph()asgraph:n1=ComputeMean(2,8)n2=compute_mean(13,7)# connecting classes and functions to a Noden3=ComputeMean(n1.results,n2)graph.run()print(n3.results)# >>> 7.5Dask SupportZnFlow comes with support forDaskto run your graph:in parallel.through e.g. SLURM (seehttps://jobqueue.dask.org/en/latest/api.html).with a nice GUI to track progress.All you need to do is install ZnFlow with Daskpip install znflow[dask]. We can then extend the example from above. This will runn1andn2in parallel. 
You can investigate the graph on the Dask dashboard (typicallyhttp://127.0.0.1:8787/graphor via the client object in Jupyter.)importznflowimportdataclassesfromdask.distributedimportClient@znflow.nodifydefcompute_mean(x,y):return(x+y)/[email protected](znflow.Node):x:floaty:floatresults:float=Nonedefrun(self):self.results=(self.x+self.y)/2withznflow.DiGraph()asgraph:n1=ComputeMean(2,8)n2=compute_mean(13,7)# connecting classes and functions to a Noden3=ComputeMean(n1.results,n2)client=Client()deployment=znflow.deployment.Deployment(graph=graph,client=client)deployment.submit_graph()n3=deployment.get_results(n3)print(n3)# >>> ComputeMean(x=5.0, y=10.0, results=7.5)We need to get the updated instance from the Dask worker viaDeployment.get_results. Due to the way Dask works, an inplace update is not possible. To retrieve the full graph, you can useDeployment.get_results(graph.nodes)instead.Working with listsZnFlow supports some special features for working with lists. In the following example we want tocombinetwo [email protected](size:int)->list:returnlist(range(size))print(arange(2)+arange(3))>>>[0,1,0,1,2]withznflow.DiGraph()asgraph:lst=arange(2)+arange(3)graph.run()print(lst.result)>>>[0,1,0,1,2]This functionality is restricted to lists. There are some further features that allow combiningdata: list[list]by either usingdata: list = znflow.combine(data)which has an optionalattribute=Noneargument to be used in the case of classes or you can simply usedata: list = sum(data, []).Attributes AccessInside thewith znflow.DiGraph()context manager, accessing class attributes yieldsznflow.Connectorobjects. Sometimes, it may be required to obtain the actual attribute value instead of aznflow.Connectorobject. 
It is not recommended to run class methods inside thewith znflow.DiGraph()context manager since it should be exclusively used for building the graph and not for actual computation.In the case of properties or other descriptor-based attributes, it might be necessary to access the actual attribute value. This can be achieved using theznflow.get_attributemethod, which supports all features fromgetattrand can be imported as such:fromznflowimportget_attributeasgetattrHere's an example of how to useznflow.get_attribute:importznflowclassPOW2(znflow.Node):"""Compute the square of x."""x_factor:float=0.5results:float=None_x:float=None@propertydefx(self):[email protected](self,value):# using "self._x = value * self.x_factor" inside "znflow.DiGraph()" would run# "value * Connector(self, "x_factor")" which is not possible (TypeError)# therefore we use znflow.get_attribute.self._x=value*znflow.get_attribute(self,"x_factor")defrun(self):self.results=self.x**2withznflow.DiGraph()asgraph:n1=POW2()n1.x=4.0graph.run()assertn1.results==4.0Instead, you can also use theznflow.disable_graphdecorator / context manager to disable the graph for a specific block of code or theznflow.Propertyas a drop-in replacement forproperty.GroupsIt is possible to create groups ofznflow.nodifyorznflow.Nodesindependent from the graph structure. To create a group you can usewith graph.group(<name>). To access the group members, usegraph.get_group(<name>) -> [email protected]_mean(x,y):return(x+y)/2graph=znflow.DiGraph()withgraph.group("grp1"):n1=compute_mean(2,4)assertn1.uuidingraph.get_group("grp1")Supported FrameworksZnFlow includes tests to ensure compatibility with:"Plain classes"dataclassesZnInitattrsIt is currentlynotcompatible with pydantic. I don't know what pydantic does internally and wasn't able to find a workaround.
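The deferred-execution idea behind ZnFlow (collect calls as futures first, compute on `run()`) can be illustrated without the library in a few lines of plain Python. This is a conceptual sketch only, not ZnFlow's implementation; it reuses the `compute_mean` example from above:

```python
class FunctionFuture:
    """Records a call; `result` is only filled in when the graph runs."""
    def __init__(self, func, args):
        self.func, self.args, self.result = func, args, None

class Graph:
    def __init__(self):
        self.nodes = []

    def add(self, func, *args):
        future = FunctionFuture(func, args)
        self.nodes.append(future)
        return future

    def run(self):
        # Insertion order doubles as a valid topological order in this sketch.
        for node in self.nodes:
            resolved = [a.result if isinstance(a, FunctionFuture) else a
                        for a in node.args]
            node.result = node.func(*resolved)

def compute_mean(x, y):
    return (x + y) / 2

graph = Graph()
n1 = graph.add(compute_mean, 2, 8)
n2 = graph.add(compute_mean, 13, 7)
n3 = graph.add(compute_mean, n1, n2)  # depends on the other two futures
graph.run()
print(n3.result)  # 7.5
```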
znframe
ZnFrame - ASE-like Interface based on dataclassesThis package is designed for lightweight applications that require a structure for managing atomic structures. Its primary focus is on the conversion to / from JSON, to send data around easily.fromznframeimportFramefromase.buildimportmoleculeframe=Frame.from_atoms(molecule("NH3"))print(frame.to_json())Installationpip install znframe
znfrostock
znfrostockStock package by 2nfro.com
znfthink
No description available on PyPI.
znh5md
ZnH5MD - High Performance Interface for H5MD TrajectoriesZnH5MD allows easy access to simulation results from H5MD trajectories. It provides a Python interface and can convert existing data to H5MD files as well as export to other formats.pip install znh5md["dask"]ExampleIn the following example we investigate an H5MD dump from LAMMPS with 1000 atoms and 201 configurations:importznh5mdtraj=znh5md.DaskH5MD("file.h5",time_chunk_size=500,species_chunk_size=100)print(traj.file.time_dependent_groups)# ['edges', 'force', 'image', 'position', 'species', 'velocity']print(traj.force)# DaskDataSet(value=dask.array<array, shape=(201, 1000, 3), ...)print(traj.velocity.slice_by_species(species=1))# DaskDataSet(value=dask.array<reshape, shape=(201, 500, 3), ...)print(traj.position.value)# dask.array<array, shape=(201, 1000, 3), dtype=float64, chunksize=(100, 500, 3), ...># You can iterate through the dataforitemintraj.position.batch(size=27,axis=0):forxinitem.batch(size=17,axis=1):print(x.value.compute())ASE AtomsYou can use ZnH5MD to store ASE Atoms objects in the H5MD format.ZnH5MD does not support all features of ASE Atoms objects. It is important to note that unsupported parts are silently ignored and no error is raised.The ASEH5MD interface will not provide any time and step information.If you have a list of Atoms with different PBC values, you can useznh5md.io.AtomsReader(atoms, use_pbc_group=True). This will create apbcgroup inbox/that also containsstepandtime. This is not an official H5MD specification so it can cause issues with other tools.
If you don't specify this, the pbc of the first atoms in the list will be applied.importznh5mdimportaseatoms:list[ase.Atoms]db=znh5md.io.DataWriter(filename="db.h5")db.initialize_database_groups()db.add(znh5md.io.AtomsReader(atoms))# or znh5md.io.ChemfilesReaderdata=znh5md.ASEH5MD("db.h5")data.get_atoms_list()==atomsCLIZnH5MD provides a small set of CLI tools:znh5md view <file.h5>to view the File usingase.visualizeznh5md export <file.h5> <file.xyz>to export the file to.xyzor any other supported file formatznh5md convert <file.xyz> <file.h5>to save afile.xyzasfile.h5in the H5MD standard.More examplesA complete documentation is still work in progress. In the meantime, I can recommend looking at the tests, especiallytest_znh5md.pyto learn more about slicing and batching.
znhello
No description available on PyPI.
zninit
ZnInit - Automatic Generation of__init__based on DescriptorsThis package provides a base class fordataclasslike structures with the addition of usingDescriptors. The main functionality is the automatic generation of a keyword-only__init__based on selected descriptors. The descriptors can e.g. overwrite__set__or__get__or have custom metadata associated with them. TheZnInitpackage is used byZnTrackto enable lazy loading data from files as well as distinguishing between different types of descriptors such aszn.paramsorzn.outputs. An example can be found in theexamplesdirectory.ExampleThe most simple use case is a replication of a dataclass like structure.fromzninitimportZnInit,DescriptorclassHuman(ZnInit):name:str=Descriptor()language:str=Descriptor("EN")# This will generate the following init:def__init__(self,*,name,language="EN"):self.name=nameself.language=languagefabian=Human(name="Fabian")# orfabian=Human(name="Fabian",language="DE")The benefit of usingZnInitcomes with using descriptors. You can subclass thezninit.Descriptorclass and only add certain kwargs to the__init__defined ininit_descriptors: list. Furthermore, apost_initmethod is available to run code immediately after initializing the class.fromzninitimportZnInit,DescriptorclassInput(Descriptor):"""A Parameter"""classMetric(Descriptor):"""An Output"""classHuman(ZnInit):_init_descriptors_=[Input]# only add Input descriptors to the __init__name:str=Input()language:str=Input("DE")date:str=Metric()# will not appear in the __init__def_post_init_(self):self.date="2022-09-16"julian=Human(name="Julian")print(julian)# Human(language='DE', name='Julian')print(julian.date)# 2022-09-16print(Input.get_dict(julian))# {"name": "Julian", "language": "DE"}One benefit ofZnInitis that it also allows for inheritance.fromzninitimportZnInit,DescriptorclassAnimal(ZnInit):age:int=Descriptor()classCat(Animal):name:str=Descriptor()billy=Cat(age=4,name="Billy")
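The core mechanism, scanning a class for descriptor instances and synthesizing a keyword-only `__init__` from them, can be sketched in plain Python. This is a deliberate simplification of what ZnInit does (no defaults, no inheritance handling), not its actual code:

```python
class Descriptor:
    """A minimal data descriptor storing values in the instance dict."""
    def __set_name__(self, owner, name):
        self.name = name

    def __set__(self, instance, value):
        instance.__dict__[self.name] = value

    def __get__(self, instance, owner=None):
        return self if instance is None else instance.__dict__[self.name]

def auto_init(cls):
    """Generate a keyword-only __init__ from the class's Descriptor attributes."""
    fields = [k for k, v in vars(cls).items() if isinstance(v, Descriptor)]

    def __init__(self, **kwargs):
        for field in fields:  # every descriptor becomes a required keyword
            setattr(self, field, kwargs[field])

    cls.__init__ = __init__
    return cls

@auto_init
class Human:
    name = Descriptor()
    language = Descriptor()

fabian = Human(name="Fabian", language="EN")
print(fabian.name, fabian.language)  # Fabian EN
```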
znipy
ZnIPy - Easy imports from Jupyter NotebooksSeeImporting Jupyter Notebooks as Modulesfor more information.fromznipyimportNotebookLoadermodule=NotebookLoader().load_module("JupyterNotebook.ipynb")hello_world=module.HelloWorld()or with direct importsimportznipyznipy.register()fromJupyterNotebookimportHelloWorldhello_world=HelloWorld()
znjson
ZnJSONPackage to Encode/Decode some common file formats to jsonAvailable viapip install znjsonIn comparison topicklethis allows having readable json files combined with some serialized data.Exampleimportnumpyasnpimportjsonimportznjsondata=json.dumps(obj={"data_np":np.arange(2),"data":[xforxinrange(10)]},cls=znjson.ZnEncoder,indent=4)_=json.loads(data,cls=znjson.ZnDecoder)The resulting*.jsonfile is partially readable and looks like this:{"data_np":{"_type":"np.ndarray_small","value":[0,1]},"data":[0,1,2,3,4]}Custom ConverterZnJSON allows you to easily add custom converters. Let's write a serializer fordatetime.datetime.fromznjsonimportConverterBasefromdatetimeimportdatetimeclassDatetimeConverter(ConverterBase):"""Encode/Decode datetime objectsAttributes----------level: intPriority of this converter over others.A higher level will be used first, if thereare multiple converters availablerepresentation: strAn unique identifier for this converter.instance:Used to select the correct converter.This should fulfill isinstance(other, self.instance)or __eq__ should be overwritten."""level=100representation="datetime"instance=datetimedefencode(self,obj:datetime)->str:"""Convert the datetime object to str / isoformat"""returnobj.isoformat()defdecode(self,value:str)->datetime:"""Create datetime object from str / isoformat"""returndatetime.fromisoformat(value)This allows us to use this new serializer:znjson.config.register(DatetimeConverter)# we need to register the new converter firstjson_string=json.dumps(dt,cls=znjson.ZnEncoder,indent=4)json.loads(json_string,cls=znjson.ZnDecoder)and will result in{"_type":"datetime","value":"2022-03-11T09:47:35.280331"}If you don't want to register your converter to be used everywhere, simply use:json_string=json.dumps(dt,cls=znjson.ZnEncoder.from_converters(DatetimeConverter))
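The `_type`/`value` wrapping shown above rests on plain stdlib machinery. This standalone sketch reproduces the idea with `json`'s `default` and `object_hook` hooks for `complex` numbers, an example type of our own choosing, not one of znjson's built-in converters:

```python
import json

def encode(obj):
    """Called by json.dumps for objects it cannot serialize natively."""
    if isinstance(obj, complex):
        return {"_type": "complex", "value": [obj.real, obj.imag]}
    raise TypeError(f"cannot serialize {type(obj)!r}")

def decode(dct):
    """Called by json.loads for every decoded dict; unwrap tagged values."""
    if dct.get("_type") == "complex":
        return complex(*dct["value"])
    return dct

data = json.dumps({"z": 1 + 2j}, default=encode)
print(data)  # {"z": {"_type": "complex", "value": [1.0, 2.0]}}
restored = json.loads(data, object_hook=decode)
print(restored)  # {'z': (1+2j)}
```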
znlib
znlibThis package provides you with a CLI to list your installed zincware libraries.When installing viapip install znlib[zntrack]your output should look something like:>>> znlib Available zincware packages: ✓ znlib (0.1.0) ✓ zntrack (0.4.3) ✗ mdsuite ✓ znjson (0.2.1) ✓ zninit (0.1.1) ✓ dot4dict (0.1.1) ✗ znipy ✗ supercharge ✗ znvis ✗ symdetFurthermore,znlibprovides you with some exampleZnTrackNodes.fromznlib.examplesimportMonteCarloPiEstimatormcpi=MonteCarloPiEstimator(n_points=1000).write_graph(run=True)print(mcpi.load().estimate)>>>3.128The idea of theznlibpackage is to provide a collection ofZnTrackNodes from all different fields of research. Every contribution is very welcome. For new Nodes:Fork this repository.Create a file under the directoryznlib/examplesMake a Pull request.
znlp
znlp: natural language processing for Chinese, covering word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, new word discovery, keyword and key-phrase extraction, automatic summarization, text classification and clustering, and pinyin plus simplified/traditional conversion.
znl-tools
Failed to fetch description. HTTP Status Code: 404
znop
Znop
Library that solves discrete math operations in the group Zn, provided both as a calculator program and as a third-party library.
The group Zn consists of the elements {0, 1, 2, ..., n−1} with addition mod n as the operation. You can also multiply elements of Zn, but you do not obtain a group: the element 0 does not have a multiplicative inverse, for instance. However, if you confine your attention to the units in Zn — the elements which have multiplicative inverses — you do get a group under multiplication mod n. It is denoted Un, and is called the group of units in Zn.
Program Usage
Describes how to install the calculator and its commands.
Note: this program will always create a znop_db.json file if it doesn't exist in the directory where you execute the program; this file saves your last ~30 commands and the Zn group (default n=10) set in your program.
Install from source: make sure to have Python > v3.6 installed.
$ git clone https://github.com/paaksing/Znop.git
$ cd Znop
$ python setup.py install
$ znop
Install using pip: make sure to have Python > v3.6 and pip installed.
$ pip install znop
$ znop
Install as executable: find the latest executable in this repository's Releases, download it to your local machine, and execute it.
Commands
All payload passed to the commands should strictly match this regex: [a-zA-Z0-9\+\-\*\(\)\^]
set n=<setnumber>    Set the set number of Zn
reduce <expression>  Reduce a Zn expression or equation
solve <equation>     Solve a one-dimensional Zn equation
help                 Usage of this program
quit                 Quit this program
Example
(n=10) reduce (3x*9)+78-4x
3x+8
(n=10) set n=6
OK
(n=6) solve x^2+3x+2=0
x∈{1,2,4,5}
(n=6) quit
Library Usage
Describes the usage and API of this library.
Requirements and installation: Python 3.6 (due to the use of f-strings). Install using pip install znop.
API Documentation
This library consists of the following modules: core and exceptions.
All objects in this library can be "copied" or "reinstantiated" by doingeval(repr(obj))where obj is anznopobject.str()will return the string representation of the object andrepr()will return the string representation of the object in python syntax.Import the object from the respective modules e.g.:from znop.core import ZnEquationznop.core.ZnTermRepresents a term in the group of ZnTerm__init__(n: int, raw: str): Create an instance of ZnTerm, arguments: n (set number), raw (raw string of term, e.g.'2x').__add__, __sub__, __mul__, __neg__, __eq__: This object supports [+,-,*] operations between ZnTerms, with the exception of multiplications that it can multiply a ZnExpression by doing distributive, it will always return a new ZnTerm. Supports-(negate) operation and returns a new ZnTerm. It also supports equality comparison==between ZnTerms.eval(values: Dict[str, int]): Evaluate the variables in the term, receives a mapping of variable name to value e.g.{'x': 6}, and return a new ZnTerm.znop.core.ZnExpression__init__(n: int, raw: str): Create an instance of ZnExpression, arguments: n (set number), raw (raw string of expression, e.g.'2x+x-3'). This expression is automatically reduced to its simplest form.__mul__, __eq__: This objects supports*between ZnExpressions and ZnTerms by doing distributive, It also supports equality comparison==between ZnExpressions.reduce(): Reduce the expression to the simplest form, this function is automatically called on instantiation.eval(values: Dict[str, int]): Evaluate the variables in the expression, receives a mapping of variable name to value e.g.{'x': 6}, and return a new ZnExpression.znop.core.ZnEquation__init__(n: int, raw: str): Create an instance of ZnEquation, arguments: n (set number), raw (raw string of equation, e.g.'2x^2+3=0'). 
This equation is automatically reduced to its simplest form.reduce(): Reduce the equation to the simplest form, this function is automatically called on instantiation.solve(): Solve the equation by returning a list of solutions (ints). If the equation cannot be solved, thenResolveErrorwill be raised.znop.exceptions.ZSetErrorOperation between ZnInt of different Z set.znop.exceptions.ZVarErrorOperation between ZnInt of different variables outside products.znop.exceptions.ParseErrorIndicates a parsing error, reason will be stated whenParseErroris thrown.znop.exceptions.ResolveErrorCould not resolve equation.
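The "group of units" Un described above is easy to compute directly: the units of Zn are exactly the residues coprime to n. A standalone illustration of the concept, not part of znop's API:

```python
from math import gcd

def units(n: int) -> list[int]:
    """Elements of Zn with a multiplicative inverse mod n, i.e. gcd(k, n) == 1."""
    return [k for k in range(n) if gcd(k, n) == 1]

print(units(10))  # [1, 3, 7, 9]
# Closure check: the product of any two units mod n is again a unit.
assert all((a * b) % 10 in units(10) for a in units(10) for b in units(10))
```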
zn-operation-table
Zn equivalence classes operation table generator
Simple package to obtain Zn's equivalence class sum or product operation tables.
The build_table method takes a positive integer (n) and 'sum' or 'prod' as first and second arguments. It will return an n×n matrix with the result of operating every item with each other.
Installation:
pip install zn_operation_table
Example
build_table(3, 'sum')
Will return:
[[0,1,2], [1,2,0], [2,0,1]]
build_table function
build_table(n, operation, headers, inversibles)
n: positive integer.
operation: 'sum' for class sum and 'prod' for class product.
headers: for row and column headers.
inversibles: to use the given set's inversibles for the given operation.
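The table construction itself is one modular operation per cell: entry (i, j) is (i op j) mod n. This sketch is equivalent in spirit to build_table (without the headers/inversibles options), not the package's actual source:

```python
def build_table(n: int, operation: str) -> list[list[int]]:
    """Return the n x n operation table of Zn for 'sum' or 'prod'."""
    op = {"sum": lambda a, b: a + b, "prod": lambda a, b: a * b}[operation]
    return [[op(i, j) % n for j in range(n)] for i in range(n)]

print(build_table(3, "sum"))   # [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
print(build_table(3, "prod"))  # [[0, 0, 0], [0, 1, 2], [0, 2, 1]]
```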
znotify
notify-py-sdkSend notifications toNotifyInstallationpipinstallznotifyUsagefromznotifyimportClientclient=Client.create("user_id")client.send("Hello World")DevelopmentRun testspython-munittestdiscover
znp
znp stands for Zathura Next Or Previous. You can also use znp to add a given file to an instance ofzathura.UsageNext or PreviousThe main goal of znp is to provide an easy way to go to the next or previous file from within zathura. As of yet, this functionality is not a part of zathura itself. However, after installingznpyou can add the following to yourzathurarcto setNandPto go to the next or previous file:map N exec "znp -n '$FILE'" map P exec "znp -p '$FILE'" " Please note the ^ ^ apostrophes around $FILE. " These are necessary for files with whitespaceNote that if your system does not use extended window manager hints (ewmh), or you do not have the ewmh python package installed, then this command may fail if you have two instances of zathura open in the same directory. This is not something that I have a reasonable fix for and there is no way to reliably determine the instance issuing the next or previous command. The only way I can think of fixing this would require patching zathura to include expansion of a$PIDvariable from the exec function and include that in the zathurarc command. However, I am not a programmer so reviewing the code base and getting this functionality added may take me some time.Adding filesznp can act as a zathura wrapper and add a given file to an existing instance:znpfile.pdf znp/path/to/file.pdfYou can give znp a relative or absolute path to a file. znp will insert the current working directory to make a relative path absolute. No variable expansion will be performed by znp as it expects$HOMEand such to get expanded by the shell calling znp.The above works best when only one instance of zathura exists. However, if multiple exist then znp will use the user definedprompt_cmdset in$XDG_CONFIG_HOME/znp/znp.confto present a list of zathura instances to open the file in. The default isfzfbut you may usedmenuorrofi.
To avoid any prompting you can pass the desired pid to use with the -P flag:

znp -P 123456 file.pdf
znp -P 123456 /path/to/file.pdf

This would require a bit more work on your part, but it may be useful in scripting.

Query

Speaking of scripting, I added the -q, --query flag for my personal scripting purposes. The --query flag will take the FILE argument given to znp and search all zathura pids for the first (see the note in the next or previous section) one that has that file open, and return its pid. I make use of this to track my last read pdf, epub, cbz/r, zip, etc., using the returned pid to kill the assumed instance issuing the command. Basically a session tracker, so to speak. Maybe there are other purposes for this, or maybe the zathura.py module would be useful as a standalone module for interacting with zathura via dbus. No clue, let me know.

User config

You can set the default command prompt in $XDG_CONFIG_HOME/znp/znp.conf like so:

prompt_cmd = dmenu

Note there are no quotes. You can also skip the spaces if you like.

If you have any args/flags you want to use with your command prompt, add them like so:

prompt_args = -l 20 -i

Simply provide the args/flags as you would normally when using your chosen prompt_cmd. Note: if your prompt_args contain an = sign, then please escape it with a backslash, otherwise you will get an error.

Installation

znp is available via pypi and can be installed via pip in the usual way:

pip install znp

Use the following if you are installing on a system running X and using ewmh:

pip install znp[x11]

Ensure ~/.local/bin is in your $PATH, otherwise znp will not be callable from zathura unless you give the full path to znp.

Dependencies

python-magic - used to detect the file type of the next file, to prevent zathura from opening an unreadable file, e.g. log files, markdown files, etc.
psutil - used to get zathura pids.

Optional Dependency

ewmh - used to get the pid of the window calling znp.
This is a bit hacky, but it does allow the core functionality (opening the next or previous file) to work without issue. Provided under the [x11] extra.
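znp's actual selection logic lives in its own source; purely as an illustration of the "next or previous file" idea described above, here is a minimal sketch: sort the readable siblings of the current file and pick a neighbour. The function name, the extension filter, and the list-based stand-in for a directory listing are all assumptions, not znp's API (znp itself uses python-magic, not extensions, to decide readability).

```python
# Hypothetical sketch of znp's "next or previous" idea; not znp's actual code.
from pathlib import PurePosixPath

READABLE = {".pdf", ".epub", ".djvu", ".cbz", ".cbr"}  # assumed extension list

def next_file(current, siblings, step=1):
    """Return the next (step=1) or previous (step=-1) readable sibling.

    `siblings` stands in for a directory listing; znp itself consults
    python-magic rather than extensions to skip unreadable files.
    """
    readable = sorted(
        name for name in siblings
        if PurePosixPath(name).suffix.lower() in READABLE
    )
    try:
        idx = readable.index(current)
    except ValueError:
        return None
    new = idx + step
    return readable[new] if 0 <= new < len(readable) else None

print(next_file("b.pdf", ["a.pdf", "b.pdf", "notes.txt", "c.epub"]))  # c.epub
```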
znslice
ZnSlice

A lightweight library (without external dependencies) for:

- advanced slicing
- cached __getitem__(self, item)
- lazy-loaded __getitem__(self, item)

Installation

pip install znslice

Usage

Advanced Slicing and Cache

Convert a list to znslice.LazySequence to allow advanced slicing.

import znslice

lst = znslice.LazySequence.from_obj([1, 2, 3], indices=[0, 2])
print(lst[[0, 1]].tolist())  # [1, 3]

import znslice
import collections.abc

class MapList(collections.abc.Sequence):
    def __init__(self, data, func):
        self.data = data
        self.func = func

    @znslice.znslice
    def __getitem__(self, item: int):
        print(f"Loading item = {item}")
        return self.func(self.data[item])

    def __len__(self):
        return len(self.data)

data = MapList([0, 1, 2, 3, 4], lambda x: x ** 2)
assert data[0] == 0
assert data[[1, 2, 3]] == [1, 4, 9]
# calling data[:] will now only compute data[4] and load the remaining data from cache
assert data[:] == [0, 1, 4, 9, 16]

Lazy Database Loading

You can use znslice to lazy load data from a database. This is useful if you have a large database and only want to load a small subset of the data.

In the following we will use the ase package to generate Atoms objects stored in a database and load them lazily.

import ase.io
import ase.db
import znslice
import tqdm
import random

# create a database
with ase.db.connect("data.db", append=False) as db:
    for _ in range(10):
        atoms = ase.Atoms('CO', positions=[(0, 0, 0), (0, 0, random.random())])
        db.write(atoms, group="data")

# load the database lazily
class ReadASEDB:
    def __init__(self, file):
        self.file = file

    @znslice.znslice(
        advanced_slicing=True,  # this getitem supports advanced slicing
        lazy=True  # we want to lazy load the data
    )
    def __getitem__(self, item):
        data = []
        with ase.db.connect(self.file) as database:
            if isinstance(item, int):
                print(f"get {item = }")
                return database[item + 1].toatoms()
            for idx in tqdm.tqdm(item):
                data.append(database[idx + 1].toatoms())
        return data

    def __len__(self):
        with ase.db.connect(self.file) as db:
            return len(db)

db = ReadASEDB("data.db")

data = db[::2]  # LazySequence([<__main__.ReadASEDB>], [[0, 2, 4, 6, 8]])
data.tolist()  # list[ase.Atoms]

# supports addition, advanced slicing, etc.
data = db[::2] + db[1::2]
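The caching behaviour znslice gives __getitem__ can be illustrated without the library itself. The decorator below is a simplified sketch of the idea only - memoise per-index loads so that slices compute nothing already cached - and is not znslice's implementation, which additionally returns LazySequence objects and supports advanced (list-based) slicing.

```python
import functools

def cache_getitem(method):
    """Simplified sketch of per-index caching for __getitem__.

    Not znslice's implementation - just the core memoisation idea.
    """
    @functools.wraps(method)
    def wrapper(self, item):
        cache = self.__dict__.setdefault("_item_cache", {})
        if isinstance(item, slice):
            indices = range(*item.indices(len(self)))
            return [wrapper(self, i) for i in indices]
        if item not in cache:
            cache[item] = method(self, item)  # computed only once
        return cache[item]
    return wrapper

class Squares:
    def __init__(self, data):
        self.data = data
        self.computed = 0  # count the real computations

    @cache_getitem
    def __getitem__(self, item):
        self.computed += 1
        return self.data[item] ** 2

    def __len__(self):
        return len(self.data)

s = Squares([0, 1, 2, 3, 4])
assert s[1] == 1 and s[3] == 9
assert s[:] == [0, 1, 4, 9, 16]
assert s.computed == 5  # the full slice reused the cached items 1 and 3
```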
z_number
UNKNOWN
znu-nlp
No description available on PyPI.
zny
README

zny's test of pypi
zny-yespower-0-5
No description available on PyPI.
znz-spider
No description available on PyPI.
zo
zo
zobepy
zobepy

zobepy - zobe's unsorted library. This is an unsorted library made by zobe.

usage

pip:

pip install zobepy

import:

import zobepy

and use:

bsf = zobepy.BinarySizeFormatter(3000)
print(bsf.get())

test

prerequisites:

pip install -e '.[dev]'

unittest:

python -m unittest discover

pytest:

pytest

tox:

tox

watch htmlcov/index.html for coverage after tox

build document

prerequisites:

pip install -e '.[doc]'

make:

cd docs
make html
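The README does not show what BinarySizeFormatter(3000).get() actually returns. As a rough illustration only, a 1024-based human-readable size formatter typically looks like the sketch below; the class name mirrors zobepy's usage, but the implementation and the exact output format are assumptions, not zobepy's code.

```python
class BinarySizeFormatter:
    """Hypothetical sketch of a binary (1024-based) size formatter.

    Mirrors the README's usage; the formatting rules are assumed.
    """
    _UNITS = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"]

    def __init__(self, size):
        self.size = size

    def get(self):
        value = float(self.size)
        for unit in self._UNITS:
            # stop once the value fits the unit, or we run out of units
            if value < 1024 or unit == self._UNITS[-1]:
                if unit == "B":
                    return f"{int(value)} {unit}"
                return f"{value:.1f} {unit}"
            value /= 1024

bsf = BinarySizeFormatter(3000)
print(bsf.get())  # 3000 / 1024 = 2.9296875 -> "2.9 KiB"
```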
zocalo
M. Gerstel, A. Ashton, R.J. Gildea, K. Levik, and G. Winter, "Data Analysis Infrastructure for Diamond Light Source Macromolecular & Chemical Crystallography and Beyond", in Proc. ICALEPCS'19, New York, NY, USA, Oct. 2019, pp. 1031-1035.

Zocalo is an automated data processing system designed at Diamond Light Source. This repository contains infrastructure components for Zocalo.

The idea of Zocalo is a simple one - to build a messaging framework, where text-based messages are sent between parts of the system to coordinate data analysis. In the wider scope of things this also covers things like archiving, but generally it is handling everything that happens after data acquisition.

Zocalo as a wider whole is made up of two repositories (plus some private internal repositories when deployed at Diamond):

- DiamondLightSource/python-zocalo - Infrastructure components for automated data processing, developed by Diamond Light Source. The package is available through PyPI and conda-forge.
- DiamondLightSource/python-workflows - Zocalo is built on the workflows package. It shouldn't be necessary to interact too much with this package, as the details are abstracted by Zocalo. workflows controls the logic of how services connect to each other and what a service is, and actually sends the messages to a message broker. Currently this is an ActiveMQ broker (via STOMP), but support for a RabbitMQ broker (via pika) is being added. This is also available on PyPI and conda-forge.

As mentioned, Zocalo is currently built on top of ActiveMQ. ActiveMQ is an Apache project that provides a message broker server, acting as a central dispatch that allows various services to communicate. Messages are plaintext, but from the Zocalo point of view it's passing around python objects (json dictionaries). Every message sent has a destination to help the message broker route. Messages may either be sent to a specific queue or broadcast to multiple queues. These queues are subscribed to by the services that run in Zocalo.
In developing with Zocalo, you may have to interact with ActiveMQ or RabbitMQ, but it is unlikely that you will have to configure it.

Zocalo allows for the monitoring of jobs executing python-workflows services or recipe wrappers. The python-workflows package contains most of the infrastructure required for the jobs themselves, and more detailed documentation of its components can be found in the python-workflows GitHub repository and the Zocalo documentation.

Core Concepts

There are two kinds of task run in Zocalo: services and wrappers. A service should handle a discrete short-lived task, for example a data processing job on a small data packet (e.g. finding spots on a single image in an X-ray crystallography context), or inserting results into a database. In contrast, wrappers can be used for longer running tasks, for example running data processing programs such as xia2 or fast_ep.

A service starts in the background and waits for work. There are many services constantly running as part of normal Zocalo operation. In typical usage at Diamond there are ~100 services running at a time.

A wrapper, on the other hand, is only run when needed. Wrappers wrap something that is not necessarily aware of Zocalo - e.g. downstream processing software such as xia2 has no idea what Zocalo is, and shouldn't have to. A wrapper takes a message, converts it to the instantiation of a command line, runs the software - typically as a cluster job - then reformats the results into a message to send back to Zocalo. These processes have no idea what Zocalo is, but are being run by a script that handles the wrapping.

At Diamond, everything goes to one service to start with: the Dispatcher. This takes the initial request message and attaches useful information for the rest of Zocalo. The implementation of the Dispatcher at Diamond is environment specific and not public, but it does some things that would be useful for a similar service to do in other contexts.
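Since the Diamond Dispatcher is not public, here is only a generic sketch of the enrichment idea it describes: take the incoming request message, attach everything downstream services might need, and pass the enriched message on. The lookup table and every field name below are invented stand-ins, not the real Dispatcher's behaviour.

```python
# Illustrative sketch only: the real Diamond Dispatcher is private, and
# the database lookup and fields below are invented stand-ins.
FAKE_DATABASE = {  # stands in for a metadata database such as ISPyB
    1234: {"visit": "cm12345-1", "n_images": 3600, "sample": "lysozyme"},
}

def dispatch(message):
    """Attach database-derived metadata once, up front, so downstream
    services never have to query the database themselves."""
    dcid = message["dcid"]
    enriched = dict(message)  # leave the original request untouched
    enriched["metadata"] = FAKE_DATABASE.get(dcid, {})
    enriched["recipe_pointer"] = 1  # start of the processing recipe
    return enriched

msg = dispatch({"dcid": 1234, "recipe": "example-spotfinding"})
print(msg["metadata"]["n_images"])  # 3600
```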
At Diamond there is interaction with the ISPyB database, which stores information about what is run, metadata, how many images, sample type, etc. Data stored in the database influences what software we want to be running, and this information might need to be read from the database in many, many services. We obviously don't want to read the same thing from many clients and flood the database, and don't want the database to be a single point of failure. The Dispatcher front-loads all the database operations - it takes the data collection ID (DCID) and looks up in ISPyB all the information that could be needed for processing. In terms of movement through the system, it sits between the initial message and the services:

message -> Dispatcher -> [Services]

At the end of processing there might be information that needs to go back into the databases, for which Diamond has a special ISPyB service to do the writing. If the DB goes down, that is fine - things will queue up for the ISPyB service, get processed when the database becomes available again, and get written to the database when ready. This isolates us somewhat from intermittent failures.

The only public Zocalo service at present is Schlockmeister, a garbage collection service that removes jobs that have been requeued multiple times. Diamond operates a variety of internal Zocalo services which perform frequently required operations in a data analysis pipeline.

Working with Zocalo

Graylog is used to manage the logs produced by Zocalo.
Once Graylog and the message broker server are running, services and wrappers can be launched with Zocalo.

Zocalo provides the following command line tools:

- zocalo.go: trigger the processing of a recipe
- zocalo.wrap: run a command while exposing its status to Zocalo so that it can be tracked
- zocalo.service: start a new instance of a service
- zocalo.shutdown: shut down either specific instances of Zocalo services or all instances for a given type of service
- zocalo.queue_drain: drain one queue into another in a controlled manner

Services are available through zocalo.service if they are linked through the workflows.services entry point in setup.py. For example, to start a Schlockmeister service:

$ zocalo.service -s Schlockmeister

Q: How are services started?
A: Zocalo itself is agnostic on this point. Some of the services are self-propagating and employ simple scaling behaviour - in particular the per-image-analysis services. The services in general all run on cluster nodes, although this means that they can not be long lived - beyond a couple of hours there is a high risk of the service cluster jobs being terminated or pre-empted. This also helps encourage programming more robust services, since they could be killed at any time.

Q: So if a service is terminated in the middle of processing, will the message still get processed?
A: Yes, messages are handled in transactions - while a service is processing a message, it's marked as "in-progress" but isn't completely dropped. If the service doesn't process the message, or its connection to ActiveMQ gets dropped, then it gets requeued so that another instance of the service can pick it up.

Repeat Message Failure

How are repeat errors handled? This is a problem with the system - if e.g.
an image or a malformed message kills a service, then it will get requeued and will eventually kill all instances of the service running (which will get re-spawned, and then die, and so forth).

We have a special service that looks for repeat failures and moves them to a special "Dead Letter Queue". This service is called Schlockmeister, and is the only service at the time of writing that has migrated to the public Zocalo repository. This service looks inside the message that got sent, extracts some basic information from the message in as safe a way as possible, and repackages it to the DLQ with information on what it was working on, and the "history" of where the message chain has been routed.
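The public Schlockmeister code is the authoritative reference; purely as a sketch of the dead-lettering idea (not Schlockmeister's actual logic), a service can count how often a message has been redelivered and divert it to a dead letter queue once a threshold is exceeded. The threshold, the header name, and the list-based queues below are all illustrative assumptions.

```python
MAX_REDELIVERIES = 3  # assumed threshold, not Schlockmeister's actual value

def route(message, headers, work_queue, dead_letter_queue):
    """Divert messages that keep getting requeued to a dead letter queue.

    `headers["redelivered_count"]` stands in for whatever redelivery
    marker the broker provides; this is an illustrative sketch only.
    """
    if headers.get("redelivered_count", 0) > MAX_REDELIVERIES:
        dead_letter_queue.append({
            "original": message,
            "reason": "too many redeliveries",
            "history": headers.get("history", []),
        })
    else:
        work_queue.append(message)

work, dlq = [], []
route({"task": "index-image"}, {"redelivered_count": 1}, work, dlq)
route({"task": "poison-pill"}, {"redelivered_count": 7}, work, dlq)
print(len(work), len(dlq))  # 1 1
```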
zocalo-dls
Diamond-specific Zocalo Tools

Zocalo services and wrappers which can be used across teams at Diamond Light Source.

These are specialised versions of the services provided by Zocalo, and services which are useful at Diamond but are not central to Zocalo itself.

Much of the data analysis work at Diamond is directed by and presented to users through ISPyB. Therefore, we provide some central services which enable cooperation between the data analysis pipelines and the ISPyB database at Diamond.

The code in this repository is actively used for processing of many different experiments at Diamond. The hope is that soon it will be used across many areas of science here and perhaps elsewhere. Please take this code as inspiration for how to implement Zocalo at other facilities.

Installation

pip install zocalo-dls

This will add several service and wrapper entry points, which should appear with:

zocalo.service --help
zocalo.wrap --help

Contributing

This package is maintained by a core team at Diamond Light Source. To contribute, fork this repository and issue a pull request. Pre-commit hooks are provided; please check code against these before submitting. Install with:

pre-commit install

History

0.3.0 (2019-07-30)
- Add wrapper to run DAWN
- Add wrapper to run Jupyter notebooks

0.2.0 (2019-07-30)
- Publish zocalo-dls to pypi.org
- Add ISPyB service

0.1.0 (2019-07-30)
- Working to set this package up with best practices for Diamond teams to follow
- Generic wrapper for GDA tasks
zocrypt
Zocrypt

Intended mainly for use by ZeroAndOne Developers for protection of data with 6 level encryption. Based on our project secret message encoder decoder.

Installing

pip install zocrypt

Usage

>>> from zocrypt import encrypter, decrypter, key
>>> text = "5 Mangoes are better than 6 Mangoes"
>>> key = key.generate()
>>> encryptedtext = encrypter.encrypt_text(text, key)
'`"V`O/i|;^a^.~k|4~k|;a|R#`k|l`V~#^#^V~Hk~V|l/a|k^"~V/O/i^;|a^.`k3'
>>> decrypter.decrypt_text(encryptedtext, key)
'5 Mangoes are better than 6 Mangoes'
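zocrypt's actual cipher is not documented here. To illustrate the same generate-key / encrypt / decrypt round-trip shape shown in the usage above, here is a toy keyed substitution cipher. This is not zocrypt's algorithm and offers no real security; every function name and the key format are assumptions for illustration only.

```python
import random
import string

# Toy character set; zocrypt's actual alphabet and scheme are unknown.
ALPHABET = string.printable

def generate_key(seed=None):
    """Return a random substitution table mapping each character."""
    shuffled = list(ALPHABET)
    random.Random(seed).shuffle(shuffled)
    return dict(zip(ALPHABET, shuffled))

def encrypt_text(text, key):
    return "".join(key[ch] for ch in text)

def decrypt_text(text, key):
    inverse = {v: k for k, v in key.items()}  # invert the substitution
    return "".join(inverse[ch] for ch in text)

key = generate_key(seed=42)
plain = "5 Mangoes are better than 6 Mangoes"
cipher = encrypt_text(plain, key)
assert cipher != plain
assert decrypt_text(cipher, key) == plain
```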