zigpy-zboss
zigpy-zboss is a Python library project that adds support for common Nordic Semiconductor nRF modules to zigpy (an open-source Python Zigbee stack project), as well as other Network Co-Processor radios that use firmware based on ZOI (ZBOSS Open Initiative) by DSR. Together with the zigpy library and a home automation software application with a compatible Zigbee gateway implementation (such as, for example, Home Assistant's ZHA integration component), you can directly control Zigbee devices from most product manufacturers, such as IKEA, Philips Hue, Inovelli, LEDVANCE/OSRAM, SmartThings/Samsung, SALUS/Computime, SONOFF/ITEAD, Xiaomi/Aqara, and many more. (A minimal usage sketch appears at the end of this entry.)

Hardware requirements

Nordic Semiconductor USB adapters and development kits/boards based on the nRF52840 SoC are used as reference hardware in the zigpy-zboss project:

- nRF52840 dongle
- nRF52840 development kit

Firmware

Development and testing in the zigpy-zboss project is done with a firmware image built using the ZBOSS NCP Host sample from Nordic Semiconductor:

- nrf-zboss-ncp - Compiled ZBOSS NCP Host firmware image that must be flashed onto the nRF52840 device.

Releases via PyPI

Tagged versions will also be released via PyPI:

- https://pypi.org/project/zigpy-zboss/
- https://pypi.org/project/zigpy-zboss/#history
- https://pypi.org/project/zigpy-zboss/#files

External links, documentation, and other development references

- ZBOSS NCP Serial Protocol (v1.5) prepared by DSR Corporation for ZOI
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/latest/nrf/index.html - specifically see the ZBOSS NCP sample, the Zigbee CLI examples, and the ZBOSS NCP Host user guide.
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/latest/nrf/samples/zigbee/ncp/README.html
- https://infocenter.nordicsemi.com/index.jsp?topic=%2Fsdk_tz_v4.1.0%2Fzigbee_only_examples.html
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/zboss/3.6.0.0/zboss_ncp_host_intro.html
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/latest/nrf/protocols/zigbee/architectures.html#ug-zigbee-platform-design-ncp-details
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/latest/nrf/samples/zigbee/ncp/README.html#zigbee-ncp-sample
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/latest/nrf/protocols/zigbee/tools.html#ug-zigbee-tools-ncp-host
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/zboss/3.6.0.0/zboss_ncp_host.html
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/zboss/3.11.2.1/zboss_ncp_host_intro.html
- https://developer.nordicsemi.com/nRF_Connect_SDK/doc/latest/nrf/protocols/zigbee/tools.html
- ZBOSS NCP Host (v2.2.1) source code
- https://github.com/zigpy/zigpy/issues/394 - Previous development discussion about a ZBOSS radio library for zigpy.
- https://github.com/zigpy/zigpy/discussions/595 - Reference collections for the Zigbee stack and related dev docs.
- https://github.com/MeisterBob/zigpy_nrf52 - Another attempt at making a zigpy library for nRF52.
- https://gist.github.com/tomchy/04ac4ff78d6e117d33ab92d9cc1de779 - Another attempt at making a zigpy controller for nRF.

Other radio libraries for zigpy to use as reference projects

Note! The initial development of the zigpy-zboss radio library for zigpy stems from information learned from the work in the zigpy-znp project.

zigpy-znp

The zigpy-znp zigpy radio library is for the Texas Instruments Z-Stack ZNP interface and has been the primary reference on which the zigpy-zboss radio library is based.
zigpy-znp is very stable with TI Z-Stack 3.x.x (zigpy-znp also offers some stand-alone CLI tools that are unique to Texas Instruments hardware and its Zigbee stack).

zigpy-deconz

zigpy-deconz is another mature radio library, for Dresden Elektronik's deCONZ Serial Protocol interface that is used by the deCONZ firmware for their ConBee and RaspBee series of Zigbee Coordinator adapters. Existing zigpy developers' previous advice has been to also look at zigpy-deconz, since it is somewhat similar to the ZBOSS serial protocol implementation.

zigpy deconz parser

zigpy-deconz-parser allows developers to parse Home Assistant's ZHA component debug logs using the zigpy-deconz radio library if you are using a deCONZ based adapter like ConBee or RaspBee.

bellows

The bellows library is made for the Silicon Labs EZSP (EmberZNet Serial Protocol) interface and is another mature zigpy radio library project worth taking a look at as a reference (both it and some other zigpy radio libraries have some unique features and functions that others do not).

How to contribute

If you are looking to make a code or documentation contribution to this project, we suggest that you try to follow the steps in the contribution guide documentation from the zigpy project and its wiki:

- https://github.com/zigpy/zigpy/blob/dev/Contributors.md
- https://github.com/zigpy/zigpy/wiki

Also see:

- https://github.com/firstcontributions/first-contributions/blob/master/README.md
- https://github.com/firstcontributions/first-contributions/blob/master/github-desktop-tutorial.md

Related projects

zigpy

zigpy is a Zigbee protocol stack integration project to implement the Zigbee Home Automation standard as a Python library. Zigbee Home Automation integration with zigpy allows you to connect one of many off-the-shelf Zigbee adapters using one of the available Zigbee radio library modules compatible with zigpy to control Zigbee devices. There is currently support for controlling Zigbee device types such as binary sensors (e.g. motion and door sensors), analog sensors (e.g. temperature sensors), lightbulbs, switches, and fans. Zigpy is tightly integrated with Home Assistant's ZHA component and provides a user-friendly interface for working with a Zigbee network.

zigpy-cli (zigpy command line interface)

zigpy-cli is a unified command line interface for zigpy radios. The goal of this project is to allow low-level network management from an intuitive command line interface and to group useful Zigbee tools into a single binary.

ZHA Device Handlers

ZHA deviation handling in Home Assistant relies on the third-party ZHA Device Handlers project (also known under the zha-quirks package name on PyPI). Zigbee devices that deviate from or do not fully conform to the standard specifications set by the Zigbee Alliance may require the development of custom ZHA Device Handlers (ZHA custom quirks handler implementations) for all their functions to work properly with the ZHA component in Home Assistant. These ZHA Device Handlers for Home Assistant can thus be used to parse custom messages to and from non-compliant Zigbee devices.
The custom quirks implementations for zigpy implemented as ZHA Device Handlers for Home Assistant are a similar concept to that of Hub-connected Device Handlers for the SmartThings platform, as well as that of zigbee-herdsman converters as used by Zigbee2mqtt, meaning they are each virtual representations of a physical device that expose additional functionality that is not provided out-of-the-box by the existing integration between these platforms.

ZHA integration component for Home Assistant

The ZHA integration component for Home Assistant is a reference implementation of the zigpy library as integrated into the core of Home Assistant (a Python based open source home automation software). There are also other GUI and non-GUI projects for Home Assistant's ZHA component which build on or depend on its features and functions to enhance or improve its user experience; some of those are listed and linked below.

ZHA Toolkit

ZHA Toolkit is a custom service for "rare" Zigbee operations using the ZHA integration component in Home Assistant. The purpose of ZHA Toolkit and its Home Assistant "Services" feature is to provide direct control over low-level Zigbee commands provided in ZHA or zigpy that are not otherwise available or are too limited for some use cases. ZHA Toolkit can also: serve as a framework to do local low-level coding (the modules are reloaded on each call), provide access to some higher-level commands such as ZNP backup (and restore), and make it easier to perform one-time operations where (some) Zigbee knowledge is sufficient, avoiding the need to understand the inner workings of ZHA or zigpy (methods, quirks, etc).

ZHA Device Exporter

zha-device-exporter is a custom component for Home Assistant to allow the ZHA component to export lists of Zigbee devices.

ZHA Network Visualization Card

zha-network-visualization-card was a custom Lovelace element for Home Assistant which visualized the Zigbee network for the ZHA component.

ZHA Network Card

zha-network-card was a custom Lovelace card for Home Assistant that displayed ZHA component Zigbee network and device information in Home Assistant.
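As promised above, here is a minimal sketch of what using zigpy-zboss as a zigpy radio library outside of Home Assistant might look like. It assumes the conventional zigpy radio-library layout (a ControllerApplication class under zigpy_zboss.zigbee.application) and a hypothetical serial port path; verify both against the project's own documentation before relying on them:

    import asyncio

    # Assumed module path, following the usual zigpy radio-library convention.
    from zigpy_zboss.zigbee.application import ControllerApplication

    async def main():
        # SCHEMA validates/normalizes the config dict (a zigpy convention).
        config = ControllerApplication.SCHEMA(
            {"device": {"path": "/dev/ttyACM0"}}  # hypothetical serial port
        )
        # Connect to the nRF52840 coordinator and form a network if needed.
        app = await ControllerApplication.new(config, auto_form=True)
        for device in app.devices.values():
            print(device.ieee, device.nwk)
        await app.shutdown()

    asyncio.run(main())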
zigpy-zigate
zigpy-zigate is a Python 3 implementation for the zigpy project to implement support for ZiGate-based Zigbee radio devices. https://github.com/zigpy/zigpy-zigate

ZiGate is an open-source Zigbee adapter hardware that was initially launched on Kickstarter by @fairecasoimeme:

- https://www.zigate.fr
- https://www.kickstarter.com/projects/1361563794/zigate-universal-zigbee-gateway-for-smarthome

Hardware and firmware compatibility

The ZiGate USB adapter communicates via a PL-2303HX USB-to-Serial Bridge Controller module by Prolific. There is also a Wi-Fi adapter to communicate with ZiGate over the network.

Note! The ZiGate open-source Zigbee USB and GPIO adapter hardware requires ZiGate 3.1a firmware or later to work with this zigpy-zigate module; however, ZiGate 3.1d firmware or later is recommended, as it contains a specific bug-fix related to zigpy. See all available official ZiGate firmware releases here (link).

Known working ZiGate compatible Zigbee radio modules

- ZiGate + USB / ZiGate USB-TTL
- ZiGate + USB-DIN / ZiGate USB-DIN
- PiZiGate + / PiZiGate (ZiGate HAT/Shield module for a Raspberry Pi compatible GPIO header)
  Tip! The PiZiGate is not limited to the Raspberry Pi series, as it works with all computers with a Raspberry Pi compatible GPIO header.
- ZiGate Ethernet (ZiGate Ethernet serial-to-IP server) (Note! Requires the PiZiGate + radio module)
  Tip! ZiGate Ethernet can alternatively also be used via the ESPHome serial bridge firmware for ESP32 as an option.
- ZiGate + WiFi Pack / ZiGate WiFi Pack (ZiGate WiFi serial-to-IP server)
  Tip! The ZiGate compatible WiFi module can also be used to convert the radio board from a ZiGate USB-TTL into this "ZiGate WiFi Pack".

Experimental ZiGate compatible Zigbee radio modules

- Open Lumi Gateway - DIY ZiGate WiFi bridge hacked from a Xiaomi Lumi Gateway with modded OpenWRT firmware

Port configuration

- To configure a usb ZiGate (USB TTL or DIN) port, just specify the port, example: /dev/ttyUSB0. Alternatively, you could manually set the port to auto to enable automatic USB port discovery.
- To configure a pizigate port, specify the port, example: /dev/serial0 or /dev/ttyAMA0.
- To configure a wifi ZiGate, manually specify the IP address and port, example: socket://192.168.1.10:9999.

pizigate does require some additional adjustments on Raspberry Pi 3/Zero and 4:

- Raspberry Pi 3 and Raspberry Pi Zero configuration adjustments
- Raspberry Pi 4 configuration adjustments

Flasher (ZiGate Firmware Tool)

zigpy-zigate has an integrated Python "flasher" tool to flash firmware updates on your ZiGate (NXP Jennic JN5168). Thanks to Sander Hoentjen (tjikkun), zigpy-zigate now has an integrated firmware flasher tool! See tjikkun's original zigate-flasher repo, and see all available official ZiGate firmware releases here (link).

Flasher Usage

    usage: python3 -m zigpy_zigate.tools.flasher [-h] -p {/dev/ttyUSB0} [-w WRITE]
                                                 [-s SAVE] [-u] [-d] [--gpio] [--din]

    optional arguments:
      -h, --help            show this help message and exit
      -p {/dev/ttyUSB0}, --serialport {/dev/ttyUSB0}
                            Serial port, e.g. /dev/ttyUSB0
      -w WRITE, --write WRITE
                            Firmware bin to flash onto the chip
      -s SAVE, --save SAVE  File to save the currently loaded firmware to
      -u, --upgrade         Download and flash the latest available firmware
      -d, --debug           Set log level to DEBUG
      --gpio                Configure GPIO for PiZiGate flash
      --din                 Configure USB for ZiGate DIN flash
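For example, backing up the currently loaded firmware and then flashing a downloaded image onto a USB ZiGate might look like the following (the port and file names are illustrative, not prescribed):

    # Save the firmware currently on the chip first, then write the new image.
    python3 -m zigpy_zigate.tools.flasher -p /dev/ttyUSB0 -s backup_firmware.bin
    python3 -m zigpy_zigate.tools.flasher -p /dev/ttyUSB0 -w ZiGate_firmware.bin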
Testing new releases

Testing a new release of the zigpy-zigate library before it is released in Home Assistant:

If you are using Supervised Home Assistant (formerly known as the Hassio/Hass.io distro):

1. Add https://github.com/home-assistant/hassio-addons-development as an "add-on" repository
2. Install the "Custom deps deployment" add-on
3. Update its config like:

       pypi:
         - zigpy-zigate==0.5.1
       apk: []

   (where 0.5.1 is the new version)
4. Start the add-on

If you are instead using a custom Python installation of Home Assistant, then:

1. Activate your Python virtual env
2. Update the package with pip:

       pip install zigpy-zigate==0.5.1

Releases via PyPI

Tagged versions are also released via PyPI:

- https://pypi.org/project/zigpy-zigate/
- https://pypi.org/project/zigpy-zigate/#history
- https://pypi.org/project/zigpy-zigate/#files

Developer references

Documents that lay out the serial protocol used for ZiGate serial interface communication can be found here:

- https://github.com/fairecasoimeme/ZiGate/tree/master/Protocol
- https://github.com/doudz/zigate
- https://github.com/Neonox31/zigate
- https://github.com/nouknouk/node-zigate

How to contribute

If you are looking to make a contribution to this project, we suggest that you follow the steps in these guides:

- https://github.com/firstcontributions/first-contributions/blob/master/README.md
- https://github.com/firstcontributions/first-contributions/blob/master/github-desktop-tutorial.md

Some developers might also be interested in receiving donations in the form of hardware such as Zigbee modules or devices. Even if such donations are most often given with no strings attached, they can in many cases help the developers' motivation and indirectly improve the development of this project.

Related projects

Zigpy

zigpy is a Zigbee protocol stack integration project to implement the Zigbee Home Automation standard as a Python 3 library. Zigbee Home Automation integration with zigpy allows you to connect one of many off-the-shelf Zigbee adapters using one of the available Zigbee radio library modules compatible with zigpy to control Zigbee based devices. There is currently support for controlling Zigbee device types such as binary sensors (e.g., motion and door sensors), sensors (e.g., temperature sensors), lightbulbs, switches, and fans. A working implementation of zigpy exists in Home Assistant (Python based open source home automation software) as part of its ZHA component.

ZHA Device Handlers

ZHA deviation handling in Home Assistant relies on the third-party ZHA Device Handlers project. Zigbee devices that deviate from or do not fully conform to the standard specifications set by the Zigbee Alliance may require the development of custom ZHA Device Handlers (ZHA custom quirks handler implementations) for all their functions to work properly with the ZHA component in Home Assistant. These ZHA Device Handlers for Home Assistant can thus be used to parse custom messages to and from non-compliant Zigbee devices. The custom quirks implementations for zigpy implemented as ZHA Device Handlers for Home Assistant are a similar concept to that of Hub-connected Device Handlers for the SmartThings platform, as well as that of zigbee-herdsman converters as used by Zigbee2mqtt, meaning they are each virtual representations of a physical device that expose additional functionality that is not provided out-of-the-box by the existing integration between these platforms.

ZHA integration component for Home Assistant

The ZHA integration component for Home Assistant is a reference implementation of the zigpy library as integrated into the core of Home Assistant (a Python based open source home automation software).
There are also other GUI and non-GUI projects for Home Assistant's ZHA component which build on or depend on its features and functions to enhance or improve its user experience; some of those are listed and linked below.

ZHA Custom Radios

zha-custom-radios adds support for custom radio modules for zigpy to Home Assistant's ZHA (Zigbee Home Automation) integration component. This custom component for Home Assistant allows users to test out new modules for zigpy in Home Assistant's ZHA integration component before they are integrated into zigpy ZHA, and also helps developers develop new zigpy radio modules without having to modify Home Assistant's source code.

ZHA Custom

zha_custom is a custom component package for Home Assistant (with its ZHA component for zigpy integration) that acts as a zigpy commands service wrapper. When installed, it allows you to send custom commands via zigpy to, for example, change advanced configuration and settings that are not available in the UI.

ZHA Map

zha-map for Home Assistant's ZHA component can build a Zigbee network topology map.

ZHA Network Visualization Card

zha-network-visualization-card is a custom Lovelace element for Home Assistant which visualizes the Zigbee network for the ZHA component.

ZHA Network Card

zha-network-card is a custom Lovelace card for Home Assistant that displays ZHA component Zigbee network and device information in Home Assistant.

Zigzag

Zigzag is a custom card/panel for Home Assistant that displays a graphical layout of Zigbee devices and the connections between them. Zigzag can be installed as a panel or a custom card and relies on the data provided by the zha-map integration component.

ZHA Device Exporter

zha-device-exporter is a custom component for Home Assistant to allow the ZHA component to export lists of Zigbee devices.
zigpy-znp
zigpy-znp is a Python library that adds support for common Texas Instruments ZNP (Zigbee Network Processor) Zigbee radio modules to zigpy, a Python Zigbee stack project. Together with zigpy and compatible home automation software (namely Home Assistant's ZHA (Zigbee Home Automation) integration component), you can directly control Zigbee devices such as Philips Hue, GE, OSRAM LIGHTIFY, Xiaomi/Aqara, IKEA Tradfri, Samsung SmartThings, and many more.

Installation

Python module

Install the Python module within your virtual environment:

    $ virtualenv -p python3.8 venv  # if you don't already have one
    $ source venv/bin/activate
    (venv) $ pip install git+https://github.com/zigpy/zigpy-znp/  # latest commit from Git
    (venv) $ pip install zigpy-znp  # or, latest stable from PyPI

Home Assistant

Stable releases of zigpy-znp are automatically installed when you install the ZHA component.

Testing dev with Home Assistant Core

Upgrade the package within your virtual environment (requires git):

    (venv) $ pip install git+https://github.com/zigpy/zigpy-znp/

Launch Home Assistant with the --skip-pip command line option to prevent zigpy-znp from being downgraded. Running with this option may prevent newly added integrations from installing required packages.

Testing dev with Home Assistant OS

1. Add https://github.com/home-assistant/hassio-addons-development as an addon repository.
2. Install the "Custom deps deployment" addon.
3. Add the following to your configuration.yaml file:

       apk: []
       pypi:
         - git+https://github.com/zigpy/zigpy-znp/

Configuration

Below are the defaults under the top-level Home Assistant zha: key. You do not need to copy this configuration; it is provided only for reference:

    zha:
      zigpy_config:
        znp_config:
          # Only if your stick has a built-in power amplifier (i.e. CC1352P and CC2592)
          # If set, must be between:
          #   * CC1352/2652: -22 and 19
          #   * CC253x: -22 and 22
          tx_power:

          # Only if your stick has a controllable LED (the CC2531)
          # If set, must be one of: off, on, blink, flash, toggle
          led_mode: off

          ### Internal configuration, there's no reason to touch these values

          # Skips the 60s bootloader delay on CC2531 sticks
          skip_bootloader: True

          # Timeout for synchronous requests' responses
          sreq_timeout: 15

          # Timeout for asynchronous requests' callback responses
          arsp_timeout: 30

          # Delay between auto-reconnect attempts in case the device gets disconnected
          auto_reconnect_retry_delay: 5

          # Pin states for skipping the bootloader
          connect_rts_pin_states: [off, on, off]
          connect_dtr_pin_states: [off, off, off]

Tools

Various command line Zigbee utilities are a part of the zigpy-znp package and can be run with python -m zigpy_znp.tools.name_of_tool.
More detailed documentation can be found in TOOLS.md, but a brief description of each tool is included below:

- energy_scan: Performs a continuous energy scan to check for non-Zigbee interference.
- flash_read: For CC2531s, reads firmware from flash.
- flash_write: For CC2531s, writes a firmware .bin to flash.
- form_network: Forms a network with randomized settings on channel 15.
- network_backup: Backs up the network data and device information into a human-readable JSON document.
- network_restore: Restores a JSON network backup to the adapter.
- network_scan: Actively sends beacon requests for network stumbling.
- nvram_read: Reads all possible NVRAM entries into a JSON document.
- nvram_reset: Deletes all possible NVRAM entries.
- nvram_write: Writes all NVRAM entries from a JSON document.
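As a hedged illustration of the invocation pattern (the exact flags for each tool are in TOOLS.md), the tools take the radio's serial port as an argument, mirroring the flash_write example shown under "Reference hardware" below:

    # Illustrative invocation: check the 2.4 GHz band for interference using the
    # adapter at the given serial port (path is an example, not a real device ID).
    python -m zigpy_znp.tools.energy_scan /dev/serial/by-id/YOUR-CC2652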
Hardware requirements

USB adapters, GPIO modules, and development boards flashed with TI's Z-Stack are compatible with zigpy-znp:

- CC2652P/CC2652R/CC2652RB USB stick and dev board hardware
- CC1352P/CC1352R USB stick and dev board hardware
- CC2538 + CC2592 USB stick and dev board hardware (not recommended: old hardware and end-of-life firmware)
- CC2531 USB stick hardware (not recommended for Zigbee networks with more than 20 devices)
- CC2530 + CC2591/CC2592 USB stick hardware (not recommended for Zigbee networks with more than 20 devices)

Tip! Adapters listed as "Texas Instruments sticks compatible with Zigbee2MQTT" also work with zigpy-znp.

Reference hardware for this project

These specific adapters are used as reference hardware for development and testing by zigpy-znp developers:

- TI LAUNCHXL-CC26X2R1 running Z-Stack 3 firmware (based on version 4.40.00.44). You can flash CC2652R_20210120.hex using TI's UNIFLASH.
- Electrolama zzh CC2652R and Slaesh CC2652R sticks running Z-Stack 3 firmware (based on version 4.40.00.44). You can flash CC2652R_20210120.hex or CC2652RB_20210120.hex respectively using cc2538-bsl.
- CC2531 running Z-Stack Home 1.2. You can flash CC2531ZNP-Prod.bin to your stick directly with zigpy_znp, if your stick already has a serial bootloader:

      python -m zigpy_znp.tools.flash_write -i /path/to/CC2531ZNP-Prod.bin /dev/serial/by-id/YOUR-CC2531

Texas Instruments Chip Part Numbers

Texas Instruments (TI) has quite a few different wireless MCU chips, and they are all used/mentioned in the open-source Zigbee world, which can be daunting if you are just starting out. Here is a quick summary of part numbers and key features.

Supported newer generation TI chips

2.4GHz frequency only chips

- CC2652R: 2.4GHz-only wireless MCU for IEEE 802.15.4 multi-protocol (Zigbee, Bluetooth, Thread, IEEE 802.15.4g IPv6-enabled smart objects like 6LoWPAN, and proprietary systems). Cortex-M0 core for the radio stack and Cortex-M4F core for application use, plenty of RAM. Free compiler option from TI.
- CC2652RB: Pin compatible "crystal-less" CC2652R (so you could use it if you were to build your own zzh and omit the crystal), but not firmware compatible.
- CC2652P: CC2652R with a built-in RF PA. Not pin or firmware compatible with CC2652R/CC2652RB.

Multi frequency chips

- CC1352R: Sub-1 GHz & 2.4 GHz wireless MCU. Essentially a CC2652R with an extra sub-1GHz radio.
- CC1352P: CC1352R with a built-in RF PA.

Supported older generation TI chips

- CC2538: 2.4GHz Zigbee, 6LoWPAN, and IEEE 802.15.4 wireless MCU. ARM Cortex-M3 core with 512kB Flash and 32kB RAM.
- CC2531: CC2530 with a built-in UART/TTL to USB bridge. Used in the cheap "Zigbee sticks" sold everywhere. Intel 8051 core, 256kB Flash, only 8kB RAM.
- CC2530: 2.4GHz Zigbee and IEEE 802.15.4 wireless MCU. Intel 8051 core, 256kB Flash, only 8kB RAM.

Auxiliary TI chips

- CC2591 and CC2592: 2.4 GHz range extenders. These are not wireless MCUs, just auxiliary PA (Power Amplifier) and LNA (Low Noise Amplifier) in the same package to improve the RF (Radio Frequency) range of any 2.4 GHz radio chip.

Releases via PyPI

Tagged versions will also be released via PyPI:

- https://pypi.org/project/zigpy-znp/
- https://pypi.org/project/zigpy-znp/#history
- https://pypi.org/project/zigpy-znp/#files

External documentation and reference

- http://www.ti.com/tool/LAUNCHXL-CC26X2R1
- http://www.ti.com/tool/LAUNCHXL-CC1352P

How to contribute

If you are looking to make a code or documentation contribution to this project, we suggest that you follow the steps in these guides:

- https://github.com/firstcontributions/first-contributions/blob/master/README.md
- https://github.com/firstcontributions/first-contributions/blob/master/github-desktop-tutorial.md

Related projects

Zigpy

zigpy is a Zigbee protocol stack integration project to implement the Zigbee Home Automation standard as a Python library. Zigbee Home Automation integration with zigpy allows you to connect one of many off-the-shelf Zigbee adapters using one of the available Zigbee radio library modules compatible with zigpy to control Zigbee devices. There is currently support for controlling Zigbee device types such as binary sensors (e.g. motion and door sensors), analog sensors (e.g. temperature sensors), lightbulbs, switches, and fans. Zigpy is tightly integrated with Home Assistant's ZHA component and provides a user-friendly interface for working with a Zigbee network.
zigrun
Runs Zig expressions, statements, and scripts directly from the command line.

Example

    zigrun --expr -c '7 + 9'
    > 16

    # Shorthand for --expr -c
    zigrun --eval '@sizeOf(u64)'
    > 8

    # Even shorter shorthand
    zigrun -e '"Foo" ++ "Bar"'
    > FooBar

    zigrun --stmt <<EOF
    std.debug.print("Foo: {}\n", .{6});
    // Nice aliases
    print("Bar: {}\n", .{6});
    println("Foo", .{7});
    EOF
    > Foo: 6
    > Bar: 7
    > Foo: 7
zigzag
ZigZag

ZigZag provides functions for identifying the peaks and valleys of a time series. Additionally, it provides a function for computing the maximum drawdown. For the fastest understanding, view the IPython notebook demo tutorial.

Contributing

This is an admittedly small project. Still, if you have any contributions, please fork this project on GitHub and send me a pull request.
zigzag-dse
ZigZag

Documentation | Tutorial

This repository presents the novel version of our tried-and-tested HW Architecture-Mapping Design Space Exploration (DSE) Framework for Deep Learning (DL) accelerators. ZigZag bridges the gap between algorithmic DL decisions and their acceleration cost on specialized accelerators through a fast and accurate HW cost estimation. A crucial part of this is the mapping of the algorithmic computations onto the computational HW resources and memories. The framework provides multiple engines that can automatically find optimal mapping points in this search space.

Installation

Please take a look at the Installation page of our documentation.

Getting Started

Please take a look at the Getting Started page on how to get started using ZigZag. Also, a Jupyter Notebook based demo is prepared for new users here.

Recent changes

In this novel version, we have:

- Added an interface with ONNX to directly parse ONNX models
- Overhauled our HW architecture definition to:
  - include multi-dimensional (>2D) MAC arrays;
  - include accurate interconnection patterns;
  - include multiple flexible accelerator cores.
- Enhanced the cost model to support complex memories with variable port structures
- Revamped the whole project structure to be more modular
- Written the project with OOP paradigms to facilitate user-friendly extensions and interfaces

Publication pointers

The general idea of ZigZag:

- L. Mei, P. Houshmand, V. Jain, S. Giraldo and M. Verhelst, "ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators," in IEEE Transactions on Computers, vol. 70, no. 8, pp. 1160-1174, 1 Aug. 2021, doi: 10.1109/TC.2021.3059962. (paper)

Detailed latency model explanation:

- L. Mei, H. Liu, T. Wu, H. E. Sumbul, M. Verhelst and E. Beigne, "A Uniform Latency Model for DNN Accelerators with Diverse Architectures and Dataflows," 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerp, Belgium, 2022, pp. 220-225, doi: 10.23919/DATE54114.2022.9774728. (paper, slides, video)

The new temporal mapping search engine:

- A. Symons, L. Mei and M. Verhelst, "LOMA: Fast Auto-Scheduling on DNN Accelerators through Loop-Order-based Memory Allocation," 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington DC, DC, USA, 2021, pp. 1-4, doi: 10.1109/AICAS51828.2021.9458493. (paper, slides, video)

Apply ZigZag for different design space exploration case studies:

- P. Houshmand, S. Cosemans, L. Mei, I. Papistas, D. Bhattacharjee, P. Debacker, A. Mallik, D. Verkest, M. Verhelst, "Opportunities and Limitations of Emerging Analog in-Memory Compute DNN Architectures," 2020 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, USA, 2020, pp. 29.1.1-29.1.4, doi: 10.1109/IEDM13553.2020.9372006. (paper, slides, video)
- V. Jain, L. Mei and M. Verhelst, "Analyzing the Energy-Latency-Area-Accuracy Trade-off Across Contemporary Neural Networks," 2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems (AICAS), Washington DC, DC, USA, 2021, pp. 1-4, doi: 10.1109/AICAS51828.2021.9458553. (paper, slides, video)
- S. Colleman, T. Verelst, L. Mei, T. Tuytelaars and M. Verhelst, "Processor Architecture Optimization for Spatially Dynamic Neural Networks," 2021 IFIP/IEEE 29th International Conference on Very Large Scale Integration (VLSI-SoC), Singapore, Singapore, 2021, pp. 1-6, doi: 10.1109/VLSI-SoC53125.2021.9607013. (paper, slides, video)
- S. Colleman, P. Zhu, W. Sun and M. Verhelst, "Optimizing Accelerator Configurability for Mobile Transformer Networks," 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), Incheon, Korea, Republic of, 2022, pp. 142-145, doi: 10.1109/AICAS54282.2022.9869945. (paper, slides, video)

Extend ZigZag to support cross-layer depth-first scheduling:

- L. Mei, K. Goetschalckx, A. Symons and M. Verhelst, "DeFiNES: Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators through Analytical Modeling," 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023. (paper, slides, github)

Extend ZigZag to support multi-core layer-fused scheduling:

- A. Symons, L. Mei, S. Colleman, P. Houshmand, S. Karl and M. Verhelst, "Towards Heterogeneous Multi-core Accelerators Exploiting Fine-grained Scheduling of Layer-Fused Deep Neural Networks," arXiv e-prints, 2022. doi: 10.48550/arXiv.2212.10612. (paper, github)
- S. Karl, A. Symons, N. Fasfous and M. Verhelst, "Genetic Algorithm-based Framework for Layer-Fused Scheduling of Multiple DNNs on Multi-core Systems," 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), Antwerp, Belgium, 2023, pp. 1-6, doi: 10.23919/DATE56975.2023.10137070. (paper, slides, video)
zihaopython
UNKNOWN
zihaothird
No description available on PyPI.
zihello
No description available on PyPI.
zihin_nester
UNKNOWN
zi-i18n
zi-i18n

An experimental internationalization library.

Example: Translation

    # in "example.py" file
    from zi_i18n import I18n

    i18n = I18n("locale", "en_US")
    print(i18n.translate("example.text"))

    # in "locale/en_US.zi.lang" file
    <!example.text: "Test">

    # output: Test

Example: Pluralization

    # in "example.py" file
    from zi_i18n import I18n

    i18n = I18n("locale", "en_US")
    print(i18n.translate("example.plural", count=0))
    print(i18n.translate("example.plural", count=1))
    print(i18n.translate("example.plural", count=5))

    # in "locale/en_US.zi.lang" file
    <%example.plural: {"zero": "0", "one": "1", "many": ">= 2"}

    # output:
    # 0
    # 1
    # >= 2
ziim
Ziim

Never open a browser tab again; copy/paste your error/exception to find available solutions online! Ziim will handle everything for you, directly in the CLI, after catching an error. AMAZING, RIGHT?

How it works

These are the steps:

- Ziim gets your error and asks you where you want to find a solution
- You just need to enter the number corresponding to the forum you want to fetch answers from
- That's all; Ziim will provide you with the available questions matching your error and give you the answers, votes, ...

YOU GET IT? No need to:

- copy the exception,
- minimize your terminal,
- open the browser,
- paste it on Google or any search engine,
- open multiple tabs per result,
- figure out where the solution to your problem could be...

Handled forums

For now, Ziim can search on:

- [Done] StackOverflow
- [Done] StackExchange
- [Done] CodeProject
- [Done] CodeRanch
- [Done] SitePoint
- [Done] Quora
- [Done] Reddit

You will have the available list in ./parser.json

Requirements

- Python (3.x is recommended)
- requests
- lxml

How to use it

Let's see some examples of how to use it.

In your code

Make sure you have installed all requirements in ./python/requirements.txt, by running:

    pip3 install ziim

In the code:

    # You import the Ziim class first and instantiate it
    import ziim

    # search_level is not required; by default it's 0
    ziim = ziim.Ziim().go

    try:
        # Your code here
        test = 12 / 0  # This will throw an error
    except Exception as es:
        # Then call ziim here
        ziim(es)

Run in the CLI:

    python3 -m ziim.example

As a CLI

Just hit this sample command:

    ziim node ./example.js

The command node ./example.js will be executed and the error will be taken to Ziim; with this method you can start any kind of process in the CLI and use the Ziim CLI to fetch solutions.

Author

Sanix-darker
ziion
Installation

Pre-Requisites

- Debian 11
- python3.10
- pip3.10
- cargo-update

Install python3.10

Ensure that your system is updated and the required packages are installed:

    sudo apt update && sudo apt upgrade -y

Install the required dependencies:

    sudo apt install build-essential zlib1g-dev libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev libsqlite3-dev wget libbz2-dev

Get the Python 3.10 tar.gz and build it:

    wget https://www.python.org/ftp/python/3.10.0/Python-3.10.0.tgz
    tar -xf Python-3.10.*.tgz
    cd Python-3.10.*/
    ./configure --enable-optimizations
    make -j4
    sudo make altinstall

Install Cargo and Solc prerequisites:

    sudo apt install lsb-release wget software-properties-common gnupg pkg-config libssl-dev build-essential cmake git libboost-all-dev libjsoncpp-dev jq

Install rust:

    wget -O rustup.sh https://sh.rustup.rs
    bash rustup.sh -y
    source "$HOME/.cargo/env"

Install cargo-update:

    cargo install cargo-update

Install ziion cli:

    sudo pip3.10 install ziion

Upgrade ziion cli:

    sudo pip3.10 install ziion -U

How to use it

ARM/AMD

- ziion --help: Show help message and exit.
- ziion --version: Show version message and exit.
- ziion list-metapackages: List metapackages that can be updated with the cli and exit.
- ziion self-update: Update ziion cli to the latest version and exit.

ARM

- ziion update [cargo|solc] [--dryrun]: If cargo|solc packages are installed, update them to the latest version if needed; if they are not installed, install the latest version.
- ziion solc-select [version] [--install]: Changes solc's current version to be used; if --install is provided, installation of the specified version is forced.
- ziion solc-select versions: Shows installed versions and the one currently in use.

AMD

- ziion update [cargo] [--dryrun]: If cargo packages are installed, update them to the latest version if needed; if they are not installed, install the latest version.
zijin
No description available on PyPI.
zijinlib
zikasort
Reorders/reverses/renames files in a folder.

Change log

0.0.1 (03.02.2022)
- First release

0.0.2 (03.02.2022)
- Second release
- Configuring the setup file

0.0.3 (03.02.2022)
- Third release
- Configuring the setup file

0.0.4 (03.02.2022)
- Added main func so it could work, I guess/hope

0.0.5 (03.02.2022)
- Added command_line file so it could work, I guess/hope

0.0.6 (03.02.2022)
- Configuring the setup file

0.0.7 (03.02.2022)
- Configuring the setup file

0.0.8 (03.02.2022)
- Configuring the setup file

0.0.9 (03.02.2022)
- It works!

0.1 (03.02.2022)
- Usable version :)
zik-client
No description available on PyPI.
zik-dl
usage: zik-dl [-h] [--artists ARTISTS] [--album ALBUM] [--split] [--cover COVER] url

Linux command-line program to download music from YouTube and other websites.

positional arguments:
  url                URL of the song/album to download

optional arguments:
  -h, --help         show this help message and exit
  --artists ARTISTS  comma-separated artist names like "Louis Armstrong,Ella Fitzgerald"
  --album ALBUM      album name
  --split            split song into multiple songs based on timestamps (YouTube video description)
  --cover COVER      cover path like "~/Images/cover1.jpg"
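For instance, downloading a full-album upload and splitting it into individual tracks based on the timestamps in the video description might look like this (the URL placeholder and metadata values are illustrative):

    zik-dl --artists "Louis Armstrong,Ella Fitzgerald" --album "Ella and Louis" --split "<youtube-album-url>"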
ziku
This is a Chinese glyph (font) library package.

Usage:

    import ziku
    import cv2

    ziku.size(0.4)
    img = cv2.imread("src/ziku/earthmap.png")
    img = ziku.write(img, "太阳当空照", x=100, y=100)
    ziku.size(0.6)
    img = ziku.write(img, "花儿对我笑", x=100, y=200)
    cv2.imwrite("ziku.png", img)
    ziku.size(0.8)
    img = ziku.write(img, "小鸟说 早早早", x=100, y=300)
    cv2.imwrite("ziku.png", img)
    ziku.size(1)
    img = ziku.write(img, "你为什么背上大书包", x=100, y=400)
    cv2.imwrite("ziku.png", img)

Version 0.1.3 added the following characters:

- Chinese characters: “~!@#%……&*()——)+「」「|:“《》?”
- English characters: ~!|{}
zilch
zilch

zilch is a small library for recording and viewing exceptions from Python. This library is inspired by (and uses several of the same functions from) David Cramer's Sentry, but aims to implement just the core features in a smaller code/feature footprint.

Requirements

- simplejson
- WebError

Optional

- ZeroMQ (for network based reporting)
- SQLAlchemy (for the database backend recorder)
- Pyramid and WebHelpers (for the recorder web UI)

Basic Usage

Reporting an Exception

In the application that wants to report errors, import zilch and configure the reporter to record directly to the database:

    from zilch.store import SQLAlchemyStore
    import zilch.client

    zilch.client.store = SQLAlchemyStore('sqlite:///exceptions.db')

Then to report an exception:

    from zilch.client import capture_exception

    try:
        # do something that explodes
    except Exception, e:
        capture_exception()

The error will then be recorded in the database for later viewing.

Advanced Usage

In larger cluster scenarios, or where latency is important, the reporting of the exception can be handed off to ZeroMQ to be recorded by a central recorder over the network. Both the client and recording machine must have ZeroMQ installed.

To set up the client for recording:

    import zilch.client

    zilch.client.recorder_host = "tcp://localhost:5555"

Then to report an exception:

    from zilch.client import capture_exception

    try:
        # do something that explodes
    except Exception, e:
        capture_exception()

The exception will then be sent to the recorder listening at the recorder_host specified.

Recording Exceptions Centrally

The recorder uses ZeroMQ to record exception reports delivered over the network. To run the recorder host, on the machine recording them run:

    >> zilch-recorder tcp://localhost:5555 sqlite:///exceptions.db

Without a Recorder running, ZeroMQ will hold onto the messages until it is available, after which point it will begin to block (in the future, an option will be added to configure the disk offloading of messages). The recorder will create the tables necessary on its initial launch.

Viewing Recorded Exceptions

zilch comes with a Pyramid web application to view the database of recorded exceptions. Once you have installed Pyramid and WebHelpers, you can run the web interface by typing:

    >> zilch-web sqlite:///exceptions.db

Additional web configuration parameters are available to designate the host/port that the web application should bind to (viewable by running zilch-web with the -h option).

License

zilch is offered under the MIT license.

Authors

zilch is made available by Ben Bangert.

Support

zilch is considered feature-complete, as the project owner (Ben Bangert) has no additional functionality or development beyond bug fixes planned.
Bugs can be filed on GitHub, should be accompanied by a test case to retain current code coverage, and should be in a pull request when ready to be accepted into the zilch code-base. For a more full-featured error collector, Sentry now has a stand-alone client that no longer requires Django, called Raven. zilch was created before Raven was available, and the author now uses Raven rather than zilch most of the time.

Changelog

0.1.3 (01/13/2012)

Features
- Applied pull request from Marius Gedminas to add prefix option support to the error view webapp.

0.1.2 (08/07/2011)

Bug Fixes
- Clean up session at end of request.

0.1.1 (07/25/2011)

Bug Fixes
- Fix bug with webob imports in client.py

0.1 (07/25/2011)

Features
- Exception reporting via SQLAlchemy and/or ZeroMQ
- Recording Store can be pluggable
- WSGI Middleware to capture exceptions with WSGI/CGI environment data
- Web User Interface for the recorder to view collected exceptions
- Event tagging to record additional information per exception such as the Hostname, Application, etc.
zilean
zilean

Zilean is a League of Legends character that can drift through past, present and future. The project borrows Zilean's temporal magic to foresee the result of a match.

Documentation: here.

The project is open to all sorts of contribution and collaboration! Please feel free to clone, fork, PR... anything! If you are interested, contact me!

Contact: Johnson [email protected]

zilean is designed to facilitate data analysis of the Riot MatchTimelineDto. The MatchTimelineDto is a powerful object that contains information about a specific League of Legends match at every minute mark. Naturally, the MatchTimelineDto became an ideal object for various machine learning tasks, for example predicting match results using game statistics before the 16 minute mark.

Different from traditional sports, esports such as League of Legends have an innate advantage with respect to the data collection process. Since every play is conducted digitally, it opens up a huge potential to explore and perform all kinds of data analysis. zilean hopes to explore the infinite potential provided by the Riot Games API and, through the power of computing, make our community a better place. GL:HF!

Demo

Here is a quick look at how to do League of Legends data analysis with zilean:

    from zilean import TimelineCrawler, SnapShots, read_api_key
    import pandas as pd
    import seaborn as sns

    # Use the TimelineCrawler to fetch MatchTimelineDtos from Riot.
    # The MatchTimelineDtos have game stats at each minute mark.
    # We need an API key to fetch data. See the Riot Developer
    # Portal for more info.
    api_key = read_api_key("your_api_key_here")

    # Crawl 2000 Diamond RANKED_SOLO_5x5 timelines from the Korean server.
    crawler = TimelineCrawler(api_key, region="kr", tier="DIAMOND",
                              queue="RANKED_SOLO_5x5")
    result = crawler.crawl(2000, match_per_id=30, file="results.json")
    # This will take a long time!

    # We will look at the player statistics at the 10 and 15 minute marks.
    snaps = SnapShots(result, frames=[10, 15])

    # Store the player statistics in a pandas DataFrame
    player_stats = snaps.summary(per_frame=True)
    data = pd.DataFrame(player_stats)

    # Look at the distribution of totalGold difference for player 0 (TOP player)
    # at the 15 minute mark.
    sns.displot(x="totalGold_0", data=data[data["frame"] == 15], hue="win")

Here is an example of some quick machine learning:

    # Do some simple modelling
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    import numpy as np

    # Define X and y for training data (split the DataFrame built above)
    train, test = train_test_split(data, test_size=0.33)
    X_train = train.drop(["matchId", "win"], axis=1)
    y_train = train["win"].astype(int)

    # Build a default random forest classifier
    rf = RandomForestClassifier()
    rf.fit(X_train, y_train)
    y_fitted = rf.predict(X_train)
    print(f"Training accuracy: {np.mean(y_train == y_fitted)}")
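Since the snippet above already held out a test split, a natural next step (a minimal sketch continuing the variables defined there) is to score the classifier on unseen matches:

    # Evaluate on the held-out test split created above.
    X_test = test.drop(["matchId", "win"], axis=1)
    y_test = test["win"].astype(int)
    print(f"Test accuracy: {rf.score(X_test, y_test)}")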
zilian-fabric-utils
Failed to fetch description. HTTP Status Code: 404
zilian-mssql-django
SQL Server backend for Django

Welcome to the Zilian-MSSQL-Django 3rd party backend project!

zilian-mssql-django is a fork of mssql-django. This project provides an enterprise database connectivity option for the Django Web Framework, with support for Microsoft SQL Server and Azure SQL Database.

We'd like to give thanks to the community that made this project possible, with particular recognition of the contributors: OskarPersson, michiya, dlo and the original Google Code django-pyodbc team. Moving forward, we encourage participation in this project from both old and new contributors!

We hope you enjoy using the Zilian-MSSQL-Django 3rd party backend.

Features

- Supports Django 3.2 and 4.0
- Tested on Microsoft SQL Server 2016, 2017, 2019
- Passes most of the tests of the Django test suite
- Compatible with Microsoft ODBC Driver for SQL Server, SQL Server Native Client, and FreeTDS ODBC drivers
- Supports Azure SQL serverless database reconnection

Dependencies

- pyodbc 3.0 or newer

Installation

1. Install pyodbc 3.0 (or newer) and Django.
2. Install zilian-mssql-django:

       pip install zilian-mssql-django

3. Set the ENGINE setting in the settings.py file used by your Django application or project to 'mssql':

       'ENGINE': 'mssql'

Configuration

Standard Django settings

The following entries in a database-level settings dictionary in DATABASES control the behavior of the backend:

- ENGINE: String. It must be "mssql".
- NAME: String. Database name. Required.
- HOST: String. SQL Server instance in "server\instance" format.
- PORT: String. Server instance port. An empty string means the default port.
- USER: String. Database user name in "user" format. If not given, then MS Integrated Security will be used.
- PASSWORD: String. Database user password.
- TOKEN: String. Access token fetched as a user or service principal which has access to the database. E.g. when using azure.identity, the result of DefaultAzureCredential().get_token('https://database.windows.net/.default') can be passed.
- AUTOCOMMIT: Boolean. Set this to False if you want to disable Django's transaction management and implement your own.
- Trusted_Connection: String. Default is "yes". Can be set to "no" if required.

The following entries are also available in the TEST dictionary for any given database-level settings dictionary:

- NAME: String. The name of the database to use when running the test suite. If the default value (None) is used, the test database will use the name "test_" + NAME.
- COLLATION: String. The collation order to use when creating the test database. If the default value (None) is used, the test database is assigned the default collation of the instance of SQL Server.
- DEPENDENCIES: String. The creation-order dependencies of the database. See the official Django documentation for more details.
- MIRROR: String. The alias of the database that this database should mirror during testing. Default value is None. See the official Django documentation for more details.

OPTIONS: Dictionary. Currently available keys are:

- driver: String. ODBC driver to use ("ODBC Driver 17 for SQL Server", "SQL Server Native Client 11.0", "FreeTDS", etc). Default is "ODBC Driver 17 for SQL Server".
- isolation_level: String. Sets the transaction isolation level for each database session. Valid values for this entry are READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SNAPSHOT, and SERIALIZABLE. Default is None, which means no isolation level is set for a database session and the SQL Server default will be used.
- dsn: String. A named DSN can be used instead of HOST.
- host_is_server: Boolean.
  Only relevant if using the FreeTDS ODBC driver under Unix/Linux. By default, when using the FreeTDS ODBC driver the value specified in the HOST setting is used in a SERVERNAME ODBC connection string component instead of being used in a SERVER component; this means that this value should be the name of a dataserver definition present in the freetds.conf FreeTDS configuration file instead of a hostname or an IP address. But if this option is present and its value is True, this special behavior is turned off. Instead, connections to the database server will be established using the HOST and PORT options, without requiring freetds.conf to be configured. See https://www.freetds.org/userguide/dsnless.html for more information.
- unicode_results: Boolean. If it is set to True, pyodbc's unicode_results feature is activated and strings returned from pyodbc are always Unicode. Default value is False.
- extra_params: String. Additional parameters for the ODBC connection. The format is "param=value;param=value". Azure AD Authentication (Service Principal, Interactive, Msi) can be added to this field.
- collation: String. Name of the collation to use when performing text field lookups against the database. Default is None; this means no collation specifier is added to your lookup SQL (the default collation of your database will be used). For the Chinese language you can set it to "Chinese_PRC_CI_AS".
- connection_timeout: Integer. Sets the timeout in seconds for the database connection process. Default value is 0, which disables the timeout.
- connection_retries: Integer. Sets the number of times to retry the database connection process. Default value is 5.
- connection_retry_backoff_time: Integer. Sets the back-off time in seconds for retries of the database connection process. Default value is 5.
- query_timeout: Integer. Sets the timeout in seconds for the database query. Default value is 0, which disables the timeout.
- setencoding and setdecoding:

      # Example
      "OPTIONS": {
          "setdecoding": [
              {"sqltype": pyodbc.SQL_CHAR, "encoding": "utf-8"},
              {"sqltype": pyodbc.SQL_WCHAR, "encoding": "utf-8"},
          ],
          "setencoding": [{"encoding": "utf-8"}],
          ...
      },

Backend-specific settings

The following project-level settings also control the behavior of the backend:

- DATABASE_CONNECTION_POOLING: Boolean. If it is set to False, pyodbc's connection pooling feature won't be activated.

Example

Here is an example of the database settings (a variant exercising a few more of the documented OPTIONS keys follows the Limitations list below):

    DATABASES = {
        'default': {
            'ENGINE': 'mssql',
            'NAME': 'mydb',
            'USER': 'user@myserver',
            'PASSWORD': 'password',
            'HOST': 'myserver.database.windows.net',
            'PORT': '',
            'OPTIONS': {
                'driver': 'ODBC Driver 17 for SQL Server',
            },
        },
    }

    # set this to False if you want to turn off pyodbc's connection pooling
    DATABASE_CONNECTION_POOLING = False

Limitations

The following features are currently not fully supported:

- Altering a model field from or to AutoField at migration
- Django annotate functions have floating point arithmetic problems in some cases
- Annotate function with exists
- Exists function in order_by
- Right-hand power and arithmetic with datetimes
- Timezones, timedeltas not fully supported
- Rename field/model with foreign key constraint
- Database level constraints
- Math degrees, power or radians
- Bit-shift operators
- Filtered index
- Date extract function
- Hashing functions
- JSONField lookups have limitations; more details here.
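As promised above, here is a hedged sketch combining several of the documented OPTIONS keys in one settings dict; the specific values (timeouts, isolation level, ODBC parameters) are illustrative choices, not recommendations from the project:

    DATABASES = {
        'default': {
            'ENGINE': 'mssql',
            'NAME': 'mydb',
            'HOST': 'myserver.database.windows.net',
            'PORT': '',
            'USER': 'user@myserver',
            'PASSWORD': 'password',
            'OPTIONS': {
                'driver': 'ODBC Driver 17 for SQL Server',
                # Per-session transaction isolation level (one of the documented values).
                'isolation_level': 'READ COMMITTED',
                # Give slow networks more slack (seconds; 0 disables the timeout).
                'connection_timeout': 30,
                'connection_retries': 5,
                # Extra raw ODBC connection-string parameters (illustrative).
                'extra_params': 'Encrypt=yes;TrustServerCertificate=no',
            },
        },
    }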
zilib
zilib-python

Python wrappers for the Rust zilib library.
zilingo-airflow-utils-test
This Python package provides utility methods to access data from Drive, GCS, BigQuery, and S3, and to upload files to them. It can also trigger status emails for your workflow.
zilingo-recomm-utils
No description available on PyPI.
zilla
UNKNOWN
zillion
Zillion: Make sense of it all

Introduction

Zillion is a data modeling and analytics tool that allows combining and analyzing data from multiple datasources through a simple API. It acts as a semantic layer on top of your data, writes SQL so you don't have to, and easily bolts onto existing database infrastructure via SQLAlchemy Core. The Zillion NLP extension has experimental support for AI-powered natural language querying and warehouse configuration.

With Zillion you can:

- Define a warehouse that contains a variety of SQL and/or file-like datasources
- Define or reflect metrics, dimensions, and relationships in your data
- Run multi-datasource reports and combine the results in a DataFrame
- Flexibly aggregate your data with multi-level rollups and table pivots
- Customize or combine fields with formulas
- Apply technical transformations including rolling, cumulative, and rank statistics
- Apply automatic type conversions - i.e. get a "year" dimension for free from a "date" column
- Save and share report specifications
- Utilize ad hoc or public datasources, tables, and fields to enrich reports
- Query your warehouse with natural language (NLP extension)
- Leverage AI to bootstrap your warehouse configurations (NLP extension)

Table of Contents

- Installation
- Primer
  - Metrics and Dimensions
  - Warehouse Theory
  - Query Layers
  - Warehouse Creation
  - Executing Reports
  - Natural Language Querying
  - Zillion Configuration
- Example - Sales Analytics
  - Warehouse Configuration
  - Reports
- Advanced Topics
  - Subreports
  - FormulaMetrics
  - Divisor Metrics
  - FormulaDimensions
  - DataSource Formulas
  - Type Conversions
  - AdHocMetrics
  - AdHocDimensions
  - AdHocDataTables
  - Technicals
  - Config Variables
  - DataSource Priority
- Supported DataSources
- Multiprocess Considerations
- Demo UI / Web API
- Docs
- How to Contribute

Installation

Warning: This project is in an alpha state and is subject to change. Please test carefully for production usage and report any issues.

    $ pip install zillion

or

    $ pip install zillion[nlp]

Primer

The following is meant to give a quick overview of some theory and nomenclature used in data warehousing with Zillion, which will be useful if you are newer to this area. You can also skip below for a usage example or warehouse/datasource creation quickstart options.

In short, Zillion writes SQL for you and makes data accessible through a very simple API:

    result = warehouse.execute(
        metrics=["revenue", "leads"],
        dimensions=["date"],
        criteria=[
            ("date", ">", "2020-01-01"),
            ("partner", "=", "Partner A"),
        ]
    )

Metrics and Dimensions

In Zillion there are two main types of Fields that will be used in your report requests:

- Dimensions: attributes of data used for labelling, grouping, and filtering
- Metrics: facts and measures that may be broken down along dimensions

A Field encapsulates the concept of a column in your data. For example, you may have a Field called "revenue". That Field may occur across several datasources or possibly in multiple tables within a single datasource. Zillion understands that all of those columns represent the same concept, and it can try to use any of them to satisfy reports requesting "revenue".

Likewise, there are two main types of tables used to structure your warehouse:

- Dimension Tables: reference/attribute tables containing only related dimensions
- Metric Tables: fact tables that may contain metrics and some related dimensions/attributes

Dimension tables are often static or slowly growing in terms of row count and contain attributes tied to a primary key. Some common examples would be lists of US Zip Codes or company/partner directories. Metric tables are generally more transactional in nature.
Some common examples would be records for web requests, ecommerce sales, or stock market price history.

Warehouse Theory

If you really want to go deep on dimensional modeling and the drill-across querying technique Zillion employs, I recommend reading Ralph Kimball's book on data warehousing. To summarize, drill-across querying forms one or more queries to satisfy a report request for metrics that may exist across multiple datasources and/or tables at a particular dimension grain.

Zillion supports flexible warehouse setups such as snowflake or star schemas, though it isn't picky about it. You can specify table relationships through a parent-child lineage, and Zillion can also infer acceptable joins based on the presence of dimension table primary keys. Zillion does not support many-to-many relationships at this time, though most analytics-focused scenarios should be able to work around that by adding views to the model if needed.

Query Layers

Zillion reports can be thought of as running in two layers:

- DataSource Layer: SQL queries against the warehouse's datasources
- Combined Layer: a final SQL query against the combined data from the DataSource Layer

The Combined Layer is just another SQL database (in-memory SQLite by default) that is used to tie the datasource data together and apply a few additional features such as rollups, row filters, row limits, sorting, pivots, and technical computations.

Warehouse Creation

There are multiple ways to quickly initialize a warehouse from a local or remote file:

    # Path/link to a CSV, XLSX, XLS, JSON, HTML, or Google Sheet.
    # This builds a single-table Warehouse for quick/ad-hoc analysis.
    url = "https://raw.githubusercontent.com/totalhack/zillion/master/tests/dma_zip.xlsx"
    wh = Warehouse.from_data_file(url, ["Zip_Code"])  # Second arg is primary key

    # Path/link to a sqlite database.
    # This can build a single or multi-table Warehouse.
    url = "https://github.com/totalhack/zillion/blob/master/tests/testdb1?raw=true"
    wh = Warehouse.from_db_file(url)

    # Path/link to a WarehouseConfigSchema (or pass a dict).
    # This is the recommended production approach!
    config = "https://raw.githubusercontent.com/totalhack/zillion/master/examples/example_wh_config.json"
    wh = Warehouse(config=config)

Zillion also provides a helper script to bootstrap a DataSource configuration file for an existing database. See zillion.scripts.bootstrap_datasource_config.py. The bootstrap script requires a connection/database URL and an output file as arguments. See --help output for more options, including the optional --nlp flag that leverages OpenAI to infer configuration information such as column types, table types, and table relationships. The NLP feature requires the NLP extension to be installed, as well as the following set in your Zillion config file:

- OPENAI_MODEL
- OPENAI_API_KEY

Executing Reports

The main purpose of Zillion is to execute reports against a Warehouse. At a high level you will be crafting reports as follows:

    result = warehouse.execute(
        metrics=["revenue", "leads"],
        dimensions=["date"],
        criteria=[
            ("date", ">", "2020-01-01"),
            ("partner", "=", "Partner A"),
        ]
    )
    print(result.df)  # Pandas DataFrame

When comparing to writing SQL, it's helpful to think of the dimensions as the target columns of a group by SQL statement. Think of the metrics as the columns you are aggregating. Think of the criteria as the where clause. Your criteria are applied in the DataSource Layer SQL queries.
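To make that analogy concrete, the report above corresponds conceptually to SQL along these lines in the DataSource Layer. This is an illustrative sketch with assumed table/column names, not the exact SQL Zillion generates:

    -- Dimensions become GROUP BY columns, metrics become aggregates,
    -- and criteria become the WHERE clause (table/column names assumed).
    SELECT
        date,
        SUM(revenue) AS revenue,
        SUM(leads) AS leads
    FROM some_metric_table
    WHERE date > '2020-01-01'
      AND partner = 'Partner A'
    GROUP BY date;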
The ReportResult has a Pandas DataFrame with the dimensions as the index and the metrics as the columns.

A Report is said to have a grain, which defines the dimensions each metric must be able to join to in order to satisfy the Report requirements. The grain is a combination of all dimensions, including those referenced in criteria or in metric formulas. In the example above, the grain would be {date, partner}. Both "revenue" and "leads" must be able to join to those dimensions for this report to be possible.

These concepts can take time to sink in and obviously vary with the specifics of your data model, but you will become more familiar with them as you start putting together reports against your data warehouses.

Natural Language Querying

With the NLP extension, Zillion has experimental support for natural language querying of your data warehouse. For example:

    result = warehouse.execute_text("revenue and leads by date last month")
    print(result.df)  # Pandas DataFrame

This NLP feature requires a running instance of Qdrant (a vector database) and the following values set in your Zillion config file:

- QDRANT_HOST
- OPENAI_API_KEY

Embeddings will be produced and stored in both Qdrant and a local cache. The vector database will be initialized the first time you try to use this by analyzing all fields in your warehouse. An example docker file to run Qdrant is provided in the root of this repo.

You have some control over how fields get embedded. Namely, in the configuration for any field you can choose whether to exclude a field from embeddings or override which embeddings map to that field. All fields are included by default. The following example would exclude the net_revenue field from being embedded and map "revenue" metric requests to the gross_revenue field:

    {
        "name": "gross_revenue",
        "type": "numeric(10,2)",
        "aggregation": "sum",
        "rounding": 2,
        "meta": {
            "nlp": {
                // enabled defaults to true
                "embedding_text": "revenue"  // str or list of str
            }
        }
    },
    {
        "name": "net_revenue",
        "type": "numeric(10,2)",
        "aggregation": "sum",
        "rounding": 2,
        "meta": {
            "nlp": {
                "enabled": false
            }
        }
    },

Additionally, you may also exclude fields via the following warehouse-level configuration settings:

    {
        "meta": {
            "nlp": {
                "field_disabled_patterns": [
                    // list of regex patterns to exclude
                    "rpl_ma_5"
                ],
                "field_disabled_groups": [
                    // list of "groups" to exclude, assuming you have
                    // set a group value in the field's meta dict.
                    "No NLP"
                ]
            }
        },
        ...
    }

If a field is disabled at any of the aforementioned levels, it will be ignored. This type of control becomes useful as your data model gets more complex and you want to guide the NLP logic in cases where it could confuse similarly named fields. Any time you adjust which fields are excluded, you will want to force recreation of your embeddings collection using the force_recreate flag on Warehouse.init_embeddings.

Note: This feature is in its infancy. Its usefulness will depend on the quality of both the input query and your data model (i.e. good field names)!

Zillion Configuration

In addition to configuring the structure of your Warehouse, which will be discussed further below, Zillion has a global configuration to control some basic settings. The ZILLION_CONFIG environment var can point to a yaml config file. See examples/sample_config.yaml for more details on what values can be set. Environment vars prefixed with ZILLION_ can override config settings (i.e. ZILLION_DB_URL will override DB_URL).
ZILLION_DB_URL will override DB_URL).

The database used to store Zillion report specs can be configured by setting the DB_URL value in your Zillion config to a valid database connection string. By default a SQLite DB in /tmp is used.

Example - Sales Analytics

Below we will walk through a simple hypothetical sales data model that demonstrates basic DataSource and Warehouse configuration, and then shows some sample reports. The data is a simple SQLite database that is part of the Zillion test code. For reference, the schema is as follows:

```sql
CREATE TABLE partners (
    id INTEGER PRIMARY KEY,
    name VARCHAR NOT NULL UNIQUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE campaigns (
    id INTEGER PRIMARY KEY,
    name VARCHAR NOT NULL UNIQUE,
    category VARCHAR NOT NULL,
    partner_id INTEGER NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE leads (
    id INTEGER PRIMARY KEY,
    name VARCHAR NOT NULL,
    campaign_id INTEGER NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE sales (
    id INTEGER PRIMARY KEY,
    item VARCHAR NOT NULL,
    quantity INTEGER NOT NULL,
    revenue DECIMAL(10, 2),
    lead_id INTEGER NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

Warehouse Configuration

A Warehouse may be created from a JSON or YAML configuration that defines its fields, datasources, and tables. The code below shows how it can be done in as little as one line of code if you have a pointer to a JSON/YAML Warehouse config:

```python
from zillion import Warehouse

wh = Warehouse(config="https://raw.githubusercontent.com/totalhack/zillion/master/examples/example_wh_config.json")
```

This example config uses a data_url in its DataSource connect info that tells Zillion to dynamically download that data and connect to it as a SQLite database. This is useful for quick examples or analysis, though in most scenarios you would put a connection string to an existing database, like you see here.

The basics of Zillion's warehouse configuration structure are as follows. A Warehouse config has the following main sections:

- metrics: optional list of metric configs for global metrics
- dimensions: optional list of dimension configs for global dimensions
- datasources: mapping of datasource names to datasource configs or config URLs

A DataSource config has the following main sections:

- connect: database connection url or dict of connect params
- metrics: optional list of metric configs specific to this datasource
- dimensions: optional list of dimension configs specific to this datasource
- tables: mapping of table names to table configs or config URLs

Tip: datasource and table configs may also be replaced with a URL that points to a local or remote config file.

In this example all four tables in our database are included in the config, two as dimension tables and two as metric tables. The tables are linked through a parent->child relationship: partners to campaigns, and leads to sales. Some tables also utilize the create_fields flag to automatically create Fields on the datasource from column definitions.
Other metrics and dimensions are defined explicitly.

To view the structure of this Warehouse after init you can use the print_info method, which shows all metrics, dimensions, tables, and columns that are part of your data warehouse:

```python
wh.print_info()  # Formatted print of the Warehouse structure
```

For a deeper dive into the config schema, please see the full docs.

Reports

Example: get sales, leads, and revenue by partner:

```python
result = wh.execute(
    metrics=["sales", "leads", "revenue"],
    dimensions=["partner_name"],
)
print(result.df)
"""
              sales  leads  revenue
partner_name
Partner A        11      4    165.0
Partner B         2      2     19.0
Partner C         5      1    118.5
"""
```

Example: let's limit to Partner A and break down by its campaigns:

```python
result = wh.execute(
    metrics=["sales", "leads", "revenue"],
    dimensions=["campaign_name"],
    criteria=[("partner_name", "=", "Partner A")],
)
print(result.df)
"""
               sales  leads  revenue
campaign_name
Campaign 1A        5      2       83
Campaign 2A        6      2       82
"""
```

Example: the output below shows rollups at the campaign level within each partner, and also a rollup of totals at the partner and campaign level.

Note: the output contains a special character to mark DataFrame rollup rows that were added to the result. The ReportResult object contains some helper attributes to automatically access or filter rollups, as well as a df_display attribute that returns the result with friendlier display values substituted for special characters. The under-the-hood special character is left here for illustration, but may not render the same in all scenarios.

```python
from zillion import RollupTypes

result = wh.execute(
    metrics=["sales", "leads", "revenue"],
    dimensions=["partner_name", "campaign_name"],
    rollup=RollupTypes.ALL,
)
print(result.df)
"""
                            sales  leads  revenue
partner_name campaign_name
Partner A    Campaign 1A      5.0    2.0     83.0
             Campaign 2A      6.0    2.0     82.0
             􏿿               11.0    4.0    165.0
Partner B    Campaign 1B      1.0    1.0      6.0
             Campaign 2B      1.0    1.0     13.0
             􏿿                2.0    2.0     19.0
Partner C    Campaign 1C      5.0    1.0    118.5
             􏿿                5.0    1.0    118.5
􏿿            􏿿               18.0    7.0    302.5
"""
```

See the Report docs for more information on supported rollup behavior.

Example: save a report spec (not the data). First you must make sure you have saved your Warehouse, as saved reports are scoped to a particular Warehouse ID. To save a Warehouse you must provide a URL that points to the complete config:

```python
name = "My Unique Warehouse Name"
config_url = <some url pointing to a complete warehouse config>
wh.save(name, config_url)  # wh.id is populated after this

spec_id = wh.save_report(
    metrics=["sales", "leads", "revenue"],
    dimensions=["partner_name"],
)
```

Note: if you built your Warehouse in python from a list of DataSources, or passed in a dict for the config param on init, there currently is not a built-in way to output a complete config to a file for reference when saving.

Example: load and run a report from a spec ID:

```python
result = wh.execute_id(spec_id)
```

This assumes you have saved this report ID previously in the database specified by the DB_URL in your Zillion yaml configuration.

Example: unsupported grain. If you attempt an impossible report, you will get an UnsupportedGrainException. The report below is impossible because it attempts to break down the leads metric by a dimension that only exists in a child table. Generally speaking, child tables can join back up to parents (and "siblings" of parents) to find dimensions, but not the other way around.

```python
# Fails with UnsupportedGrainException
result = wh.execute(
    metrics=["leads"],
    dimensions=["sale_id"],
)
```

Advanced Topics

Subreports

Sometimes you need subquery-like functionality in order to filter one report to the results of some other (that perhaps required a different grain).
Zillion provides a simplistic way of doing that by using the "in report" or "not in report" criteria operations. There are two supported ways to specify the subreport: passing a report spec ID or passing a dict of report params.

```python
# Assuming you have saved report 1234 and it has "partner" as a dimension:
result = warehouse.execute(
    metrics=["revenue", "leads"],
    dimensions=["date"],
    criteria=[
        ("date", ">", "2020-01-01"),
        ("partner", "in report", 1234),
    ],
)

# Or with a dict:
result = warehouse.execute(
    metrics=["revenue", "leads"],
    dimensions=["date"],
    criteria=[
        ("date", ">", "2020-01-01"),
        ("partner", "in report", dict(metrics=[...], dimensions=["partner"], criteria=[...])),
    ],
)
```

The criteria field used in "in report" or "not in report" must be a dimension in the subreport. Note that subreports are executed at Report object initialization time instead of during execute -- as such they can not be killed using Report.kill. This may change down the road.

Formula Metrics

In our example above our config included a formula-based metric called "rpl", which is simply revenue / leads. A FormulaMetric combines other metrics and/or dimensions to calculate a new metric at the Combined Layer of querying. The syntax must match your Combined Layer database, which is SQLite in our example.

```json
{
    "name": "rpl",
    "aggregation": "mean",
    "rounding": 2,
    "formula": "{revenue}/{leads}"
}
```

Divisor Metrics

As a convenience, rather than having to repeatedly define formula metrics for rate variants of a core metric, you can specify a divisor metric configuration on a non-formula metric. As an example, say you have a revenue metric and want to create variants for revenue_per_lead and revenue_per_sale. You can define your revenue metric as follows:

```json
{
    "name": "revenue",
    "type": "numeric(10,2)",
    "aggregation": "sum",
    "rounding": 2,
    "divisors": {
        "metrics": ["leads", "sales"]
    }
}
```

See zillion.configs.DivisorsConfigSchema for more details on configuration options, such as overriding naming templates, formula templates, and rounding.

Aggregation Variants

Another minor convenience feature is the ability to automatically generate variants of metrics for different aggregation types in a single field configuration instead of across multiple fields in your config file. As an example, say you have a sales column in your data and want to create variants for sales_mean and sales_sum. You can define your metric as follows:

```json
{
    "name": "sales",
    "aggregation": {
        "mean": {
            "type": "numeric(10,2)",
            "rounding": 2
        },
        "sum": {
            "type": "integer"
        }
    }
}
```

The final config would not have a sales metric, but would instead have sales_mean and sales_sum. Note that you can further customize the settings for the generated fields, such as giving them a custom name, by specifying that in the nested settings for that aggregation type. In practice it's not a big savings over just defining the metrics separately, but some may prefer this approach.

Formula Dimensions

Experimental support exists for FormulaDimension fields as well. A FormulaDimension can only use other dimensions as part of its formula, and it also gets evaluated in the Combined Layer database. As an additional restriction, a FormulaDimension can not be used in report criteria, as those filters are evaluated at the DataSource Layer. The following example assumes a SQLite Combined Layer database:

```json
{
    "name": "partner_is_a",
    "formula": "{partner_name} = 'Partner A'"
}
```

DataSource Formulas

Our example also includes a metric "sales" whose value is calculated via formula at the DataSource Layer of querying. Note the following in the fields list for the "id" param in the "main.sales" table.
These formulas are in the syntax of the particular DataSource database technology, which also happens to be SQLite in our example.

```json
"fields": [
    "sale_id",
    {"name": "sales", "ds_formula": "COUNT(DISTINCT sales.id)"}
]
```

Type Conversions

Our example also automatically created a handful of dimensions from the "created_at" columns of the leads and sales tables. Support for automatic type conversions is limited, but for date/datetime columns in supported DataSource technologies you can get a variety of dimensions for free this way.

The output of wh.print_info will show the added dimensions, which are prefixed with "lead_" or "sale_" as specified by the optional type_conversion_prefix in the config for each table. Some examples of auto-generated dimensions in our example warehouse include sale_hour, sale_day_name, sale_month, sale_year, etc.

As an optimization in the where clause of underlying report queries, Zillion will try to apply conversions to criteria values instead of columns. For example, it is generally more efficient to query as my_datetime > '2020-01-01' and my_datetime < '2020-01-02' instead of DATE(my_datetime) == '2020-01-01', because the latter can prevent index usage in many database technologies. The ability to apply conversions to values instead of columns varies by field and DataSource technology as well.

To prevent type conversions, set skip_conversion_fields to true on your DataSource config.

See zillion.field.TYPE_ALLOWED_CONVERSIONS and zillion.field.DIALECT_CONVERSIONS for more details on currently supported conversions.

Ad Hoc Metrics

You may also define metrics "ad hoc" with each report request. Below is an example that creates a revenue-per-lead metric on the fly. These only exist within the scope of the report, and the name can not conflict with any existing fields:

```python
result = wh.execute(
    metrics=["leads", {"formula": "{revenue}/{leads}", "name": "my_rpl"}],
    dimensions=["partner_name"],
)
```

Ad Hoc Dimensions

You may also define dimensions "ad hoc" with each report request. Below is an example that creates a dimension that partitions on a particular dimension value on the fly. Ad Hoc Dimensions are a subclass of FormulaDimensions and therefore have the same restrictions, such as not being able to use a metric as a formula field. These only exist within the scope of the report, and the name can not conflict with any existing fields:

```python
result = wh.execute(
    metrics=["leads"],
    dimensions=[{"name": "partner_is_a", "formula": "{partner_name} = 'Partner A'"}],
)
```

Ad Hoc Tables

Zillion also supports creation or syncing of ad hoc tables in your database during DataSource or Warehouse init. An example of a table config that does this is shown here. It uses the table config's data_url and if_exists params to control the syncing and/or creation of the "main.dma_zip" table from a remote CSV in a SQLite database. The same can be done in other database types too.

The potential performance drawbacks to such an approach should be obvious, particularly if you are initializing your warehouse often or if the remote data file is large. It is often better to sync and create your data ahead of time so you have complete schema control, but this method can be very useful in certain scenarios.

Warning: be careful not to overwrite existing tables in your database!

Technicals

There are a variety of technical computations that can be applied to metrics to compute rolling, cumulative, or rank statistics.
For example, to compute a 5-point moving average on revenue one might define a new metric as follows:

```json
{
    "name": "revenue_ma_5",
    "type": "numeric(10,2)",
    "aggregation": "sum",
    "rounding": 2,
    "technical": "mean(5)"
}
```

Technical computations are computed at the Combined Layer, whereas the "aggregation" is done at the DataSource Layer (hence the need to define both above).

For more info on how shorthand technical strings are parsed, see the parse_technical_string code. For a full list of supported technical types see zillion.core.TechnicalTypes.

Technicals also support two modes: "group" and "all". The mode controls how to apply the technical computation across the data's dimensions. In "group" mode, it computes the technical across the last dimension, whereas in "all" mode it computes the technical across all data without any regard for dimensions.

The point of this becomes clearer if you try to do a "cumsum" technical across data broken down by something like ["partner_name", "date"]. If "group" mode is used (the default in most cases) it will do cumulative sums within each partner over the date ranges. If "all" mode is used, it will do a cumulative sum across every data row. You can be explicit about the mode by appending it to the technical string: i.e. "cumsum:all" or "mean(5):group".

Config Variables

If you'd like to avoid putting sensitive connection information directly in your DataSource configs, you can leverage config variables. In your Zillion yaml config you can specify a DATASOURCE_CONTEXTS section as follows:

```yaml
DATASOURCE_CONTEXTS:
  my_ds_name:
    user: user123
    pass: goodpassword
    host: 127.0.0.1
    schema: reporting
```

Then when your DataSource config for the datasource named "my_ds_name" is read, it can use this context to populate variables in your connection url:

```json
"datasources": {
    "my_ds_name": {
        "connect": "mysql+pymysql://{user}:{pass}@{host}/{schema}"
        ...
    }
}
```

DataSource Priority

On Warehouse init you can specify a default priority order for datasources by name. This will come into play when a report could be satisfied by multiple datasources. DataSources earlier in the list will be higher priority. This would be useful if you wanted to favor a set of faster, aggregate tables that are grouped in a DataSource:

```python
wh = Warehouse(config=config, ds_priority=["aggr_ds", "raw_ds", ...])
```

Supported DataSources

Zillion's goal is to support any database technology that SQLAlchemy supports. That said, the support and testing levels in Zillion vary at the moment. In particular, the ability to do type conversions, database reflection, and kill running queries all require some database-specific code for support. The following list summarizes known support levels. Your mileage may vary with untested database technologies that SQLAlchemy supports (it might work just fine, it just hasn't been tested yet). Please report bugs and help add more support!

- SQLite: supported
- MySQL: supported
- PostgreSQL: supported
- DuckDB: supported
- BigQuery, Redshift, Snowflake, SingleStore, PlanetScale, etc: not tested but would like to support these

SQLAlchemy has connectors to many popular databases. The barrier to supporting many of these is likely pretty low given the simple nature of the SQL operations Zillion uses.

Note that the above is different than the database support for the Combined Layer database.
Currently only SQLite is supported there; that should be sufficient for most use cases, but more options will be added down the road.

Multiprocess Considerations

If you plan to run Zillion in a multiprocess scenario, whether on a single node or across multiple nodes, there are a couple of things to consider:

- SQLite DataSources do not scale well and may run into locking issues with multiple processes trying to access them on the same node.
- Any file-based database technology that isn't centrally accessible would be challenging when using multiple nodes.
- Ad Hoc DataSource and Ad Hoc Table downloads should be avoided, as they may conflict/repeat across each process. Offload this to an external ETL process that is better suited to manage those data flows in a scalable production scenario.

Note that you can still use the default SQLite in-memory Combined Layer DB without issues, as that is made on the fly with each report request and requires no coordination/communication with other processes or nodes.

Demo UI / Web API

Zillion Web UI is a demo UI and web API for Zillion that also includes an experimental ChatGPT plugin. See the README there for more info on installation and project structure. Please note that the code is light on testing and polish, but is expected to work in modern browsers. Also, ChatGPT plugins are quite slow at the moment, so currently that is mostly for fun and not that useful.

Documentation

More thorough documentation can be found here. You can supplement your knowledge by perusing the tests directory or the API reference.

How to Contribute

Please see the contributing guide for more information. If you are looking for inspiration, adding support and tests for additional database technologies would be a great help.
zillionare
zillionare

AI quantitative trading.

- Free software: MIT license
- Documentation: https://zillionare.readthedocs.io

Features

- TODO

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.0 (2020-11-01)

- First release on PyPI.
zillionare-backtest
zillionare-backtest

zillionare-backtest is the backtesting server of Zillionare. Together with zillionare-omega, zillionare-omicron, zillionare-alpha, and zillionare-trader-client, it forms the backtesting framework. zillionare-backtest provides account management, order matching, and strategy evaluation. It uses omicron to supply matching data, but you can also develop your own matching-data provider.

Unlike typical backtesting frameworks, the Zillionare backtesting framework is non-intrusive. In your strategy, you only need to integrate our trader-client and, whenever the strategy emits a trading signal, send the corresponding trade order to the backtest server to run a backtest. When the backtest is done and you switch to live trading, no strategy code changes are needed; simply point the backtest server URL at zillionare-trader-server instead. The APIs of zillionare-backtest and zillionare-trader-server are compatible almost everywhere.

This design means your strategy does not have to use the Zillionare data framework, or even zillionare-trader-client (you can define and implement your own interface layer that adapts to both your live trading API and the backtest API). Your strategy can therefore switch to whichever quant framework suits it best at any time.

Features

Account management

When you start a backtest, first create an account via start_backtest (backtest.web.interfaces.start_backtest). Knowing that account's name and token, you can later delete it via delete_accounts (backtest.web.interfaces.delete_accounts).

When the backtest finishes, remember to call stop_backtest. stop_backtest updates the assets table through the backtest end date (otherwise it is only updated through the day of the last trade, since the server is driven entirely by the client and has no notion of time itself). It does not, however, sell out the current positions, for these reasons:

- A sell would modify the transactions table, and the added transactions would not have been triggered by the strategy, which distorts the evaluation of the strategy's real behavior.
- If a backtest contains only one real trade and everything else is force-liquidated at the end, the backtest window or strategy cycle was probably poorly chosen. Force liquidation by the backtest system could mask that fact.
- At the end of a backtest, some stocks may be unsellable because they are limit-down or suspended; simulating a sell in such cases is also difficult.

Order matching

You can trade via buy (backtest.web.interfaces.buy), market_buy (backtest.web.interfaces.market_buy), sell (backtest.web.interfaces.sell), market_sell (backtest.web.interfaces.market_sell), and sell_percent (backtest.web.interfaces.sell_percent).

Status tracking

info (backtest.web.interfaces.info) returns basic account information, such as current total assets, positions, principal, and profit. positions (backtest.web.interfaces.positions) and bills (backtest.web.interfaces.bills) return the account's holdings and trade history.

Strategy evaluation

The metrics method (backtest.web.interfaces.metrics) returns the strategy's metrics, such as Sharpe, Sortino, Calmar, win rate, and max drawdown. You can also pass in a reference benchmark, and the backtest will compute the same metrics for it.

Key concepts

Adjusted-price handling

When your strategy issues buy/sell signals, it should use the current price as of order_time, not any adjusted price. If dividends or bonus shares occur while you hold a position, the backtest server automatically converts them into shares added to your position. When you finally close out the position, the bills endpoint shows these corporate actions (recorded as XDXR-type orders).

Matching mechanism

At match time, backtest first fetches quotes from the data feeder at and after order_time. It then removes bars that are in a limit-up/limit-down state (for a buy order, bars already at limit-up are removed, and vice versa). Among the remaining bars, backtest selects those priced below the order price (for a sell order, those priced above it) and matches the order volume against them in sequence until the order is fully filled. Finally, it volume-weights the matched bars' prices to obtain the average fill price. (A simplified sketch of this rule appears at the end of this description.)

When backtest uses zillionare-feed to supply matching data, order-book data is unavailable, so zillionare-feed matches using the close price and volume of minute-level bars. As a result, a given minute's high or low may have met your order price without backtest filling the order. This design mainly accounts for the fact that when a stock hits its high or low, the volume at that moment may not suffice to fill the order; the current design likely makes strategies more robust.

As an exception, if order_time is before 9:31, backtest matches using the open price of the 9:31 minute bar instead of its close, to serve strategies that need to buy at the next day's open.

Also note that zillionare-feed substitutes minute bars for order-book data. In most cases this has no impact, but the two are not the same. Traded volume is generally smaller than the bid/ask volume in the order book, so insufficient volume in a backtest will not necessarily occur in live trading; in that case, consider lowering the strategy's principal. Another difference is that the minute fill price will not equal the order-book fill price, which introduces some error. Over the long run this error should be zero-mean, so it should not materially affect most strategies.

Info: once you understand the matching mechanism, it should be clear that setting the strategy's principal correctly reduces the backtest's systematic error.

Buy/sell orders

Buy volume must be a multiple of 100 shares, consistent with live trading. Your broker restricts sell orders too, but the backtest server does not; we judged that dropping this restriction does not affect strategy validity while simplifying strategy code.

Saving and querying backtest state

If a backtest run looks good, you can save its state (transactions, metrics, bills, positions) along with the strategy parameters and description via the save_backtest endpoint, and retrieve it later by calling load_backtest.

Suspension handling

If a held stock is currently suspended, its market value is computed from the last close before suspension. As a performance optimization, if a stock has been suspended for more than 500 trading days, the system stops searching back for the pre-suspension close and uses the average buy price instead; such cases should be quite rare.

Version history

For the version history, see the version history document.

Credits

The zillionare-backtest project was created with Python Project Wizard.
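To make the matching mechanism above concrete, here is a minimal sketch of the limit-buy rule (illustrative only, not the server's actual code; bar fields follow the close/volume convention used by zillionare-feed):

```python
# Illustrative sketch of the limit-buy matching rule described above.
# `bars` are minute bars at/after order_time, with limit-up bars already
# removed; each bar carries "close" and "volume" fields.
def match_buy(bars, order_price, order_volume):
    filled = 0
    cost = 0.0
    for bar in bars:
        if bar["close"] > order_price:  # bar priced above the order price
            continue
        take = min(order_volume - filled, bar["volume"])
        filled += take
        cost += take * bar["close"]
        if filled >= order_volume:
            break
    avg_price = cost / filled if filled else None  # volume-weighted fill price
    return avg_price, filled
```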
zillionare-core-types
1. Usage

To use zillionare core types in a project:

```python
from coretypes import Frame, FrameType
```

2. Features

This module provides the core type definitions used across Zillionare. Mainly:

- Basic data structure definitions, such as the time frame type FrameType (corresponding to string definitions like '1m' or '1d' used by other frameworks), the date/time type Frame, and the security type SecurityType. You should use these type definitions almost everywhere quotes data is handled.
- Trading error types, such as NocashError (insufficient cash to complete a trade).
- The QuotesFetcher interface definition. To plug another data source into Zillionare, implement this interface and return data as specified. Once implemented, the adaptor can be configured in the zillionare-omega configuration file so that omega automatically loads it to fetch data.

2.1. Basic data structure definitions

There are two kinds of basic data structure definitions. One kind is used for static type checking; IDEs and tools like mypy use them to detect coding errors or provide autocompletion. BarsArray is such a type: we can use it to declare the return type of a quotes function. As of current Python versions (up to Python 3.8), this type information is not accessible at runtime. The other kind is runtime types, such as FrameType.

2.1.1. FrameType

Quotes data is packaged by frame: for example, each 1-minute unit wraps OHLC, volume, and so on. Units of 5 minutes, 15 minutes, daily, etc. are also common. FrameType enumerates the frame types commonly used in Zillionare. Other software may call this a "unit" or "period"; FrameType is arguably the most precise term. Zillionare provides the following frame types:

| Period | String | Type | Value |
| --- | --- | --- | --- |
| yearly | 1Y | FrameType.YEAR | 10 |
| quarterly | 1Q | FrameType.QUARTER | 9 |
| monthly | 1M | FrameType.MONTH | 8 |
| weekly | 1W | FrameType.WEEK | 7 |
| daily | 1D | FrameType.DAY | 6 |
| 60-minute | 60m | FrameType.MIN60 | 5 |
| 30-minute | 30m | FrameType.MIN30 | 4 |
| 15-minute | 15m | FrameType.MIN15 | 3 |
| 5-minute | 5m | FrameType.MIN5 | 2 |
| 1-minute | 1m | FrameType.MIN1 | 1 |

FrameType also supports the comparison operators <, <=, >= and >.

2.1.2. SecurityType

Common traded security types:

| Type | Value | Description |
| --- | --- | --- |
| SecurityType.STOCK | stock | stock |
| SecurityType.INDEX | index | index |
| SecurityType.ETF | etf | ETF fund |
| SecurityType.FUND | fund | fund |
| SecurityType.LOF | lof | LOF fund |
| SecurityType.FJA | fja | structured fund, A share |
| SecurityType.FJB | fjb | structured fund, B share |
| SecurityType.BOND | bond | bond fund |
| SecurityType.STOCK_B | stock_b | B share |
| SecurityType.UNKNOWN | unknown | unknown type |

One use is to query which codes in the security list are stocks:

```python
secs = await Security.select().types(SecurityType.STOCK).eval()
print(secs)
```

2.1.3. MarketType

Market types. Zillionare supports XSHG and XSHE:

| Type | Value | Description |
| --- | --- | --- |
| MarketType.XSHG | XSHG | Shanghai Stock Exchange |
| MarketType.XSHE | XSHE | Shenzhen Stock Exchange |

2.1.4. bars_dtype

In Zillionare we generally store quotes data in NumPy structured arrays, so that many of numpy's high-performance algorithms can be used. They are also more memory-efficient than a pandas.DataFrame, and on small datasets (under roughly 500k rows) most operations (though not every one) are faster. Using structured arrays for quotes data requires a field definition; bars_dtype is that definition, with the fields (frame, open, high, low, close, volume, amount, factor):

```python
bars_dtype = np.dtype([
    ("frame", "datetime64[s]"),
    ("open", "f4"),
    ("high", "f4"),
    ("low", "f4"),
    ("close", "f4"),
    ("volume", "f8"),
    ("amount", "f8"),
    ("factor", "f4"),
])
```

2.1.5. bars_dtype_with_code

bars_dtype with an additional code field, for storing quotes of multiple securities together:

```python
bars_dtype_with_code = np.dtype([
    ("code", "O"),
    ("frame", "datetime64[s]"),
    ("open", "f4"),
    ("high", "f4"),
    ("low", "f4"),
    ("close", "f4"),
    ("volume", "f8"),
    ("amount", "f8"),
    ("factor", "f4"),
])
```

2.1.6. bars_cols, bars_with_limit_dtype, bars_with_limit_cols

bars_cols is the list of field names defined in bars_dtype; you often need it when converting between numpy and pandas DataFrames. bars_with_limit_dtype provides a quotes dtype that includes limit-up/limit-down prices, and bars_with_limit_cols is the list of field names defined in bars_with_limit_dtype.

2.1.7. BarsArray

A static type usable as the type hint for quotes data (commonly held in a variable named bars), corresponding to bars_dtype.

2.1.8. BarsWithLimitArray

Like BarsArray, but with limit-up/limit-down prices, corresponding to bars_with_limit_dtype.

2.1.9. BarsPanel

The type hint corresponding to bars_dtype_with_code.

2.1.10. xrxd_info_dtype

The dtype for ex-rights/ex-dividend information.

2.1.11. security_info_dtype

Defines the fields of the security list.

2.2. Trade Errors

coretypes.errors.trade defines the exception types that commonly occur during trading; they are needed across TradeClient, TraderServer, and the Backtesting Server. Trade errors are split into three kinds: coretypes.errors.trade.client.* (client-side coding or parameter errors), coretypes.errors.trade.server.* (server internal errors), and coretypes.errors.trade.entrust.* (order errors).

Tips: for developers, if such an exception needs to be delivered to the client, serialize it with TraderError.as_json before sending it over the network; the client can then restore it with TraderError.from_json. To ease debugging, the server can pass with_stack=True when creating a TradeError, so the resulting TraderError (and its subclasses) carries call-stack information in its stack attribute:

```python
def foo():
    try:
        raise TraderError("mock error", with_stack=True)
    except TradeError as e:
        print(e.stack)
```
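Before moving on, a quick end-to-end illustration of the structured-array convention from section 2.1 (a minimal sketch; it assumes bars_dtype is importable from coretypes like the names shown in section 1):

```python
import numpy as np
from coretypes import bars_dtype  # assumed importable alongside Frame/FrameType

# One minute bar: (frame, open, high, low, close, volume, amount, factor)
bars = np.array(
    [(np.datetime64("2023-01-03T09:31:00"), 10.0, 10.2, 9.9, 10.1, 1e6, 1.01e7, 1.0)],
    dtype=bars_dtype,
)
print(bars["close"][0])  # field access by name
```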
2.3. QuotesFetcher

Zillionare currently only ships an adaptor for the JoinQuant (聚宽) data source, but QuotesFetcher lets you adapt other data sources yourself. You need to implement the interface defined in QuotesFetcher and then load your implementation in omega's configuration file. For a concrete implementation see omega-jqadaptor; for configuration see omega-config:

```yaml
# defaults.yaml
quotes_fetchers:
  - impl: jqadaptor  # there must be a create_instance method in this module
    account: ${JQ_ACCOUNT}
    password: ${JQ_PASSWORD}
```

3. Credits

This project was created with ppw and follows the code style and quality standards defined by ppw.
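A hypothetical adaptor module skeleton, following the note above that the configured module must expose a create_instance factory (the method names and signatures here are illustrative, not the actual QuotesFetcher interface):

```python
# my_adaptor.py -- hypothetical quotes-fetcher adaptor skeleton.
class MyFetcher:
    """Would implement the methods defined by the QuotesFetcher interface."""

    async def get_bars(self, code, end, n_bars, frame_type):
        ...  # fetch from your data source and return data in bars_dtype format


async def create_instance(account=None, password=None, **kwargs):
    # omega calls this factory with the configured credentials
    return MyFetcher()
```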
zillionare-em
em

- Free software: MIT
- Documentation: https://em.readthedocs.io

Features

- TODO

Credits

This package was created with Cookiecutter and the zillionare/cookiecutter-pypackage project template.
zillionare-omega
A fast, distributed, local quotes server.

Introduction

Omega provides data services for the Zillionare intelligent quantitative trading platform. It is a distributed, high-performance quotes server whose core features are:

- Concurrent connections to multiple upstream data sources. If a source also supports multiple accounts and concurrent sessions, Omega can fully exploit that capability to obtain real-time quotes as fast as possible. An official adaptor for the JoinQuant data source is provided.
- High-performance, tiered local data storage that cleverly balances performance against storage space. Quotes data that is read at high frequency is stored directly in Redis; financial statement data changes only once a quarter and is read far less often, so it is kept in a relational database. This arrangement offers the best compute performance for every trading style.
- Excellent deployment scalability. Omega can be deployed on a single machine or across several machines, according to your data throughput needs, serving individuals, studios, and large teams alike.
- Batteries included. We provide data from 2015 onward at 30-minute resolution and above, distributed at high speed over a CDN. After installing Omega, you can sync this large dataset to your local database in as little as ten-odd minutes.

Help documentation

Acknowledgements

Zillionare-Omega is built with the following: PyCharm Open Source Project support program.
zillionare-omega-adaptors-jq
jqadatpor

Jqdatasdk adapter for zillionare omega. Usually you don't need to install/deploy this library by yourself.

- Free software: MIT license
- Documentation: https://jqadatpor.readthedocs.io

Features

- Fetch quotation data from JoinQuant and convert it to a zillionare-compatible format.

History

1.2

- core-types upgraded to 0.5.1. After the upgrade, the data types returned by the quotes interfaces change accordingly, so this is a breaking update.

1.1

Change list:

- Added ex-rights/ex-dividend information queries.
- The security list interface get_security_list gained a query-date parameter.

1.0.8

Change list:

- Added limit-up/limit-down queries.
- Added quota queries.
- Made the interfaces asynchronous.

1.0.3 (2021-3-31)

Change list:

- The fetcher no longer retries fetching data after a failed login. This is friendlier to the server.
- Fix: after switching the readme/history files to markdown, we forgot to correct MANIFEST.in, which caused tox to fail.

1.0.2 (2021-3-30)

This is a patch just to add release notes. It's identical to 1.0.1 in the binary sense.

Change list:

- Add release notes.

1.0.1 (2021-3-30)

This is the first official release of zillionare-omega-adaptors-jq.

Features:

- get_bar
- get_security_list
- get_all_trade_days
- get_bars_batch
- get_valuation

Bug fixes: github #2, #3.

1.0.0 (2021-3-29) (yanked)

Due to a severe bug found (github #3), please don't use this version.

Bug fixes: 1. Pin the versions of jqdatasdk and sqlalchemy; the recent sqlalchemy update (to 1.4) caused several incompatibility issues. 2. Remove the dependency on omicron. 3. Fix a timezone issue in get_bars/get_bars_batch, see #2.

0.3.2 (2020-12-04)

Alpha release of 0.3.

Features:

- get_bar
- get_security_list
- get_all_trade_days
- get_bars_batch
- get_valuation

0.1.0 (2020-04-10)

- First release on PyPI.
zillionare-omicron
Omicron - Core Library for Zillionare

Contents

- Installation

Introduction

Omicron is Zillionare's core shared module. It implements the data access layer and provides quotes, market capitalization, the trading calendar, security lists, time operations, triggers, and more to the other modules.

Usage documentation

Credits

Zillionare-Omicron is built with:

- Cookiecutter
- Cookiecutter-pypackage
zillionare-pluto
pluto

Skeleton project created by Python Project Wizard (ppw).

- Free software: MIT
- Documentation: https://zillionare.github.io/pluto/

Features

- TODO

Credits

This package was created with the ppw tool. For more information, please visit the project page.
zillionare-ths-boards
boards

A project that localizes Tonghuashun (同花顺) concept-board and industry-board data.

- Free software: MIT
- Documentation: https://zillionare.github.io/boards/

Features

Automatic syncing

After starting the server with boards serve, board data is synced automatically at 5 a.m. each day and saved under that day's date. Note that we use akshare to fetch the board data from Tonghuashun. The corresponding akshare interfaces take no time parameter, i.e. every sync can only fetch the latest board data. If Tonghuashun updates its board data after 5 a.m. on a given day, those updates will not be reflected in the data indexed under that day's date.

Board operations

Helpers are provided to get a board's name from its code (get_name), look up a code by name (get_code), and fuzzy-search board names (fuzzy_match_board_name). In addition, a filter method finds stocks that belong to several boards at once. (A usage sketch appears at the end of this description.)

Finding newly added concept boards

New concept boards are often recent speculation hotspots. You can use ConceptBoard.find_new_concept_boards to query which boards are newly added. This feature does not apply to industry boards.

Finding stocks newly added to a concept board

For a given concept, newly added stocks can be a sign that capital is about to move in. ConceptBoard.new_members_in_board returns the list of stocks newly added to a concept board.

Command line interface

A CLI is provided to start and stop the service and to run some queries; see the documentation for details.

Query stocks that belong to several concept boards at once:

```
boards filter --industry 计算机应用 --with-concpets 医药 医疗器械 --without 跨境支付
```

Miscellaneous

boards uses akshare to download the data. Downloads are slow, and the server may refuse to respond. In that case boards backs off in an annealing fashion, automatically slowing down and retrying up to 5 times, to ensure the data is eventually downloaded in full without the IP getting banned. During this process you may see output like the following, which is normal:

```
Document is empty, retrying in 30 seconds...
Document is empty, retrying in 30 seconds...
Document is empty, retrying in 30 seconds...
Document is empty, retrying in 60 seconds...
Document is empty, retrying in 120 seconds...
```

Credits

This package was created with the ppw tool. For more information, please visit the project page.
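A hypothetical sketch of the lookup helpers named above (the import path and call style are assumptions; only the method names come from the feature list):

```python
# Hypothetical usage sketch -- import path and signatures are assumed.
from boards import ConceptBoard

print(ConceptBoard.get_code("跨境支付"))            # board name -> code
print(ConceptBoard.get_name("885788"))              # code -> board name (code made up)
print(ConceptBoard.fuzzy_match_board_name("支付"))  # fuzzy name search
```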
zillionare-trader-client
Zillionare trader client

trader-client is the trading client of the Zillionare quant framework. It offers nearly identical interfaces for backtesting and live trading, so a strategy that has been backtested can switch seamlessly to a live environment.

Features

- Trade in live and backtest modes.
- Fetch basic account information, such as principal, assets, positions, profit/loss, and profit/loss ratio.
- Trading functions: buy (limit and market), sell (limit and market), order cancellation, etc.
- Query orders, fills, and positions (for the current day or a given date).
- Query account evaluation metrics over a period, such as Sharpe, Sortino, Calmar, volatility, win rate, and max drawdown, plus the same metrics for a reference benchmark over the same period.

Warning: in backtest mode, operations that can change account state, such as buy and sell, must be executed strictly in chronological order, as in this example:

```python
client.buy(..., order_time=datetime.datetime(2022, 3, 1, 9, 31))
client.buy(..., order_time=datetime.datetime(2022, 3, 4, 14, 31))
client.buy(..., order_time=datetime.datetime(2022, 3, 4, 14, 32))
client.sell(..., order_time=datetime.datetime(2022, 3, 7, 9, 31))
```

is a correct execution order, but the following order will necessarily produce wrong results (the server will in fact detect this and raise an error):

```python
client.buy(..., order_time=datetime.datetime(2022, 3, 1, 14, 31))
client.buy(..., order_time=datetime.datetime(2022, 3, 1, 9, 31))
client.sell(..., order_time=datetime.datetime(2022, 3, 7, 9, 31))
```

Strategies must decide for themselves whether to allow such situations to occur, and what the consequences would be if they do.

Credits

This package was created with the zillionare/python project wizard project template.
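A hypothetical end-to-end sketch built from the feature list above (the client class name, constructor arguments, and the metrics signature are assumptions, not taken from the project docs):

```python
import datetime

# Hypothetical names/signatures -- adjust to the real client API.
from traderclient import TraderClient

client = TraderClient(url="http://localhost:7080/backtest/api/trade/v0.2",
                      acct="my-strategy", token="...")
client.buy(security="002537.XSHE", price=9.5, volume=100,
           order_time=datetime.datetime(2022, 3, 1, 9, 31))
client.sell(security="002537.XSHE", price=9.8, volume=100,
            order_time=datetime.datetime(2022, 3, 7, 9, 31))
print(client.metrics(baseline="399300.XSHE"))  # Sharpe, Sortino, win rate, ...
```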
zilliqa-etl
Zilliqa ETL CLI

Zilliqa ETL CLI lets you convert Zilliqa data into JSON newline-delimited format. Full documentation available here.

Quickstart

Install Zilliqa ETL CLI:

```
pip3 install zilliqa-etl
```

Export directory service blocks (Schema, Reference):

```
> zilliqaetl export_ds_blocks --start-block 1 --end-block 500000 \
    --output-dir output --provider-uri https://api.zilliqa.com
```

Find other commands here. For the latest version, check out the repo and call:

```
> pip3 install -e .
> python3 zilliqaetl.py
```

Useful Links

- Schema
- Command Reference
- Documentation

Running Tests

```
> pip3 install -e .[dev]
> export ZILLIQAETL_PROVIDER_URI=https://api.zilliqa.com
> pytest -vv
```

Running Tox Tests

```
> pip3 install tox
> tox
```

Running in Docker

1. Install Docker: https://docs.docker.com/install/

2. Build a docker image:

```
> docker build -t zilliqa-etl:latest .
> docker image ls
```

3. Run a container out of the image:

```
> docker run -v $HOME/output:/zilliqa-etl/output zilliqa-etl:latest export_ds_blocks -s 1 -e 500000 -o output
```
zillogaussdist
No description available on PyPI.
zillow
zillow

Description

Install

```
pip install zillow
# or
pip3 install zillow
```
zillowAPI
No description available on PyPI.
zillow-api-s
What is Zillow API

Zillow is a large online real estate marketplace where you can find all properties for sale or rent. Using the Zillow API or the no-code Zillow scraper, you can connect to this vast database and retrieve real estate listings based on specific search parameters.

Using the Zillow data API, you can collect up-to-date real estate data without worrying about blocking, rotating proxies, and other difficulties you might encounter in the development process. It makes it easy to get data such as a full address, coordinates (longitude and latitude), price, property description, and many other attributes characterizing the property, such as the number of bedrooms and bathrooms.

Install

```
pip install zillow-api-s
```

API Key

To use the Zillow API, you need an API key. You can get an API key and some credits for free by signing up on Scrape-It.Cloud.

Using

To understand how to use the Zillow API, look at this example:

```python
from zillow_api import ZillowAPI

zillow_api_client = ZillowAPI("INSERT_YOUR_API_KEY")

search_result = zillow_api_client.search(params={
    "keyword": "new york, ny",
    "type": "forSale",
    "price[min]": 1000000,
    "price[max]": 2000000,
    "homeTypes": ["house", "apartment"],
})
print(search_result)
```

In the Zillow API, there are both required and optional request parameters that you can use to customize your search. Here's an overview of the difference between the two.

Required Parameters:

- keyword (string): this parameter is required and is used to specify the keyword used to search for listings. It represents the search query or location you want to retrieve listings for.
- type (enum): this parameter is required and determines the type of listing you want to retrieve. The possible values are: forSale, forRent, or sold.

Optional Parameters:

- sort (enum): allows you to specify the sorting option for the search results. The possible values include various sorting criteria such as verifiedSource, homesForYou, priceHighToLow, and more.
- price (object): allows you to filter listings based on the price range. It consists of min and max values representing the minimum and maximum price of the listing.
- beds (object): enables you to filter listings based on the number of bedrooms. It includes min and max values to define the minimum and maximum number of bedrooms.
- baths (object): similar to beds, this parameter allows you to filter listings based on the number of bathrooms. It includes min and max values to define the minimum and maximum number of bathrooms.
- Several other optional parameters are available, such as yearBuilt, lotSize, squareFeet, homeTypes, etc.

Using these additional options, you can narrow your search criteria and get listings that meet your requirements: for example, include or exclude certain features, set price ranges, specify the number of bedrooms or bathrooms, and more.

You can also retrieve data for a specific property using the following function:

```python
from zillow_api import ZillowAPI

zillow_api_client = ZillowAPI("INSERT_YOUR_API_KEY")

property_result = zillow_api_client.property(params={
    "url": "https://www.zillow.com/homedetails/301-E-79th-St-APT-23S-New-York-NY-10075/31543731_zpid/"
})
print(property_result)
```

You can learn more about the additional options and the response format on the documentation page.

Zillow API use cases

The Zillow API has many use cases for developers, businesses, and individuals in the real estate industry.
For example, you can use the Zillow API to gather data on real estate trends, market values, and property information. You can also use it to provide location-based services, such as nearby amenities, schools, and transportation options, which can help people make informed decisions about a property's suitability based on its surroundings. Market research firms can use the Zillow API to gain insight into market trends, demographics, and real estate statistics.

Zillow Python API for Real Estate Scraping

Python is one of the most popular programming languages for web scraping and data processing, so you can build a versatile tool for collecting and processing real estate data using the Python Zillow API.

Proxy or Zillow API

If you need to collect real estate data from Zillow and you are writing your own scraper, you have two options:

- Research the structure of Zillow and write your own data collection algorithm. Here you will need to use proxies to avoid blocking, and you may also need to plug in captcha-solving services.
- Use the Zillow API for Python. You don't need to use proxies here, since the API already uses them.

Which option is more suitable is a decision each developer makes based on the project's tasks and requirements. If you want to know more about Zillow scraping, you can read our article about Zillow scraping in Python.
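Putting the optional parameters described above together, a filtered search might look like the following sketch (the flat min/max encoding for beds and baths is assumed to follow the price[min]/price[max] convention shown in the first example; exact encoding may vary):

```python
from zillow_api import ZillowAPI

zillow_api_client = ZillowAPI("INSERT_YOUR_API_KEY")

# Sketch: combine optional filters. The beds[min]/baths[min] keys are an
# assumption modeled on the documented price[min]/price[max] style.
search_result = zillow_api_client.search(params={
    "keyword": "austin, tx",
    "type": "forRent",
    "beds[min]": 2,
    "baths[min]": 1,
    "sort": "priceHighToLow",
})
print(search_result)
```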
zillowpy
UNKNOWN
zilong
No description available on PyPI.
zilonis
No description available on PyPI.
zim2txt
zim2txt is a Python module that scrapes through a .zim file and creates .txt files from each article it contains. This tool is designed for Linux systems, but it works with WSL (Windows Subsystem for Linux) as well. You must install zim-tools in advance (sudo apt-get install zimtools) for this module to work. Here is how to use the module:

```python
import zim2txt
zim2txt.ZimTools.Export(
    "Path for .zim file",
    "Path for a temporary folder that will be deleted later",
    "Path for .txt files to be saved (do not use same path as temporary files)",
    "encoding method, default set to utf8",
)

# Example
import zim2txt
zim2txt.ZimTools.Export("/data/articles.zim", "/usr/games/newfolder")
# You don't have to pass the encoding method any argument if you're cool with utf8
```

Note on the temporary folder: the author used /usr/games/newfolder with WSL, since it didn't work for any folder outside the root directory; if it does for you, you can use any other folder as well.
zima
======================
Zima imageboard engine
======================

Zima is an imageboard engine that tries to keep all the good things from `wakaba <http://wakaba.c3.cx/>`_ on the one side and introduces a couple of differences on the other.

**Nota Bene!** This is a very early version of this software. Do not expect production quality; use it at your own risk.

Installation
============

Download the source tarball, unpack it and run::

    python setup.py install

This will install the ``zimabbs`` python package, the ``zima.py`` and ``zwipe.py`` scripts in your ``local/bin`` directory and create two directories:

* ``/var/zima`` -- contains imageboard resources;
* ``/etc/zima.d`` -- where configuration is stored.

To start the server type::

    zima.py

Using mongoDB and CherryPy
==========================

By default zima uses its own ad-hoc ``memory`` backend, which keeps all data in RAM and has no data persistence. This is useful for development but not for production. Alternatively you can use mongoDB, which keeps data more reliably. To plug mongoDB into zima you need to install the ``pyMongo`` package. After that, change the ``db`` section of your config (which is ``/etc/zima.d/config.py``) in the following way::

    'db': {
        ...
        'backend': 'mongo',
        'iface': 'localhost:27017',
    },

To use the ``CherryPy`` server instead of bottle.py's standard development server ``WSGIRefServer``, install the CherryPy package and modify the config in the following way::

    from bottle import CherryPyServer
    server = {
        ...
        'frontend': CherryPyServer,
    }
zimage
zimageZen image library.
zimagi
No description available on PyPI.
zimatise
No description available on PyPI.
zimbra-permissions-inspector
Zimbra Permissions Inspector

This CLI utility allows you to query sending permissions on a Zimbra server. You can retrieve sending permissions for a particular Zimbra Distribution List (ZDL) or get a complete list of 'Send as' permissions. The most basic query lists all the existing ZDLs (both dynamic and static).

Requirements

Make sure you meet the following requirements:

- Set up the Zimbra LDAP admin user on your Zimbra server.
- You need the python-ldap package.
- Tested on Zimbra v8.8.12.

Installation

You can install it with pip:

```
pip install zimbra-permissions-inspector
```

Usage

Help output:

```
usage: zimbra-permissions-inspector.py [-h] [-l ZDL] [-sa] [-v] SERVER BASEDN LDAP_ADMIN

Query sending permissions on a Zimbra server

positional arguments:
  SERVER             URI formatted address of the Zimbra server
  BASEDN             Specify the searchbase or base DN of the Zimbra LDAP server
  LDAP_ADMIN         Admin user of the Zimbra LDAP server

optional arguments:
  -h, --help         show this help message and exit
  -l ZDL, --zdl ZDL  Query which Zimbra accounts have permissions to send mails to the given ZDL
  -sa, --sendas      Query 'send as' permissions on both Zimbra accounts and ZDLs
  -v, --version      Show current version
```

Note that positional arguments are mandatory!

Usage examples

If no optional arguments are given, it'll list all the existing ZDLs (both dynamic and static):

```
zimbra-permissions-inspector ldap://zimbra.somecorp.com dc=somecorp,dc=com uid=zimbra,cn=admins,cn=zimbra
```

Query which Zimbra accounts have permissions to send mails to a ZDL (Zimbra Distribution List) named "zdl-list":

```
zimbra-permissions-inspector ldap://zimbra.somecorp.com dc=somecorp,dc=com uid=zimbra,cn=admins,cn=zimbra -l zdl-list
```

Get a list of "send as" permissions:

```
zimbra-permissions-inspector ldap://zimbra.somecorp.com dc=somecorp,dc=com uid=zimbra,cn=admins,cn=zimbra -sa
```
zimbraweb
Python Zimbra Web

Usage

For the entire documentation please see https://TINF21CS1.github.io/python-zimbra-web. The documentation for the develop branch can be found here: https://TINF21CS1.github.io/python-zimbra-web/develop/

You can use ZimbraUser to send e-mails. You can send multiple e-mails within a single session.

```python
from zimbraweb import ZimbraUser

user = ZimbraUser("https://myzimbra.server")
user.login("s000000", "hunter2")
user.send_mail(to="[email protected]", subject="subject", body="body", cc="[email protected]")
user.logout()
```

Sending EMLs

Please note the Limitations when trying to parse EML.

```python
from zimbraweb import ZimbraUser

user = ZimbraUser("https://myzimbra.server")
user.login("s000000", "hunter2")

emlstr = open("myemlfile.eml").read()
user.send_eml(emlstr)
```

Sending raw WebkitPayloads

If you don't want to rely on us to generate the payload, you can generate a payload yourself and send it:

```python
from zimbraweb import ZimbraUser

user = ZimbraUser("https://myzimbra.server")
user.login("s000000", "hunter2")

# you could also generate the payload yourself or use our library
raw_payload, boundary = user.generate_webkit_payload(to="[email protected]", subject="hello world!", body="this is a raw payload.")

# then send the raw_payload bytes
user.send_raw_payload(raw_payload, boundary)

user.logout()
```

Attachments

You can generate attachments using the WebkitAttachment class:

```python
from zimbraweb import ZimbraUser, WebkitAttachment

user = ZimbraUser("https://myzimbra.server")
user.login("s000000", "hunter2")

attachments = []
with open("myfile.jpg", "rb") as f:
    attachments.append(WebkitAttachment(content=f.read(), filename="attachment.jpg"))

user.send_mail(to="[email protected]", subject="subject", body="body", attachments=attachments)
user.logout()
```

Known Limitations

- Emoji is not supported, even though other UTF-8 characters are. See Issue #3.
- This package is made with German UIs in mind. If your UI is in a different language, feel free to fork and adjust the language-specific strings as needed. See Issue #43.
- The EML parsing can strictly only parse plaintext emails, optionally with attachments. Any emails with a Content-Type other than text/plain or multipart/mixed will be rejected. This is because the Zimbra web interface does not allow HTML emails. Parsing multipart/mixed will only succeed if there is exactly one text/plain part and, optionally, attachments with the Content-Disposition: attachment header. If there are any multipart/alternative parts, the parsing will fail because we cannot deliver them to the Zimbra web interface.

Install

```
pip install zimbraweb
```

Contributing

Best practice is to develop in a python3.8 virtual env: python3.8 -m venv env, then source env/bin/activate (Unix) or env\Scripts\activate.ps1 (Windows).

1. Install dev requirements: pip install -r requirements_dev.txt
2. When working on a new feature, check out a branch with git checkout -b feature_myfeaturename. We are using this branching model.
3. Before committing, check that mypy src returns no failures, flake8 src tests returns no problems, and pytest has no unexpected failed tests.
4. Optionally, test with tox. It might take a few minutes, so maybe only run it before a push.

Development Install

```
$ git clone https://github.com/TINF21CS1/python-zimbra-web/
$ cd python-zimbra-web
$ pip install -e .
```

This installs the package with a symlink, so the package is automatically updated when files are changed. It can then be called in a python console.
zimfarm
Overview

Zimfarm client provides a way to programmatically interact with Zimfarm services.

Requirements

- Python 3.7.0 or up
- An Internet connection
- Username and password

How to run

Option 1: using source code

1. Clone this repository and go into the root of the repository
2. Make a virtual environment (optional)
3. Install dependencies: pip install -r requirements.txt
4. Set the PYTHONPATH environment variable: export PYTHONPATH=$PWD
5. Start writing code and run it, e.g. python example/list_wokers.py

Option 2: install from pip

Coming soon...
zimmauth
zimmauthEncrypt and store authentication data for s3 and ssh.
zimp
zimpA pip package for simplifying texts for NLP pipelines
zim-places
Features

This is a python package that allows you to search for cities, provinces, and districts in Zimbabwe. Zimbabwe is split into eight provinces and two cities that are designated as provincial capitals. The provinces are organized into 59 districts, and the districts into 1,200 wards. Visit the project homepage for further information on how to use the package.

Installation

To install zim-places, open a shell or terminal and run:

```
pip install zim-places
```

Usage

Get all wards:

```python
from zim_places import wards
print(wards.get_wards())
```

Get all districts:

```python
from zim_places import districts
print(districts.get_districts())
```

Get all cities:

```python
from zim_places import cities
print(cities.get_cities())
```

Get all provinces:

```python
from zim_places import provinces
print(provinces.get_provinces())
```

```python
from zim_places import *
import json

# Get the data as json
print(get_cities())
print(get_wards())
print(get_provinces())
print(get_districts())

# Get the data as a list of dictionaries; remember you can customize the list to suit your needs
data = json.loads(get_wards())
list_of_wards = [{i['Ward'] + ' ' + i['Province_OR_District']} for i in data.values()]
print(list_of_wards)

data = json.loads(get_districts())
list_of_districts = [{i['District'] + ' ' + i['Province']} for i in data.values()]
print(list_of_districts)

data = json.loads(get_provinces())
list_of_provinces = [{i['Province'] + ' ' + i['Capital'] + i['Area(km2)'] + i['Population(2012 census)']} for i in data.values()]
print(list_of_provinces)

data = json.loads(get_cities())
list_of_cities = [{i['City'] + ' ' + i['Province']} for i in data.values()]
print(list_of_cities)
```

License

The project is licensed under the MIT license.

Change Log

2.0.0 (24/01/22)

- First release of the Zim Places library, which assists developers in providing names of places in Zimbabwe.
zimple
Namespace package for Zimple packages
zimplewiki
UNKNOWN
zimply
ZIMply is an easy to use, offline reader for Wikipedia - as well as other ZIM files - which provides access to them through any ordinary browser. ZIMply is written entirely in Python 3 and, as the name implies, relies on ZIM files. Each ZIM file is a bundle containing thousands of articles, images, etc. as found on websites such as Wikipedia. ZIMply does all the unpacking for you, and allows you to access the offline Wikipedia right from your web browser by running its own web server.
zimply-core
ZIMply CoreZIMply Core is an unstable fork ofZIMply, intended for developers who would like to use its pure Python zim file parser without the additional dependencies.UsageInstall zimply-core from pypi:pip install zimply-coreNow, in Python code, you can import from zimply_core:from zimply_core import ZIMClient zim_file = ZIMClient("wikipedia.zim", encoding="utf-8", auto_delete=True) print("{name} contains {count} articles".format( name=zim_file.get_article("M/Name").data.decode(), count=zim_file.get_namespace_count("A") ))
zimports
Reformats Python imports so that they can pass flake8-import-order. This is roughly:

- one import per line
- alphabetically sorted, with stylistic options for how dots, case sensitivity, and dotted names are sorted
- grouped by builtin / external library / current application (also stylistically controllable)
- unused imports removed, using pyflakes to match "unused import" warnings to actual lines of code
- duplicate imports removed (note this does not yet include duplicate symbol names against different imports)
- no star imports (e.g. from <foo> import *); these are rewritten as explicit names, by importing all the names from each target module and then removing all the unused names
- support for TYPE_CHECKING import blocks

The program currently bolts itself on top of flake8-import-order, in order to reuse the import classification and sorting styles that tool provides. Without options given, the script will look directly for a setup.cfg file with a [flake8] section and will consume the flake8-import-order parameters "application-import-names", "application-package-names", and "import-order-style", to sort imports exactly as this linter then expects to find them. All of the single-line import styles, e.g. google, cryptography, pycharm, should just work.

Special classifications can be given to imports, as either a "# noqa" comment indicating the import should not be removed, and optionally the comment "# noqa nosort" which will place the import into a special "don't sort" category, placing all of the "nosort" imports in the order they originally appeared, grouped after all the sorted imports. This can be used for special situations where a few imports have to be in a certain order against each other (SQLAlchemy has two lines like this at the moment).

The application also does not affect imports that are inside of conditionals or defs, or otherwise indented in any way, with the exception of TYPE_CHECKING imports. This is also the behavior of flake8-import-order; only imports in column zero of the source file are counted, although imports that are on lines below other definitions are counted, which are moved up to the top section of the source file.

Note: this application runs in Python 3 only. It can reformat imports for Python 2 code as well, but internally it uses library and language features only available in Python 3.

zzzeek why are you writing one of these, there are a dozen pep8 import fixers

I've just gone through a whole bunch. I need one that:

- works directly with flake8-import-order so we are guaranteed to have a match
- has shell capability, not only a plugin for vim or sublime text (Python Fix Imports, gratis)
- removes unused imports, not just reformats them (importanize)
- reformats imports, not just removes unused ones (autoflake)
- doesn't miss removing an import that isn't used just because it's on a multiline import (autoflake)
- breaks up all imports into individual lines, not just if the line is >80 char (importanize)
- is still pretty simple (we're a bit beyond our original "extremely" simple baseline, because all problems are ultimately not that simple), because (since pyflakes and now flake8-import-order do most of the hard work) this is an extremely simple job; there's (still) no need for a giant application here

But what about... isort??

Since I developed zimports some years ago and now have it on all my projects, isort has come out and is widely becoming accepted as the de-facto import sorter, because it's actually super nice and has tons of features.
It popped up turned on by default in my vscode IDE, and it's under pycqa; it's clearly the winning tool in this space.

So I would like to use isort, and I've tried it out. I was able to get it 99% equivalent to how we sort our imports now, with the exception of a weird relative import issue that still wouldn't compare against flake8-import-order (it seemed like lexical sorting wasn't working correctly). Maybe we can get that little part working, but that's not the main issue.

The bigger shortcoming is that, as I understand it, isort, like "importanize" mentioned previously, just reformats the imports that are present. It won't remove unused imports, nor does it have any ability to expand import * into individual imports, since it isn't looking at the rest of the code. zimports actually hangs on top of flake8 so that we can remove unused imports, and it also uses flake8 output along with a module import path in order to expand out "*" imports. I use this feature all the time when I type out test scripts for SQLAlchemy: I just start with from sqlalchemy import * and have zimports clean it all up.

Maybe there would be a way to keep zimports for that part and then use isort for the actual sorting. But then I'm still just using zimports, and while isort definitely does a better job at finding imports to sort (it does them inside method bodies, inside of cython files, wow), it's not really worth it right now for me to change everything when I still have to maintain zimports anyway.

TL;DR: yes, go use isort, I have no desire to support zimports for other people! :) zimports does a few things that I personally like a lot, especially removing unused imports, which is totally essential for my use cases.

Usage

The script can run without any configuration; options are as follows:

```
$ zimports --help
usage: zimports [-h] [-m APPLICATION_IMPORT_NAMES]
                [-p APPLICATION_PACKAGE_NAMES] [--style STYLE]
                [--multi-imports] [-k] [-kt]
                [--heuristic-unused HEURISTIC_UNUSED] [--statsonly] [-e]
                [--diff] [--stdout]
                filename [filename ...]

positional arguments:
  filename              Python filename(s) or directories

optional arguments:
  -h, --help            show this help message and exit
  -m APPLICATION_IMPORT_NAMES, --application-import-names APPLICATION_IMPORT_NAMES
                        comma separated list of names that should be
                        considered local to the application. reads from
                        [flake8] application-import-names by default.
  -p APPLICATION_PACKAGE_NAMES, --application-package-names APPLICATION_PACKAGE_NAMES
                        comma separated list of names that should be
                        considered local to the organization. reads from
                        [flake8] application-package-names by default.
  --style STYLE         import order styling, reads from [flake8] import-
                        order-style by default, or defaults to 'google'
  --multi-imports       If set, multiple imports can exist on one line
  -k, --keep-unused     keep unused imports even though detected as unused.
                        Implies keep-unused-type-checking
  -kt, --keep-unused-type-checking
                        keep unused imports even though detected as unused in
                        type checking blocks. zimports does not detect type
                        usage in comments or when used as string
  --heuristic-unused HEURISTIC_UNUSED
                        Remove unused imports only if number of imports is
                        less than <HEURISTIC_UNUSED> percent of the total
                        lines of code. Ignored in type checking blocks
  --statsonly           don't write or display anything except the file stats
  -e, --expand-stars    Expand star imports into the names in the actual
                        module, which can then have unused names removed.
                        Requires modules can be imported
  --diff                don't modify files, just dump out diffs
  --stdout              dump file output to stdout
```

Configuration is currently broken up between consumption of flake8 parameters from setup.cfg, and then additional zimports parameters in pyproject.toml (as of version 0.5.0) -- unification of these two files will be in a future release, possibly when flake8 adds toml support:

```ini
# setup.cfg
[flake8]
enable-extensions = G
ignore =
    A003,
    E203,E305,E711,E712,E721,E722,E741,
    F841,
    N801,N802,N806,
    W503,W504
import-order-style = google
application-import-names = sqlalchemy,test
```

```toml
# pyproject.toml, integrated with black
[tool.black]
line-length = 79
target-version = ['py37']

[tool.zimports]
black-line-length = 79
keep-unused-type-checking = true

# other options:
# multi-imports = true
# keep-unused = true
```

Then, a typical run on a mostly clean source tree looks like:

```
$ zimports lib/
[Unchanged]     lib/sqlalchemy/inspection.py (in 0.0058 sec)
[Unchanged]     lib/sqlalchemy/log.py (in 0.0221 sec)
...
[Unchanged]     lib/sqlalchemy/orm/attributes.py (in 0.2152 sec)
[Unchanged]     lib/sqlalchemy/orm/base.py (in 0.0363 sec)
[Writing]       lib/sqlalchemy/orm/relationships.py ([2% of lines are imports] [source +0L/-2L] [3 imports removed in 0.3287 sec])
[Unchanged]     lib/sqlalchemy/orm/strategies.py (in 0.2237 sec)
```

The program has two general modes of usage. One is that of day-to-day usage for an application that already has clean imports. Running zimports on the source files of such an application should produce no changes, except for whatever source files were recently edited and may have some changes to imports that need to be placed into the correct order. This usage model is similar to that of Black, where you can run "zimports ." and it will find whatever files need adjusting and leave the rest alone.

The other mode of usage is that of the up-front cleaning up of an application that has unorganized imports. In this mode of usage, the goal is to get the source files to be cleaned up so that zimports can be run straight without any modifications to the file needed, including that all necessary imports are either used locally or marked as not to be removed.

Problems that can occur during this phase are that some imports are unused and should be removed, while other imports that are apparently unused are still in fact imported by other parts of the program. Another issue is that changing the ordering of imports in complex cases may cause the application to no longer run due to the creation of unresolvable import cycles. Finally, some programs have use of import *, pulling in a large list of names for which an unknown portion of them are needed by the application. The options --keep-unused, --heuristic-unused and --expand-stars are provided to assist in working through these issues until the code can be fully reformatted such that running zimports no longer produces changes.

The issue of apparently unused imports that are externally imported can be prominent in some applications. In order to allow imports that aren't locally used to remain in the source file, symbols that are part of __all__ will not be removed, as will imports that are followed by a "# noqa" comment. Either of these techniques should be applied to imports that are used from other modules but not otherwise referenced within the immediate source file.
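For example (hypothetical module and names), either form keeps an apparently unused import in place:

```python
# mypackage/__init__.py -- hypothetical example of both techniques
from mypackage.engine import create_engine  # noqa  (re-exported, not used here)
from mypackage.pool import Pool

__all__ = ["Pool"]  # names in __all__ are also protected from removal
```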
For the less common case that a few imports really need a very specific import order for things to work, those imports can be followed by a `# noqa nosort` comment. These lines are added to a special group at the end of all imports, where they will not be removed and their order relative to each other will be maintained.

The program currently requires that you pass it at least one file or directory name as an argument. It also does not have the file caching feature that Black has, which allows Black to only look at files that have changed since the last run. The plan is to have zimports check that it's inside a git repository and, if no filenames are given, run through the files to be committed.

Usage as a git hook

zimports can be used with the pre-commit git hooks framework. To add the plugin, add the following to your .pre-commit-config.yaml. Note the rev: attribute refers to a git tag or revision number of zimports to be used, such as "master" or "0.1.3":

```
repos:
-   repo: https://github.com/sqlalchemyorg/zimports/
    rev: v0.4.5
    hooks:
    -   id: zimports
```
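As a concrete illustration of the star-expansion workflow described above, a hedged sketch (the file name, its contents, and the resulting output are made up for illustration; only the --expand-stars option itself is from the docs):

```
$ cat test_script.py
from sqlalchemy import *

e = create_engine("sqlite://")

$ zimports --expand-stars test_script.py
$ cat test_script.py
from sqlalchemy import create_engine

e = create_engine("sqlite://")
```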
zimpute
For the scRNA-seq data matrix, each cell is treated as a sample, and each row (column) indicates the expression of a different gene in that cell; the measured expression is affected by noise. In summary, we have developed a low-rank-constrained matrix completion method based on the truncated nuclear norm.
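The package's own API is not documented here; purely to illustrate the idea of low-rank matrix completion for imputation, here is a minimal soft-impute-style sketch in NumPy (a simplification: it shrinks all singular values, whereas a truncated-nuclear-norm method would leave the largest few untouched):

```python
import numpy as np

def soft_impute(X, mask, lam=1.0, iters=100):
    """Fill missing entries (mask == False) of X with a low-rank estimate.

    Repeatedly take the SVD of the current estimate, soft-threshold the
    singular values, and restore the observed entries.
    """
    Z = np.where(mask, X, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - lam, 0.0)        # singular-value shrinkage
        Z = (U * s) @ Vt                    # low-rank reconstruction
        Z[mask] = X[mask]                   # keep observed entries fixed
    return Z

# Toy demo: a rank-1 matrix with one entry dropped
X = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
mask = np.ones_like(X, dtype=bool)
mask[1, 2] = False
print(soft_impute(X, mask, lam=0.1)[1, 2])  # close to the true value 12.0
```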
zimpy
zimpy

Python Boilerplate contains all the boilerplate you need to create a Python package.

Free software: MIT license
Documentation: https://statenandrea33.github.io/zimpy

Features

TODO

Credits

This package was created with Cookiecutter and the giswqs/pypackage project template.
zimran-config
Installation

```
pip install zimran-config
```

Usage

```python
from zimran.config import CommonSettings


class Settings(CommonSettings):
    pass


settings = Settings()
```
zimran-django
Installation

```
pip install zimran-django
```

Usage

```python
# settings.py

MIDDLEWARE = [
    'zimran.django.middleware.HttpHostRenameMiddleware',
    ...
]
```
zimran-events
zimran-events

The zimran-py-events module provides an AMQP interface.

Installation

```
pip install zimran-events
```

Usage example

Producer

```python
from zimran.events import AsyncProducer

producer = AsyncProducer(broker_url='')

await producer.connect()

# message publishing
await producer.publish('some.event.routing', {'msg': 'hello, world'})
```

Consumer

```python
# < 0.4.0
from zimran.events import Consumer
from zimran.events.dto import Exchange

consumer = Consumer(service_name='my-service', broker_url='')
consumer.add_event_handler(
    name='routing-key',
    handler=handler_func,
    exchange=Exchange(
        name='exchange-name',
        type='exchange-type',
        durable=True,
    ),
)

# or

from zimran.events import Consumer

consumer = Consumer(service_name='my-service', broker_url='')

@consumer.event_handler('routing-key')
def handler_func(**kwargs):
    ...
```

```python
# >= 0.4.0 version
from zimran.events.routing import Router
from zimran.events.consumer import AsyncConsumer

router = Router()

@router.event_handler('routing-key')
async def handler(message: aio_pika.IncomingMessage):
    pass

router.add_event_handler('routing-key', some_handler)

async def main():
    consumer = AsyncConsumer(..., router=router)
    await consumer.run()
```

Code

The code and issue tracker are hosted on GitHub: https://github.com/zimran-tech/zimran-py-events.git

Features

AMQP interfaces

For contributors

Setting up development environment

Clone the project:

```
git clone https://github.com/zimran-tech/zimran-py-events.git
cd zimran-py-events
```

Create a new virtualenv:

```
python3 -m venv venv
source venv/bin/activate
```
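Tying the >= 0.4.0 pieces together, a minimal entrypoint might look like the following sketch. It assumes AsyncConsumer accepts the same service_name and broker_url arguments as the pre-0.4.0 Consumer shown above (the elided ... in the snippet), and uses a placeholder broker URL:

```python
import asyncio

from zimran.events.consumer import AsyncConsumer
from zimran.events.routing import Router

router = Router()

@router.event_handler('user.created')
async def on_user_created(message):
    # message is an aio_pika.IncomingMessage, per the example above
    print(message.body)

async def main():
    # service_name/broker_url are assumed from the pre-0.4.0 signature
    consumer = AsyncConsumer(
        service_name='my-service',
        broker_url='amqp://guest:guest@localhost/',
        router=router,
    )
    await consumer.run()

if __name__ == '__main__':
    asyncio.run(main())
```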
zimran-fastapi
Installation

```
pip install zimran-fastapi
```

Usage

```python
from zimran.config import CommonSettings
from zimran.fastapi import create_app


class Settings(CommonSettings):
    pass


settings = Settings()

app = create_app(settings.environment)
```

```python
from fastapi import Depends, Response

from zimran.fastapi.dependencies import get_user_id


async def handler(user_id: int = Depends(get_user_id)) -> Response:
    pass
```
zimran-http
Installation

```
pip install zimran-http
```

Usage

```python
from zimran.http import AsyncHttpClient, HttpClient

# async
async with AsyncHttpClient(service='...') as client:
    response = await client.get('/endpoint')

# sync
client = HttpClient(service='...')
response = client.get('/endpoint')
```
zimran-logging
Installation

```
pip install zimran-logging
```

Usage

```python
from zimran.logging import setup_logger, setup_sentry

setup_logger(debug=...)
setup_sentry(dsn=..., environment=...)
```
zimscan
ZIM Scan

Minimal ZIM file reader, designed for article streaming.

Getting Started

Install using pip:

```
pip install zimscan
```

Or from the Git repository, for the latest version:

```
pip install -U git+https://github.com/jojolebarjos/zimscan.git
```

Iterate over records, which are binary file-like objects:

```python
from zimscan import Reader

path = "wikipedia_en_all_nopic_2019-10.zim"
with Reader(open(path, "rb"), skip_metadata=True) as reader:
    for record in reader:
        data = record.read()
        ...
```

Links

ZIM file format, official documentation
Kiwix ZIM repository, to download official ZIM files
Wikipedia ZIM dumps, to download Wikipedia ZIM files
ZIMply, a ZIM file reader in the browser, in Python
libzim, the reference implementation, in C++
pyzim, Python wrapper for libzim
pyzim, another Python wrapper for libzim
Internet In A Box, a project to bundle open knowledge locally
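Since records are plain file-like objects, simple streaming statistics fall out naturally. For example, this sketch (using only the Reader API shown above) counts records and total bytes:

```python
from zimscan import Reader

count = 0
total_bytes = 0
path = "wikipedia_en_all_nopic_2019-10.zim"
with Reader(open(path, "rb"), skip_metadata=True) as reader:
    for record in reader:
        data = record.read()      # full record payload as bytes
        count += 1
        total_bytes += len(data)
print(f"{count} records, {total_bytes} bytes")
```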
zimscraperlib
zimscraperlib

Collection of python code to re-use across python-based scrapers.

Usage

This library is meant to be installed via PyPI (zimscraperlib). Make sure to reference it using a version code, as the API is subject to frequent changes. The API should remain the same only within the same minor version.

Example usage: zimscraperlib>=1.1,<1.2

Dependencies

libmagic
wget
libzim (auto-installed, not available on Windows)
Pillow
FFmpeg
gifsicle (>=1.92)

macOS

```
brew install libmagic wget libtiff libjpeg webp little-cms2 ffmpeg gifsicle
```

Linux

```
sudo apt install libmagic1 wget ffmpeg \
    libtiff5-dev libjpeg8-dev libopenjp2-7-dev zlib1g-dev \
    libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python3-tk \
    libharfbuzz-dev libfribidi-dev libxcb1-dev gifsicle
```

Alpine

```
apk add ffmpeg gifsicle libmagic wget libjpeg
```

Nota: i18n features do not work on Alpine, see https://github.com/openzim/python-scraperlib/issues/134; there is one corresponding test which is failing.

Contribution

This project adheres to openZIM's Contribution Guidelines.

```
pip install hatch
pip install ".[dev]"
pre-commit install
# For tests
invoke coverage
```

Users

Non-exhaustive list of scrapers using it (check status when updating API):

openzim/freecodecamp
openzim/gutenberg
openzim/ifixit
openzim/kolibri
openzim/nautilus
openzim/openedx
openzim/sotoki
openzim/ted
openzim/warc2zim
openzim/wikihow
openzim/youtube
zimsoap
ZimSOAP : a programmatic python interface to zimbra
===================================================

[![Build Status](https://travis-ci.org/oasiswork/zimsoap.svg?branch=master)](https://travis-ci.org/oasiswork/zimsoap)

ZimSOAP lets you access the [SOAP Zimbra API] through a programmatic, data-type-aware, high-level interface. It also handles authentication, sessions, pre-authentication and delegated authentication.

Not all methods are covered, but you're welcome to wrap the ones you need and pull-request!

If you are looking for a lower-level lib, you'd better look at [python-zimbra].

Allows accessing zimbraAdmin and zimbraAccount SOAP APIs

- handles authentication
- handles pre-authentication admin->admin and admin->Account
- presents the request results as nice Python objects
- all requests are tested with 8.0.4 and 8.0.5

[SOAP Zimbra API]: http://files.zimbra.com/docs/soap_api/8.0.4/soap-docs-804/api-reference/index.html
[python-zimbra]: https://github.com/Zimbra-Community/python-zimbra/

Installing
----------

Simple:

    pip install zimsoap

Or if you fetch it from git:

    ./setup.py install

API
---

The API is accessible through the ZimbraAdminClient() method. Example:

    zc = ZimbraAdminClient('myserver.example.tld')
    zc.login('[email protected]', 'mypassword')

    print("Domains on that zimbra instance :")
    for domain in zc.get_all_domains():
        # Each domain is a zobject.Domain instance
        print(' - %s' % domain.name)

You can also access raw SOAP methods:

    zc = ZimbraAdminClient()
    zc.login('[email protected]', 'mypassword')
    xml_response = self.zc.GetAllDomainsRequest()

If you want up-to-date code examples, look at the unit tests...

Testing
-------

Most tests are integration tests; they require a live zimbra server to be running.

The tests will assume some base data (provisioning scripts included), create/update some, and clean up after themselves. They may leave garbage data in case they crash.

----

**DO NOT USE A PRODUCTION SERVER TO RUN TESTS.**

Use a dedicated test server, unable to send emails over the network, and consider all Zimbra accounts/domains/settings disposable for automated test purposes.

----

### Setting your environment for tests ###

Most integration tests are to be run either:

- against a pre-configured VM, using vagrant
- using any zimbra server you provide, after reading the above warning.

#### Using the vagrant VM ####

There is a VM ready for you with vagrant; just make sure you have vagrant installed and then:

    $ vagrant up 8.0.5
    $ vagrant provision 8.0.5

You have several zimbra versions available as VMs for testing (see vagrant status).

*Warning*: the test VM requires 2GB RAM to function properly and may put heavy load on your machine.

#### Using your own zimbra server ####

Be sure to have a server:

- running zimbra 8.x,
- ports 7071 and 443 reachable
- with a unix user having password-less sudo rights

First delete all accounts/domains/calendar resources from your test server and run:

    cat tests/provision-01-test-data.zmprov | ssh user@mytestserver -- sudo su - zimbra -c | zmprov

(considering *mytestserver* is your server hostname and *user* is a unix user with admin sudo rights)

It will provision an admin account, but disabled. You have to set a password and enable the account:

    ssh user@mytestserver -- sudo su - zimbra -c 'zmprov sp [email protected] mypassword'
    ssh user@mytestserver -- sudo su - zimbra -c 'zmprov ma [email protected] zimbraAccountStatus active'

Then create a *test_config.ini* in the tests/ directory.
Example content:

    [zimbra_server]
    host = mytestserver
    server_name = zimbratest.example.com
    admin_port = 7071
    admin_login = [email protected]
    admin_password = mypassword

*note: server_name is the internal server name from your zimbra server list (generally matches the hostname)*

If you damaged the data with failed tests, you can just delete everything except the admin account and then run:

    cat tests/provision-01-test-data.zmprov | ssh user@mytestserver -- sudo su - zimbra -c | zmprov

### Testing ###

After you are all set, you can run tests [the standard python way](https://docs.python.org/2/library/unittest.html):

    $ python -m unittest discover

… Or using [py.test](http://pytest.org/):

    $ py.test

For contributing code, you may also want to run the *flake8* linter:

    $ pip install -r test-requirements.txt
    $ make lint
zimupy
# zimupy

Unofficial ZiMuZu Python SDK.
zimuzu
A command line tool for doing the sign-in work for zimuzu (http://www.zimuzu.tv/).

Install

```
$ (sudo) pip install -U zimuzu
```

Usage

Touch a new configuration json file named zimuzu_config.json under /usr/local/bin:

```
{
    "account": "Your username",
    "password": "Your password"
}
```

Do the sign:

```
$ zimuzu sign
```

Contribute

Contributions are always welcome! :sparkles: :cake: :sparkles: Please file an issue with detailed information if you run into problems. Feel free to send me a pull request, I'll be happy to review and merge it!

Details

Login:

url: http://www.zimuzu.tv/User/Login/ajaxLogin
method: post
post data:
    account
    password
    remember: 0 no; 1 yes
    url_back: http://www.zimuzu.tv/user/sign
response (json):

```
{
    "status": 1,  # 1 means ok.
    "info": "\u767b\u5f55\u6210\u529f\uff01",
    "data": {"url_back": "http:\/\/www.zimuzu.tv\/user\/sign"}
}
```

Visit the sign page:

url: http://www.zimuzu.tv/user/sign
method: get

This step is essential, or you'll get a 4002 status when you do the sign next.

Do the sign:

url: http://www.zimuzu.tv/user/sign/dosign
method: get
response (json):

```
{
    "status": 1,  # 1 means ok.
    "info": "",
    "data": 1  # 1 means you have kept signing for 1 day.
}
```

License

MIT.
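The three documented endpoints above map directly onto a small requests session. A hedged sketch (not the tool's actual source; credentials are placeholders):

```python
import requests

BASE = "http://www.zimuzu.tv"

session = requests.Session()

# 1. Login (POST, per the Details section above)
resp = session.post(BASE + "/User/Login/ajaxLogin", data={
    "account": "Your username",
    "password": "Your password",
    "remember": 0,
    "url_back": BASE + "/user/sign",
})
assert resp.json()["status"] == 1

# 2. Visit the sign page first, otherwise dosign returns status 4002
session.get(BASE + "/user/sign")

# 3. Do the sign
result = session.get(BASE + "/user/sign/dosign").json()
print("signed in for", result["data"], "day(s) in a row")
```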
zim-validate
Features

This is a package to assist developers in validating Zimbabwean driver's license numbers, vehicle registration numbers, mobile numbers for Telecel, Econet, Netone and any other valid Zim number registered under the major MNOs, Zim passports, national IDs, among other things. Visit the project homepage for further information on how to use the package.

Installation

To install the zim_validate package, open a shell or terminal and run:

```
pip install zim_validate
```

Usage

Validate National ID:

If dashes are mandatory, the ID will be in the format 12-3456789-X-01. If the dashes are not mandatory, the format will be 123456789X01. Note the parameter dashes: (Optional) set it if you want to make dashes compulsory in the input string.

```python
from zim_validate import national_id

print(national_id("12-3456789-X-01", dashes=True))
```

Validate Passport:

All Zimbabwean passports are in the format AB123456.

```python
from zim_validate import passport

print(passport("AB123456"))
```

Validate ID or passport:

All Zimbabwean passports are in the format AB123456. If dashes are mandatory, the ID will be in the format 12-3456789-X-01; if the dashes are not mandatory, the format will be 123456789X01.

```python
from zim_validate import id_or_passport

print(id_or_passport("12-3456789-X-01", dashes=True))
```

Validate Telecel Numbers:

This can be used to check if the given mobile number is a valid Telecel number. Pass the optional parameter prefix if you want to make the 263 prefix mandatory.

```python
from zim_validate import telecel_number

print(telecel_number("263735111111", prefix=True))
```

Validate Econet Numbers:

This can be used to check if the given mobile number is a valid Econet number. Pass the optional parameter prefix if you want to make the 263 prefix mandatory.

```python
from zim_validate import econet_number

print(econet_number("263775111111", prefix=True))
```

Validate Netone Numbers:

This can be used to check if the given mobile number is a valid Netone number. Pass the optional parameter prefix if you want to make the 263 prefix mandatory.

```python
from zim_validate import netone_number

print(netone_number("263715111111", prefix=True))
```

Validate Any Number from Telecel, Econet, Netone among other MNOs:

This can be used to check if the given mobile number is a valid Telecel, Econet or Netone number. Pass the optional parameter prefix if you want to make the 263 prefix mandatory.

```python
from zim_validate import mobile_number

print(mobile_number("263782123345", prefix=True))
```

Validate Drivers License:

All Zimbabwean drivers licenses are in the format 111111AB. Pass the optional parameter space if you want a space between the first 6 numbers and the last two letters.

```python
from zim_validate import license_number

print(license_number("111111AB", space=False))
```

Validate Zim Vehicle Registration Number:

All Zimbabwean number plates are in the format ABC1234. Pass the optional parameter space if you want a space between the first three letters and the numbers that follow.

```python
from zim_validate import number_plate

print(number_plate("ABF4495", space=False))
```

Bonus: Password:

Validates that a password contains at least one upper case letter, at least one lower case letter, at least one digit and at least one special character, with a minimum length of 8.

```python
from zim_validate import password

print(password("Password@1"))
```

License

The project is licensed under the MIT license.

Change Log

0.0.1 (04-February-2022): First release of the Zim Validate library, which will assist developers in validating Zim national IDs, mobile numbers from all MNOs, Zim passports, drivers licenses, among other things.
0.0.2 (05-February-2022): Fixing the documentation.
0.0.3 (05-February-2022): Fixing the bug in the validation of mobile numbers.
zina1-e-valente
Zina1 Package

This is a simple example package. You can use Github-flavored Markdown to write your content.
zinatoNestser
No description available on PyPI.
zinc-api
No description available on PyPI.
zinc-api-runtime
No description available on PyPI.
zincbase
ZincBase is a state of the art knowledge base and complex simulation suite. It does the following:

Store and retrieve graph structured data efficiently.
Provide ways to query the graph, including via bleeding-edge graph neural networks.
Simulate complex effects playing out across the graph and see how predictions change.

Zincbase exists to answer questions like "what is the probability that Tom likes LARPing", or "who likes LARPing", or "classify people into LARPers vs normies", or simulations like "what happens if all the LARPers become normies".

It combines the latest in neural networks with symbolic logic (think expert systems and prolog), graph search, and complexity theory.

View full documentation here.

Quickstart

```
pip3 install zincbase
```

```python
from zincbase import KB
kb = KB()
kb.store('eats(tom, rice)')
for ans in kb.query('eats(tom, Food)'):
    print(ans['Food'])  # prints 'rice'

...

# The included assets/countries_s1_train.csv contains triples like:
# (namibia, locatedin, africa)
# (lithuania, neighbor, poland)
kb = KB()
kb.from_csv('./assets/countries_s1_train.csv', delimiter='\t')
kb.build_kg_model(cuda=False, embedding_size=40)
kb.train_kg_model(steps=8000, batch_size=1, verbose=False)
kb.estimate_triple_prob('fiji', 'locatedin', 'melanesia')
0.9607
```

Requirements

Python 3
Libraries from requirements.txt
GPU preferable for large graphs but not required

Installation

```
pip install -r requirements.txt
```

Note: Requirements might differ for PyTorch depending on your system.

Web UI

Zincbase can serve live-updating force-directed graphs in 3D to a web browser. The command `python -m zincbase.web` will set up a static file server and a websocket server for live updates. Visit http://localhost:5000/ in your browser and you'll see the graph UI. As you build a graph in Python, you can visualize it (and changes to it) in realtime through this UI.

Here are a couple of examples (source code here):

Complexity (Graph/Network) Examples

Two such examples are included (right now; we intend to include more soon, such as virus spread and neural nets that communicate.) The examples are basic ones: Conway's Game of Life and the Abelian Sandpile. Here are some screencaps; source code is here. Performance can be lightning fast, depending on how you tweak Zincbase recursion and propagation settings.

Required for the UI

You should `pip install zincbase[web]` to get the optional web extra.

You should have Redis running; by default, at localhost:6379. This is easily achievable, just do `docker run -p 6379:6379 -d redis`.

Testing

```
python test/test_main.py
python test/test_graph.py
... etc ... all the test files there
python -m doctest zincbase/zincbase.py
```

Validation

"Countries" and "FB15k" datasets are included in this repo.

There is a script to evaluate that ZincBase gets at least as good performance on the Countries dataset as the original (2019) RotatE paper. From the repo's root directory:

```
python examples/eval_countries_s3.py
```

It tests the hardest Countries task and prints out the AUC ROC, which should be ~ 0.95 to match the paper. It takes about 30 minutes to run on a modern GPU.

There is also a script to evaluate performance on FB15k: `python examples/fb15k_mrr.py`.

Running the web UI

There are a couple of extra requirements -- install with `pip3 install zincbase[web]`. You also need an accessible Redis instance somewhere. This one-liner will get it running locally: `docker run -p 6379:6379 -d redis` (requires Docker, of course.) You then need a Zincbase server instance running:

Building documentation

From the docs/ dir: `make html`. If something changed a lot: `sphinx-apidoc -o . ..`
Pushing to pypi

NOTE: This is now all automatic via CircleCI, but here are the manual steps for reference:

Edit setup.py as appropriate (probably not necessary)
Edit the version in zincbase/__init__.py
From the top project directory run `python setup.py sdist bdist_wheel --universal` then `twine upload dist/*`

TODO

add ability to `kb = KB(backend='complexdb://my_api_key')`
utilize postgres as backend triple store
Reinforcement learning for graph traversal.
Rete algorithm (maybe)

References & Acknowledgements

Theo Trouillon. Complex-Valued Embedding Models for Knowledge Graphs. Machine Learning [cs.LG]. Université Grenoble Alpes, 2017. English. ffNNT: 2017GREAM048
L334: Computational Syntax and Semantics -- Introduction to Prolog, Steve Harlow
Open Book Project: Prolog in Python, Chris Meyers
Prolog Interpreter in Javascript
RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space, Zhiqing Sun and Zhi-Hong Deng and Jian-Yun Nie and Jian Tang, International Conference on Learning Representations, 2019

Citing

If you use this software, please consider citing:

```
@software{zincbase,
  author = {{Tom Grek}},
  title = {ZincBase: A state of the art knowledge base},
  url = {https://github.com/tomgrek/zincbase},
  version = {0.1.1},
  date = {2019-05-12}
}
```

Contributing

See CONTRIBUTING. And please do!
zinc-cli
ZINC CLI

Welcome to the Zinc CLI.

Features

Blah 1
Blah 2
Blah 3

Links

Hello

Examples

```python
print("Hello World")
```
zinc-dns
No description available on PyPI.
zincsearch-sdk
zincsearch-sdk

Zinc Search engine API documents: https://docs.zincsearch.com

This Python package is automatically generated by the OpenAPI Generator project:

API version: 0.3.3
Package version: 0.3.3
Build package: org.openapitools.codegen.languages.PythonClientCodegen

For more information, please visit https://www.zincsearch.com

Requirements

Python >=3.6

Installation & Usage

pip install

If the python package is hosted on a repository, you can install directly using:

```
pip install git+https://github.com/zinclabs/sdk-python-zincsearch.git
```

(you may need to run pip with root permission: sudo pip install git+https://github.com/zinclabs/sdk-python-zincsearch.git)

Then import the package:

```python
import zincsearch_sdk
```

Setuptools

Install via Setuptools.

```
python setup.py install --user
```

(or sudo python setup.py install to install the package for all users)

Then import the package:

```python
import zincsearch_sdk
```

Getting Started

Please follow the installation procedure and then run the following:

```python
import time
import zincsearch_sdk
from pprint import pprint
from zincsearch_sdk.api import document
from zincsearch_sdk.model.meta_http_response_document import MetaHTTPResponseDocument
from zincsearch_sdk.model.meta_http_response_error import MetaHTTPResponseError
from zincsearch_sdk.model.meta_http_response_id import MetaHTTPResponseID
from zincsearch_sdk.model.meta_http_response_record_count import MetaHTTPResponseRecordCount
from zincsearch_sdk.model.meta_json_ingest import MetaJSONIngest

# Defining the host is optional and defaults to http://localhost:4080
# See configuration.py for a list of all supported configuration parameters.
configuration = zincsearch_sdk.Configuration(
    host="http://localhost:4080"
)

# The client must configure the authentication and authorization parameters
# in accordance with the API server security policy.
# Examples for each auth method are provided below, use the example that
# satisfies your auth use case.

# Configure HTTP basic authorization: basicAuth
configuration = zincsearch_sdk.Configuration(
    username='YOUR_USERNAME',
    password='YOUR_PASSWORD'
)

# Enter a context with an instance of the API client
with zincsearch_sdk.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = document.Document(api_client)
    query = "query_example"  # str | Query

    try:
        # Bulk documents
        api_response = api_instance.bulk(query)
        pprint(api_response)
    except zincsearch_sdk.ApiException as e:
        print("Exception when calling Document->bulk: %s\n" % e)
```

Documentation for API Endpoints

All URIs are relative to http://localhost:4080

| Class | Method | HTTP request | Description |
|-------|--------|--------------|-------------|
| Document | bulk | POST /api/_bulk | Bulk documents |
| Document | bulkv2 | POST /api/_bulkv2 | Bulkv2 documents |
| Document | delete | DELETE /api/{index}/_doc/{id} | Delete document |
| Document | es_bulk | POST /es/_bulk | ES bulk documents |
| Document | index | POST /api/{index}/_doc | Create or update document |
| Document | index_with_id | PUT /api/{index}/_doc/{id} | Create or update document with id |
| Document | multi | POST /api/{index}/_multi | Multi documents |
| Document | update | POST /api/{index}/_update/{id} | Update document with id |
| Index | add_or_remove_es_alias | POST /es/_aliases | Add or remove index alias for compatible ES |
| Index | analyze | POST /api/_analyze | Analyze |
| Index | analyze_index | POST /api/{index}/_analyze | Analyze |
| Index | create | POST /api/index | Create index |
| Index | create_template | POST /es/_index_template | Create update index template |
| Index | delete | DELETE /api/index/{index} | Delete index |
| Index | delete_template | DELETE /es/_index_template/{name} | Delete template |
| Index | e_s_create_index | PUT /es/{index} | Create index for compatible ES |
| Index | e_s_get_mapping | GET /es/{index}/_mapping | Get index mappings for compatible ES |
| Index | es_exists | HEAD /es/{index} | Checks if the index exists for compatible ES |
| Index | exists | HEAD /api/index/{index} | Checks if the index exists |
| Index | get_es_aliases | GET /es/{target}/_alias/{target_alias} | Get index alias for compatible ES |
| Index | get_index | GET /api/index/{index} | Get index metadata |
| Index | get_mapping | GET /api/{index}/_mapping | Get index mappings |
| Index | get_settings | GET /api/{index}/_settings | Get index settings |
| Index | get_template | GET /es/_index_template/{name} | Get index template |
| Index | index_name_list | GET /api/index_name | List index names |
| Index | list | GET /api/index | List indexes |
| Index | list_templates | GET /es/_index_template | List index templates |
| Index | refresh | POST /api/index/{index}/refresh | Refresh index |
| Index | set_mapping | PUT /api/{index}/_mapping | Set index mappings |
| Index | set_settings | PUT /api/{index}/_settings | Set index settings |
| Index | update_template | PUT /es/_index_template/{name} | Create update index template |
| Search | delete_by_query | POST /es/{index}/_delete_by_query | Searches the index and deletes all matched documents |
| Search | msearch | POST /es/_msearch | Search V2 MultipleSearch for compatible ES |
| Search | search | POST /es/{index}/_search | Search V2 DSL for compatible ES |
| Search | search_v1 | POST /api/{index}/_search | Search V1 |
| User | create | POST /api/user | Create user |
| User | delete | DELETE /api/user/{id} | Delete user |
| User | list | GET /api/user | List user |
| User | login | POST /api/login | Login |
| User | update | PUT /api/user | Update user |
| Default | healthz | GET /healthz | Get healthz |
| Default | version | GET /version | Get version |

Documentation For Models

AggregationHistogramBound
AuthLoginRequest
AuthLoginResponse
AuthLoginUser
IndexAnalyzeResponse
IndexAnalyzeResponseToken
IndexIndex
IndexListResponse
MetaAggregationAutoDateHistogram
MetaAggregationDateHistogram
MetaAggregationDateRange
MetaAggregationHistogram
MetaAggregationIPRange
MetaAggregationMetric
MetaAggregationRange
MetaAggregationResponse
MetaAggregations
MetaAggregationsTerms
MetaAnalyzer
MetaBoolQuery
MetaDateRange
MetaExistsQuery
MetaFuzzyQuery
MetaHTTPResponse
MetaHTTPResponseDeleteByQuery
MetaHTTPResponseDocument
MetaHTTPResponseError
MetaHTTPResponseID
MetaHTTPResponseIndex
MetaHTTPResponseRecordCount
MetaHTTPResponseTemplate
MetaHealthzResponse
MetaHighlight
MetaHit
MetaHits
MetaHttpRetriesResponse
MetaIPRange
MetaIdsQuery
MetaIndexAnalysis
MetaIndexSettings
MetaIndexSimple
MetaIndexTemplate
MetaJSONIngest
MetaMappings
MetaMatchBoolPrefixQuery
MetaMatchPhrasePrefixQuery
MetaMatchPhraseQuery
MetaMatchQuery
MetaMultiMatchQuery
MetaPage
MetaPrefixQuery
MetaProperty
MetaQuery
MetaQueryStringQuery
MetaRange
MetaRangeQuery
MetaRegexpQuery
MetaSearchResponse
MetaShards
MetaSimpleQueryStringQuery
MetaTemplate
MetaTemplateTemplate
MetaTermQuery
MetaTotal
MetaUser
MetaVersionResponse
MetaWildcardQuery
MetaZincQuery
V1AggregationDateRange
V1AggregationNumberRange
V1AggregationParams
V1QueryParams
V1ZincQuery

Documentation For Authorization

basicAuth

Type: HTTP basic authentication

Author

Notes for Large OpenAPI documents

If the OpenAPI document is large, imports in zincsearch_sdk.apis and zincsearch_sdk.models may fail with a RecursionError indicating the maximum recursion limit has been exceeded. In that case, there are a couple of solutions:

Solution 1: Use specific imports for apis and models like:

```python
from zincsearch_sdk.api.default_api import DefaultApi
from zincsearch_sdk.model.pet import Pet
```

Solution 2: Before importing the package, adjust the maximum recursion limit as shown below:

```python
import sys
sys.setrecursionlimit(1500)
import zincsearch_sdk
from zincsearch_sdk.apis import *
from zincsearch_sdk.models import *
```
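For completeness, here is a hedged sketch of calling the search endpoint listed above, following the same generated-client pattern as the Getting Started example. The index name is made up, and the exact query-model field names (query, query_string) are assumptions inferred from the model list; check them against the generated docs before use:

```python
import zincsearch_sdk
from zincsearch_sdk.api import search
from zincsearch_sdk.model.meta_zinc_query import MetaZincQuery
from zincsearch_sdk.model.meta_query import MetaQuery
from zincsearch_sdk.model.meta_query_string_query import MetaQueryStringQuery

configuration = zincsearch_sdk.Configuration(
    host="http://localhost:4080",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
)

with zincsearch_sdk.ApiClient(configuration) as api_client:
    api_instance = search.Search(api_client)
    # Hypothetical index name; query-model fields are assumptions
    query = MetaZincQuery(query=MetaQuery(query_string=MetaQueryStringQuery(query="hello")))
    try:
        # Search V2 DSL for compatible ES (POST /es/{index}/_search)
        api_response = api_instance.search("my-index", query)
        print(api_response)
    except zincsearch_sdk.ApiException as e:
        print("Exception when calling Search->search: %s\n" % e)
```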
zind
zind

Walk a filesystem with configurable path and filename filters.

Python Zind

zind is a library for walking the filesystem and scanning files with configurable path and text filters.

Zind features

Walk the filesystem
Filter files and directories using configurable filters
Print lines of text in files
Filter lines of text using configurable filters
Pure Python - no dependencies

Installation

```
pip install zind
```

Zind Cli

Some examples of the zind cli would look like this:

```
# Walk the filesystem
$ zind

# Filter out all paths that contain the text 'node_module'
$ zind -ge node_module

# Only include paths with the text 'py'
$ zind -g py

# Only include paths that end with the text py
$ zind -gr ".*py$"
```

Cli Usage

Filesystem files and directories can be filtered based on input tokens. -g will only return lines with matching text. -ge will exclude lines with matching text. -gr will use regex matching to include text. -gc will enforce case sensitive matches. The options can be mixed and matched: -gre ".*.py$" will exclude all files ending in .py.

Files can be scanned line by line as well. -t will cause the tool to print matching lines within files that contain the matching text. -te will exclude matching lines in text.

Getting Started

The most simple example of the api would look like this:

```python
from zind.api.core_find import Find

find = Find()
file_matches = find.find(".", [])
for file_match in file_matches:
    print(file_match)
```
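Combining the path and text filters described above, a typical invocation might look like this (a hypothetical example; 'TODO' and 'node_module' are just illustrative tokens):

```
# Print lines containing 'TODO' in files, skipping paths with 'node_module'
$ zind -ge node_module -t TODO
```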
zindex
No description available on PyPI.
zindex-py
DISCLAIMER

This repo is a fork of the original repo located at https://github.com/mattgodbolt/zindex. We modified this repo to use it cohesively with DLIO Profiler: https://github.com/hariharan-devarajan/dlio-profiler.

zindex creates and queries an index on a compressed, line-based text file in a time- and space-efficient way.

The itch I had

I have many multigigabyte gzipped text log files and I'd like to be able to find data in them by an index. There's a key on each line that a simple regex can pull out. However, finding a particular record requires zgrep, which takes ages as it has to seek through gigabytes of previous data to get to each record.

Enter zindex, which builds an index and also stores decompression checkpoints along the way, allowing lightning fast random access. Pulling out single lines, by either line number or index entry, is then almost instant, even for huge files. The indices themselves are small too, typically ~10% of the compressed file size for a simple unique numeric index.

Creating an index

zindex needs to be told what part of each line constitutes the index. This can be done by a regular expression, by field, or by piping each line through an external program.

By default zindex creates an index of file.gz.zindex when asked to index file.gz.

Example: create an index on lines matching a numeric regular expression. The capture group indicates the part that's to be indexed, and the options show each line has a unique, numeric index.

```
$ zindex file.gz --regex 'id:([0-9]+)' --numeric --unique
```

Example: create an index on the second field of a CSV file:

```
$ zindex file.gz --delimiter , --field 2
```

Example: create an index on a JSON field orderId.id in any of the items in the document root's actions array (requires jq). The jq query creates an array of all the orderId.ids, then joins them with a space to ensure each individual line piped to jq creates a single line of output, with multiple matches separated by spaces (which is the default separator).

```
$ zindex file.gz --pipe "jq --raw-output --unbuffered '[.actions[].orderId.id] | join(\" \")'"
```

Multiple indices, and configuration of the index creation by JSON configuration file, are supported; see below.

Querying the index

The zq program is used to query an index. It's given the name of the compressed file and a list of queries. For example:

```
$ zq file.gz 10234443554
```

It's also possible to output by line number, so to print lines 1 and 1000 from a file:

```
$ zq file.gz --line 1 1000
```

Building from source

zindex uses CMake for its basic building (though it has a bootstrapping Makefile) and requires a C++11 compatible compiler (GCC 4.8 or above and clang 3.4 and above). It also requires zlib. With the relevant compiler available, building ought to be as simple as:

```
$ git clone https://github.com/mattgodbolt/zindex.git
$ cd zindex
$ make
```

Binaries are left in build/Release.

Additionally, a static binary can be built if you're happy to dip your toe into CMake:

```
$ cd path/to/build/directory
$ cmake path/to/zindex/checkout/dir -DStatic:BOOL=On -DCMAKE_BUILD_TYPE=Release
$ make
```

Multiple indices

To support more than one index, or for easier configuration than all the command-line flags that might be needed, there is a JSON configuration format. Pass the --config <yourconfigfile>.json option and put something like this in the configuration file:

```
{
  "indexes": [
    {
      "type": "field",
      "delimiter": "\t",
      "fieldNum": 1
    },
    {
      "name": "secondary",
      "type": "field",
      "delimiter": "\t",
      "fieldNum": 2
    }
  ]
}
```

This creates two indices, one on the first field and one on the second field, as delimited by tabs.
One can then specify which index to query with the -i <index> option of zq.

Issues and feature requests

See the issue tracker for TODOs and known bugs. Please raise bugs there, and feel free to submit suggestions there also.

Feel free to contact me if you prefer email over bug trackers.
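For instance, with the two-index configuration above, querying the second field's named index might look like this (the key value is hypothetical):

```
$ zq file.gz -i secondary some-key
```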
zineb
Introduction

Zineb is a lightweight tool for simple and efficient web scraping and crawling, built around BeautifulSoup and Pandas. Its main purpose is to help you quickly structure your data so it can be used as fast as possible in data science or machine learning projects.

Understanding how Zineb works

Zineb takes your custom spider, creates a set of HTTPRequest objects for each url, sends the requests and caches a BeautifulSoup object of the page within an HTMLResponse class on that request. Most of your interactions with the HTML page will be done through the HTMLResponse class.

When the spider starts crawling the page, each response and request is passed through the start function:

```python
def start(self, response, **kwargs):
    request = kwargs.get('request')
    images = response.images
```

Getting started

Creating a project

To create a project, do `python -m zineb startproject <project name>`, which will create a directory with the following structure:

```
myproject
|
|-- media
|
|-- models
|   |-- base.py
|
|-- __init__.py
|-- manage.py
|-- settings.py
|-- spiders.py
```

Once the project folder is created, all your interactions with Zineb will be made through the management commands, executed with `python manage.py` from your project's directory.

The models directory lets you place the elements that will help structure the data that you have scraped from the internet. The manage.py file allows you to run all the required commands for your project. Finally, the spiders module will contain all the spiders for your project.

Configuring your project

On startup, Zineb loads a set of basic settings (zineb.settings.base) that get overridden by the values you define in the settings.py located in your project. You can read more about this in the settings section of this file.

Creating a spider

Creating a spider is extremely easy and requires a set of starting urls that can be used to scrape one or many HTML pages.

```python
class Celebrities(Zineb):
    start_urls = ['http://example.com']

    def start(self, response, request=None, soup=None, **kwargs):
        # Do something here
        ...
```

Once the Celebrities class is called, each request is passed through the start method. In other words, the zineb.http.responses.HTMLResponse, the zineb.http.request.HTTPRequest and the BeautifulSoup HTML page object are sent through the function.

You are not required to use all these parameters at once; they're just for convenience. You can also write the start method as follows if you only need one of them:

```python
def start(self, response, **kwargs):
    # Do something here
    ...
```

Other objects can be passed through the function, such as the models that you have created, but also the settings of the application, etc.

Adding meta options

Meta options allow you to customize certain very specific behaviours [not found in the settings.py file] related to the spider.

```python
class Celebrities(Zineb):
    start_urls = ['http://example.com']

    class Meta:
        domains = []
```

Domains

This option limits a spider to a very specific set of domains.

Verbose name

This option, written as verbose_name, will give a different name to your spider.

Running commands

Start

Triggers the execution of all the spiders present in the project.
This command will be the main one that you will be using to execute your project.

Shell

Starts an IPython shell in which you can test various elements of the HTML page. When the shell is started, the zineb.http.HTTPRequest, the zineb.response.HTMLResponse, and the BeautifulSoup instance of the page are all injected into the shell.

Extractors are passed using aliases:

links: LinkExtractor
images: ImageExtractor
multilinks: MultiLinkExtractor
tables: TableExtractor

The extractors are also all passed within the shell, in addition to the project settings. In that regard, the shell becomes an interesting place where you can test various queries on an HTML page before using them in your project. For example, using the shell with http://example.com.

We can get a simple url:

```
IPython 7.19.0

In [1]: response.find("a")
Out[1]: <a href="https://www.iana.org/domains/example">More information...</a>
```

We can find all urls on the page:

```
IPython 7.19.0

In [2]: extractor = links()
In [3]: extractor.resolve(response)
In [4]: str(extractor)
Out[4]: [Link(url=https://www.iana.org/domains/example, valid=True)]

In [5]: response.links
Out[5]: [Link(url=https://www.iana.org/domains/example, valid=True)]
```

Or simply get the page title:

```
IPython 7.19.0

In [6]: response.page_title
Out[6]: 'Example Domain'
```

Remember that, in addition to the custom functions created for the class, everything else called on zineb.response.HTMLResponse is a BeautifulSoup function (find, find_all, find_next...).

Queries on the page

As said previously, the majority of your interactions with the HTML page will be done through the HTMLResponse object, or zineb.http.responses.HTMLResponse class.

This class implements some very basic general functionalities that you can use over the course of your project. To illustrate this, let's create a basic Zineb HTTP response from a request:

```python
from zineb.http.requests import HTTPRequest

request = HTTPRequest("http://example.com")
```

Requests, when created, are not sent [or resolved] automatically unless the _send function is called. In that case, they are marked as unresolved, e.g. HTTPRequest("http://example.co", resolved=False).

Once the _send method is called, by using the html_page attribute or calling any BeautifulSoup function on the class, you can do all the classic querying on the page, e.g. find, find_all...

```python
request._send()

request.html_response
# -> Zineb HTMLResponse object

request.html_response.html_page
# -> BeautifulSoup object

request.find("a")
# -> BeautifulSoup Tag
```

If you do not know about BeautifulSoup, please read the documentation here.

For instance, suppose you have a spider and want to get the first link present on the http://example.com page.
That's what you would do:

```python
from zineb.app import Zineb

class MySpider(Zineb):
    start_urls = ["http://example.com"]

    def start(self, response=None, request=None, soup=None, **kwargs):
        link = response.find("a")

        # Or, you can also use this technique through
        # the request object
        link = request.html_response.find("a")

        # Or you can directly use the soup
        # object as so
        link = soup.find("a")
```

In order to understand what the Link, Image and Table objects represent, please read the following section of this page. Zineb HTTPRequest objects are better explained in the following section.

Getting all the links

```python
request.html_response.links
# -> [Link(url=http://example.com, valid=True)]
```

Getting all the images

```python
request.html_response.images
# -> [Image(url=https://example.com/1.jpg)]
```

Getting all the tables

```python
request.html_response.tables
# -> [Table(url=https://example.com/1)]
```

Getting all the text

Finally, you can retrieve all the text of the web page at once.

```python
request.html_response.text
# -> '\n\n\nExample Domain\nThis domain is for use in illustrative examples in documents. You may use this\n domain in literature without prior coordination or asking for permission.\nMore information...\n\n\n\n'
```

Models

Models are a simple way to structure your scraped data before saving it to a file. The Model class is built around Pandas' excellent DataFrame class in order to simplify dealing with your data as much as possible.

Creating a custom Model

In order to create a model, subclass the Model object from zineb.models.Model and then add fields to it. For example:

```python
from zineb.models.datastructure import Model
from zineb.models.fields import CharField

class Player(Model):
    name = CharField()
```

Using the custom model

On its own, a model does nothing. In order to make it work, you have to add values to it and then resolve the fields. You can add values to your model in two main ways.

Adding a free custom value

The first method consists of adding values through the add_value method. This method does not rely on the BeautifulSoup HTML page object, which means that values can be added freely.

```python
player.add_value('name', 'Kendall Jenner')
```

Adding an expression based value

Adding expression based values requires a BeautifulSoup HTML page object. You can add one value at a time or multiple values.

```python
player.add_expression("name", "a#kendall__text", many=True)
```

By using the many parameter, you can add all the tags with a specific name and/or attributes to your model at once.

Here is a list of expressions that you can use for this field:

| expression | interpretation |
|------------|----------------|
| a.kendall | Link with class kendall |
| a#kendall | Link with ID kendall |

By default, if a pseudo is not provided, the __text pseudo is appended in order to always retrieve the inner text element of the tag.

Meta options

By adding a Meta to your model, you can pass custom behaviours:

Ordering
Indexing

Indexes

Ordering

Fields

Fields are a very simple way of passing HTML data to your model in a very structured way.
Zineb comes with a number of preset fields that you can use out of the box:

CharField
TextField
NameField
EmailField
UrlField
ImageField
IntegerField
DecimalField
DateField
AgeField
FunctionField
ArrayField
CommaSeparatedField

How fields work

Once the field is called via the resolve function (which in turn calls the super().resolve function of the Field super class), the value is stored. By default, the resolve function will do the following things.

First, it will run all cleaning functions on the value, for example stripping tags like "<" or ">" using the w3lib.html.remove_tags library. Second, a deep_clean method will be called on the value, which takes out any spaces using w3lib.html.strip_html5_whitespace, removes escape characters with the w3lib.html.replace_escape_chars function, and finally reconstructs the value to ensure that any non-detected white space is eliminated. Finally, all validators (default and custom created) are called on the value. The final value is then returned within the model class.

CharField

The CharField represents a normal character element on an HTML page. You can constrain the length.

TextField

The text field allows you to add longer paragraphs of text.

NameField

The name field allows you to store names in your model. The title method is called on the string in order to represent the value correctly, e.g. Kendall Jenner.

EmailField

The email field represents emails. The default validator, validators.validate_email, is automatically called in the resolve function of the class in order to ensure that the value is indeed an email.

UrlField

The url field is specific to urls. Just like the email field, the default validator, validators.validate_url, is called in order to validate the url.

ImageField

The image field holds the url of an image, exactly like the UrlField, with the sole difference that you can download the image directly when the field is evaluated.

```python
class MyModel(Model):
    avatar = ImageField(download=True, download_to="/this/path")
```

IntegerField

DecimalField

DateField

The date field allows you to pass dates to your model. In order to use this field, you have to pass a date format so that the field knows how to resolve the value.

```python
class MyModel(Model):
    date = DateField("%d-%m-%Y")
```

AgeField

The age field works like the DateField, but instead of returning the date, it returns the difference between the date and the current date, which is an age.

FunctionField

The function field is a special field that you can use when you have a set of functions to run on the value before returning the final result. For example, let's say you have the value Kendall J. Jenner and you want to run a specific function that takes out the middle letter on every incoming value:

```python
def strip_middle_letter(value):
    # e.g. "Kendall J. Jenner" -> "Kendall Jenner"
    parts = value.split()
    return f"{parts[0]} {parts[-1]}" if len(parts) > 2 else value

class MyModel(Model):
    name = FunctionField(strip_middle_letter, output_field=CharField())
```

Every time the resolve function is called on this field, the provided methods are run on the value. An output field is not compulsory, but if not provided, each value will be returned as a character string.

ArrayField

An array field will store an array of values that are all evaluated to an output field that you would have specified.

CommaSeparatedField

Creating your own field

You can also create a custom field by subclassing zineb.models.fields.Field. When doing so, your custom field has to provide a resolve function in order to determine how the value should be treated.
For example:

```python
class MyCustomField(Field):
    def resolve(self, value):
        initial_result = super().resolve(value)
```

If you want to use the built-in cleaning functionality in your resolve function before running your own logic, make sure to call super().

Validators

Validators make sure that the passed value respects the constraints that were implemented as keyword arguments on the field class. There are five basic validations:

Maximum length
Uniqueness
Nullity
Defaultness
Validity (validators)

Maximum or Minimum length

The maximum length check ensures that the value does not exceed a certain length, using zineb.models.validators.max_length_validator or zineb.models.validators.min_length_validator, which are encapsulated and used within the zineb.models.validators.MinLengthValidator or zineb.models.validators.MaxLengthValidator class.

Nullity

The nullity validation ensures that the value is not null and that, if a default is provided, the null value is replaced by the latter. It uses zineb.models.validators.validate_is_not_null. Defaultness provides a default value for null or non-existing ones.

Practical examples

For instance, suppose you want only values that do not exceed a certain length:

```python
name = CharField(max_length=50)
```

Or suppose you want a default value for fields that are empty or blank:

```python
name = CharField(default='Kylie Jenner')
```

Remember that validators validate the value itself, for example by making sure that a URL is indeed a url, or that an email follows the pattern that you would expect from an email.

Suppose you want only values that would be Kendall Jenner. Then you could create a custom validator that does the following:

```python
def check_name(value):
    if value == "Kylie Jenner":
        return None
    return value

name = CharField(validators=[check_name])
```

You can also create validators that match a specific regex pattern using the zineb.models.validators.regex_compiler decorator:

```python
from zineb.models.datastructure import Model
from zineb.models.fields import CharField
from zineb.models.validators import regex_compiler

@regex_compiler(r'\d+')
def custom_validator(value):
    if value > 10:
        return value
    return 0

class Player(Model):
    age = IntegerField(validators=[custom_validator])
```

It is important to understand that the result of the regex compiler is reinjected into your custom validator, on which you can then do various other checks.

Field resolution

In order to get the complete structured data, you need to call resolve_values, which will return a pandas.DataFrame object:

```python
player.add_value("name", "Kendall Jenner")
player.resolve_values()
# -> pandas.DataFrame
```

Practically though, you'll be using the save method, which also calls resolve_values under the hood:

```python
player.save(commit=True, filename=None, **kwargs)
# -> pandas.DataFrame or new file
```

By calling the save method, you'll be able to store the data directly to a JSON or CSV file.
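Putting the model pieces above together, an end-to-end sketch might look like this (a hedged illustration built only from the calls shown in this entry; the field arguments and file name are made up):

```python
from zineb.models.datastructure import Model
from zineb.models.fields import CharField, UrlField

class Player(Model):
    name = CharField(max_length=50)
    profile = UrlField()

player = Player()
player.add_value("name", "Kendall Jenner")
player.add_value("profile", "https://example.com/kendall")

# Resolve the fields and write the structured data out
player.save(commit=True, filename="players.json")
```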
Extractors

Extractors are utilities that facilitate quickly extracting certain specific pieces of data, such as links and images, from a web page. They can be found in zineb.extractors.

Some extractors can be used in various manners. First, with a context processor:

```python
extractor = LinkExtractor()
with extractor:
    # Do something here
    ...
```

Second, in an iteration process:

```python
for link in extractor:
    # Do something here
    ...
```

Finally, with next:

```python
next(extractor)
```

You can also check if an extractor has a specific value and even concatenate some of them together:

```python
# Contains
if x in extractor:
    # Do something here
    ...

# Addition
concatenated_extractors = extractor1 + extractor2
```

LinkExtractor

url_must_contain - only keep urls that contain a specific string
unique - return a unique set of urls (no duplicates)
base_url - reconcile a domain to a path
only_valid_links - only keep links (Link) that are marked as valid

```python
extractor = LinkExtractor()
extractor.finalize(response.html_response)

# -> [Link(url=http://example.com, valid=True)]
```

There might be times when the extracted links are relative paths. This can cause an issue when running additional requests. In that case, use the base_url parameter:

```python
extractor = LinkExtractor(base_url="http://example.com")
extractor.finalize(response.html_response)

# Instead of getting this result, which would also
# be marked as a non valid link
# -> [Link(url=/relative/path, valid=False)]

# You will get the following, with the full url
# -> [Link(url=http://example.com/relative/path, valid=True)]
```

NOTE: By definition, a relative path is not a valid link, hence valid is set to False.

MultiLinkExtractor

A MultiLinkExtractor works exactly like the LinkExtractor, with the only difference being that it also identifies and collects emails contained within the HTML page.

TableExtractor

Extracts all the rows from the first table that is matched on the HTML page.

class_name - intercept a table with a specific class name
has_headers - specify if the table has headers, in order to ignore the header row in the final data
filter_empty_rows - ignore any rows that do not have values
processors - a set of functions to run on the data once it is all extracted

ImageExtractor

Extracts all the images on the HTML page. You can filter down the images that you get by using a specific set of parameters:

unique - return only a unique set of urls
as_type - only return images having a specific extension
url_must_contain - only return images which contain a specific string
match_height - only return images that match a specific height
match_width - only return images that match a specific width

TextExtractor

Extracts all the text on an HTML page. First, the text is retrieved as a raw value, then tokenized and vectorized using nltk.tokenize.PunktSentenceTokenizer and nltk.tokenize.WordPunctTokenizer. To know more about NLTK, please read the following documentation.

Zineb special wrappers

HTTPRequest

Zineb uses a special built-in HTTPRequest class which wraps the following for better cohesion:

The requests.Request response class
The bs4.BeautifulSoup object

In general, you will not need to interact with this class much, because it is just an interface for implementing additional functionalities, especially on the Request class.

follow: create a new instance of the class whose response will be the one of a new url
follow_all: create new instances of the class whose responses will be the ones of the new urls
urljoin: join a path to the domain

HTMLResponse

It wraps the BeautifulSoup object in order to implement some small additional functionalities:

page_title: return the page's title
links: return all the links of the page
images: return all the images of the page
tables: return all the tables of the page

Signals

Signals are a very simple yet efficient way for you to run functions during the lifecycle of your project when certain events occur
at very specific moments. Internally, signals are sent on the following events:

When the registry is populated
Before the spider starts
After the spider has started
Before an HTTP request is sent
After an HTTP request is sent
Before the model downloads anything
After the model has downloaded something

Creating a custom signal

To create a custom signal, you need to mark a method as being a receiver for any incoming signals. For example, if you want to create a signal to intercept one of the events above, you should do:

```python
from zineb.signals import receiver

@receiver(tag="Signal Name")
def my_custom_signal(sender, **kwargs):
    pass
```

The signal function has to be able to accept a sender object and additional parameters such as the current url or the current HTML page. Your custom signals do not have to return anything.

Pipelines

Pipelines are a great way to send chained requests to the internet, or to treat a set of responses by processing them afterwards through a set of functions of your choice. Some pipelines are also perfect for downloading images.

ResponsesPipeline

The responses pipeline allows you to chain a group of responses and treat all of them at once through a function:

```python
from zineb.http.pipelines import ResponsesPipeline

pipeline = ResponsesPipeline([response1, response2], [function1, function2])
pipeline.results
# -> list
```

It comes with three main parameters:

responses - a list of HTMLResponses
functions - a list of functions through which to pass each individual response and additional parameters
parameters - a set of additional parameters to pass to the functions

The best way to use the ResponsesPipeline is within the functions of your custom spider:

```python
class MySpider(Zineb):
    start_urls = ["https://example.com"]

    def start(self, response, soup=None, **kwargs):
        extractor = LinksExtractor()
        extractor.resolve(soup)
        responses = request.follow_all(*list(extractor))
        ResponsesPipeline(responses, [self.do_something_here])

    def do_something_here(self, response, soup=None, **kwargs):
        # Continue parsing data here
        ...
```

N.B. Each function is executed sequentially, so the final result will come from the final function of the list.

HTTPPipeline

This pipeline takes a set of urls, creates HTTPRequests for each of them and then sends them to the internet. If you provide a set of functions, it will pass each request through them.

```python
from zineb.http.pipelines import HTTPPipeline
from zineb.utils.general import download_image

HTTPPipeline(["https://example.com"], [download_image])
```

Each function should be able to accept an HTTP Response object.

You can also pass additional parameters to your functions by doing the following:

```python
HTTPPipeline(["https://example.com"], [download_image], parameters={'extra': False})
```

In this specific case, your function should accept an extra parameter whose value would be False.

Callback

The Callback class allows you to run a callback function once each url is processed and passed through the main start function of your spider. The __call__ method is triggered on the instance in order to resolve the function to use.

```python
class Spider(Zineb):
    start_urls = ["https://example.com"]

    def start(self, response, **kwargs):
        request = kwargs.get("request")
        model = MyModel()
        return Callback(request.follow, self.another_function, model=model)

    def another_function(self, response, **kwargs):
        model = kwargs.get("model")
        model.add_value("name", "Kendall Jenner")
        model.save()
```
Utilities

Link reconciliation

Most of the time, when you retrieve links from a page, they are returned as relative paths. The urljoin method reconciles the url of the visited page with that path.

```python
# <a href="/kendall-jenner">Kendall Jenner</a>

# Now we want to reconcile the relative path from this link to
# the main url that we are visiting, e.g. https://example.com
request.urljoin("/kendall-jenner")
# -> https://example.com/kendall-jenner
```

Settings

This section covers all the settings that are available for your project and how to use them for web scraping.

PROJECT_PATH

Represents the current path of your project. This setting should not be changed.

SPIDERS

In order for your spider to be executed, every created spider should be registered here. The name of the class serves as the name of the spider to be used.

```python
SPIDERS = [
    "MySpider"
]
```

DOMAINS

You can restrict your project to a specific set of domains by ensuring that no request is sent unless it matches one of the domains within this list.

```python
DOMAINS = [
    "example.com"
]
```

ENSURE_HTTPS

Enforces that every link in your project is a secured HTTPS link. This setting is set to False by default.

MIDDLEWARES

Middlewares are functions/classes that are executed when a signal is sent from any part of the project. Middlewares implement extra functionalities without affecting the core parts of the project. They can then be disabled safely if you do not need them.

```python
MIDDLEWARES = [
    "zineb.middlewares.handlers.Handler",
    "myproject.middlewares.MyMiddleware"
]
```

The main Zineb middlewares are the following:

zineb.middlewares.referer.Referer
zineb.middlewares.handlers.Handler
zineb.middlewares.automation.Automation
zineb.middlewares.history.History
zineb.middlewares.statistics.GeneralStatistics
zineb.middlewares.wireframe.WireFrame

USER_AGENTS

A user agent is a characteristic string that lets servers and network peers identify the application, operating system, vendor, and/or version of the requester (MDN Web Docs). Use this setting to add sets of user agents to your project, in addition to those already provided.

RANDOMIZE_USER_AGENTS

Specifies whether to use one user agent for every request or to randomize user agents on every request. This setting is set to False by default.

DEFAULT_REQUEST_HEADERS

Specify additional default headers to use for each request. The default initial headers are:

Accept-Language - en
Accept - text/html,application/json,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Referrer - None

PROXIES

Use a set of proxies for each request. When a request is sent, a random proxy is selected and used with the request.

```python
PROXIES = [
    ("http", "127.0.0.1"),
    ("https", "127.0.0.1")
]
```

RETRY

Specifies the retry policy. This is set to False by default; in other words, a request silently fails and is never retried.

RETRY_TIMES

Specifies the number of times a request is resent before eventually failing.

RETRY_HTTP_CODES

Indicates which status codes should trigger a retry. By default, the following codes will trigger it: 500, 502, 503, 504, 522, 524, 408 and 429.
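Taken together, a project's settings.py might look like the following sketch (values are illustrative only, combining the settings documented above):

```python
# settings.py (illustrative values only)

SPIDERS = [
    "Celebrities",
]

DOMAINS = [
    "example.com",
]

ENSURE_HTTPS = True

MIDDLEWARES = [
    "zineb.middlewares.handlers.Handler",
    "zineb.middlewares.history.History",
]

RANDOMIZE_USER_AGENTS = True

PROXIES = [
    ("http", "127.0.0.1"),
]

RETRY = True
RETRY_TIMES = 2
RETRY_HTTP_CODES = [500, 502]
```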
zineb-scrapper
No description available on PyPI.
zinebuildout
Installation

This package allows you to install Zine and its dependencies in a sandbox with buildout, then serve it with any WSGI server while using the Paste facilities for WSGI stack configuration.

Install Zine: Download and extract the zinebuildout archive from PyPI (or clone it with: hg clone https://cody.gorfou.fr/hg/zinebuildout). You don't need to easy_install it. Then run:

$ python bootstrap.py
$ ./bin/buildout

Configure Zine: Edit "deploy.ini" to adapt instance_folder, host and port to your needs. You don't need to change anything if you just want to try it on your local host.

Start Zine: In the foreground:

$ ./bin/paster serve deploy.ini

Or in the background:

$ ./bin/paster serve --daemon deploy.ini

Your admin interface will then be accessible through http://localhost:8080/admin/

You can then put this process behind a reverse proxy (with Apache ProxyPass or RewriteRule) or any other. Example of an Apache VirtualHost:

<VirtualHost *>
(...)
RewriteEngine on
RewriteRule ^/(.*) http://localhost:8080/$1 [proxy,last]
</VirtualHost>

Optional Twitter widget

This distribution offers a Twitter widget that displays the N latest tweets of your timeline. To use it, just add the following line in the '_widgets.html' file of your Zine theme:

{{ widgets.tweets('ccomb', 5) }}

Replace 'ccomb' with your Twitter account, and 5 with the number of tweets you want to display.

Versions

0.6.1 (2010-12-28)
- upgraded minor versions
- added a missing tweets.html template

0.6 (2010-11-01)
- upgraded to buildout 1.5
- upgraded all minor versions using z3c.checkversions

0.5 (2009-07-19)
- fixed some egg versions
- added a twitter widget to display the latest tweets

0.4 (2009-02-19)
- removed the zine section from the Paste config (thx Calvin); this allows defining several zine instances in the same config file

0.3 (2009-01-29)
- fixed the bad 0.2 release

0.2 (2009-01-27)
- moved to zine 0.1.2
- no more need to configure the instance folder
- added pygments and docutils (for the rst parser)

0.1 (2009-01-14)
- initial buildout for Zine 0.1.1
zinemachine
The Zine Machine

The Zine Machine is a little box that spits short passages of text out of a receipt printer. It uses a Python program running on a Raspberry Pi Zero W to receive input from GPIO buttons and drive the receipt printer over Bluetooth. When printing a zine, it loads .zine files from the filesystem and sends the commands to the printer over a serial connection, using the python-escpos module.

Setup

pip install zinemachine

See Raspberry Pi Setup for full instructions on how to set up and configure your Raspberry Pi for the Zine Machine.

Usage

python -m zinemachine [command]

Valid commands:

print FILE: Print a single zine file.
serve -c CATEGORY PIN: Run persistently, and print a random zine in CATEGORY when the button on GPIO pin PIN is pressed. Provide multiple -c flags to register additional buttons. Categories are directories containing .zine files under $PWD/zines/ (e.g. -c diy 18 binds all zines under $PWD/zines/diy to pin 18).
validate [FILE]: Run the .zine file validator on FILE or a directory. Defaults to $PWD/zines/.

Use -h to list help and additional commands:

python -m zinemachine -h
python -m zinemachine print -h
python -m zinemachine serve -h
python -m zinemachine validate -h

Adding zines

Create a zines/ directory and add a subdirectory for each category of zine. Using the serve command, .zine and .txt files in a category are randomly printed when the button bound to that category is pressed. Files and directories starting with a period (.) are ignored. You can copy the sample zines directory from the GitHub repo as a starting point.

.zine files

A zine file is a text file (UTF-8) containing an optional metadata header followed by the text of a zine. The header encodes metadata using colon-separated (:) key-value pairs. It must begin with a line consisting of five dashes (-----). Each key-value pair must be on a single line ended by a newline character (\n). The header should be terminated with a line of five dashes (-----); the first non-empty line that does not contain a colon is also considered the end of the header. The rest of the file contains the text of the zine.

-----
Title: My first zine
Author: Elliot Hatch
Description: A receipt printer will print this!
URL: https://github.com/elliothatch/zine-machine/blob/master/test-zines/.test/first-zine.zine
Date Published: 2023-08-19
-----
<h1>Hello</h1>
Welcome to my zine.
...

Header keys are case insensitive and must not contain a colon. All whitespace is stripped from keys; leading and trailing whitespace is stripped from values. The Zine Machine recognizes the following fields. More fields may be provided, but they may not be used by the Zine Machine.

Title
Author
Description
Publisher
Date Published - ISO format. Recommended format: YYYY-MM-DD
URL

Markup

.zine files support rich markup for enhanced features. Markup resembles a limited set of HTML tags. Some tags can be nested in other tags.

<h1>Header</h1> - Header, printed in double-sized text.
<u>Underlined</u> - Underline.
<u2>Underlined 2</u2> - Underline style 2.
<b>Bolded</b> - Bold. Has no effect with the default configuration, because the Zine Machine bolds all text for improved readability; auto-bold can be disabled with a command line argument.
<invert>Inverted</invert> - Invert (black background, white text).
<img src="image.png">Caption</img> - Insert an image with an optional caption. The src path is relative to the .zine file.
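For instance, a short illustrative .zine body exercising the markup tags above; the title, text, and image path are invented for this example:

-----
Title: Markup demo
-----
<h1>Foraging Notes</h1>
<u>Safety first:</u> never eat anything you cannot identify.
<invert>Warning: lookalikes exist.</invert>
<img src="morel.png">A morel mushroom</img>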
You can group images with a zine by placing them all together in a directory.

Validation

Running the Zine Machine with the validate command will check all zines in its index for header errors, invalid markup, and unprintable characters and images. The default validation settings for printable characters and maximum image width are based on manually determined values for the generic receipt printer we used (LMP201). If your printer supports more fonts or prints images at a different pixel width, you can configure these settings with the printer profile (currently requires Python module usage).

You can provide the --resize flag with an optional maximum pixel width (--resize 200; default 576) to automatically downscale images that are too large for the printer. The original image will be saved with an .orig extension. The Zine Machine will print an error if it tries to print an image that is too wide.

Raspberry Pi Setup

Wiring the buttons

You can run the Zine Machine using any GPIO pins for the print category buttons. It configures the buttons in PULL_UP mode using the Pi's internal pull-up resistors. The standard configuration is to use 4 buttons, wired to the ground and GPIO pins opposite the micro USB power connector. This allows you to crimp all the wires into a standard 2x3 rectangular connector, sharing a common ground.

Pin numbers:

Blue: 16
Yellow: 20
Green: 19
Pink: 26

Connector diagram (g=ground, G=Green, .=unused, etc.):

| |g P G| . . .
| |. Y B| . . .
\------------ (edge of board)

Flash Raspberry Pi OS

Use the Raspberry Pi Imager to flash Raspberry Pi OS to the SD card. After you select the OS image, click the gear in the bottom right to set some advanced options that will make it easier to work with the Pi:

Set hostname: zinemachine.local
Enable SSH: use password authentication; set a username and password.
Configure wireless LAN: enter your WiFi SSID and password, and select your country code.
Set locale settings: set your keyboard layout, or certain keypresses may be interpreted differently than you expect.

Start up Raspberry Pi

Insert the SD card into the Raspberry Pi and apply power. It will take a while to boot up, so wait 2-5 minutes. Since you configured the WiFi network and SSH during the flashing step, you should be able to see the Raspberry Pi show up in your router's device table. Open a terminal and connect to the Raspberry Pi with the following command, replacing IP_ADDR with the address of your Pi:

ssh pi@IP_ADDR

Alternatively, plug a keyboard, mouse, and monitor into your Raspberry Pi and configure it directly from there.

Update Package Repos

sudo apt-get update
sudo apt-get upgrade

Bind Bluetooth on Boot

Before we can connect to the printer, we need to bind RFCOMM to the printer's Bluetooth device; then we can use the device /dev/rfcomm0 to establish a connection.

Open a text editor to create the systemd service file:

sudo nano /etc/systemd/system/receipt-bluetooth.service

Copy and paste the receipt-bluetooth.service contents into the file. Modify the MAC address to match the MAC address of your receipt printer (replace 02:3B:A9:4C:F0:AE with the MAC address listed by the printer, usually found by printing a test sheet).

Enable the service on boot; --now starts it immediately:

sudo systemctl enable --now receipt-bluetooth.service

This service does NOT connect to the printer; it only configures the Bluetooth connection. The actual connection is made when the device is used in zine-machine.py.
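For reference, the core of such a service is a single bind call. A minimal sketch, assuming BlueZ's standard rfcomm tool and reusing the placeholder MAC address from above:

# Bind /dev/rfcomm0 to the printer's Bluetooth MAC address
# (replace with your printer's actual address):
rfcomm bind 0 02:3B:A9:4C:F0:AE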
Python Setup

Upgrade the Python package manager, pip:

pip install --upgrade pip

Install the zinemachine Python package:

pip install zinemachine

Add zines and test

You will need to add some zines to the Raspberry Pi before you can start printing them. As a starting point, download the sample zines from the GitHub repository. If you are connected to the Pi via SSH, download these to your local computer, then copy them to the Pi by opening a new terminal and entering the following command, replacing IP_ADDR with the IP address you used to SSH into the Pi:

rsync -avzh ./zines/ pi@IP_ADDR:/home/pi/zine-machine/zines/

The first argument of the command, ./zines/, should be the sample zine directory you downloaded earlier. The trailing / is important.

After the download is complete, you can test the Zine Machine. It looks for zines in the zines/ directory under the current working directory, so first cd into the parent directory above zines/:

cd /home/pi/zine-machine

If you've wired the buttons according to the standard pin configuration, you can run the Zine Machine with the following command:

python -m zinemachine serve -c theory pink -c ecology yellow -c diy green -c queer-stuff blue

After a few seconds, the receipt printer should output "Ready to print!", and you can push a button to print a random zine. If you didn't use the standard wiring configuration, you can use pin numbers instead of color names. The following command is equivalent to the previous one:

python -m zinemachine serve -c theory 26 -c ecology 20 -c diy 19 -c queer-stuff 16

Press CTRL-C to stop the Zine Machine.

NOTE: The Zine Machine also comes configured with two button combinations. Hold all 4 buttons for 5 seconds to shut down the Pi, or hold just the pink, yellow, and green buttons for 5 seconds to stop the Zine Machine program without shutting down. These commands are hardcoded via GPIO pin number and will not work if you use custom GPIO pins.

Enable start on boot

Open a text editor to create the systemd service file:

sudo nano /etc/systemd/system/zine-machine.service

Copy and paste the zine-machine.service contents into the file. Modify the ExecStart line if you need to change the categories or buttons. Modify the WorkingDirectory line if you need to change where the zines are loaded from; the path should be the directory containing the zines/ folder.

Enable it to start on boot:

sudo systemctl enable zine-machine.service

You can now start the Zine Machine as a background daemon with:

sudo systemctl start zine-machine

Check the status of the Zine Machine service with:

sudo systemctl status zine-machine

and view operational logs with:

sudo journalctl -u zine-machine

The Zine Machine will now automatically start when the Raspberry Pi turns on! Make sure the receipt printer is turned on BEFORE you power up the Raspberry Pi. If it loses connection with the printer at any point, you can reset the Zine Machine program by holding the PINK, YELLOW, and GREEN buttons for 5 seconds (assuming standard wiring).

Module Usage

The Zine Machine can also be used as a Python module, with import zinemachine, for further customization.
Refer to main.py for basic usage.

Input Manager

The input manager allows you to bind a function to a combination of pressed or held buttons. Use addButton to register a new button that can be used in button combinations. Use addChord to create an input binding that will execute a function when the appropriate buttons are pressed. (A sketch of this API is shown below, after the Development section.)

When holdTime is 0 (the default), addChord creates a "pressRelease" command, which fires when all buttons in the chord are pressed, then released. When holdTime is greater than 0, a "pressHold" command is created, which fires when the buttons in the chord are pressed and held for the given time, in seconds. Whenever a button is pressed or released, the hold timer is reset back to zero, so the command is only triggered after the exact chord has been held for holdTime.

InputManager is also initialized with a waitTime (default 0.25 seconds), which adds a grace period during which a button is still considered held down after it is released. This wait time is ONLY considered for "pressRelease" commands, so that even if you do not release all the buttons of a chord at the exact same time, it is still considered a press-release of the entire chord, instead of just the last button released. The wait timer resets whenever any button is released; even if each button in the chord is released one at a time, they are all considered part of the "pressRelease" chord as long as the time between each release is less than waitTime.

Development

Setup

Clone this repo:

git clone https://github.com/elliothatch/zine-machine.git
cd zine-machine

Upgrade pip:

pip install --upgrade pip

Create a virtual environment (optional):

python -m venv .

You must source the virtual environment before installing or running the project:

source bin/activate

Install the package in editable mode (--user does nothing in a venv):

python -m pip install -e . --user

Run:

python -m zinemachine

Test:

python -m unittest
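As referenced above, here is a minimal sketch of the InputManager API described in the Input Manager section; the import path, constructor signature, and chord argument format are assumptions based on the prose, not verified against the source:

from zinemachine.inputmanager import InputManager  # hypothetical import path

manager = InputManager(waitTime=0.25)
for pin in (16, 20, 19, 26):  # blue, yellow, green, pink
    manager.addButton(pin)

# pressRelease: fires when both buttons are pressed, then released
manager.addChord((16, 26), lambda: print("combo pressed"))

# pressHold: fires after the full chord has been held for 5 seconds
manager.addChord((16, 20, 19, 26), lambda: print("shutting down"), holdTime=5)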
zinfoexport
This package contains a client for making data requests to the z-info API. To create a client you need a JSON file with the following parameters:

username: str - your z-info username
password: str - your z-info password
file_name: str - name for your export file
file_format: str - either ".csv" or ".feather"
input_file: str - name of your input csv; it is mandatory to have a header row with these three values: tagnr, name, wbm
startdate: str - start date in YYYY-MM-DD format
enddate: str - end date in YYYY-MM-DD format
periode: str - time over which values are aggregated; can be anything between 1-9 s/m/h (for example: 5m = 5 minutes)
interval: str - to avoid making requests over too long an interval, this splits the dates up into intervals; the default of 10 works fine
waterschapnummer: str - waterschap-specific number
spcid: str - waterschap-specific spcid

You then need to specify the file name to create a client. With this client you can use the run method to make a request based on your parameters. If you create a client without a parameters file, it will generate a template which you can fill in and supply using the function set_parameters("name of your parameter file").
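A hedged usage sketch based on the description above; the Client class name is an assumption, since only run and set_parameters are named by the package:

import zinfoexport

# Hypothetical class name: creating a client without a parameters file
# generates a template, which you fill in and supply afterwards.
client = zinfoexport.Client()
client.set_parameters("parameters.json")
client.run()  # request the data and write the export file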
zing
UNKNOWN
zingg
About the zingg package

Zingg Python APIs for entity resolution, record linkage, data mastering and deduplication (Zingg.AI).

Requirements: Python 3.6+; Spark 3.5.0

Installation:

pip install zingg
zinggnnester
UNKNOWN
zinglplotter
zinglplotter

Zingl-Bresenham plotting algorithms.

The Zingl-Bresenham plotting algorithms are from Alois Zingl's "The Beauty of Bresenham's Algorithm" (http://members.chello.at/easyfilter/bresenham.html). They are all MIT licensed, and this library is also MIT licensed. In the case of Zingl's work this isn't explicit from his website; however, from personal correspondence: "'Free and open source' means you can do anything with it like the MIT licence [sic]."

These are error-carry-forward algorithms: they use only integer math to plot pixel positions, and curves like quadratic and cubic Béziers do not need to be turned into tiny lines or checked for how small a line should be used. They merely travel from one pixel to the next, carrying the error forward.

Functions

This library is a series of plot line generators converted from C++:

plot_line(x0, y0, x1, y1)
plot_quad_bezier_seg(x0, y0, x1, y1, x2, y2)
plot_quad_bezier(x0, y0, x1, y1, x2, y2)
plot_cubic_bezier_seg(x0, y0, x1, y1, x2, y2, x3, y3)
plot_cubic_bezier(x0, y0, x1, y1, x2, y2, x3, y3)
plot_line_aa(x0, y0, x1, y1)
plot_line_width(x0: int, y0: int, x1: int, y1: int, wd: float)

These implement the Zingl-Bresenham algorithms for lines and quadratic and cubic Béziers. The _seg functions perform the draw but only for rational segments (no inversion points). The _aa function performs the same thing but in an anti-aliased manner.

from zinglplotter import plot_line

for x, y in plot_line(0, 0, 5, 8):
    print(f"({x},{y})")

Will result in:

(0,0)
(1,1)
(1,2)
(2,3)
(3,4)
(3,5)
(4,6)
(4,7)
(5,8)

from zinglplotter import plot_quad_bezier

for x, y in plot_quad_bezier(0, 0, 9, 4, 0, 10):
    print(f"({x},{y})")

Will result in:

(0,0)
(1,0)
(2,1)
(3,2)
(4,3)
(5,4)
(5,5)
(5,5)
(5,6)
(4,7)
(3,8)
(2,9)
(1,9)
(0,10)
zingmp3py
zingmp3py

A lightweight Python library for the ZingMp3 API. All functions return a dict or a ZingMp3 object.

Install

PyPI:

pip install zingmp3py

Git (development version):

pip install git+https://github.com/thedtvn/zingmp3py.git

Sync:

from zingmp3py import ZingMp3

zi = ZingMp3()
# login is not required
zi.login("zpsid cookies")
zi.getDetailPlaylist("67ZFO8DZ")
zi.getDetailArtist("Cammie")
zi.getRadioInfo("IWZ979CW")
zi.getSongInfo("ZWAF6UFD")
zi.getSongStreaming("ZWAF6UFD")
zi.getTop100()
zi.search("rick roll")

Async:

import asyncio
from zingmp3py import ZingMp3Async

async def main():
    zi = ZingMp3Async()
    # login is not required
    await zi.login("zpsid cookies")
    await zi.getDetailPlaylist("67ZFO8DZ")
    await zi.getDetailArtist("Cammie")
    await zi.getSongInfo("ZWAF6UFD")
    await zi.getSongStreaming("ZWAF6UFD")
    await zi.getTop100()
    await zi.search("rick roll")

asyncio.run(main())

How to get zpsid cookies

Go to https://id.zalo.me/account/logininfo, then open the developer tools (F12), go to the Application tab, open Cookies and look for the cookie named zpsid.

Note: you can check whether you are logged in by verifying that the loggedin key in the JSON returned at that URL is true.

Get Type And ID

from zingmp3py import getUrlTypeAndID

getUrlTypeAndID("https://zingmp3.vn/liveradio/IWZ979CW.html")
zini
INI-files parser with schemes and types

Philosophy of Zini

An application's settings must be simple! They should not contain code or complex structures, only simple types.

Why not…

JSON? JSON is uncomfortable and unextendable.

YAML? YAML is like a garden of rakes. It's a very complex format, and I do not need all of its features.

Configparser?
Configparser is ugly;
Configparser is overengineered;
Configparser does not have type casting;
Configparser does not have type checking;
Configparser is… configparser.

Supported types

boolean: simple true or false, e.g. key = true
int: simple numeric type, e.g. key = 13
float: float type, e.g. key = 3.14
string: strings always use quotes, e.g. key = "some string"
datetime: datetime formatted as ISO 8601:
YYYY-MM-DD
YYYY-MM-DD hh:mm
YYYY-MM-DD hh:mm:ss
YYYY-MM-DD hh:mm:ss.sss
With a time, you can set the timezone as Z or ±hh:mm. E.g.:
key = 2005-01-13
key = 2005-01-13 18:05:00
key = 2005-01-13 15:05:00 +03:00
key = 2005-01-13 15:00Z
timedelta: durations:
key = 20m - 20 minutes
key = 10h2m - 10 hours and 2 minutes
key = 1w2s - one week (7 days) and 2 seconds
key = 1s20ms - 1 second and 20 milliseconds
key = 1w1d1h1m1s1ms - 694861001 milliseconds
list: list of values:
key =
    "string value"
    2005-01-13 18:00:05
    13

Examples

$ cat tests/test.ini
# first comment
[first]
boolean = false
integer = 13

[second]
; second comment
boolean = true
string = "some string"

[complex]
list =
    "string"
    "string too"
    "else string"

Simple reading

>>> from zini import Zini
>>> ini = Zini()
>>> result = ini.read('tests/test.ini')
>>> isinstance(result, dict)
True
>>> result['first']['boolean'] is False  # automatic type casting
True
>>> result['first']['integer'] == 13
True
>>> result['second']['string'] == "some string"
True
>>> result['complex']['list'] == ["string", "string too", "else string"]
True

Types and defaults

>>> from zini import Zini
>>> ini = Zini()
>>> ini['first']['integer'] = str  # set type
>>> result = ini.read('tests/test.ini')
zini.ParseError: error in line 3: 'integer = 13'

>>> from zini import Zini
>>> ini = Zini()
>>> ini['second']['boolean'] = "string"  # set type and default value
>>> result = ini.read('tests/test.ini')
zini.ParseError: error in line 7: 'boolean = true'

Lists of values

>>> import zini
>>> ini = zini.Zini()
>>> ini['third']['generic'] = [str]
>>> result = ini.read('tests/test.ini')
ParseError: error in line 20: '  10'
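Following the same scheme pattern, a hedged sketch of declaring a timedelta default; this assumes non-string defaults work like the string default shown above, which has not been verified against the library:

>>> import datetime
>>> from zini import Zini
>>> ini = Zini()
>>> # set a type and default value for a duration key, e.g. 'timeout = 30s'
>>> ini['server']['timeout'] = datetime.timedelta(seconds=30)
>>> result = ini.read('tests/test.ini')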
zinifile
A configuration-file reading module with more convenient access.

Given an ini file:

[table]
data = mydata

the reading code is as follows:

import zinifile

ini = zinifile.load(filename)
ini = zinifile.load_text(ini_text)
print(ini.table.data)

You can treat the instance created by zinifile as a dictionary or an iterator:

for key in ini
for key, value in ini.items()
value = ini.get('key')
value = ini['key']

Changelog

18-11-15 1.0.4

1. Empty values no longer return an empty str; they return None instead.
2. With attribute access (xx.xx), if the attribute does not exist a zinifile.empty_node object is returned. This object can be used like a _node, but any data retrieved from it is empty, and its boolean check is False.

empty_node is None   # False
empty_node == None   # True
bool(empty_node)     # False

This project is for learning and exchange purposes only; commercial use is prohibited.
zinnia-drupal
Zinnia Drupal is a helper Django application that can be used for importing content from a Drupal database into Django Blog Zinnia.