adguard-sync
AdguardSync

This script syncs the DNS-Rewrite list with the given config file.

Example:

```python
>>> from adguard_sync import AdguardSync
>>> AdguardSync('config.yml')
```

Configuration file:

```yaml
---
adguards:
  - hostname: https://10.0.0.3
    username: adguard
    password: adguard_pass
    verify_ssl: False  # (Optional, default = True)
  - hostname: https://10.0.0.4
    username: adguard
    password: adguard_pass
dns_records:
  - domain: adguard.test
    address: 10.0.0.3
  - domain: adguard2.test
    address: 10.0.0.4
```
adhan
adhan.py is a Python 2.7 and 3+ library for computing adhan times. It is a refactoring of the PrayTimes.org Python adhan calculator that ensures:

- PEP8-compliant code
- a PyPI package
- a simplified API that favors convention over configuration
- a test suite
- presence on GitHub to encourage contribution

Installation

```
pip install adhan
```

Usage

```python
from datetime import date

from adhan import adhan
from adhan.methods import ISNA, ASR_STANDARD

params = {}
params.update(ISNA)
params.update(ASR_STANDARD)

adhan_times = adhan(
    day=date.today(),
    location=(30.25, -97.75),
    parameters=params,
    timezone_offset=-6,
)

# adhan_times will be a dict containing datetime objects for the keys
# 'fajr', 'shuruq', 'zuhr', 'asr', 'maghrib', and 'isha'
```

Available Methods

The following methods are available in the adhan.methods module and should cover the vast majority of cases:

- ISNA: Islamic Society of North America
- MUSLIM_WORLD_LEAGUE: Muslim World League
- EGYPT: Egyptian General Authority of Survey
- MAKKAH: Umm al-Qura University, Makkah
- KARACHI: University of Islamic Sciences, Karachi
- TEHRAN: Institute of Geophysics, University of Tehran
- SHIA: Shia Ithna Ashari, Leva Research Institute, Qum
- ASR_STANDARD: Shafi'i, Maliki, Ja'fari, and Hanbali
- ASR_HANAFI: Hanafi

Custom Parameter Dictionary

In case you want to define your own parameters, the parameters argument accepts dicts with the following keys:

- fajr_angle: the angle below sunrise to compute Fajr for
- isha_angle: the angle below sunset to compute Isha for
- asr_multiplier: the multiplier to use for Asr, such that the length of an object's shadow is the multiplier times the object's length, plus the length of the object's shadow at midday
- isha_delay: the floating-point number of hours after Maghrib that Isha is observed
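For illustration, a hedged sketch of passing a custom parameter dictionary built from the keys above; the values are arbitrary, and whether every key must be present is not stated in the description:

```python
from datetime import date

from adhan import adhan

# Custom parameters using the documented keys; the values here are
# illustrative only.
custom_params = {
    "fajr_angle": 18.0,   # degrees below the horizon before sunrise
    "isha_angle": 17.0,   # degrees below the horizon after sunset
    "asr_multiplier": 1,  # standard (non-Hanafi) shadow-length multiplier
}

adhan_times = adhan(
    day=date.today(),
    location=(30.25, -97.75),
    parameters=custom_params,
    timezone_offset=-6,
)
```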
adhands_api
History

0.1.0 (2015-01-11)

- First release on PyPI.
adhan-pi
Developing

```
sudo dnf install ffmpeg        # install ffmpeg on your system (if not installed)
sudo ln -s ~/adhan-pi /opt/adhan-pi
python3 -m venv /opt/adhan-pi/env
source /opt/adhan-pi/env/bin/activate
cd /opt/adhan-pi
pip install -e '.[dev]'
tox -e lint && tox
```

Setting up the cron env

```
source /opt/adhan-pi/env/bin/activate
pip install -e '.[cron]'
```

Setting up cron

Add this to your crontab (with your user) via `crontab -e`:

```
@daily /opt/adhan-pi/env/bin/schedule_prayer_cron --query "New York, NY" --user salah
```

Or set up the crons manually:

```
source /opt/adhan-pi/env/bin/activate
/opt/adhan-pi/env/bin/schedule_prayer_cron --query "New York, NY" --user salah
```
adhanpy
adhanpy

This is a port of batoulapps/adhan-java, a prayer times program, from Java to Python. As it stands, the project reuses most of the structure of the original project, but may differ through refactoring and in an effort to rewrite it in a more Pythonic way where that makes sense. Like the original project there are no external dependencies, except in development, where pytest and other development tools are made use of.

Requirements

Python >= 3.9

Installation

```
pip install adhanpy
```

Usage

Create a PrayerTimes object by passing geo coordinates, a datetime, and either a calculation method:

```python
prayer_times = PrayerTimes(coordinates, today, CalculationMethod.MOON_SIGHTING_COMMITTEE)
```

or a calculation parameters object, which allows choosing from different parameters such as angles:

```python
parameters = CalculationParameters(fajr_angle=18, isha_angle=18)
prayer_times = PrayerTimes(coordinates, today, calculation_parameters=parameters)
```

If you pass a calculation method to the calculation parameters object, the calculation method takes precedence and overwrites other parameters you may have also passed. For instance, the MOON_SIGHTING_COMMITTEE method uses a fajr angle of 18; if the calculation parameters object is created with a different fajr angle, the latter will be ignored:

```python
parameters = CalculationParameters(fajr_angle=12, method=CalculationMethod.MOON_SIGHTING_COMMITTEE)
prayer_times = PrayerTimes(coordinates, today, calculation_parameters=parameters)
print(parameters.fajr_angle)
# 18.0 (the fajr_angle argument has been ignored)
```

Times are returned in UTC as datetime objects. For convenience, it is possible to pass a ZoneInfo object directly to PrayerTimes:

```python
london_zone = ZoneInfo("Europe/London")
prayer_times = PrayerTimes(
    coordinates,
    today,
    CalculationMethod.MOON_SIGHTING_COMMITTEE,
    time_zone=london_zone,
)

# this will display the time in the chosen time zone
print(f"Fajr: {prayer_times.fajr.strftime('%H:%M')}")
```

or convert to a different timezone later; each prayer time is in fact a datetime object:

```python
prayer_times = PrayerTimes(
    coordinates,
    today,
    CalculationMethod.MOON_SIGHTING_COMMITTEE,
)

# the following will be in UTC
print(f"Fajr: {prayer_times.fajr.strftime('%H:%M')}")

# and to use a different timezone on the datetime object itself:
london_zone = ZoneInfo("Europe/London")
print(f"Fajr: {prayer_times.fajr.astimezone(london_zone).strftime('%H:%M')}")
```

A full example is located in src/example of the project directory.

Development

To install adhanpy for development purposes, run the following:

```
python3 -m virtualenv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
pip install -e .
```

Licence

MIT

Acknowledgments

Credits go to the author of the original implementation in Java and other languages, especially for the very complex astronomy formulas.
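To tie the Usage fragments above together, a hedged end-to-end sketch; the import paths, the coordinates tuple format, and the attribute names other than fajr are assumptions not confirmed by this description (check src/example in the repository for the authoritative version):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Assumed import paths, not confirmed by this description.
from adhanpy.PrayerTimes import PrayerTimes
from adhanpy.calculation.CalculationMethod import CalculationMethod

coordinates = (51.5074, -0.1278)  # assumed (latitude, longitude) format; London
today = datetime.now()

prayer_times = PrayerTimes(
    coordinates,
    today,
    CalculationMethod.MOON_SIGHTING_COMMITTEE,
    time_zone=ZoneInfo("Europe/London"),
)

# Attribute names other than fajr are assumed from the adhan-java original.
for name in ("fajr", "dhuhr", "asr", "maghrib", "isha"):
    print(f"{name}: {getattr(prayer_times, name).strftime('%H:%M')}")
```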
adhan-time
aladhan.py

A complete Python wrapper for the aladhan API.
adHarvester
A harvester for any type of online shop and advertisement.
adhawk
No description available on PyPI.
adhawk-ble
No description available on PyPI.
adh-deployment-manager
ADH Deployment Manager

ADH Deployment Manager is a Python library which simplifies interfacing with the ADH REST API by providing a convenient set of wrappers and abstractions. The library provides capabilities such as:

- deploying ADH queries from local text files to multiple projects based on a single configuration file
- syncing queries between ADH and local storage
- a local runner for testing and prototyping
- a batch query executor
- a branching mechanism
- job monitoring and other features

ADH Deployment Manager provides both a high-level interface for interacting with ADH via the Deployment object, and a low-level one via access to elements such as Analysis Query and Job, for issuing ad-hoc operations (like rerunning a particular query).

Minimal working example:

```python
# load necessary modules
from adh_deployment_manager.authenticator import AdhAutheticator
from adh_deployment_manager.deployment import Deployment
import adh_deployment_manager.commands as commands

# provide authentication mechanism
credentials = AdhAutheticator().get_credentials("/path/to/credentials.json")
developer_key = "INSERT_YOUR_DEVELOPER_KEY"

# instantiate deployment with config and credentials
# (and optionally path to folder where source queries are located)
deployment = Deployment(
    config="/path/to/config.yml",
    credentials=credentials,
    developer_key=developer_key,
    queries_folder="/path/to/adh-queries/",
    query_file_extention=".sql")

# deploy queries to ADH project(s)
deployer = commands.Deployer(deployment)
deployer.execute()

# run queries in ADH project(s)
runner = commands.Runner(deployment)
runner.execute()
```

Table of contents

- Project overview
- Requirements
- Installation
- Getting started
  - Access setup
  - (Recommended) Authenticating as a service account
  - OAuth 2.0 setup
- Create config
- Specify queries
  - Add new ADH queries
  - Use existing ADH queries
- Deploying and running queries

Project overview

An ADH Deployment Manager deployment consists of two elements:

- sql folder - contains ADH queries in .sql format
- config.yml file - specifies which queries from the sql folder should be deployed, alongside parameters and filtered row summary. More about the config at Create Config.

A possible structure for a my_adh_project deployment:

```
my_adh_project
|__config.yml
|__sql
   |__query_name1.sql
   |__query_name2.sql
```

Requirements

- Python 3
- Git

Installation

The CLI tool called adm can be installed from pip:

```
pip install adh-deployment-manager
```

Getting started

Access setup

Please follow "Get started with the Ads Data Hub API" to correctly set up API access to Ads Data Hub. After you set up API access there are two options to authenticate: via a service account, or via OAuth 2.0.

(Recommended) Authenticating as a service account

Authenticating as a service account is the recommended way of authentication for adh-deployment-manager. Please follow the steps outlined below:

1. Log in to the Google Cloud project that is connected to your ADH account.
2. Create a service account in your output GCP project (reference: https://cloud.google.com/iam/docs/creating-managing-service-accounts).
3. Download the service account's credentials (JSON format) to your local environment (reference: https://cloud.google.com/iam/docs/creating-managing-service-account-keys).
4. Assign the BigQuery Admin role to the service account (reference: https://cloud.google.com/iam/docs/granting-roles-to-service-accounts).
5. Generate an API key (the developer key for ADH) (reference: https://cloud.google.com/docs/authentication/api-keys).
6. Assign the Analyst access role to the service account (reference: https://developers.google.com/ads-data-hub/guides/assign-access-by-role).

OAuth 2.0 setup

If authenticating via a service account is not possible, please follow the steps outlined below:

1. Log in to the Google Cloud project that is connected to your ADH account.
2. Generate an OAuth 2.0 client ID and download the credentials: go to API & Services - Credentials, click + CREATE CREDENTIALS and select OAuth client ID. Select Desktop App as the application type, specify any application name, and click the CREATE button. Then click the download icon next to the credentials you just created.
3. Generate an API key (the developer key for ADH) (reference: https://cloud.google.com/docs/authentication/api-keys).

Once adh-deployment-manager is running you will be prompted to log in to your Google account so the program can authenticate.

Create config

config.yml is the core element of a deployment. It must contain two mandatory elements:

- customer_ids - customer IDs (either one or an array) for which queries should be deployed and/or run.
- queries_setup - compound element which consists of query titles, parameters, filtered row summary, etc.

The minimal working example of the config with the two required elements (customer_ids and queries_setup):

```yaml
customer_ids:
  - 123456789
queries_setup:
  - queries:
      - query_title
```

Optional elements

config.yml may contain optional elements that can be associated with all queries in the queries_setup block (see the composite sketch after this list):

- ads_data_from - list of customer IDs to get ads data from. If the field is not included in the config, it is automatically set to the list of regular customer_ids.
- bq_project & bq_dataset - BQ project and dataset used for storing output data (specified during ADH setup).
- date_range_setup - date range for running queries in ADH, which consists of two elements: start_date and end_date in YYYY-MM-DD format (i.e., 1970-01-01). Supports template values, e.g. YYYYMMDD-10 transforms into "10 days ago from execution day".
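To make the optional elements concrete, here is a hedged composite config.yml sketch combining the documented keys; all IDs, project and dataset names are placeholders:

```yaml
customer_ids:
  - 123456789
ads_data_from:
  - 987654321                # placeholder customer ID to pull ads data from
bq_project: my_gcp_project   # placeholder project name
bq_dataset: adh_output       # placeholder dataset name
date_range_setup:
  start_date: YYYYMMDD-10    # template value: 10 days ago from execution day
  end_date: YYYYMMDD-1       # template value: yesterday
queries_setup:
  - queries:
      - query_title
```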
Specifying queries and their parameters

queries_setup may contain the following elements:

- queries - list of query titles that need to be deployed/launched in ADH. If a query with the specified title cannot be found in ADH, adh-deployment-manager will try to build a query with that title based on the corresponding filename in the sql folder. Every query_title in queries shares the same parameters and filtered_row_summary provided for the block. queries should contain at least one query_title. query_title is used to create the table in BQ, so the output table will be bq_project.bq_dataset.query_title.
- (optional) parameters - block that contains one or more parameter_name entries with a corresponding type and values.
  - type - type of the parameter (i.e. INT64, STRING, DATE, TIMESTAMP); required field.
  - values - values used when the query runs; optional field. If you provide an array structure here (items separated by `-` on each line), the type of the parameter will be ARRAY of type `type`.
- (optional) filtered_row_summary - block that contains one or more filtered row summary column names with a corresponding type and value.
  - type - type of filtered row summary (either SUM or CONSTANT).
  - value - specified only when type CONSTANT is used; specifies how this metric or dimension will be named.
- (optional) execution_mode - option to split query execution and the saving of results by day. Can be either normal (the query is run over the start_date-end_date date range) or batch (query execution is split over each day between the query's start_date and end_date). execution_mode can be omitted, in which case the query is executed in normal mode.
- (optional) wait - specifies whether the next query or query block should be launched only after successful execution of the previous one. Can take two possible values: each (wait for each query in the block) or block (wait only for the last query in the block). If wait is omitted, query execution will be independent of the previous one.
- (optional) replace - if a query has any placeholders (specified in {placeholder} format), the replace block should contain key: value pairs which replace the placeholders in the query text with the supplied values. This can be useful when specifying bq_project and bq_dataset names. replace can be omitted, in which case no replacements are performed.
- (optional) date_range_setup - in case queries in a block should run over a different time period than specified in the global date_range_setup, you can specify start_date and end_date here.

Example of queries_setup

An example queries_setup structure looks like the following:

```yaml
queries_setup:
  - queries:
      - query_title1
      - query_title2
    parameters:
      parameter_name1:
        type: INT64
        values:
          - 1234
          - 1235
          - 1236
      parameter_name2:
        type: STRING
        values: my_value
      parameter_name3:
        type: DATE
    filtered_row_summary:
      metric_name:
        type: SUM
      dimension_name:
        type: CONSTANT
        value: my_value
    execution_mode: normal
    wait: each
    replace:
      placeholder1: value1
      placeholder2: value2
    date_range_setup:
      start_date: YYYYMMDD-10
      end_date: YYYYMMDD-1
  - queries:
      ...
```

To make the structure of the config clearer, let's cover all elements in the example above. queries_setup contains a query block with two queries (query_title1 and query_title2).

Deploying: for each of these queries:

- Three parameters are created:
  - parameter_name1 of type ARRAY of INT64 (three sample values are specified under values; these values will be used at runtime)
  - parameter_name2 of type STRING (with a single value my_value, which will be used at runtime)
  - parameter_name3 of type DATE. This parameter does not have a value associated with it and should be specified at runtime (as a keyword argument to the corresponding function call).
- A filtered row summary is added:
  - column metric_name will contain the sum of all values filtered due to privacy checks
  - column dimension_name will contain my_value for all users filtered due to privacy checks.

Running:

- Since start_date: YYYYMMDD-10 and end_date: YYYYMMDD-1, both queries are executed over the last 10 days (excluding today).
- Since execution_mode: normal, the queries run over the start_date to end_date period without splitting execution by day.
- Since wait: each, query_title2 is launched only after query_title1 execution is completed.
- Both queries contain two placeholders, placeholder1 and placeholder2. When deploying them to ADH they are replaced with value1 and value2 respectively.

Specify queries

Add new ADH queries

If the purpose of the deployment is to create queries in ADH based on source code, you need to create a dedicated folder to contain these queries. By default adh-deployment-manager expects an sql folder containing files with the .sql extension. Both queries_folder and query_file_extention can be specified when creating the Deployment object:

```python
from adh_deployment_manager.deployment import Deployment

my_deployment = Deployment(
    config="path/to/config.yml",
    credentials=my_credentials,
    queries_folder="path/to/queries_folder",
    query_file_extention=".sql"
)
```

Use existing queries from ADH

If the purpose of the deployment is to run existing ADH queries, you should omit queries_folder and query_file_extention when creating the Deployment object. Query titles in the queries block should be the titles of the queries found in the ADH UI:

```python
from adh_deployment_manager.deployment import Deployment

my_deployment = Deployment(
    config="path/to/config.yml",
    credentials=my_credentials
)
```

Deploying and running queries

ADH Deployment Manager installs the adm CLI tool that simplifies interaction with the library. adm accepts several arguments:

- command - one of run, deploy, update, fetch
- subcommand - one of deploy or update
- -c path/to/config.yml - specifies where the config is located
- -q path/to/queries_folder - specifies where the folder with queries is located
- -l path/to/output_folder - specifies where queries fetched from ADH should be stored

In order to run these commands you'll need to export the developer key as an environment variable:

```
export ADH_DEVELOPER_KEY=<developer_key>
```

Usage

```
adm [OPTIONS] command subcommand

options:
  -c path/to/config.yml
  -q path/to/queries_folder
  -l path/to/output_folder
```

Examples

Deploy queries based on config:

```
adm -c path/to/config.yml -q path/to/queries deploy
```

Run queries without deployment:

```
adm -c path/to/config.yml run
```

Run and update queries:

```
adm -c path/to/config.yml -q path/to/queries run update
```

Fetch queries from config and store them in a specified location:

```
adm -c path/to/config.yml -l path/to/output_folder fetch
```
adhesive
Adhesive is a micro BPMN runner written in Python.

You can easily model complex logic in BPMN, and Adhesive will execute it for you, taking care of parallelism, joining, etc. using the standard BPMN notation. Since it's small, it can easily be embedded in containers, or replace complex scripts.

Installation

```
pip install adhesive
```

Getting Started

Simple Builds

To create a basic build you just create a file in your project named _adhesive.py. In it you then declare some tasks. For example:

```python
import adhesive

@adhesive.task("Checkout Code")
def checkout_code(context):
    adhesive.scm.checkout(context.workspace)

@adhesive.task("Run Build")
def run_build(context):
    context.workspace.run("mvn clean install")

adhesive.build()
```

Since no process was defined, adhesive takes the defined tasks, stitches them in order, and has a process defined as <start> → Checkout Code → Run Build → <end>.

To run it, simply call adhesive in the terminal:

```
adhesive
```

This is the equivalent of Jenkins stages. But we can do better:

Programmatic Builds

In order to use the full programmatic functionality that adhesive offers, you are able to stitch your BPM process manually. You have sub-processes, branching and looping available:

```python
import uuid
import adhesive

@adhesive.task("Run in parallel item {loop.value}")
def context_to_run(context):
    if not context.data.executions:
        context.data.executions = set()
    context.data.executions.add(str(uuid.uuid4()))

data = adhesive.process_start()\
    .branch_start()\
        .sub_process_start()\
            .task("Run in parallel", loop="items")\
        .sub_process_end()\
    .branch_end()\
    .branch_start()\
        .sub_process_start()\
            .task("Run in parallel item {loop.value}", loop="items")\
        .sub_process_end()\
    .branch_end()\
    .process_end()\
    .build(initial_data={"items": [1, 2, 3, 4, 5]})

assert len(data.executions) == 10
```

Here you see the full BPMN power starting to unfold. We create a process that branches out and creates sub-processes (sub-processes can be looped as a single unit). Loops create execution tokens that also run in parallel in the same pool.

Note that you can pass initial_data into the process, and you can also get the context.data from the last execution token.

BPMN Process

Last but not least, adhesive reads BPMN files and builds the process graph from them. This is particularly good if the process is complex and has a lot of dependencies. The build of adhesive is modeled as a BPMN process itself, so we load it from the file directly using adhesive.build_bpmn("adhesive-self.bpmn"):

```python
import adhesive

@adhesive.task("Read Parameters")
def read_parameters(context) -> None:
    context.data.run_mypy = False
    context.data.test_integration = True

@adhesive.task(re=r"^Ensure Tooling:\s+(.+)$")
def gbs_ensure_tooling(context, tool_name) -> None:
    ge_tooling.ensure_tooling(context, tool_name)

# ...
adhesive.build_bpmn("adhesive-self.bpmn")
```

As you see, steps are parametrizable, and use the data from the task name in the step definition.

Defining BPMN Tasks

For example, here we define an implementation of tasks using regex matching, extracting values:

```python
@adhesive.task(re=r"^Ensure Tooling:\s+(.+)$")
def gbs_ensure_tooling(context, tool_name) -> None:
    # ...
```

Or a user task (interactive form):

```python
@adhesive.usertask('Publish to PyPI?')
def publish_to_pypi_confirm(context, ui):
    ui.add_checkbox_group(
        "publish",
        title="Publish",
        values=(
            ("nexus", "Publish to Nexus"),
            ("pypitest", "Publish to PyPI Test"),
            ("pypi", "Publish to PyPI"),
        ),
        value=("pypitest", "pypi")
    )
```

Don't forget, the @adhesive.task and @adhesive.usertask decorators just define mappings to implementations of the task names available in the process. Only adhesive.build() creates a linear process out of the declaration of the tasks.

As you notice, there's always a first parameter named context. The context parameter contains the following information:

- task - the Task in the graph that's currently matched against this execution.
- task_name - the resolved name, with the variables interpolated. Matching is attempted after the name is resolved.
- data - data that the current execution token contains. This data is always cloned across executions, and `set`s and `dict`s are automatically merged if multiple execution tokens are merged. So you have a modifiable copy of the data that you're allowed to change, and that is propagated into the following execution tokens.
- loop - if the current task is in a loop, the entry contains its index, the key and value of the items being iterated, and the expression that was evaluated. Note that loop execution happens in parallel, since these are simple execution tokens.
- lane - the current lane the task belongs to. Implicitly it's default.
- workspace - a way to interact with a system, execute commands, create files, etc.

adhesive runs all the tasks on a parallel process pool for better performance. This happens automatically.

The tasks perform the actual work for the build. But in order to do that, we need to be able to execute commands and create files. For that we have the workspace.

Start Event Messages

Adhesive also supports start events with messages in the process. Each message start event is processed in its own thread and yields results:

```python
@adhesive.message('Generate Event')
def message_generate_event(context):
    for i in range(10):
        yield i

@adhesive.task('Process Event')
def process_event(context):
    print(f"event data: {context.data.event}")
```

Each yield generates a new event that fires up the connected tasks. The data yielded is present in the event attribute of the token, for the following tasks.

Callback Messages

The other option to push messages into a process is to use callback messages:

```python
@adhesive.message_callback('REST: /rest/process-resource')
def message_rest_rest_process_resource(context, callback):
    @app.route("/rest/resource/create")
    def create_resource():
        callback(Dict({"type": "CREATE"}))
        return "Create event fired"
```

Using this we're able to hook into other systems that have their own loop, such as in this case the Flask server, and push messages using the callback. This approach also has the advantage of not creating new threads for each message endpoint.

Connections

Tasks are linked using connections. In some cases, connections can have conditions. Conditions are expressions that, when evaluated to True, allow the token to pass the connection. In the connection there is access to task, task_name, data, loop, lane and context, as well as the variables defined in context.data.

So if a task defines a data field such as:

```python
@adhesive.task('prepare data')
def prepare_data(context):
    context.data.navigation_direction = "forward"
```

the navigation_direction can be validated in the condition with any of the following:

```
context.data.navigation_direction == "forward"
data.navigation_direction == "forward"
navigation_direction == "forward"
```

Workspace

Workspaces are just a way of interacting with a system, running commands, and writing/reading files. Currently there's support for:

- the local system
- docker containers
- kubernetes
- remote SSH connections

When starting, adhesive allocates a default workspace folder in the configured temp location (implicitly /tmp/adhesive). The Workspace API is an API that allows you to run commands and create files, taking care of redirecting outputs, and even escaping the commands so they can easily be run inside docker containers.

The workspace is available directly from the context, by calling context.workspace. For example, calling context.workspace.run(...) will run the command on the host where adhesive is running:

```python
@adhesive.task("Run Maven")
def build_project(context) -> None:
    context.workspace.run("mvn clean install")
```

If we're interested in the program output, we simply do a run with capture_stdout, which returns the output as a string:

```python
@adhesive.task("Test")
def gbs_test_linux(context) -> None:
    content = context.workspace.run("echo yay", capture_stdout=True)
    assert content == "yay"
```

or we can use the simplified call run_output, which guarantees a str as result, unlike the Optional[str] of run:

```python
@adhesive.task("Test")
def gbs_test_linux(context) -> None:
    content = context.workspace.run_output("echo yay")
    assert content == "yay"
```

The run commands implicitly use /bin/sh, but a custom shell can be specified by passing the shell argument:

```python
content = context.workspace.run_output("echo yay", shell="/bin/bash")
```

Docker Workspace

To create a docker workspace that runs inside a container with the tooling, you just need to:

```python
from adhesive.workspace import docker
```

Then, to spin up a container that has the current folder mounted in, where you're able to execute commands inside the container, you just need to:

```python
@adhesive.task("Test")
def gbs_test_linux(context) -> None:
    image_name = 'some-custom-python'
    with docker.inside(context.workspace, image_name) as w:
        w.run("python -m pytest -n 4")
```

This creates a container using our current context workspace, where we simply execute what we want using the run() method. After the with statement the container is torn down automatically.

SSH Workspace

In order to have ssh, make sure you installed adhesive with SSH support:

```
pip install -U adhesive[ssh]
```

To have an SSH workspace, it's again the same approach:

```python
from adhesive.workspace import ssh
```

Then, to connect to a host, you can just use ssh.inside the same way as in the docker sample:

```python
@adhesive.task("Run over SSH")
def run_over_ssh(context) -> None:
    with ssh.inside(context.workspace,
                    "192.168.0.51",
                    username="raptor",
                    key_filename="/home/raptor/.ssh/id_rsa") as s:
        s.run("python -m pytest -n 4")
```

The parameters are passed to paramiko, which is the implementation beneath the SshWorkspace.

Kubernetes Workspace

To run things in pods, it's the same approach:

```python
from adhesive.workspace import kube
```

Then we can create a workspace to run things in kubernetes pods. The workspace, as well as the API, uses the kubectl command underneath:

```python
@adhesive.task("Run things in the pod")
def run_in_the_pod(context) -> None:
    with kube.inside(context.workspace, pod_name="nginx-container") as pod:
        pod.run("ps x")  # This runs in the pod
```

Kubernetes API

Adhesive also packs a kubernetes API, available as adhesive.kubeapi:

```python
from adhesive.kubeapi import KubeApi
```

To use it, we need to create an instance against a workspace:

```python
@adhesive.task('Determine action')
def determine_action(context):
    kubeapi = KubeApi(context.workspace,
                      namespace=context.data.target_namespace)
```

Let's create a namespace:

```python
kubeapi.create(kind="ns", name=context.data.target_namespace)
```

Or let's create a service using the kubectl apply approach:

```python
kubeapi.apply(f"""
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-http
      labels:
        app: {context.data.target_namespace}
    spec:
      type: ClusterIP
      ports:
        - port: 80
          protocol: TCP
          name: http
      selector:
        app: {context.data.target_namespace}
""")
```

Or let's get some pods:

```python
pod_definitions = kubeapi.getall(
    kind="pod",
    filter=f"execution_id={context.execution_id}",
    namespace=context.data.target_namespace)
```

These return objects that allow navigating properties as regular python attributes:

```python
new_pods = dict()

for pod in pod_definitions:
    if not pod.metadata.name:
        raise Exception(f"Wrong definition {pod}")
    new_pods[pod.metadata.name] = pod.status.phase
```

You can also navigate properties that don't exist yet, for example to wait for the status of a pod to appear:

```python
@adhesive.task('Wait For Pod Creation {loop.key}')
def wait_for_pod_creation_loop_value_(context):
    kubeapi = KubeApi(context.workspace,
                      namespace=context.data.target_namespace)
    pod_name = context.loop.key
    pod_status = context.loop.value

    while pod_status != 'Running':
        time.sleep(5)
        pod = kubeapi.get(kind="pod", name=pod_name)
        pod_status = pod.status.phase
```

To get the actual data from the wrappers that the adhesive API creates, you can simply read the _raw property.

Workspace API

Here's the full API for it:

```python
class Workspace(ABC):
    """
    A workspace is a place where work can be done. That means a writable
    folder is being allocated, that might be cleaned up at the end of the
    execution.
    """
    @abstractmethod
    def write_file(self, file_name: str, content: str) -> None:
        pass

    @abstractmethod
    def run(self, command: str, capture_stdout: bool = False) -> Union[str, None]:
        """
        Run a new command in the current workspace.
        """
        pass

    @abstractmethod
    def rm(self, path: Optional[str] = None) -> None:
        """
        Recursively remove the file or folder given as path. If no path
        is sent, the whole workspace will be cleared.
        """
        pass

    @abstractmethod
    def mkdir(self, path: str = None) -> None:
        """
        Create a folder, including all its needed parents.
        """
        pass

    @abstractmethod
    def copy_to_agent(self, from_path: str, to_path: str) -> None:
        """
        Copy the files to the agent from the current disk.
        """
        pass

    @abstractmethod
    def copy_from_agent(self, from_path: str, to_path: str) -> None:
        """
        Copy the files from the agent to the current disk.
        """
        pass

    @contextmanager
    def temp_folder(self):
        """
        Create a temporary folder in the current `pwd` that will be
        deleted when the `with` block ends.
        """
        pass

    @contextmanager
    def chdir(self, target_folder: str):
        """
        Temporarily change a folder, that will go back to the original
        `pwd` when the `with` block ends. To change the folder for the
        workspace permanently, simply assign the `pwd`.
        """
        pass
```
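As a usage illustration of the temp_folder and chdir context managers above, a minimal hedged sketch; the task name and file contents are made up, and whether temp_folder() also changes the pwd into the new folder is an assumption:

```python
@adhesive.task("Stage files in a scratch folder")
def stage_files(context) -> None:
    w = context.workspace

    # temp_folder() creates a folder under the current pwd and deletes it
    # when the with block ends; whether it also switches the pwd into that
    # folder is assumed here, not confirmed by the docstring above.
    with w.temp_folder():
        w.write_file("notes.txt", "scratch content")  # illustrative file

    # chdir() temporarily switches the working directory and restores the
    # original pwd when the with block ends.
    with w.chdir("/tmp"):
        w.run("ls")
```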
User Tasks

In order to create user interactions, you have user tasks. These define form elements that are populated into context.data and available in subsequent tasks. When a user task is encountered in the process flow, the user is prompted to fill in the parameters. Note that the other started tasks continue running, proceeding forward with the build.

The name used in the method call defines the name of the variable in context.data.

For example, here we define a form that includes a checkbox group for picking where to publish the package:

```python
@adhesive.usertask("Read User Data")
def read_user_data(context, ui) -> None:
    ui.add_input_text("user", title="Login", value="root")
    ui.add_input_password("password", title="Password")
    ui.add_checkbox_group(
        "roles",
        title="Roles",
        value=["cyborg"],
        values=["admin", "cyborg", "anonymous"])
    ui.add_radio_group(
        "disabled",  # title is optional
        values=["yes", "no"],
        value="no")
    ui.add_combobox(
        "machine",
        title="Machine",
        values=(("any", "<any>"),
                ("win", "Windows"),
                ("lin", "Linux")))
```

This will prompt the user with the corresponding form.

This data is also available for edge conditions, so in the BPMN modeler we can define a condition such as "pypi" in context.data.roles, or, since data is also available in the edge scope, "pypi" in data.roles.

The other option is simply reading what the user has selected in a following task:

```python
@adhesive.task("Register User")
def publish_items(context):
    for role in context.data.roles:
        # ...
```

User tasks support the following API, available on the ui parameter (the parameter after the context):

```python
class UiBuilderApi(ABC):
    def add_input_text(self,
                       name: str,
                       title: Optional[str] = None,
                       value: str = '') -> None: ...

    def add_input_password(self,
                           name: str,
                           title: Optional[str] = None,
                           value: str = '') -> None: ...

    def add_combobox(self,
                     name: str,
                     title: Optional[str] = None,
                     value: Optional[str] = None,
                     values: Optional[Iterable[Union[Tuple[str, str], str]]] = None) -> None: ...

    def add_checkbox_group(self,
                           name: str,
                           title: Optional[str] = None,
                           value: Optional[Iterable[str]] = None,
                           values: Optional[Iterable[Union[Tuple[str, str], str]]] = None) -> None: ...

    def add_radio_group(self,
                        name: str,
                        title: Optional[str] = None,
                        value: Optional[str] = None,
                        values: Optional[List[Any]] = None) -> None: ...

    def add_default_button(self,
                           name: str,
                           title: Optional[str] = None,
                           value: Optional[Any] = True) -> None: ...
```

Custom Buttons

In order to allow navigation inside the process, the add_default_button API exists to permit the creation of buttons. Implicitly a single button with an OK label is added to the user task; when pressed, it fills context.data in the outgoing execution token.

With add_default_button we create custom buttons such as Back and Forward, or whatever we need in our process. Unlike the default OK button, when these are pressed they also set in context.data the value that's assigned to them. This value can then be used further in a Gateway, or simply as a condition on the outgoing edges.

The title is optional; only if it is missing, it is built either from the name (if all the buttons in the form have unique names, since each assigns a different variable in context.data) or from the value (if they have overlapping names).

Secrets

Secrets are files that contain sensitive information and are not checked into the project. In order to make them available to the build, we need to define them either in ~/.adhesive/secrets/SECRET_NAME or in the current folder as .adhesive/secrets/SECRET_NAME.

In order to make them available, we just use the secret function that creates the file in the current workspace and deletes it when exiting. For example, here's how we're doing the actual publish, creating the secret inside a docker container:

```python
@adhesive.task('^PyPI publish to (.+?)$')
def publish_to_pypi(context, registry):
    with docker.inside(context.workspace,
                       context.data.gbs_build_image_name) as w:
        with secret(w, "PYPIRC_RELEASE_FILE", "/germanium/.pypirc"):
            w.run(f"python setup.py bdist_wheel upload -r {registry}")
```

Note the docker.inside that creates a different workspace.

Configuration

Adhesive supports configuration via its config files, or environment variables. The values are read in the following order:

1. environment variables: ADHESIVE_XYZ, then
2. values in the project config yml file: .adhesive/config.yml, then
3. values configured in the global config yml file: $HOME/.adhesive/config.yml.

Currently the following values are defined for configuration:

- temp_folder - default value /tmp/adhesive, environment var: ADHESIVE_TEMP_FOLDER. Where all the build files will be stored.
- plugins - default value [], environment var: ADHESIVE_PLUGINS_LIST. This contains a list of folders that will be added to sys.path. So to create a reusable plugin shared by multiple builds, simply create a folder with python files, then point to it in ~/.adhesive/config.yml:

```yaml
plugins:
  - /path/to/folder
```

Then in the python path you can simply do regular imports.

- color - default value True, environment var: ADHESIVE_COLOR. Marks whether the logging should use ANSI colors in the terminal. Implicitly this is true, but if log parsing is needed, it can make sense to set it to false.
- log_level - default value info, environment var: ADHESIVE_LOG_LEVEL. How verbose the logging on the terminal should be. Possible values are trace, debug, info, warning, error and critical.
- pool_size - default value is empty, environment var: ADHESIVE_POOL_SIZE. Sets the number of workers that adhesive will use. Defaults to the number of CPUs if unset.
- stdout - default value is empty, environment var: ADHESIVE_STDOUT. Implicitly, for each task the log is redirected to a different file, and only shown if the task failed. The redirection can be disabled.
- parallel_processing - default value is thread, environment var: ADHESIVE_PARALLEL_PROCESSING. Implicitly tasks are scaled using multiple threads, in order to alleviate waits for I/O. This is useful when remote ssh workspaces are defined in the lanes, so the same connection can be reused for multiple tasks. This value can be set to process in case the tasks are CPU intensive; this has the drawback of recreating the workspace connections on each task execution.

Hacking Adhesive

Adhesive builds with itself. In order to do that, you need to check out the adhesive-lib shared plugin, and configure your local config to use it:

```yaml
plugins:
  - /path/to/adhesive-lib
```

Then simply run the build using adhesive itself:

```
adhesive
```
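Pulling the configuration keys from the Configuration section above into one place, a hedged example .adhesive/config.yml; all paths and values are placeholders:

```yaml
# Illustrative project config (.adhesive/config.yml); values are placeholders.
temp_folder: /tmp/adhesive      # where build files are stored
plugins:
  - /path/to/shared-plugins     # folders appended to sys.path
color: true                     # ANSI colors in terminal logging
log_level: debug                # trace|debug|info|warning|error|critical
pool_size: 4                    # worker count; defaults to CPU count if unset
parallel_processing: thread     # or "process" for CPU-intensive tasks
```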
adhesive-zeebe
This is a fork of Adhesive to handle Zeebe Modeler BPMN workflows. The changes are:

- Support for Zeebe Modeler BPMN xml files.
- Can be used in a thread.
- Should be used as a library from another app.

The rest of the package description is the original Adhesive documentation, reproduced verbatim in the adhesive entry above; it might not work as expected when adhesive is used as a CLI command.
adhocboost
AdHocBoost

Welcome to AdHocBoost, a model specialized for classification in severely class-imbalanced scenarios.

About

Many data science problems have severely imbalanced classes (e.g. predicting fraudulent transactions, predicting order cancellations in food delivery, predicting if a day in Berlin will be sunny). In these situations, predicting the positive class is hard! This module aims to alleviate some of that.

The AdHocBoost model works by creating n sequential models. The first n-1 models can most aptly be thought of as dataset-filtering models, i.e. each one does a good job of classifying rows as "definitely not the positive class" versus "maybe the positive class". The nth model only works on this filtered "maybe positive" data.

Like this, the class imbalance is alleviated at each filter step, such that by the time the dataset is filtered for final classification by the nth model, the classes are considerably more balanced.

Run Instructions

Installation is with pip install adhocboost. Beyond that, AdHocBoost conforms to an sklearn-like API: to use it, you simply instantiate it, and then use .fit(), .predict(), and .predict_proba() as you see... fit ;)
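A minimal usage sketch based on the sklearn-like API described above; the import path and the (empty) constructor arguments are assumptions, since the description doesn't document them, and the data is synthetic:

```python
# Hypothetical usage sketch; the import path and constructor arguments are
# assumptions, not documented in this description.
import numpy as np
from adhocboost import AdHocBoost

# Synthetic, severely imbalanced data: roughly 1% positive class.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
y = (rng.random(10_000) < 0.01).astype(int)

model = AdHocBoost()  # sklearn-like estimator
model.fit(X, y)       # fits the sequential filter models plus the final model

predictions = model.predict(X)          # hard class labels
probabilities = model.predict_proba(X)  # class probabilities
```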
adhoccomputing
Ad Hoc Computing (AHC) Framework

Communication engineers are specialised in digital communications and networking technologies, whereas computer engineers or application developers are mostly specialised in software engineering with no or minimal background in the communication domain. With the introduction of virtualisation and softwarization, networks have become programmable, which requires knowledge from both the telecommunication and computer engineering domains. We have to fill the gap between these two domains to address the challenges of future networks. The main objective of the AHC project is to develop a distributed computing and learning environment on wireless networks employing software-defined radios. There are many simulators, emulators, and test-beds for researching networks, and event-driven concurrent programming tools for distributed algorithms. However, there is a need for a tool that helps researchers integrate distributed algorithms considering the specifics of wireless networks. The tool has to incorporate wireless channel characteristics, packet collisions, contention-based channel access, forward- and backward-error correction, topology management, multi-hopping, end-to-end reliable data transport and many other issues related to wireless communication and networking.

The overall goal of the AHC project is to develop an open-source education and research software framework that facilitates the development of distributed algorithms on wireless networks considering the impairments of wireless channels. The framework will be used as a learning and prompt-prototyping tool.

Objectives

The specific objectives are:

- Creating web-based education tools for teaching and learning distributed systems, networks, and communication,
- Abstraction of the intricate details of the digital communication discipline from the networking or distributed computing domains,
- Creating easy-to-understand and accessible educational materials about wireless networks,
- Providing hands-on opportunities for learning these technologies, inside of the classroom and out,
- Facilitating a framework to invent new technologies,
- Improving existing open-source digital communications technologies,
- Creating a remote simulation environment by using web-based tools for getting more realistic, real-world experiment results,
- Creating simulation configurations dynamically so that users will be able to run simulations meeting the specific requirements of their projects.

Users

The users of the AHC framework will be students, teachers, researchers and engineers working in the fields of digital communication, networking or distributed computing. The developed framework will be available to all these user groups as open-source software.

Design

In this section, we present the details of the ad hoc computing (AHC) library and the algorithms thereof, following an asynchronous event-driven composition model. The AHC library is implemented in the Python language, and the software is provided as open-source at https://github.com/cengwins/ahc. The basic abstraction of AHC is a component, which is a single-threaded automaton. In other words, a component is a single-threaded process implemented in Python where the thread waits on a queue for accepting input events from other components. Each component has a name and an instance number. The name and the instance number together uniquely represent each component instance. A component is an event-driven active process that waits for an input event.
From this perspective, a component is an automaton.

Each component has a separate eventhandlers dictionary (hash table) to which the event handlers are added on the initialization of the component. The component model automatically adds the "init" event to the eventhandlers dictionary. Note that the name of the event is "init" and the function that will handle the "init" event is onInit. The constructor of the component model executes the following:

- initializes the eventhandlers dictionary. This dictionary allows us to develop generic component models and automata.
- adds the default events (initialize, messagefromtop, messagefrompeer) to the event handlers. After all the components are created and initialized, the onInit function of all components will be triggered with the INIT event in a single shot. The default onInit implementation is a fake function. If the extended component does not implement it, the default onInit method will be called.
- creates an input queue. Each component has a single input queue that will be used by the connected components or the component itself to trigger events.
- initializes the connectors that allow us to connect components for composing complex models. Although a developer may use other techniques for connecting components to each other, the default method for composition is to follow a stack architecture, on which we will further elaborate in the sequel.
- adds itself to the ComponentRegistry, which is a singleton class that keeps track of all instantiated components. The ComponentRegistry can be used globally to find a component in the composition.
- finally creates the thread that will listen to the input queue for input events. The queuehandler function is going to handle the events and call the associated event handler. It is possible to extend the component model to further implement various queue handlers.

Although the number of threads that will listen to the queue is set to one by default, it can be changed by the developer; notice that the order of the events per component may change if more than one thread is employed. If an event is inserted in the input queue of a component, the thread of the component is automatically triggered to fetch events from the queue on a first-come-first-served basis. Events are defined with a basic data structure.

An event is generated by a component that becomes the source of that event. The component reference is stored in the eventsource member of the Event class. The event member defines the event. Each component has to declare its component event type enumeration if events beyond the defaults will be used.

Eventhandlers keep the association between the event and its handler function. The eventhandlers dictionary has to be populated with component-specific events on the initialization of the component instance. Using the eventhandlers dictionary, the queuehandler determines the member function which is going to be invoked by the thread that runs the component. Note that events are enumerations. The event creation time is stored in the time member. The content that will be carried from one component to another inside the event is the eventcontent member.
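To make the moving parts just described concrete, here is a minimal self-contained sketch of the component/event pattern; the class layout and method names below are illustrative reconstructions for explanation, not the AHC library's actual API:

import queue
import threading

class Event:
    """Basic event data structure, as described above."""
    def __init__(self, eventsource, event, eventcontent=None):
        self.eventsource = eventsource    # component that generated the event
        self.event = event                # event name, e.g. "init"
        self.eventcontent = eventcontent  # payload, typically a message

class Component:
    """Single-threaded automaton waiting on an input queue."""
    def __init__(self, componentname, componentinstancenumber):
        self.componentname = componentname
        self.componentinstancenumber = componentinstancenumber
        self.eventhandlers = {"init": self.on_init}  # event -> handler lookup
        self.inputqueue = queue.Queue()
        self.connectors = {"UP": [], "DOWN": [], "PEER": []}
        threading.Thread(target=self.queuehandler, daemon=True).start()

    def on_init(self, eventobj):
        pass  # default no-op; concrete components override this

    def trigger_event(self, eventobj):
        self.inputqueue.put(eventobj)  # other components call this

    def queuehandler(self):
        while True:
            eventobj = self.inputqueue.get()  # first-come-first-served
            handler = self.eventhandlers.get(eventobj.event)
            if handler is not None:
                handler(eventobj)  # dispatch via the eventhandlers lookup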
Event contents are, in general, the messages the peering components exchange, although any content is allowed in the implementation.

The queuehandler is a simple function that fetches the event at the head of the queue that is passed as a parameter to itself, gets the event name from the event object, associates the event with the event handler by a lookup in the eventhandlers dictionary, and then calls the event handler by passing the event as the parameter. The triggerevent function is used by components to put events into the input queue of a component. For any event evt, the handler is the function "onEvt", where the eventobj of class Event is the sole parameter.

Components are stacked to construct a complex component. Since we follow a stack hierarchy, every component will have a reference to zero or more components on top or bottom of itself. Components can be connected to other components at the same layer as peers. The references to other components are called connectors. Components send each other events through connectors. There is a many-to-many relationship among components. A component has three connectors by default: "UP", "DOWN", and "PEER". The "UP" and "DOWN" connectors refer to the components that reside at the immediately higher or immediately lower layers, respectively. The "PEER" connector is used to communicate with other components at the same layer. Any complex component can be implemented out of simpler components by following this composition mechanism, without any limitation on the depth.

Components invoke each other using the sendup, senddown, and sendpeer functions. When these functions are called, all of the components associated with the designated connector receive the event. If a component does not implement the event handler associated with the event, then the event is silently discarded for that component. Multiplexing is not employed; components are supposed to take action by implementing the designated event handler, or by implementing multiplexing themselves via some field defined in the eventcontent.

The topology of the wireless network to be experimented with is generated using the NetworkX Python package. In an experimentation model, there will be nodes that are connected over some channels. The DOWN connector of nodes is linked to the channels. Channels do not employ the default connector types; the unique identifiers (name concatenated with the instance number) are employed as connector names in channels.

A topology may consist of one or more nodes (or components). To invoke the INIT event for all instantiated components, the start function of the Topology class has to be invoked. Then, the main thread has to loop forever.

There are several ways of creating a topology. In general, NetworkX graph-generation methods are used to create a graph that is provided as an input parameter to the constructFromGraph function of the Topology class. This is a very powerful method, since the NetworkX package offers many graph generators. For each node, a component of type nodetype is created, and for each edge in the graph, a channel of type channeltype is created and the components are connected to that channel.

The eventcontent field of the Event class is a generic member. Anything can be provided as event content. However, in this project, we assume the wireless network model to be experimented with is a packet-switching network where a store-and-forward mechanism is employed.
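Before moving on to message formats, here is a hedged sketch of the graph-driven topology construction described above; constructFromGraph and start are the functions the text names, while the import path and the MyNode/MyChannel classes are assumptions:

# Hedged sketch: import path is an assumption; MyNode/MyChannel are
# placeholders for user-defined component classes.
import networkx as nx
from adhoccomputing import Topology  # assumed import path

class MyNode: pass      # placeholder for a concrete node component
class MyChannel: pass   # placeholder for a concrete channel component

G = nx.random_geometric_graph(10, radius=0.4)  # any NetworkX generator works

topo = Topology()
topo.constructFromGraph(G, MyNode, MyChannel)  # a component per node, a channel per edge
topo.start()  # triggers the INIT event on all instantiated components

while True:
    pass  # the main thread loops forever, per the description above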
Although the typical approach of implementing separate physical addresses at the link layer and network-specific addresses at the network layer can be implemented over this design, we will generally use the componentinstancenumber as the unique address of a node. Inside a node, if a developer requires unique addressing of components, the unique identifier of a component, that is, the component name and number together, can be used.

The generic message structure is a simple one. Messages have headers following the GenericMessageHeader class and payloads following the GenericMessagePayload class. Messages can be encapsulated using this structure. Messages can be multiplexed and demultiplexed using the messagetype field. In other words, create a message, put the other message in the payload, and tag this message with another type. This structure allows us to design generic networking stacks. The other fields of the header are self-descriptive.

The Channel class is an extension of the component model. In other words, a channel is also a component that is significantly overridden. The generic channel model has two additional event types: INCH and DLVR. As an extension of the component model, the constructor first calls the constructor of the super class. Then, it adds the channel-specific event handlers. The channel model adds two queues in addition to the input queue: an interim (in-channel) queue and an output queue.

Channels have three pipeline stages. The messagefromtop event handler is the first pipeline stage. The inchannel event handler is the interim pipeline stage, and the deliver event handler is the final (output) pipeline stage. All pipeline stages have separate queues with separate threads; a typical channel has three threads. Messages that are transmitted over channels can be, among others, dropped, replicated, modified, or delayed. Such phenomena can be incorporated into extended channel models using these three pipeline stages. A developer may revolve an event over the same pipeline stage several times if required. The default deliver event handler delivers the event that carries a message to all of the components that are connected to the channel. In short, the default channel model is a broadcast channel with no losses or duplicates.

Although the order of the messages generated by the same component can be preserved, the order of messages generated by different components may change, since the pipeline stages are handled by separate threads that depend on the process scheduling of the employed operating system.

As the channel-specific event handlers propagate the event to the subsequent pipeline stage, they keep the eventsource intact. We do not let channels put their own references as the eventsource, to make channels transparent to the components. A developer has to employ the same approach when the channel models are extended. Furthermore, the triggerevent function puts events into the input queue by default. Since channels have multiple queues, we do not invoke the triggerevent function; the events are inserted into the queues directly.

As we have already described, the lowest-layer component's down connector is connected to the node model, and the down connector of the node model is connected to a channel. In the reverse direction, the channels have separate connector types for all components that are connected to them, and those connectors are references to the unique identifiers of the connected components.

Contact

Ertan Onur -@[email protected]
adhoc-interface
Interface

Under development and planning
adhoconda
adhoconda: ad hoc Jupyter kernels spun off of Conda environments

Impatient? Click here

Jupyter notebooks are awesome artifacts for sharing ideas and results (CITATION NEEDED). They enable one to visualize complex results and play with or reuse someone else's code. However, a common problem with notebooks is that of reproducibility: the results they show cannot be recomputed from the same data. A more basic problem often occurs before one even attempts such recomputation: whatever context and environment they were authored in is not carried along with the notebook, and is difficult, not to say literally impossible, to put together from scratch.

A common gambit for environment pre-reproducibility that tends to work within large organizations is for the community that shares code to first share a common environment that is cheaply instantiated, and can be thrown away once a computation is done. The notebooks exchanged within such a community will then all be headed with one or two cells that adapt the environment base to fit the code dependencies of the notebook. A strong example of this approach is embodied in nbgallery: users are then expected to be all configured to pull packages off of a PyPI simple repository, through the use of ipydeps (which is itself deployed in the base environment). Thus, all notebooks shared in this community start with a cell of the form

import ipydeps
ipydeps.pip(["dep1", "dep2", ...])

This project proposes an alternative based on Conda. This package manager is very popular across certain scientific communities, as it effortlessly sets up numerical dependencies prelinked against highly optimized numerical and algorithmic libraries for a range of common general purpose computation platforms (e.g. MKL on Intel). Furthermore, Conda makes a strong effort to associate compatible versions of dependencies by resolving jointly the deployment constraints for all explicit and implicit (cascading) dependencies. This yields stronger guarantees on the reproducibility of an environment that approaches a notebook author's.

For the impatient

In a common Conda environment you use, say base:

pip install adhoconda

Whenever you want to peruse a notebook you get off the Internet:

makenv <Display name of kernel>

then either open Jupyter from the environment in the local directory .conda-env, or open it in your live Jupyter and change the kernel to the name of this new one.

Notebooks attuned to this sharing system will all start with a pair of cells:

%pip install adhoconda
%load_ext adhoconda

followed with

%%condaenv
# Conda environment supporting the notebook execution
dependencies:
  - etc

Requirements

Conda deployed on the machine where one would peruse notebooks, either through Anaconda or Miniconda. (The author is a fan of the latter.)

Usage

The key to facilitating the reproducibility of notebook environments is to dedicate Conda environments to notebooks, and to make explicit how to build these environments from the notebooks themselves. This gives rise to two problems. On the one hand, we need to easily set up basic environments that are immediately available as Jupyter notebook kernels. On the other hand, the notebooks we share must augment such basic environments in order to satisfy the dependencies they require before they do anything else. The lightweight adhoconda package addresses both of these problems.

Creating ad hoc environments and kernels

One would install adhoconda using pip in an environment they use currently.

pip install adhoconda

It would not be out of place in one's base environment.
However, some folks prefer having their common tools live in a distinct bespoke environment that they activate at their shell's interactive startup -- you do you.

From there, let's say one has downloaded an IPython Jupyter notebook off the Internet that was intended for sharing. One wants this notebook to run a Jupyter kernel spun up in a discrete, ad hoc Conda environment. Let's say we would name this kernel Brave New World; from a shell where the environment having adhoconda is active, we create this new environment and Jupyter kernel.

makenv "Brave New World"

This creates a Conda environment named adhoc-brave-new-world, an IPython kernel of the same name, and the displayed name of that kernel will be Brave New World. One can then either activate this environment and run Jupyter Lab (or Notebook) from there to open the notebook they downloaded. If one is working out of a running Jupyter instance (say, through Jupyterhub), then they can open the notebook and set its kernel to Brave New World.

What's in my new environment?

By default, makenv puts together what it calls a home environment. This is described in a file that makenv puts at $HOME/.config/adhoconda/environment.yml the first time it runs, and that the user can modify at will to add the bells and whistles one uses, or to pin package versions to match some local constraints (which may limit sharing possibilities). The provided home environment description is minimalistic, including only Python and Jupyter Lab. In any case, were the user to require putting together an ad hoc environment based on an alternative YAML description, they can specify it through the --file flag of the makenv command.

Solving a notebook's dependencies

Now, the author of the downloaded notebook exposed the Conda environment they worked from. They did so using adhoconda! Their first cell looks like this:

%pip install adhoconda
%load_ext adhoconda

This deploys package adhoconda in the environment, accessible to the kernel. As an IPython extension, this package adds the cell magic %%condaenv. The second cell of the notebook uses it right away:

%%condaenv
dependencies:
  - python>=3.7
  - matplotlib
  - numpy
  - pandas
  - scikit-learn
  - pip
  - pip:
    - duckdb
    - sparse
    - umap-learn

This runs conda env update with the content of the cell written to a temporary YAML file, to use as the environment descriptor. It effectively ensures that all the notebook requires is present, bailing out if there are version conflicts with the incumbent environment contents. These cells also allow the author of the notebook to sanity-check their notebook before publishing: just like their audience, they can use adhoconda to set up a dedicated environment and kernel, have their dependencies set up in a single solve, and see whether all their computations run as they expect.

Conscientious notebook authors might go one step further and include the versions of their packages in their environment. This enables the audience to reconstitute approximately the dependencies at the moment the notebook was written. This stands to make the notebook resilient to API deprecations and bug introduction in the dependent packages, as well as drifts in computation results arising from bug fixes.

adhoconda enables updating a Conda environment with the data structure of the environment.yml file familiar to users of conda env. Obviously, such Conda-backed kernels can also support ipydeps and other similar PyPI-only environment builders.
The point of adhoconda is to push notebook authors to publish their environments, so that replicating their computations does not have to start with a reverse engineering job.

TODO

- Jupyter Lab extension to provide the features of the slightly awkward makenv script.
- Easier management of ad hoc environments and kernels: when is it a good moment to delete them?
adhoc-pdb
adhoc-pdb

A simple tool that allows you to debug your system whenever you want, with no overhead, even in production!

Install

pip install adhoc-pdb

(or pip install adhoc-pdb[cli] to get a nice CLI)

For development, clone this repo and run make.

Usage

In your code:

import adhoc_pdb
adhoc_pdb.install()

Debug using the adhoc-pdb cli:

adhoc-pdb <pid>

or using pure shell:

kill -SIGUSR1 <pid>
telnet localhost 9999
adhocracy-Pylons
The Pylons web framework is designed for building web applications and sites in an easy and concise manner. They can range from as small as a single Python module, to a substantial directory layout for larger and more complex web applications.

Pylons comes with project templates that help boot-strap a new web application project, or you can start from scratch and set things up exactly as desired.

Example: Hello World

from paste.httpserver import serve
from pylons import Configurator, Response

class Hello(object):
    def __init__(self, request):
        self.request = request

    def index(self):
        return Response(body="Hello World!")

if __name__ == '__main__':
    config = Configurator()
    config.begin()
    config.add_handler('home', '/', handler=Hello, action='index')
    config.end()
    serve(config.make_wsgi_app(), host='0.0.0.0')

Core Features

- A framework to make writing web applications in Python easy
- Utilizes a minimalist, component-based philosophy that makes it easy to expand on
- Harness existing knowledge about Python
- Extensible application design
- Fast and efficient, an incredibly small per-request call-stack providing top performance
- Uses existing and well tested Python packages

Current Status

The Pylons 1.0 series is stable and production ready. The Pylons Project now maintains the Pyramid web framework for future development. Pylons 1.0 users should strongly consider using it for their next project.

Download and Installation

Pylons can be installed with Easy Install by typing:

> easy_install Pylons

Dependent packages are automatically installed from the Pylons download page.

Development Version

Pylons development uses the Mercurial distributed version control system (DVCS), with BitBucket hosting the main repository here:

Pylons Bitbucket repository
adhocracy-pysqlite
Python interface to SQLite 3

pysqlite is an interface to the SQLite 3.x embedded relational database engine. It is almost fully compliant with the Python database API version 2.0 and also exposes the unique features of SQLite.
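A minimal DB-API 2.0 sketch follows; the pysqlite2 import path below is the one historically used by upstream pysqlite and is assumed to apply to this fork:

# Hedged sketch: the pysqlite2 import path is an assumption carried over
# from upstream pysqlite; the DB-API 2.0 calls themselves are standard.
from pysqlite2 import dbapi2 as sqlite

con = sqlite.connect(":memory:")   # in-memory database
cur = con.cursor()
cur.execute("CREATE TABLE people (name TEXT, age INTEGER)")
cur.execute("INSERT INTO people VALUES (?, ?)", ("Ada", 36))
cur.execute("SELECT name, age FROM people")
print(cur.fetchall())              # [('Ada', 36)]
con.close()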
adhoctx
No description available on PyPI.
adhrit
Failed to fetch description. HTTP Status Code: 404
adhs
Scikit-Learn-compatible implementation of Adaptive Hierarchical Shrinkage

This directory contains an implementation of Adaptive Hierarchical Shrinkage that is compatible with Scikit-Learn. It exports 2 classes:

- ShrinkageClassifier
- ShrinkageRegressor

Installation

adhs Package

The adhs package, which contains the implementations of Adaptive Hierarchical Shrinkage, can be installed using:

pip install .

Experiments

To be able to run the scripts in the experiments directory, some extra requirements are needed. These can be installed in a new conda environment as follows:

conda create -n shrinkage python=3.10
conda activate shrinkage
pip install .[experiments]

Basic API

This package exports 2 classes and 1 method:

- ShrinkageClassifier
- ShrinkageRegressor
- cross_val_shrinkage

ShrinkageClassifier and ShrinkageRegressor

Both classes inherit from ShrinkageEstimator, which extends sklearn.base.BaseEstimator. Adaptive hierarchical shrinkage can be summarized as follows:

$$\hat{f}(\mathbf{x}) = \mathbb{E}_{t_0}[y] + \sum_{l=1}^L \frac{\mathbb{E}_{t_l}[y] - \mathbb{E}_{t_{l-1}}[y]}{1 + \frac{g(t_{l-1})}{N(t_{l-1})}}$$

where $g(t_{l-1})$ is some function of the node $t_{l-1}$. Classical hierarchical shrinkage (Agarwal et al. 2022) corresponds to $g(t_{l-1}) = \lambda$, where $\lambda$ is a chosen constant.

__init__() parameters:

- base_estimator: the estimator around which we "wrap" hierarchical shrinkage. This should be a tree-based estimator: DecisionTreeClassifier, RandomForestClassifier, ... (analogous for Regressors)
- shrink_mode: 6 options:
  - "no_shrinkage": dummy value. This setting will not influence the base_estimator in any way, and is equivalent to just using the base_estimator by itself. Added for easy comparison between different modes of shrinkage and no shrinkage at all.
  - "hs": classical Hierarchical Shrinkage (from Agarwal et al. 2022): $g(t_{l-1}) = \lambda$.
  - "hs_entropy": Adaptive Hierarchical Shrinkage with added entropy term: $g(t_{l-1}) = \lambda H(t_{l-1})$.
  - "hs_log_cardinality": Adaptive Hierarchical Shrinkage with log of cardinality term: $g(t_{l-1}) = \lambda \log C(t_{l-1})$, where $C(t)$ is the number of unique values in $t$.
  - "hs_permutation": Adaptive Hierarchical Shrinkage with $g(t_{l-1}) = \frac{1}{\alpha(t_{l-1})}$, with $\alpha(t_{l-1}) = 1 - \frac{\Delta_\mathcal{I}(t_{l-1}, \pi_x(t_{l-1})) + \epsilon}{\Delta_\mathcal{I}(t_{l-1}, x(t_{l-1})) + \epsilon}$
  - "hs_global_permutation": Same as "hs_permutation", but the data is permuted only once for the full dataset rather than once in each node.
- lmb: $\lambda$ hyperparameter
- random_state: random state for reproducibility

reshrink(shrink_mode, lmb, X): changes the shrinkage mode and/or lambda value in the shrinkage process. Calling reshrink with a given value of shrink_mode and/or lmb on an existing model is equivalent to fitting a new model with the same base estimator but the new, given values for shrink_mode and/or lmb. This method can avoid redundant computations in the shrinkage process, so it can be more efficient than re-fitting a new ShrinkageClassifier or ShrinkageRegressor.

Other functions: fit(X, y), predict(X), predict_proba(X), score(X, y) work just like with any other sklearn estimator.

cross_val_shrinkage

This method can be used to efficiently run cross-validation for the shrink_mode and/or lmb hyperparameters. As adaptive hierarchical shrinkage is a fully post-hoc procedure, cross-validation requires no retraining of the base model.
This function exploits this property.

Tutorials

- General usage: Shows how to apply hierarchical shrinkage on a simple dataset and access feature importances.
- Cross-validating shrinkage parameters: Hyperparameters for (augmented) hierarchical shrinkage (i.e. shrink_mode and lmb) can be tuned using cross-validation, without having to retrain the underlying model. This is because (augmented) hierarchical shrinkage is a fully post-hoc procedure. As the ShrinkageClassifier and ShrinkageRegressor are valid scikit-learn estimators, you could simply tune these hyperparameters using GridSearchCV as you would do with any other scikit-learn model. However, this will retrain the decision tree or random forest, which leads to unnecessary performance loss. This notebook shows how you can use our cross-validation function to cross-validate shrink_mode and lmb without this performance loss.
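Putting the API described above together, a hedged usage sketch (the import path is assumed from the package name, and the parameter values are purely illustrative):

# Hedged sketch of the documented API; import path assumed from the package.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from adhs import ShrinkageClassifier  # assumed import

X, y = make_classification(n_samples=500, random_state=0)

clf = ShrinkageClassifier(base_estimator=DecisionTreeClassifier(),
                          shrink_mode="hs_entropy", lmb=10)
clf.fit(X, y)
print(clf.score(X, y))

# Change shrinkage settings post hoc, without refitting the base tree:
clf.reshrink(shrink_mode="hs_log_cardinality", lmb=50, X=X)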
adh-sample-library-preview
AVEVA Data Hub Python Library Sample

:loudspeaker: Notice: This library is an AVEVA Data Hub targeted version of the ocs_sample_library_preview. The ocs_sample_library_preview library is being deprecated and this library should be used moving forward.

Version: 0.10.10_preview

This sample library requires Python 3.7+. You can download Python here.

NOTE: The library previously required Python 3.9+ to take advantage of type annotations. To provide compatibility with environments that cannot upgrade Python to 3.9, from __future__ import annotations was added to each necessary file. This provides backwards compatibility down to Python 3.7.

About the library

The python ADH library is an introductory language-specific example of programming against AVEVA Data Hub (ADH). It is intended as an instructional sample only and is not for production use.

The samples also work on OSIsoft Cloud Services unless otherwise noted.

They can be obtained by running:

pip install adh_sample_library_preview

The library is not intended to show every endpoint and every option/parameter for the endpoints it has. The library is known to be incomplete.

Other language libraries and samples are available on GitHub.

Testing

The library is tested using PyTest. To test locally, make sure that PyTest is installed, then navigate to the Tests directory and run the test classes by executing python -m pytest {testclass}, where {testclass} is the name of a test class, for example ./test_baseclient.py.

Optionally, to run end-to-end tests, rename the appsettings.placeholder.json file to appsettings.json and populate the fields (this file is included in the gitignore and will not be pushed to a remote repository), then run python -m pytest {testclass} --e2e True.

Logging

Every request made by the library is logged using the standard Python logging library. If the client application using the library creates a logger, then the library will log to it at the following levels:

| Level | Usage |
|-------|-------|
| Error | any non 200-level response code, along with the error message |
| Info  | all request urls and verbs; all response status codes |
| Debug | data payload and all request headers (Authorization header value redacted); response content and all response headers |

The process for creating a logger is described in the Logging HOWTO documentation. An example walkthrough is shown here:

Logger Creation Example

To initiate logging, the client must create a logger, defining a log file, a desired log level, and default formatting:

# Step 0 - set up logger
log_file = 'logfile.txt'
log_level = logging.INFO
logging.basicConfig(filename=log_file, encoding='utf-8', level=log_level,
                    datefmt='%Y-%m-%d %H:%M:%S',
                    format='%(asctime)s %(module)16s, line: %(lineno)4d %(levelname)8s | %(message)s')

This creates a logger object that streams any logged messages to the desired output. The libraries called by the client, including this ADH Sample Library Python, that have implemented logging will send their messages to this logger automatically.

The log level specified will result in any log at that level or higher being logged. For example, INFO captures INFO, WARNING, ERROR, and CRITICAL, but ignores DEBUG.

Logger Usage Example

To change the log level after creation, the level can be set using the following command:

logging.getLogger().setLevel(logging.DEBUG)

This concept is particularly helpful when debugging a specific call within the application.
Logging can be changed before and after a call to the library in order to provide debug logs for that specific call only, without flooding the logs with debug entries for every other call to the library.

An example of this can be seen here:

# Step 4 - Retrieve the data view
original_level = logging.getLogger().level
logging.getLogger().setLevel(logging.DEBUG)
dataview = adh_client.DataViews.getDataView(namespace_id, SAMPLE_DATAVIEW_ID)
logging.getLogger().setLevel(original_level)

Note that the original level was recorded, logging was set to debug, the getDataView call was performed, then logging was set to its previous level. The logs will contain debug messages for only this call, and all other calls before and after will be logged with their original level.

Developed using Python 3.10.1

AVEVA Samples are licensed under the Apache 2 license.

For the main ADH sample libraries page: ReadMe
For the main AVEVA samples page: ReadMe
adia
ADia

diagram: Foo
sequence:
foo -> bar: Hello World!

Output:

 DIAGRAM: Foo 

+-----+             +-----+
| foo |             | bar |
+-----+             +-----+
   |                   |
   |~~~Hello World!~~~>|
   |                   |
   |<------------------|
   |                   |
+-----+             +-----+
| foo |             | bar |
+-----+             +-----+

ADia is a language specially designed to render ASCII diagrams.

Currently, only sequence diagrams are supported, but the roadmap is to support two more types of diagrams: fork (pylover/adia#42) and class (pylover/adia#41).

Get Closer!

- live demo page
- documentation

ADia can also run flawlessly inside browsers, using the awesome project: Brython.

The https://github.com/pylover/adia-live project is a good example of how to use it inside Javascript. In addition, please read the Javascript API section of the documentation.
adi-analyzer
No description available on PyPI.
adibasiccalculator
This is a very simple calculator that takes two numbers and either adds, subtracts, multiplies or divides them.

Change Log

0.0.1 (19/04/2020)

- First Release
adi.bookmark
Provide a bookmarklet for storing bookmarks as an Archetype-based link-item to a given Plone-folder.

Why

Craved for a quick way to store interesting site-addresses to my Plonesite.

What

Provide a bookmarklet which, when clicked, will:

- Transfer the currently watched URL and store it as a link-item.
- Use the current Linux-date in seconds as the id for the link.
- Use the current window's title as the link-title; if not given, fall back to the part of the URL after its last slash.
- Offer the user a prompt to enter an additional description and give a chance to cancel the whole operation.

How

Install this add-on to a Plone-site, call the folder where you want to store bookmarks, append '/adi_bookmark' to the URL in your browser, and drag'n'drop the offered link to your browser's bookmark-bar.

Optionally, you can get the bookmarklet without installing, from:

http://euve4703.vserver.de:8080/adi/static/public/repos/github/ida/adi.bookmark/adi/bookmark/skins/adi_bookmark/adi_bookmark.pt

There, enter the Plone-folder-destination before drag'n'dropping the link.

Now, anytime you click that bookmarklet, it'll redirect you to the Plone-folder and store the formerly watched URL as a link.

Caveats

It is very unlikely but possible that the generated link-id of the current date could be taken already; however, we (meh, myself and their royal majesties) decided to take that risk. If the id exists already, the existing object would be overwritten, possibly unintended, thy had been warned.

TODO

- Decode strings before passing as URL-params. Special characters will likely blow things up, not tested, yet.
- Support keywords a.k.a. tags.
adic
adic

Developer Guide

Setup

# create conda environment
$ mamba env create -f env.yml

# update conda environment
$ mamba env update -n adic --file env.yml

Install

pip install -e .

# install from pypi
pip install adic

nbdev

# activate conda environment
$ conda activate adic

# make sure the adic package is installed in development mode
$ pip install -e .

# make changes under nbs/ directory
# ...

# compile to have changes apply to the adic package
$ nbdev_prepare

Publishing

# publish to pypi
$ nbdev_pypi

# publish to conda
$ nbdev_conda --build_args '-c conda-forge'
$ nbdev_conda --mambabuild --build_args '-c conda-forge -c dsm-72'

Usage

Installation

Install latest from the GitHub repository:

$ pip install git+https://github.com/dsm-72/adic.git

or from conda:

$ conda install -c dsm-72 adic

or from pypi:

$ pip install adic

Documentation

Documentation can be found hosted on GitHub repository pages. Additionally you can find package manager specific guidelines on conda and pypi respectively.
adicao-subtracao
aritmetica_basica

Description. The package aritmetica_basica is used to:

-
-

Installation

Use the package manager pip to install aritmetica_basica:

pip install adicao_subtracao

Usage

from aritmetica_basica.adicao import aritmetica_basica
aritmetica_basica.adicao()

Author

Jonatas

License

MIT
adicity
# Adicity

A fixed-arity high level functional programming language engine.

### Contributors

- Cole Wilson

### Contact

<[email protected]>
adi.commons
Introduction
============

Some very common filesystem-operations, wrapped into functions with names to
be easily remembered, and adjustments for wanted default behaviours, such as:
When attempting to delete a file which doesn't exist, fail silently.

Changelog
=========

1.1 (2017-06-06)
----------------

- Add isUpcomingWord().
- Add aliases of existing defs: read(), write(), append(), prepend(), contains().
- Sort defs alphabetically.

1.0 (2016-10-21)
----------------

- Add newlinesToTags() and iterToTags().

0.9 (2016-07-10)
----------------

- Add isEven() and isOdd().

0.8 (2016-05-19)
----------------

- Add removeLinesContainingPattern().

0.7 (2016-05-14)
----------------

- Add writeFile(), appendToFile() and prependToFile().

0.6 (2015-11-22)
----------------

- Add insertAfterNthLine().

0.5 (1509201)
------------

- Let addDirs() fail silently, if the directory to create exists already.
- Better hlp-msgs.

0.4 (150920)
------------

- Really adjust MANIFEST (good morning Kreuzberg)

0.3 (150920)
------------

- Adjust MANIFEST

0.2 (150920)
------------

- Add MANIFEST

0.1 (150920)
------------

- Initial release
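A hedged usage sketch, based only on the helper names the changelog above
lists; the import path and exact signatures are assumptions, not documented::

    # Hedged sketch: import path and signatures are guessed from the
    # function names mentioned in the changelog above.
    from adi.commons import writeFile, appendToFile, isEven, isOdd  # assumed import

    writeFile('/tmp/example.txt', 'first line\n')     # assumed (path, content) order
    appendToFile('/tmp/example.txt', 'second line\n')
    print(isEven(4), isOdd(4))  # expected: True False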
adict
Usage:

pip install adict

from adict import adict
d = adict(a=1)
assert d.a == d['a'] == 1

See all features, including ajson(), in adict.py: test().

https://github.com/denis-ryzhkov/adict/blob/master/adict.py#L46
aDict2
aDict provides an interface to reading and writing the aDict dictionary format. Typical usage:

from adict import *                 # data classes
from adict.parser import Parser    # text to data
from adict.printer import Printer  # data to text

with open('some file') as f:
    a = Article('Python')
    a.classes.append('n')
    a.transcriptions.append('ˈpaɪθən')
    d1 = Definition('a kind of programming language')
    d2 = Definition('a kind of Snake')
    d2.links.append('hyperonym', 'Snake')
    a.content = [d1, d2]
    dictionary = Parser(f).parse()
    dictionary.articles.append(a)

with open('some other file', 'w') as f:
    f.write(str(Printer(dictionary)))

aDict format specification

…in English has not been written yet. :) However, I think, this format is pretty simple to understand by trial and error or by looking at this implementation.

Contributors

Only me = arseniiv so far.
adidas-sensor
No description available on PyPI.
adidentifier
# AdIdentifier

[![PyPI version](https://img.shields.io/pypi/pyversions/adidentifier.svg)](https://pypi.python.org/pypi/adidentifier)
[![PyPI](https://img.shields.io/pypi/v/adidentifier.svg)](https://pypi.python.org/pypi/adidentifier)

## Installation

Prerequisites:

* The re2 library from Google

> \# git clone https://github.com/google/re2.git & cd re2 & make & make install

* The Python development headers

> \# apt-get install python-dev

* Cython 0.20+ (pip install cython)

> $ pip install cython

After the prerequisites are installed, install as follows (pip3 for python3):

> $ pip install https://github.com/andreasvc/pyre2/archive/master.zip

or

> $ git clone git://github.com/andreasvc/pyre2.git
> $ cd pyre2
> $ make install

then

> $ pip install adidentifier

## Usage

### Import

```python
from adidentifier import AdIdentifier
```

### Initialize

```python
ad = AdIdentifier()
```

## API

### is_finance(text)

Check whether the text or url is relevant to Finance.

```python
test1 = ["速贷之家-借钱不担心_2小时到账",
         "https://www.aiqianzhan.com/html/register3_bd4.html?utm_source=bd4-pc-ss&utm_medium=bd4SEM&utm_campaign=D1-%BE%BA%C6%B7%B4%CA-YD&utm_content=%BE%BA%C6%B7%B4%CA-%C3%FB%B4%CA&utm_term=p2p%CD%F8%B4%FB"]
for text in test1:
    resu = ad.is_finance(text)
    print text, "------->>", resu
```

> Output:

```
速贷之家-借钱不担心_2小时到账 ------->> True
https://www.aiqianzhan.com/html/register3_bd4.html?utm_source=bd4-pc-ss&utm_medium=bd4SEM&utm_campaign=D1-%BE%BA%C6%B7%B4%CA-YD&utm_content=%BE%BA%C6%B7%B4%CA-%C3%FB%B4%CA&utm_term=p2p%CD%F8%B4%FB ------->> True
```

### is_ad(url)

Check whether the url is relevant to AD.

```python
test2 = ["https://ss3.baidu.com/-rVXeDTa2gU2pMbgoY3K/it/u=3778907493,3669893773&fm=202&mola=new&crop=v1",
         "https://ss2.bdstatic.com/8_V1bjqh_Q23odCf/pacific/upload_25289207_1521622472509.png?x=0&y=0&h=150&w=242&vh=92.98&vw=150.00&oh=150.00&ow=242.00",
         "http://pagead2.googlesyndication.com/pagead/show_ads.js",
         "http://www.googletagservices.com/tag/js/gpt_mobile.js"]
for text in test2:
    resu = ad.is_ad(text)
    print(text, "------>>", resu)
```

> Output:

```
('https://ss3.baidu.com/-rVXeDTa2gU2pMbgoY3K/it/u=3778907493,3669893773&fm=202&mola=new&crop=v1', '------>>', True)
('https://ss2.bdstatic.com/8_V1bjqh_Q23odCf/pacific/upload_25289207_1521622472509.png?x=0&y=0&h=150&w=242&vh=92.98&vw=150.00&oh=150.00&ow=242.00', '------>>', True)
('http://pagead2.googlesyndication.com/pagead/show_ads.js', '------>>', True)
('http://www.googletagservices.com/tag/js/gpt_mobile.js', '------>>', False)
```

### get_target_from_href(href)

Extract the target url from a hyperlink, e.g.
https://www.baidu.com/...%ASDD ----> https://www.wdzj.com/...1%E8%B4%B7

```python
print ad.get_target_from_href("https://www.baidu.com/baidu.php?url=0f0000jsnOdydCYpIY2xQXFCV1h5YmZnZh_pWjXI1sMrqQiM8Y55S59-6yXvznN6gm_5K2BIwOl4qzVcr2qRUIZdYnyTM2gOTAL-ed0xhaXP7ZI4XoxPJtWsnc4vPT3Qgcpo8dLTicCsAu_tZqqn5DH0sVytFArXV5kfFxBwLN5Kyia2R0.DD_NR2Ar5Od663rj6t8ae9zC63p_jnNKtAlEuw9zsISgZsIoDgQvTVxQgzdtEZ-LTEuzk3x5I9qxo9vU_5Mvmxgv3IhOj4en5VS8ZutEOOS1j4SrZdSyZxg9tqhZden5o3OOOqhZ1tT5ot_rSEj4en5ovmxgkl32AM-WI6h9ikX1BsIT7jHzlRL5spycTT5y9G4mgwRDkRAcY_1fdIT7jHzs_lTUQqRHAZ1tT5ot_rSEj4en5ovmxgkl32AM-CFhY_mx5ksSEzselt5M_sSEu9qx7i_nYQZu_LSr4f.U1Yk0ZDq1xBYSsKspynqn0KY5TL3V5_0pyYqnWcd0ATqmhRLn0KdpHdBmy-bIfKspyfqnWR0mv-b5Hckr0KVIjYknjDLg1DsnH-xnW0vn-t1PW0k0AVG5H00TMfqP1cz0ANGujYkPjmvg1cvnWR4g1cknH0Yg1cznHR40AFG5HcsP0KVm1YLPjDknjnknjIxP1fkPWckP1f1g1DkP1bkrHD1nHIxn0KkTA-b5H00TyPGujYs0ZFMIA7M5H00mycqn7ts0ANzu1Ys0ZKs5H00UMus5H08nj0snj0snj00Ugws5H00uAwETjYs0ZFJ5H00uANv5gKW0AuY5H00TA6qn0KET1Ys0AFL5HDs0A4Y5H00TLCq0ZwdT1YLPHTvnHnLPWTLrjmkPWmvnHfk0ZF-TgfqnHRzPHcYrH0knj0dPsK1pyfqrHNhmW-9m10snj0suARvrfKWTvYqPWD4PRuAPHc3Pbw7wj9arfK9m1Yk0ZK85H00TydY5H00Tyd15H00XMfqn0KVmdqhThqV5HKxn7tsg100uA78IyF-gLK_my4GuZnqn7tsg1Kxn0Ksmgwxuhk9u1Ys0AwWpyfqn0K-IA-b5iYk0A71TAPW5H00IgKGUhPW5H00Tydh5H00uhPdIjYs0AulpjYs0Au9IjYs0ZGsUZN15H00mywhUA7M5HD0UAuW5H00mLFW5HfsPHmv&us=0.0.0.0.0.0.0.101&ck=0.0.0.0.0.0.0.0&shh=www.baidu.com&sht=baidu")
```

> Output:

```shell
https://www.wdzj.com/zhuanti/518lcj/?_pwk=n_4_1_1_1_3_5_4_s%E5%BF%85%E4%BA%89%E8%AF%8D|%E7%BD%91%E8%B4%B7|%E7%BD%91%E8%B4%B7&utm_source=baidu&utm_medium=cpc&tm_content=search&utm_campaign=%E7%BD%91%E8%B4%B7&utm_term=%E7%BD%91%E8%B4%B7
```

### get_domain_from_url(href)

Extract the domain from a url, e.g. https://www.asdasd.com/asdasd ----> www.asdasd.com

```python
print ad.get_domain_from_url("https://www.asdasd.com/asdasd")
```

> Output:

```shell
www.asdasd.com
```

## Config

Config will be generated automatically.

```ini
[CUSTOM]
uri_keywords = qian,dai,cf,wd,jin
text_keywords = 网贷
ad_filter = https://ss3.baidu.com/*,https://ss2.bdstatic.com/*
```

## ATTENTION!!!

When calling is_finance() to determine whether a link is finance-related, you must pass in the target address the href hyperlink points to, formatted like `{scheme}://{domain}/{path}`, where `path` may be omitted.
adi.devgen
Introduction
============

Yet another command-line Plone-Add-On-Generator.

This package doesn't have any dependencies, so all possibly occurring
problems can safely be attributed to itself.

Additionally, it was desirable to be able to execute any Plone-addon-extending
command from within any location of an addon. Or, from outside of an addon, by
prepending the path to the addon to the command, so one doesn't necessarily
need to change directories, and executing a command doesn't require you to be in
a certain directory.

Besides Plone-addon-related helper-functions, there are also functions more
generally related to developing, like `doOnRemote()`, `getRepos()`,
`squash()`, and so on.

Setting up a new Plone-instance is as easy as `addPlone()`, which will download the
configs for buildout locally and set buildout's mode to 'offline', so we save
time whenever running buildout, because it will not look up the configs at
remote addresses like it usually does, as time is honey.

DISCLAIMER: The usual formatting of Plone-configs has changed; addPlone() has been
tested up to Plone version 4.3.4, and needs to be fixed for getting newer versions.

Installation
=============

Stable-version
--------------

From the commandline execute::

    pip install adi.devgen

Develop-version
---------------

From the commandline execute::

    pip install -e git+https://github.com/ida/adi.devgen.git#egg=adi.devgen

The latest state of this package will be added to a directory (a.k.a. folder)
called 'src', which lives where your pip lives. To find out where your pip
lives, type `which pip` into your console. You can then change the code inside
of the src-directory and get the effects immediately.

Configuration / Presettings
===========================

When creating a new addon and a file '~/.buildout/devgen.cfg' is present,
values will be read of it and inserted into the `setup.py` of the addon.

The file-contents' format must be like this::

    author=Arbi [email protected]
    url=https://github.com/arbitrary/your.addon

Usage
=====

Type the command and 'help', to get a verbose description of this tool::

    devgen help

Type the command alone, to get a list of the available generator-functions::

    devgen

Or, have a look into the methods of adi.devgen.scripts.skel.addSkel; all of
those are available on the commandline, when prepending 'devgen'.

To get a chosen function's help-text, to see what arguments it expects, type::

    devgen [FUNCTION_NAME] help

Examples
========

Create boilerplate for an addon, that can do nothing but be installed in a Plonesite::

    devgen addProfile your.addon

Create it not in the directory where you are, but somewhere else::

    devgen addProfile some/where/else/your.addon

Register another addon as a dependency to your addon::

    devgen addDep collective.bestaddonever some/where/your.addon

Or, first locate into your addon, then you can omit the appended path::

    cd your.addon
    devgen addDep collective.bestaddonever

By the way, most commands work from within any location inside of an addon,
with no need to pass a path.

Register and add a browser-based stylesheet named 'main.css' in
'your.addon/your/addon/browser/resources'::

    devgen addCss

Register and add a browser-based Javascript named 'magic.js' in
'your.addon/your/addon/browser/resources'::

    devgen addJS magic

Register and add a browser-based template named 'main.pt' and a
Python-script named 'main.py', with an example how to retrieve a
computed value of the script in the template via TAL, in
'your.addon/your/addon/browser/resources'::

    devgen addView

The view can then be called in a browser like
this::

    http://localhost:8080/Plone/++resource++your.addon.resources/your_addon_main_view

Where 'main' is the default name for the files, you can choose any other::

    devgen addView any_other

That'll result to::

    http://localhost:8080/Plone/++resource++your.addon.resources/your_addon_any_other_view

TODO
====

- Regard more than one-dotted-namespace for addon.
- Possibly transfer:
  https://github.com/ida/skriptz/blob/master/plone/Dexterity/addField.py

Changelog
=========

1.5 (2017-06-07)
----------------

- Do not reference versions.cfg in versions.cfg, results in infinite loop. [ida]

1.4 (2017-06-06)
----------------

- Adjust addBuildout() to parse extends-sections, instead of unsharply finding patterns,
  now works also for Plone-5-builds.
- Fix addPlone(): Add extends-part to a build's default buildout.cfg,
  pointing to the used versions.cfg, defining the Plone-version.

1.3 (2017-05-11)
----------------

- Improve addPlone() and deploy(). [ida]
- Add isView() and improve idExists-methods. [ida]
- Add getChildPosInParent() and getChildPosInParents() [ida]

1.2 (2016-10-21)
----------------

- Add workflow-related helper-methods. [ida]

1.1 (2016-07-10)
----------------

- Last release was a brown bag, pardon.
- Add addNChildrenRecursive(), delDep(), deploy(), getField(), getFields(), getUserId(),
- Fix skin-path name, so templates get immediately callable after added to product.
- Fix isIniInstall() in addInstallScript().
- Improve addInstallScript()
- Regard if browser-skel is missing in addView().
- Show complete name, not just first name in quickinstaller.
- Add eggtractor, add develop-section in default-buildout-conf, increase default plone-vs.

1.0 (2016-05-14)
----------------

- Add doOnRemote(), squash() and getUnpushedCommits().
- Fix "cannot find virtenv" in addPlone().
- Re-add default-filename "main" for generating stylesheet, Javascript, Python-script and a template via addOn().
- Fix, if browserlayer is missing in addCss() and addJs().

0.9 (2015-11-18)
----------------

- Add addView().
- Add default-values of a function's expected arguments to help-msg.
- Fix path: Use dot instead of slash, for a resources' paths in js-registry-generation.
- Let getAddonPath() fail with an exit, to prevent further code-executions.
- Rename addBrowserSkel() to addBrowser(), addSkinSkel() to addSkin, and so on, for less typing.
- Fix addBrowser() and addSkin() from scratch – if not added on top of existing addon.
- Improve addAndRegisterView().

0.8 (151002)
------------

- Generate missing browser-slug in config.
- Change docs from MD-format to RST, as pypi requires.
- Add addCss() and addJs().

0.7 (150926)
------------

- Fix missing import and typo in setup.py-generation, which broke addons-installs.

0.6 (150923)
------------

- Update README, improve installPlone().

0.5 (150921)
------------

- Fix imports, better hlp-msgs, improve installPlone().

0.4 (150920)
------------

- Update README

0.3 (150920)
------------

- Fix changed import-paths.

0.2 (150920)
------------

- Add adi.commons as dependency.

0.1 (150920)
------------

- Initial release
adidnsdump
No description available on PyPI.
adi.dropdownmenu
Introduction
============

This product uses Products.ContentWellPortlets to place a sitemap-portlet
above the columns and gives the dropdown-effect via CSS.

This way, your folder structure will be mirrored in the dropdown.

The depth and target-folder can be configured via the sitemap-portlet's
web-user-interface, just like Plone's standard navigation-portlets.

Includes responsive design for desktop-, tablet- and mobile-devices.

Installation
============

In your buildout.cfg's eggs-section add 'adi.dropdownmenu',
run buildout, restart instance.

Dependencies
============

- Products.ContentWellPortlets
- collective.portlet.sitemap

Credits
=======

Glued together by Ida Ebkes.
In kind cooperation with Alterway, the creators of "collective.portlet.sitemap".
The Weblion's crowd for developing "Products.ContentWellPortlets".
And thanks to all of the friendly and helpful people of the Plone community.

Changelog
=========

0.7 (2016-09-04)
----------------

- Also add button in menu-head for toggling entire menu, for small screens.

0.6 (2016-09-04)
----------------

- Add layout for tablets and mobile-devices.
- Note: Version '0.5' seems to have been a brown-bag-release, it doesn't get
  pulled off PyPI when running buildout. We ignore version '0.5' and continue
  with '0.6'.

0.4 (2012-02-01)
----------------

- really hide globalnav (forgot config in last release) [ida]

0.3 (2012-01-29)
----------------

- Deassign collective.portlet.sitemap in left column. [ida]
- Hide default globalnav-viewlet. [ida]
- Unlimited css-sublevel-depth-support. [ida]
- Styling for top-ul and visualize selected toplevel. [ida]

0.2 (2012-01-07)
----------------

- Extended CSS to support up to two sublevels instead of one.

0.1 (2012-01-05)
----------------

- Initial release
adi.enabletopics
This add-on does nothing but enable the Archetype-based contenttype 'Topic', a.k.a. old-style collections, and its configuration-UI in the controlpanel, both disabled by default since Plone 4.1.
adi-env-parser
Environment parser

Requirements

Python - Minimum required version is 3.8

Using the environment parser

The EnvironmentParser class parses all environment variables with a certain prefix and creates a Python dictionary based on the structure of these variables. By default, values are converted to booleans and integers when detected as such.

General variable structure rules:

- the variable name after the prefix should not be empty
- the first character of the variable name after the prefix should not be "_"
- different levels of depth within environment variables are specified by using the "__" string
- arrays can be specified by using a numeric index as a key within a particular level
- array numeric indices should be defined in order; variables with an invalid index will be discarded

Value conversion rules:

- a value will be converted to boolean if it matches true or false when lower cased
- a value will be converted to integer if it contains digits only

Using the EnvironmentParser class

Example of instantiating an EnvironmentParser object using MYPREFIX as a prefix for environment variables. Upon instantiation, the object automatically parses the current environment variables and stores them in its configuration property.

import json
from adi_env_parser import EnvironmentParser

parser = EnvironmentParser(prefix="MYPREFIX")
print(json.dumps(parser.configuration, indent=4))

It is possible to provide an existing JSON formatted file as a configuration base.

import json
from adi_env_parser import EnvironmentParser

parser = EnvironmentParser(prefix="MYPREFIX", config_file="configuration.json")
print(json.dumps(parser.configuration, indent=4))

It is possible to disable value conversion by setting the convert_values parameter when instantiating the EnvironmentParser object.

from adi_env_parser import EnvironmentParser

parser = EnvironmentParser(prefix="MYPREFIX", convert_values=False)

Examples

Examples use PYENV as the environment variable prefix. This is the default prefix used when not specifying one explicitly when instantiating EnvironmentParser.

Creating dictionary

Environment variables:

PYENV_hotel_name="Blue Falcon"
PYENV_rooms__room_1="James Holden"
PYENV_rooms__room_2="Amos Burton"
PYENV_rooms__room_3="Naomi Nagata"
PYENV_rooms__room_4="Alex Kamal"

Resulting object:

{
    "hotel_name": "Blue Falcon",
    "rooms": {
        "room_1": "James Holden",
        "room_2": "Amos Burton",
        "room_3": "Naomi Nagata",
        "room_4": "Alex Kamal"
    }
}

Creating array

Environment variables:

PYENV_hotel_name="Blue Falcon"
PYENV_room_1__inventory__0="Wardrobe"
PYENV_room_1__inventory__1="Table"
PYENV_room_1__inventory__2="Lamp"

Resulting object:

{
    "hotel_name": "Blue Falcon",
    "room_1": {
        "inventory": ["Wardrobe", "Table", "Lamp"]
    }
}

Dictionaries within list

Environment variables:

PYENV_hotel_name="Blue Falcon"
PYENV_rooms__0__name="Room 1"
PYENV_rooms__0__capacity="2"
PYENV_rooms__2__name="Room 2"
PYENV_rooms__2__capacity="2"

Resulting object:

{
    "hotel_name": "Blue Falcon",
    "rooms": [
        {
            "name": "Room 1",
            "capacity": "2"
        },
        {
            "name": "Room 2",
            "capacity": "2"
        }
    ]
}

Console utility

The module provides a console utility which can be used for parsing environment variables. It also supports reading an existing JSON formatted file and setting the indentation for the output of the created configuration JSON object.

➜ adi-env-parser --help
usage: adi-env-parser -p <prefix> -j <base_json_file>

Parses environment variables with defined prefix and creates JSON output from the parsed structure.
optional arguments:
  -h, --help            show this help message and exit
  --prefix [PREFIX], -p [PREFIX]
                        Environment variable prefix. Default: PYENV
  --json [JSON], -j [JSON]
                        JSON formatted file to read as base configuration
  --indent [INDENT], -i [INDENT]
                        Number of spaces to use for indentation of output JSON string

Development

Install development packages

pip install -e ".[dev]"
pip install -e ".[test]"

# Install build-local package group if you want to build packages locally
pip install -e ".[build-local]"

Install pre-commit

pre-commit install

Building and publishing new version

A new version is built and published on tag in the GitHub repository. The package version is inferred from the commit name.
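Tying the parsing and conversion rules above together, a hedged end-to-end example; the expected output follows from the documented rules (PYENV default prefix, "true" to boolean, digit-only strings to integer):

# Hedged illustration of the documented conversion rules.
import os
from adi_env_parser import EnvironmentParser

os.environ["PYENV_debug"] = "true"
os.environ["PYENV_port"] = "8080"

parser = EnvironmentParser()  # default prefix: PYENV
print(parser.configuration)   # expected per the rules: {'debug': True, 'port': 8080}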
adif2xml
adif2xml

A commandline tool to convert an ADIF file to an XML file.

Install

pip install adif2xml

Usage

adif2xml source.adi -o output.xml
adifa
Failed to fetch description. HTTP Status Code: 404
adif-io
This is an ADIF parser in Python.

Actual usage

Main result of parsing: List of QSOs:

- Each QSO is represented by one Python dict.
- Keys in that dict are ADIF field names in upper case,
- value for a key is whatever was found in the ADIF, as a string.
- Order of QSOs in the list is same as in ADIF file.

Secondary result of parsing: The ADIF headers. This is returned as a Python dict.

Normally, you'd call adif_io.read_from_file(filename). But you can also provide a string with an ADI-file's content, as follows:

import adif_io

qsos, header = adif_io.read_from_string(
    "A sample ADIF content for demonstration.\n"
    "<adif_ver:5>3.1.0<eoh>\n"
    "<QSO_DATE:8>20190714 <TIME_ON:4>1140<CALL:5>LY0HQ"
    "<MODE:2>CW<BAND:3>40M<RST_SENT:3>599<RST_RCVD:3>599"
    "<STX_STRING:2>28<SRX_STRING:4>LRMD<EOR>\n"
    "<QSO_DATE:8>20190714<TIME_ON:4>1130<CALL:5>SE9HQ<MODE:2>CW<FREQ:1>7"
    "<BAND:3>40M<RST_SENT:3>599<RST_RCVD:3>599"
    "<SRX_STRING:3>SSA<DXCC:3>284<EOR>")
print("QSOs: {}\nADIF Header: {}".format(qsos, header))

This will print out

QSOs: [{'RST_SENT': '599', 'CALL': 'LY0HQ', 'MODE': 'CW', 'RST_RCVD': '599', 'QSO_DATE': '20190714', 'TIME_ON': '1140', 'BAND': '40M', 'STX_STRING': '28', 'SRX_STRING': 'LRMD'}, {'DXCC': '284', 'RST_SENT': '599', 'CALL': 'SE9HQ', 'MODE': 'CW', 'RST_RCVD': '599', 'BAND': '40M', 'FREQ': '7', 'QSO_DATE': '20190714', 'TIME_ON': '1130', 'SRX_STRING': 'SSA'}]
ADIF Header: {'ADIF_VER': '3.1.0'}

Time on and time off

Given one qso dict, you can also have the QSO's start time calculated as a Python datetime.datetime value:

adif_io.time_on(qsos[0])

If your QSO data also includes TIME_OFF fields (and, ideally, though not required, QSO_DATE_OFF), this will also work:

adif_io.time_off(qsos[0])

Geographic coordinates - to some degree

ADIF uses a somewhat peculiar 11 character XDDD MM.MMM format to code geographic coordinates (fields LAT or LON). The more common format these days are simple floats that code degrees. You can convert from one to the other:

adif_io.degrees_from_location("N052 26.592")  # Result: 52.4432
adif_io.location_from_degrees(52.4432, True)  # Result: "N052 26.592"

The additional bool argument of location_from_degrees should be True for latitudes (N / S) and False for longitudes (E / W).

ADIF version

There is little ADIF-version-specific here. (Everything should work with ADI-files of ADIF version 3.1.3, if you want to nail it.)

Not supported: ADIF data types.

This parser knows nothing about ADIF data types or enumerations. Everything is a string. So in that sense, this parser is fairly simple. But it does correctly handle things like:

<notes:66>In this QSO, we discussed ADIF and in particular the <eor> marker.

So, in that sense, this parser is somewhat sophisticated.

Only ADI.

This parser only handles ADI files. It knows nothing of the ADX file format.

For now: input only

There may be an ADIF output facility some time later.

Sample code

Here is some sample code:

import adif_io

qsos_raw, adif_header = adif_io.read_from_file("log.adi")

# The QSOs are probably sorted by QSO time already, but make sure:
for qso in qsos_raw:
    qso["t"] = adif_io.time_on(qso)
qsos_raw_sorted = sorted(qsos_raw, key = lambda qso: qso["t"])

Pandas / Jupyter users may want to add import pandas as pd up above and continue like this:

qsos = pd.DataFrame(qsos_raw_sorted)
qsos.info()
adif-merge
adif_merge.py
Ham Radio ADIF Logbook format merge/resolution program written in Python
Summary
This tool is designed to merge and resolve multiple ADIF files, including partial information from different reporting sources (e.g. previous uploads to LoTW, QRZ, clublog, et al.). Each of these sources tends to "augment" log entries with its own additional information.
The motivation for this is that GridTracker (https://tagloomis.com/) and several logging programs are able to not only send log entries to remote services, but also automatically download ADIF files back from them. I found myself with a bunch of ADIF files, but none of them really gave me the whole picture of my QSOs. I also have both a home and portable station and sometimes I'd forget to move my logs between the two. This allows me to merge those logs at a later date (or reconstruct them from external servers).
The code will look at multiple log entries that occur with the same band, call, and mode within 90 seconds of each other and attempt to merge them, since some reporting tools or duplicate logging out of WSJT-X occasionally produce such entries (e.g. a manual log entry to correct a gridsquare, or different rounding of times on and off to the nearest minute).
It tries to automate the decision making process for conflicts between log entries, and will tend to treat .adif files with "lotw" in their name as more authoritative for some fields.
For a complete look at the decision making process, read the code. It's commented, including caveats, and is designed to be easily modifiable.
Use the -p <filename>.json option to generate problem QSO output in JSON format to see where there were conflicts and how we resolved them.
Installation
Developed under python 3 >= 3.6
pip3 install adif_merge
Sample usage
Here's what I do to merge my WSJT and GridTracker managed logs:
adif_merge -o mergedlog.adif -c merged_wsjtx.log -p problems.json \
    ~/.local/share/WSJT-X/wsjtx_log.adi ~/Documents/GridTracker/*.adif
Please use the --problems option to look at merge issues that the program wasn't confident about resolving. For example QRZ and LoTW often differ about user-entered information like ITU and CQ zones.
The problems option will generate a .json file that is an approximately human-readable list of unresolved issues you may wish to fix, first organized by field, and again organized by QSO.
Feedback & Disclaimer
This code is learning and evolving. Please save copies of all of your log files before replacing them with this augmented file.
If you disagree with choices I've made in preference when attempting to merge, such as frequency harmonization or deferring to LoTW when there is a conflict for some fields, please let me know.
Copyright & License
Copyright (c) 2020 by Paul Traina, All rights reserved.
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see https://www.gnu.org/licenses/.
adi.fullscreen
Introduction
This Plone add-on supplies a fullscreen-button below the title of any Plone article, symbolized with expanding and collapsing arrows.
The button consists of four unicode arrow characters, so you can easily gain visual control over the element, e.g. change its color, include a background-image, or the like.
When clicked, left- and right-columns and top- and footer-elements are hidden and the main column expands to the full width and height of the window.
On reload or when calling another page, the view falls back to non-fullscreen, in order to not accidentally hide relevant context information in case the user forgets being in fullscreen mode.
Please note that the term 'fullscreen' here doesn't mean the browser window disappears; rather, every element but the actual article is hidden, which lets you focus more on the content. For the former you can also just use the fullscreen mode of your browser, conventionally accessible via the F11-key.
Changelog
0.2 (2013-03-10)
Also hide top- and footer-elements on fullscreen. [ida]
Replaced bg-img with unicode-characters (arrows). [ida]
0.1 (2012-01-02)
Initial release
adi.init
Introduction
Deletes Plone's default contents: users, news, events and front-page
Deletes Plone's default portlets: navigation, events and news
Changelog
0.4 (2012-12-02)
Pypi release [ida]
0.3 (2012-05-18)
Added MANIFEST.in [ida]
0.2 (2012-05-18)
Corrective release, missing files. [ida]
0.1 (2012-05-17)
Deletes Plone's default portlets 'navigation', 'events' and 'news'
Deletes Plone's default contents 'front-page', 'events', 'news' and 'Members'
Initial release
adil-cebd-1100-week9
CEBD-1100-Week10
adilmar-libpythonpro-package
libpythonpro
Supports Python version 3.
To install:
python3 -m venv .venv
.venv/scripts/activate.bat
pip install -r requirements-dev.txt
To check code quality:
flake8
Exercises from the pytools class
adilo-api-client
Adilo API Client
Overview
This is a Python client for interacting with the Adilo API.
Installation
pip install adilo-api-client
Usage
from adilo_api.adilo_api import AdiloAPI

# Replace 'your_public_key' and 'your_secret_key' with your actual API keys
api = AdiloAPI(public_key='your_public_key', secret_key='your_secret_key')

# Create a new project
project_title = 'Project Title'
result_create = api.create_project(
    title=project_title,
    description="Project Description",
    locked=False,
    drm=False,
    private=False,
    password="")
print(result_create)

# List all projects
result_list = api.list_projects()
print(result_list)
Replace your_public_key and your_secret_key with your actual API keys. Customize the project_title and other parameters according to your needs.
API Documentation
For more details on the Adilo API, refer to the official documentation.
Contributing
If you would like to contribute to this project, please open an issue or submit a pull request.
Testing
Copy and rename .env.sample to .env
Copy the API Key and API Secret from Adilo API Setting to .env
You can run your tests using:
python -m unittest tests/test_adilo_api.py
License
This project is licensed under the MIT License - see the LICENSE file for details.
ad-interface-functions
Description
This repository provides some interface functions we frequently use in our trajectory planning software stack at FTM/TUM. Next to the simple UDP interface we also use ZeroMQ (http://zeromq.org/) and its Python bindings PyZMQ.
List of components
udp_export: UDP export function.
udp_import: UDP import function.
zmq_export: ZMQ export function handling pyobj, json, str.
zmq_import: ZMQ import function handling pyobj, json, str.
zmq_import_poll: ZMQ import function handling pyobj with polling option.
Contact persons: Alexander Heilmeier, Tim Stahl.
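The package's own function signatures aren't shown in this description, so as a rough illustration only, here is a minimal pyzmq sketch of the publish/subscribe pattern that helpers like zmq_export/zmq_import wrap (the socket type, port and payload are assumptions, not this package's API):

import time
import zmq

context = zmq.Context()

# Export side: publish Python objects on a TCP port.
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5555")

# Import side: subscribe to everything on that port.
sub = context.socket(zmq.SUB)
sub.connect("tcp://localhost:5555")
sub.setsockopt_string(zmq.SUBSCRIBE, "")

time.sleep(0.2)  # give the subscription time to propagate (ZMQ "slow joiner")

pub.send_pyobj({"trajectory_id": 1, "x": [0.0, 1.0]})  # pyobj variant; json/str work similarly
print(sub.recv_pyobj())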
adios
No description available on PyPI.
adios2
ADIOS2 : The Adaptable Input Output System version 2
This is ADIOS2: The Adaptable Input/Output (I/O) System.
ADIOS2 is developed as part of the United States Department of Energy's Exascale Computing Project. It is a framework for scientific data I/O to publish and subscribe to data when and where required.
ADIOS2 transports data as groups of self-describing variables and attributes across different media types (such as files, wide-area-networks, and remote direct memory access) using a common application programming interface for all transport modes. ADIOS2 can be used on supercomputers, cloud systems, and personal computers.
ADIOS2 focuses on:
Performance: I/O scalability in high performance computing (HPC) applications.
Adaptability: unified interfaces to allow for several modes of transport (files, memory-to-memory)
Ease of Use: two-level application programming interface (APIs)
Full APIs for HPC applications: C++11, Fortran 90, C 99, Python 2 and 3
Simplified High-Level APIs for data analysis: Python 2 and 3, C++11, Matlab
In addition, ADIOS2 APIs are based on:
MPI: Although ADIOS2 is MPI-based, it can also be used in non-MPI serial code.
Data Groups: ADIOS2 favors a deferred/prefetch/grouped variables transport mode by default to maximize data-per-request ratios. Sync mode, one variable at a time, is treated as the special case.
Data Steps: ADIOS2 follows the actual production/consumption of data using an I/O "steps" abstraction, removing the need to manage extra indexing information.
Data Engines: The ADIOS2 Engine abstraction allows for reusing the APIs for different transport modes, removing the need for drastic code changes.
Documentation
Documentation is hosted at readthedocs.
Citing
If you find ADIOS2 useful, please cite our SoftwareX paper, which also gives a high-level overview of the motivation and goals of ADIOS, complementing the documentation.
Getting ADIOS2
From source: Install ADIOS2 documentation. For a cmake configuration example see scripts/runconf/runconf.sh
Conda packages: https://anaconda.org/conda-forge/adios2
Spack: adios2 package
Docker images: under scripts/docker
Once ADIOS2 is installed refer to: Linking ADIOS2
Releases
Latest release: v2.10.0-rc1
Previous releases: https://github.com/ornladios/ADIOS2/releases
Community
ADIOS2 is an open source project: Questions, discussion, and contributions are welcome. Join us at:
Mailing list: [email protected]
GitHub Discussions: https://github.com/ornladios/ADIOS2/discussions
Reporting Bugs
If you find a bug, please open an issue on the ADIOS2 github repository
Contributing
See the Contributor's Guide to ADIOS 2 for instructions on how to contribute.
License
ADIOS2 is licensed under the Apache License v2.0. See the accompanying Copyright.txt for more details.
Directory layout
bindings - public application programming interface, API, language bindings (C++11, C, Fortran, Python and Matlab)
cmake - Project specific CMake modules
examples - Simple set of examples in different languages
scripts - Project maintenance and development scripts
source - Internal source code for private components
  adios2 - source directory for the ADIOS2 library to be installed under install-dir/lib/libadios2.
  utils - source directory for the binary utilities, to be installed under install-dir/bin
testing - Tests using gtest
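As a taste of the simplified high-level Python API mentioned above, here is a minimal write/read sketch; it follows the 2.10-era adios2.Stream interface (earlier 2.x releases used adios2.open instead), so treat the exact names as version-dependent:

import numpy as np
from adios2 import Stream

data = np.arange(10, dtype=np.float64)

# Write one variable into a BP file.
with Stream("example.bp", "w") as s:
    s.write("x", data, shape=[10], start=[0], count=[10])

# Read it back, step by step.
with Stream("example.bp", "r") as s:
    for _ in s.steps():
        x = s.read("x")
        print(x)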
adios4dolfinx
ADIOS2 Wrappers for DOLFINx
Read Latest Documentation
This is an extension for DOLFINx to checkpoint meshes, meshtags and functions using ADIOS2.
The code uses the adios2 Python-wrappers to write DOLFINx objects to file, supporting N-to-M (recoverable) and N-to-N (snapshot) checkpointing. See: Checkpointing in DOLFINx - FEniCS 23 for more information.
For scalability, the code uses MPI Neighbourhood collectives for communication across processes.
Installation
Docker
ADIOS2 is installed in the official DOLFINx containers.
docker run -ti -v $(pwd):/root/shared -w /root/shared --name=dolfinx-checkpoint ghcr.io/fenics/dolfinx/dolfinx:nightly
Conda
To use with conda (DOLFINx release v0.7.0 works with v0.7.2 of ADIOS4DOLFINx)
conda create -n dolfinx-checkpoint python=3.10
conda activate dolfinx-checkpoint
conda install -c conda-forge fenics-dolfinx pip adios2
python3 -m pip install git+https://github.com/jorgensd/[email protected]
Functionality
Reading and writing meshes, using adios4dolfinx.read/write_mesh
Reading and writing meshtags associated to meshes: adios4dolfinx.read/write_meshtags
Reading checkpoints for any element (serial and parallel, arbitrary number of functions and timesteps per file). Use adios4dolfinx.read/write_function.
[!IMPORTANT]
For a checkpoint to be valid, you first have to store the mesh with write_mesh, then use write_function to append to the checkpoint file.
[!IMPORTANT]
A checkpoint file supports multiple functions and multiple time steps, as long as the functions are associated with the same mesh
[!IMPORTANT]
Only one mesh per file is allowed
Backwards compatibility
[!WARNING] If you are using checkpoints written with adios4dolfinx < 0.7.2, please use the legacy=True flag when reading in the checkpoint with any newer version
Legacy DOLFIN
Only checkpoints for Lagrange or DG functions are supported from legacy DOLFIN
Reading meshes from the DOLFIN HDF5File-format
Reading checkpoints from the DOLFIN HDF5File-format (one checkpoint per file only)
Reading checkpoints from the DOLFIN XDMFFile-format (one checkpoint per file only, and only uses the .h5 file)
See the API for more information.
Long term plan
The long term plan is to get this library merged into DOLFINx (rewritten in C++ with appropriate Python-bindings).
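For orientation, a minimal end-to-end checkpoint sketch following the rule above (mesh first, then functions); the signatures are approximate for the v0.7 series and have shifted between releases, so check the API docs for your installed version:

from mpi4py import MPI
import dolfinx
import adios4dolfinx

mesh = dolfinx.mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
V = dolfinx.fem.FunctionSpace(mesh, ("Lagrange", 1))
u = dolfinx.fem.Function(V)
u.interpolate(lambda x: x[0])

# A valid checkpoint stores the mesh first, then appends functions to the same file.
adios4dolfinx.write_mesh(mesh, "checkpoint.bp")
adios4dolfinx.write_function(u, "checkpoint.bp")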
adiosiker
No description available on PyPI.
adios_mpi
No description available on PyPI.
adi-parser
adi_parser
This is a simple utility package, originally written to parse .adi files from LoTW, but now from anywhere.
By default, adi_parser parses LoTW .adi arguments. You can optionally parse all arguments as a python dictionary.
Examples
Parsing in LoTW .adi format
from adi_parser import parse_adi

adi_header, qso_reports = parse_adi("path/to/your_data.adi")
Where adi_header: Header and qso_reports: list[QSOReport]. These are dataclasses, which are defined in adi_parser.dataclasses.
All dataclass values are assumed to be missing by default.
Both GRIDSQUARE and MY_GRIDSQUARE are converted to lat/lon with maidenhead.
Parsing all arguments from a .adi
from adi_parser import parse_adi

result = parse_adi("path/to/your_data.adi", return_type=dict)
Where the result dictionary follows this format:
result = {
    "header": {
        "full_text": str,
        "argument_1": str,
        "argument_2": str,
        ...
    },
    "qso_reports": [
        {
            "full_text": str,
            "argument_1": {
                "value": str,
                "length": int,
                "type": str | None,
                "comment": str | None,
            },
            "argument_2": {...},
            ...
        },
        ...
    ]
}
adi.playlist
Introduction
A Plone add-on to turn a folder holding audio-files into a playlist.
Usage
Fill a folder with audiofiles, select 'adi_playlist' in the "Display"-dropdown-menu to change the view of the folder, and a playlist will be shown.
Default behaviour is to play the list until its end, one track after another; optionally click the infinity-symbol to play the list infinitely in a loop.
You can use the space-bar to play/pause the current track, the tab-key to walk through the tracks and the Enter-key for starting the selected track.
Motivation
My dear sister Angela, who likes to turn the tables and wanted a non-proprietary solution to have her sets "in the cloud" with a decent player available right away.
Background
This product takes advantage of browser-native audio-players, using the audio-tag introduced with HTML5 and the fact that all major browsers support this by now, dropping the need to provide a serverside player.
However there are restrictions in supporting all of the possible audio-file-formats, depending on the browser's capabilities or chosen lack of support.
The add-on was written to use in conjunction with OGG-formats ('.ogg'-extension), expressing the love of the author for open (=non-proprietary) standards, dropping support for Safari, the only major browser not supporting Vorbis.
This leaves out support for Safari, yet it should be fairly easy to extend this add-on to hold each track in two formats, the other satisfying Safari, and to determine which format to use by checking which browser the client uses.
Used technique
ECMAscript
Author
Ida Ebkes, 2014, <[email protected]>
Credits
jQuery, which made writing this a breeze.
Furthermore
Have a look at collective.transcode.star, if you want your arbitrary audio-formats transformed to OGG-format (or another) during upload, using beloved ffmpeg.
Changelog
0.3 (2014-06-22)
Adjust MANIFEST and remove trash.
0.2 (2014-06-22)
Add MANIFEST.in and repo-url.
0.1 (2014-06-20)
Initial release
adi-reader
adinstruments_sdk_python
Use this code to read .adicht (Labchart) files into Python. Interfacing with the ADInstruments DLL is done via cffi.
The code utilizes the SDK from ADInstruments to read files in Python as NumPy arrays.
Currently only works for Windows.
A slightly more fleshed out Matlab version can be found here.
Installation
pip install adi-reader
Test code
import adi

f = adi.read_file(r'C:\Users\RNEL\Desktop\test\test_file.adicht')

# All id numbering is 1 based, first channel, first block
# When indexing in Python we need to shift by 1 for 0 based indexing
# Functions however respect the 1 based notation ...
# These may vary for your file ...
channel_id = 2
record_id = 1
data = f.channels[channel_id - 1].get_data(record_id)

import matplotlib.pyplot as plt
plt.plot(data)
plt.show()
Dependencies
cffi
NumPy
Python 3.6-3.9
Setup for other Python versions
Running the code might require compiling the cffi code depending on your Python version.
This requires running cffi_build.py in the adi package.
This might require installing cffi as well as some version of Visual Studio.
The currently released code was compiled for Python 3.6-3.9; Visual Studio 14.0 or greater was required.
For upgrading to 3.8, I installed Python 3.8. Within the interpreter I ran the following:
Jim note to self, rather than installing Anaconda I simply:
download Python from https://www.python.org/downloads/windows/
cd to the Python directory or run directly, these go to something like: C:\Users\RNEL\AppData\Local\Programs\Python\Python39-32\python
Note the above path is specific to my computer, might need to change user name
import subprocess
import sys

# https://stackoverflow.com/questions/12332975/installing-python-module-within-code
def install(package):
    subprocess.call([sys.executable, "-m", "pip", "install", package])

install("cffi")

import os
# This would need to be changed based on where you keep the code
os.chdir('G:/repos/python/adinstruments_sdk_python/adi')

# For 64 bit windows
exec(open("cffi_build.py").read())

# For 32 bit windows
exec(open("cffi_build_win32.py").read())
Improvements
This was written extremely quickly and is missing some features. Feel free to open pull requests or to open issues.
adi.revertorder
Introduction
Sets the position of any new-born content-item (Archetypes & Dexterity) to be the first child of its parent, in order to prepend it instead of appending it, as is the default behaviour.
Changelog
1.3 (2016-11-02)
Add missing MANIFEST.
1.2 (150924)
Also regard Dexterity-based contenttypes.
1.1 (150924)
Typos, add author.
1.0dev (unreleased)
Initial release
adis
ADIS
A python package for parsing and creating ADIS (Agricultural Data Interchange Syntax) files.
This parser supports Class A ADIS format.
ADIS Standardization document
Wikipedia article (unfortunately only available in German)
Installation
pip install adis
Examples
Parse an ADIS file and turn it to JSON
# example_adis_to_json.py
from adis import Adis

adis = Adis.parse_from_file("sample.ads")
generated_json = adis.to_json()
print(generated_json)
Prettyprinted output:
[
    {
        "990001": {
            "definitions": [
                {"item_number": "00000000", "field_size": 20, "decimal_digits": 0},
                {"item_number": "00000001", "field_size": 9, "decimal_digits": 6},
                {"item_number": "00000002", "field_size": 10, "decimal_digits": 0}
            ],
            "data": [
                {"00000000": "Euler number", "00000001": 2.718281, "00000002": null},
                {"00000000": "Pi", "00000001": 3.141592, "00000002": null},
                {"00000000": "Gravity on Earth", "00000001": 9.81, "00000002": "ms^(-2)"}
            ],
            "status": "H"
        },
        "990002": {
            "definitions": [
                {"item_number": "00000008", "field_size": 10, "decimal_digits": 0},
                {"item_number": "00000009", "field_size": 10, "decimal_digits": 0}
            ],
            "data": [
                {"00000008": "abc", "00000009": "xyz"},
                {"00000008": "def", "00000009": "uvw"}
            ],
            "status": "N"
        }
    },
    {
        "990001": {
            "definitions": [
                {"item_number": "00000006", "field_size": 10, "decimal_digits": 0},
                {"item_number": "00000007", "field_size": 5, "decimal_digits": 2}
            ],
            "data": [
                {"00000006": "1", "00000007": 1.23},
                {"00000006": "2"}
            ],
            "status": "H"
        }
    }
]
Turn a JSON file to ADIS
# example_json_to_adis.py
from adis import Adis

adis = Adis.from_json_file("sample.json")
generated_adis_text = adis.dumps()
print(generated_adis_text)
Output:
DH990001000000002000000000109600000002100
VH990001Euler number 2718281??????????
VH990001Pi 3141592??????????
VH990001Gravity on Earth 9810000ms^(-2)
DN9900020000000810000000009100
VN990002abc xyz
VN990002def uvw
EN
DH9900010000000610000000007052
VH990001 1 123
VH990001 2|||||
ZN
About the ADIS format
Each physical file can contain multiple logical ADIS files; these are represented by objects of the type AdisFile. Each of those logical ADIS files contains one or multiple blocks; these are represented by objects of the type AdisBlock. Each block consists of the definitions for the fields (a list of objects of type AdisFieldDefinition) and one or multiple data rows (a list of lists of AdisValue).
Documentation
This documentation only contains methods that are intended to be used by the user. Take a look at the docstrings for more information about methods.
Adis
Static methods:
parse(text): Creates an Adis object from a text that's in the ADIS format
parse_from_file(path_to_file): Creates an Adis object from an ADIS file
from_json(json_text): Create an Adis object from a json text
from_json_file(path_to_json_file): Create an Adis object from a json file
Normal methods:
__init__(adis_files): Creates an Adis object from a list of AdisFiles
to_json(strip_string_values=True): Creates a json text containing the files, definitions and data
dumps(): Creates a text in the ADIS format
get_files(): Returns a list of AdisFiles
AdisFile
Normal methods:
__init__(blocks): Creates an AdisFile from a list of AdisBlocks
get_blocks(): Returns a list of AdisBlocks
AdisBlock
Normal methods:
__init__(entity_number, status, field_definitions, data_rows): Creates an AdisBlock
get_entity_number(): Returns the entity number of this AdisBlock
get_field_definitions(): Returns the field definitions as a list of AdisFieldDefinitions
get_data_rows(): Returns the data rows as a list. Each data row is a list of AdisValues
AdisFieldDefinition
Normal methods:
__init__(item_number, field_size, decimal_digits): Creates an AdisFieldDefinition
get_item_number(): Returns the item number
get_field_size(): Returns the field size
get_decimal_digits(): Returns the number of decimal digits
AdisValue
Static flags:
strip_string_values: String values that are returned by to_dict() will be stripped if this flag is set.
Normal methods:
__init__(item_number, value): Creates an AdisValue
to_dict(): Returns a dict containing the item number and value of this AdisValue
adi.samplecontent
Introduction
This package is an add-on for Plone and part of the adi.simplesite-package-bundle.
It adds some sample content for simple sites to your portal: Welcome, About and Contact.
The default view of the portal will be a link that redirects to Welcome.
Changelog
0.3 (2012-12-02)
Re-release.
0.1dev (unreleased)
Sets contact-info-view as default-view for folder 'contact'
Makes contact-info-view available for folders
Sets 'go-to-welcome' as default-view for the portal
Adds a static-text-portlet for the left and right column
Adds a link 'go-to-welcome' that directs to 'welcome'
Adds folders 'welcome', 'about' and 'contact'
Initial release
adiscstudies
This package contains the source files and built models for an "ADI" (Application Data Interface) schema describing single cell data collection and analysis studies.
Examples
pip install adiscstudies
Tables and fields
import importlib.resources
import pandas as pd

with importlib.resources.path('adiscstudies', 'tables.tsv') as path:
    tables = pd.read_csv(path, sep='\t')
print(tables)
Name ... Entity
subject ... Study subject
diagnosis ... Diagnosis event
diagnostic_selection_criterion ... Diagnostic selection criterion
specimen_collection_study ... Biospecimen collection study
specimen_collection_process ... Biospecimen collection process
histology_assessment_process ... Histology assessment process
...
import importlib.resources

with importlib.resources.path('adiscstudies', 'fields.tsv') as path:
    fields = pd.read_csv(path, sep='\t')
print(fields[fields['Table'] == 'Histological structure identification'][[
    'Label',
    'Table',
    'Foreign table',
    'Foreign key',
    'Ordinality',
]])
Label | Table | Foreign table | Foreign key | Ordinality
Histological structure | Histological structure identification | Histological structure | Identifier | 1
Data source | Histological structure identification | Data file | SHA256 hash | 2
Shape file | Histological structure identification | Shape file | Identifier | 3
Plane coordinates reference | Histological structure identification | Plane coordinates reference system | Name | 4
Identification method | Histological structure identification | NaN | NaN | 5
Identification date | Histological structure identification | NaN | NaN | 6
Annotator | Histological structure identification | NaN | NaN | 7
SQL
import importlib.resources

with importlib.resources.path('adiscstudies', 'schema.sql') as path:
    sql_create = open(path, 'rt').read()
print(sql_create)
CREATE TABLE IF NOT EXISTS subject (
    identifier VARCHAR(512) PRIMARY KEY,
    species VARCHAR(512),
    sex VARCHAR(512),
    birth_date VARCHAR,
    death_date VARCHAR,
    cause_of_death VARCHAR
);
CREATE TABLE IF NOT EXISTS diagnosis (
    subject VARCHAR(512) REFERENCES subject(identifier),
    condition VARCHAR,
    result VARCHAR(512),
    assessor VARCHAR(512),
    date VARCHAR
);
...
OWL
import importlib.resources

with importlib.resources.path('adiscstudies', 'schema.owl') as path:
    schema = open(path, 'rt').read()
with open('schema.owl', 'wt') as file:
    file.write(schema)
Then open schema.owl e.g. with Protege.
adi.simplesite
Introduction
This is a Plone-4 add-on that aims to help especially smaller sites to get started quicker and easier.
The goal is to give the possibility to configure the most common website-usecases via the webinterface, for mortals. No programming needed.
It does this mainly by replacing viewlets with portlets.
Most functionalities are pulled in by small, split plone-add-ons, named below, so you can roll your own combinations, installing them individually, in case you don't need parts of the whole package.
The pulled subpackages are:
- adi.init: Deletes Plone's default contents
- adi.simplestructure: Hides viewlets and adds some sample portlets instead in top and footer via ContentWellPortlets.
- adi.samplecontent: Adds some samplecontent and sampleportlets in left- and right column.
- adi.slickstyle: Adds a decent CSS to the portal, lets you override col, bg-col and link-col globally.
- adi.dropdownmenue: Adds a main dropdownmenu on top, showing first-level-folders, replaces Plone's globalnav.
See their READMEs for further details.
Changelog
0.5 (2014-03-14)
- Corrected missing doc-folder.
0.5 (2012-12-02)
- Typos in description.
0.4 (2012-12-02)
- Better description of the package.
0.3 (2012-12-02)
- Use adi.slickstyle instead of adi.simplestyle as theme.
0.2 (2012-05-19)
- Corrective release, missing files. [ida]
0.1 (2012-05-17)
- Show only folders in navigation
- Pulls adi.dropdownmenu, adi.samplecontent, adi.init, adi.simplestyle and adi.simplestructure
- Initial release
adi.simplestructure
Introduction
This package is an add-on for Plone and part of the adi.simplesite-package-bundle.
Add and place additional elements you need with ContentWellPortlets TTW, no viewlets-hassle.
Hides Plone's default viewlets in the top and footer area of the portal and adds some sample portlets there instead.
Changelog
0.4 (2013-04-10)
Re-release on pypi, doc-folder still missing. [ida]
0.3 (2012-12-02)
Re-release. [ida]
0.2 (2012-05-19)
Corrective release, missing files. [ida]
0.1 (2012-05-17)
Initial release
adi.slickstyle
Introduction
This is a Plone add-on and part of the adi.simplesite-package-bundle.
adi.slickstyle is a theme add-on for Plone, which overrides the default styles (namely public.css) with some decent styles and lets you easily override font-color, link-styles and bg-color globally.
See the stylesheet (adi/slickstyle/adi_slick.css) for the collected selectors.
Changelog
1.0 (2012-12-02)
Initial release
adisp
Adisp is a library that allows structuring code with asynchronous calls and callbacks without defining callbacks as separate functions. The code then becomes sequential and easy to read. The library is not a framework by itself and can be used in other environments that provide an asynchronous working model (see an example with Tornado server in proxy_example.py).

Usage:

## Organizing calling code

All the magic is done with Python 2.5 decorators that allow for control flow to leave a function, do something else for some time and then return into the calling function with a result. So the function that makes asynchronous calls should look like this:

    @process
    def my_handler():
        response = yield some_async_func()
        data = parse_response(response)
        result = yield some_other_async_func(data)
        store_result(result)

Each `yield` is where the function returns and lets the framework around it do its job. And the code after `yield` is what usually goes in a callback.

The @process decorator is needed around such a function. It makes it callable as an ordinary function and takes care of dispatching callback calls back into it.

## Writing asynchronous function

In the example above functions "some_async_func" and "some_other_async_func" are those that actually run an asynchronous process. They should follow these conditions:

- accept a "callback" parameter with a callback function that they should call after an asynchronous process is finished
- a callback should be called with one parameter -- the result
- be wrapped in the @async decorator

The @async decorator makes a function call lazy, allowing the @process that calls it to provide a callback to call.

Using async with @-syntax is most convenient when you write your own asynchronous function (and can name your callback parameter "callback"). But when you want to call some library function you can wrap it in async in place.

    # call http.fetch(url, callback=callback)
    result = yield async(http.fetch)(url)

    # call http.fetch(url, cb=safewrap(callback))
    result = yield async(http.fetch, cbname='cb', cbwrapper=safewrap)(url)

Here you can use two optional parameters for async:

- `cbname`: the name of the parameter in which the function expects callbacks
- `cbwrapper`: a wrapper for the callback itself that will be applied before calling it

## Chain calls

An @async function can also be a @process, allowing to effectively chain asynchronous calls as can be done with normal functions. In this case the @async decorator should be the outer one:

    @async
    @process
    def async_calling_other_asyncs(arg, callback):
        # ....

## Multiple asynchronous calls

The library also allows calling multiple asynchronous functions in parallel and getting all their results for processing at once:

    @async
    def async_http_get(url, callback):
        # get url asynchronously
        # call callback(response) at the end

    @process
    def get_stat():
        urls = ['http://.../', 'http://.../', ... ]
        responses = yield map(async_http_get, urls)

After *all* the asynchronous calls complete, `responses` will be a list of responses corresponding to the given urls.
adispatch
A relatively sane approach to multiple dispatch in Python, forked to support and use annotations for dispatch.

This implementation of multiple dispatch is efficient, mostly complete, performs static analysis to avoid conflicts, and provides optional namespace support. It looks good too.

Example

>>> from adispatch import adispatch

>>> @adispatch()
... def add(x: int, y: int):
...     return x + y

>>> @adispatch()
... def add(x: object, y: object):
...     return "%s + %s" % (x, y)

>>> add(1, 2)
3

>>> add(1, 'hello')
'1 + hello'

What this does

Dispatches on all non-keyword arguments
Supports inheritance
Supports instance methods
Supports union types, e.g. (int, float)
Supports builtin abstract classes, e.g. Iterator, Number, ...
Caches for fast repeated lookup
Identifies possible ambiguities at function definition time
Provides hints to resolve ambiguities when they occur
Supports namespaces with optional keyword arguments

What this doesn't do

Vararg dispatch

@adispatch()
def add(*args: [int]):
    ...

Diagonal dispatch

a = arbitrary_type()

@adispatch()
def are_same_type(x: a, y: a):
    return True

Installation and Dependencies

adispatch supports Python 3.2+, is pure python and requires no other dependencies.

License

New BSD. See License.

Links

Five-minute Multimethods in Python by Guido
multimethods package on PyPI
singledispatch in Python 3.4's functools
Clojure Protocols
Julia methods docs
Karpinski notebook: *The Design Impact of Multiple Dispatch*
Wikipedia article
PEP 3124 - *Overloading, Generic Functions, Interfaces, and Adaptation*
adistributions
No description available on PyPI.
adi-study-watch
Study Watch Python SDK
The adi-study-watch package provides an object-oriented interface for interacting with ADI's VSM study watch platform.
Installation
pip install adi-study-watch
Description
A user application can use the SDK to receive complete packets of bytes over a physical interface (USB or BLE) and decode them. The functionality is organized into applications, some of which own sensors, some own system-level functionality (i.e. file system), while others own algorithms. The hierarchy of objects within the SDK mirrors the applications present on the device. Each application has its own object within the SDK hierarchy, which is used to interact with that application. A brief guide on using the SDK and a few examples have been added below.
Firmware Setup
https://github.com/analogdevicesinc/study-watch-sdk/blob/main/firmware/Study_Watch_Firmware_Upgrade.pdf
Getting started with SDK
Import the adi-study-watch module into your application code
from adi_study_watch import SDK
Instantiate the SDK object by passing the com port number
sdk = SDK('COM28')
The application objects can be instantiated from the sdk object. In order to instantiate an application object, we'll have to pass a callback function as an input argument, which can be used to retrieve the data from the application object. Define a callback function as displayed below.
def adxl_cb(data):
    print(data)
Once the callback function is defined, you can instantiate the application object as shown below.
adxl_app = sdk.get_adxl_application()
adxl_app.set_callback(adxl_cb)
Each application object has various methods that can be called by referring to the application. An example of retrieving the sensor status is shown below. Almost all methods in an application return their result in a dict.
packet = adxl_app.get_sensor_status()  # returns dict
print(packet)
Basic Example:
import time
from datetime import datetime
from adi_study_watch import SDK

# callback function to receive adxl data
def callback_data(data):
    sequence_number = data["payload"]["sequence_number"]
    for stream_data in data["payload"]["stream_data"]:
        dt_object = datetime.fromtimestamp(stream_data['timestamp'] / 1000)  # convert timestamp from ms to sec.
        print(f"seq : {sequence_number} timestamp: {dt_object} x,y,z :: ({stream_data['x']}, "
              f"{stream_data['y']}, {stream_data['z']})")

if __name__ == "__main__":
    sdk = SDK("COM4")
    application = sdk.get_adxl_application()
    application.set_callback(callback_data)

    # quickstart adxl stream
    application.start_sensor()
    application.enable_csv_logging("adxl.csv")  # logging adxl data to csv file
    application.subscribe_stream()
    time.sleep(10)
    application.unsubscribe_stream()
    application.disable_csv_logging()
    application.stop_sensor()
Permission Issue in Ubuntu
1 - You can run your script with admin (sudo).
2 - If you don't want to run scripts as admin, follow the steps below:
add your user to the tty and dialout groups:
sudo usermod -aG tty <user>
sudo usermod -aG dialout <user>
create a file at /etc/udev/rules.d/ with the name 10-adi-usb.rules:
ACTION=="add", SUBSYSTEMS=="usb", ATTRS{idVendor}=="0456", ATTRS{idProduct}=="2cfe", MODE="0666", GROUP="dialout"
reboot
All streams packet structure : https://analogdevicesinc.github.io/study-watch-sdk/python/_rst/adi_study_watch.core.packets.html#module-adi_study_watch.core.packets.stream_data_packets
Documentation : https://analogdevicesinc.github.io/study-watch-sdk/python
Examples : https://github.com/analogdevicesinc/study-watch-sdk/tree/main/python/samples
License : https://github.com/analogdevicesinc/study-watch-sdk/blob/main/LICENSE
Changelog
https://github.com/analogdevicesinc/study-watch-sdk/blob/main/python/CHANGELOG.md
adi.suite
Adi Suite
This product will add a content-type 'Suite' to your portal.
In a suite you can add several galleries, that will be displayed directly in a fancybox-popup on click.
A gallery can contain mixed contenttypes and file-formats and will also show its items in a fancybox.
Preview images can be provided via the contentleadimage-field of an item.
A quickuploadportlet is assigned to every gallery for batch uploading.
Currently supported contenttypes in a popup-display:
- Images
- Files
- Pages
- News-Items
- Links [if URL contains 'youtube' or 'vimeo', the movie will be embedded in a flowplayer-view]
Currently supported file-formats in a popup-display:
- all common image-formats
- mp4 and flv for movies
- mp3 for audio
- swf for flash-animations
Dependencies:
- collective.fancybox
- collective.contentleadimage
- collective.quickupload
Further credits:
- flowplayer http://flowplayer.org
- flashicon http://www.freeiconsweb.com/Free-Downloads.asp?id=1403 by Barry Mieny
Changelog
0.6 (2013-01-02)
- Added video-embed-support for ATLink-destinations containing 'youtube' or 'vimeo'. [ida]
- Show arrow-down on preview-titles, if they are longer than one row. [ida]
- Make item-title in gal-view a link. [ida]
- Vertical alignment for image previews in suite- and gallery-view. [ida]
0.5 (2012-09-07)
- Fixed typo [ida]
0.4 (2012-08-29)
- Added support for popupview of links directly to target. [ida]
0.3 (2012-02-06)
- Removed not used templates. [ida]
- Fixed doubled previewimage for galleries. [ida]
- Don't show leadimage of a gallery in gallery_view. [ida]
0.2 (2011-12-27)
- Assigned quickuploadportlet to galleries
- Added previewimages in galleryview using contentleadimages.
- Added dependencies to be pulled automatically: collective.contentleadimage, collective.fancybox, collective.quickupload
0.1 (2011-05-11)
- Initial release
adit
No description available on PyPI.
aditam.agent
Contents
Description
Requires
OS Independent installation
Module installation
Configure the agent
Description
ADITAM is a remote task scheduler which facilitates mass task managing over a heterogeneous network. The project contains:
aditam.core (Python) : the common parts of the Aditam agent and server
aditam.server (Python) : the server in charge of scheduling and distributing the tasks
aditam.agent (Python) : the agent handles orders sent by the tasks manager and executes the tasks sent by it
aditam web gui (php) : the aditam web interface http://www.aditam.org/downloads/cake.tar.gz
This package contains the agent of the ADITAM project. The agents are installed on every server of the farm. The agent handles orders sent by the tasks manager and executes the tasks sent by it. Then the agent sends an activity report that is stored in the database. Data needed for task execution must be available locally or through a network file system.
Requires
Python 2.5 : http://www.python.org/download/
easy_install : http://peak.telecommunity.com/DevCenter/EasyInstall#installing-easy-install
You must add C:\Python2.5 and C:\Python2.5\Scripts to your Path on Windows.
OS Independent installation
Module installation
Enter in a console:
easy_install aditam.agent
Configure the agent
Enter in a console:
aditam-admin.py --agent
Follow the instructions.
aditam.core
ADITAM is a remote task scheduler which facilitates mass task managing over a heterogeneous network. The project contains:
aditam.core (Python) : the common parts of the Aditam agent and server
aditam.server (Python) : the server in charge of scheduling and distributing the tasks
aditam.agent (Python) : the agent handles orders sent by the tasks manager and executes the tasks sent by it
aditam web gui (php) : the aditam web interface http://www.aditam.org/downloads/cake.tar.gz
This package is the common part of the aditam packages and it contains a script to configure the agent, the server and the database.
aditam.server
Contents
Description
Requires
OS Independent installation
Module installation
Configure and install the database
Configure the server
Description
ADITAM is a remote task scheduler which facilitates mass task managing over a heterogeneous network. The project contains:
aditam.core (Python) : the common parts of the Aditam agent and server
aditam.server (Python) : the server in charge of scheduling and distributing the tasks
aditam.agent (Python) : the agent handles orders sent by the tasks manager and executes the tasks sent by it
aditam web gui (php) : the aditam web interface http://www.aditam.org/downloads/cake.tar.gz
This package contains the server of the ADITAM project. It is in charge of scheduling and distributing the tasks. Information is stored in the database and given to the agents when tasks are to be executed.
Requires
Python 2.5 : http://www.python.org/download/
easy_install : http://peak.telecommunity.com/DevCenter/EasyInstall#installing-easy-install
You must add C:\Python2.5 and C:\Python2.5\Scripts to your Path on Windows.
OS Independent installation
Module installation
Enter in a console:
easy_install aditam.server
Configure and install the database
Install the Python module for your database : http://www.sqlalchemy.org/docs/04/dbengine.html#dbengine_supported
Enter in a console:
aditam-admin.py --create-db --config-db
Follow the instructions.
Configure the server
Enter in a console:
aditam-admin.py --server
Follow the instructions.
adit-client
Adit Client
About
Adit Client is the official Python client of ADIT (Automated DICOM Transfer).
Usage
Prerequisites
Generate an API token in your ADIT profile.
Make sure you have the permissions to access the ADIT API.
Also make sure you have access to the DICOM nodes you want to query.
Code
adit_url = "https://adit"  # The host URL of ADIT
adit_token = "my_token"  # The generated auth token
client = AditClient(server_url=adit_url, auth_token=adit_token)

# Search for studies
studies = client.search_for_studies("ORTHANC1", {"PatientName": "Doe, John"})

# The client returns pydicom datasets
study_descriptions = [study.StudyDescription for study in studies]
adit-dicomweb-client
Adit DICOM Web Client
This is a simple wrapper of the DICOM Web client provided by the dicomweb-client library. It is slightly adjusted and restricted to provide a simple Python API to the DICOM Web API of Adit.
Installation
pip install adit-dicomweb-client
Usage
from adit_dicomweb_client import AditDicomWebClient

# Create a new client
client = AditDicomWebClient(
    adit_base_url,  # URL to the ADIT server
    dicom_server,  # AE title of the associated DICOM server
    auth_token,  # Authentication token for the ADIT server
)

# Find all studies
studies = client.find_studies()

# Find all series of a study
series = client.find_series(study_instance_uid)

# Find all series
series = client.find_series()

# Include additional query parameters
studies = client.find_studies({"PatientID": "1001"})

# Get a study
study = client.get_study(study_instance_uid)

# Get a series
series = client.get_series(study_instance_uid, series_instance_uid)

# Get study metadata
study_metadata = client.get_study_metadata(study_instance_uid)

# Get series metadata
series_metadata = client.get_series_metadata(study_instance_uid, series_instance_uid)

# Upload pydicom.Dataset instances
client.upload_instances(instance_list)
adi.trash
Introduction
An add-on for Plone, which changes the deletion-behaviour.
If a user deletes items, they are moved to a trashcan-folder named 'trash', living in the next available navigation-root-folder above, which is usually the site-root-folder, instead of really being deleted.
Items inside of 'trash'-folders, or trash-folders themselves, will still be actually, really, deleted.
Missing trash-folders are created on the fly.
Immediately after installation you won't see any trash-folders; go on, delete something, they'll appear.
After an item has been trashed, its workflow-state is set to 'private', so its content is not unintentionally exposed to the public.
Installation
Add 'adi.trash' to the eggs-section of your buildout-config, run buildout, restart instance(s). A minimal sketch of that change follows after the changelog.
Then go to the add-on-controlpanel of your site and activate this add-on.
Contributors
Ida Ebkes [ida], [idgspro], Aurore Mariscal [amariscal].
License
GPL2, a copy is attached.
Changelog for adi.trash
0.6 (2018-10-15)
- Fix bug on deletion from folder-contents (thanks to 'laulaz' for reporting). [ida]
0.5 (2018-09-27)
- Set trashed items' workflow-state to 'private' after trashing, so trashed content does not get unintentionally exposed. [ida]
- Give temporary Contributor-role to user, for pasting into trashcan and creating trashcans. Fix #7. [ida]
- Include CHANGES in long-description of this egg, so it gets displayed on pypi. [ida]
0.4 (2018-03-03)
- 'ParseResult' object has no attribute 'endswith', fix error. [amariscal]
0.3 (2017-07-27)
- Regard URL can contain parameters, fixes #3. [idgserpro]
- Add plone.api as a dependency. [idgserpro]
0.2 (2017-04-12)
- Merge PR from idgserpro, correct repo-URL. [ida]
0.1 (2015-11-01)
- Initial commit.
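For reference, a minimal sketch of the buildout change mentioned under Installation; the section name 'instance' is an assumption and varies per setup:

[instance]
eggs +=
    adi.trash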
adi.ttw_styles
Goal
Give people with CSS-skills the ability to apply styles through the web (TTW), using a simple page as style-source, and provide a preview-possibility.
Usage
For live-developing the CSS and immediately seeing the changes, while to visitors the look will remain unchanged:
On the top-level of your site add a page, name it 'ttw_live_styles', insert your CSS-rules in the bodytext-field, save the page.
Add '/@@ttw_styles_view' to any URL of your site and see the changes.
For permanently adding the styles to your site and making them visible to everybody:
Create a local copy of your style-rules on your computer with a texteditor, name it 'ttw_permanent_styles.css' and upload it as a file on the top-level of your site.
Note: Permanent changes require re-merging the stylesheets for caching, which you can achieve by switching the portal_css debug-mode off (accessible via 'http://localhost:8080/yourPlonesiteId/portal_css/manage_cssForm'). You might need to ask your site administrator to do that for you.
Installation
Add 'adi.ttw_styles' to the eggs-section of your instance-part in the buildout-cfg. Run buildout, restart the instance, activate the product in a Plonesite via 'http://localhost:8080/yourPlonesiteId/prefs_install_products_form'.
Author
Ida Ebkes
Credits
The Plone community.
Changelog
1.1 (2013-07-26)
Added MANIFEST.in
1.0 (2013-07-26)
Initial release
aditya1
firstpackage
aditya-102103464
TOPSIS Python Package
TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is a method for multi-criteria decision analysis. This Python package provides a simple implementation of TOPSIS for ranking alternatives based on multiple criteria.
Installation
You can install the topsis package using pip:
pip install topsis-Aditya-102103464
Usage
import numpy as np
# Note: the distribution name contains hyphens and cannot be imported as-is;
# the importable module name may simply be "topsis" (check the installed package).
from topsis import topsis

# Example decision matrix: load it from a .csv data file, e.g. with pandas

# Criteria weights
weights = [1, 1, 1, 1]

# Impact criteria (True for benefit criteria)
impacts = [True, True, True, True]

# Perform TOPSIS analysis
scores = topsis(matrix, weights, impacts)
ranking = np.argsort(scores)[::-1]
print("Scores:", scores)
print("Ranking:", ranking + 1)
Output
A .csv file containing the input columns with two additional columns, i.e. topsis score and ranking, will be generated.
adityadfunctionrec
No description available on PyPI.
aditya-distribution
No description available on PyPI.
adityapackage
this too much long description
aditya-pypi
Sample Python Package Hello Pypi
adityas-pipi-01
No description available on PyPI.
aditya-string-uppercase
No description available on PyPI.
aditys
No description available on PyPI.
adium-sh
Adium Shell (adium-sh) is a command-line tool and Python wrapper for Adium.
Description
adium-sh provides shell utilities and a Python wrapper based on the AppleScript support of Adium.
Feature
The current features are:
Set default service and account
Send messages using exact account name or alias
Receive and reply to messages using patterns or an external API (SimSimi currently supported)
React to events
Installation
$ pip install adium-sh
Usage
You must specify the account and service to associate with the current use, either as command-line arguments or in the config file. When specifying them as arguments, you must put them before the sub-commands.
Send messages
Send a message using an account name:
$ adiumsh -s GTalk -t yourname@gmail send -b [email protected] Hello, there <<EOF
Send a message using an alias:
$ adiumsh -s GTalk -t [email protected] send -a 'John Smith' Hello, there <<EOF
Set a default configuration file at ~/.adiumsh:
[default]
service = GTalk
account = [email protected]
Then you can send messages without specifying -s/--service and -t/--account:
$ adiumsh send -a 'John Smith'
You can also pass your message as an argument:
$ adiumsh send -a 'John Smith' -m 'Hello, there'
Receive messages
You must specify a chat method to receive messages. By default, adium-sh uses "Simple Chat", which basically replies to received messages according to the patterns you set. You must set the patterns in the config file, possibly like the following settings:
[default]
service = GTalk
account = [email protected]
[chat-default]
type = wildcard
patterns =
    *hello*: hi
    *what*: sorry
    *: I'm not available now
Then, you can invoke the "receive" sub-command with the -c/--chat argument:
$ adiumsh receive -c default
The patterns are a list of string pairs where each pair is separated by a colon. The string to the left of the colon is the pattern against which the received text will be matched, and the right one is the corresponding reply text. There is also a "type" option in the chat section, which defaults to "wildcard", which uses globbing pattern matching; another value for it is "regex", which uses regular expressions.
You can also use "SimSimi Chat", which hits the SimSimi API with the messages received. You have to set the API key in the config file and the key type ("trial", which is the default, or "paid"):
[chat-simi]
simi-key = some-really-long-key
simi-key-type = trial
Then, invoke "receive" with this chat from the command line:
$ adiumsh receive -c simi
Set the default chat in the default settings:
[default]
service = GTalk
account = [email protected]
chat = default
[chat-default]
patterns =
    *hello*: hi
    *what*: sorry
    *: I'm not available now
[chat-another]
patterns =
    *: not here
Now you can also switch between chats from the command line other than the default:
$ adiumsh receive -c another
TODO
Complete Python wrapper API to AppleScript support
Exhaustive commands based on the wrapper
adi.workingcopyflag
This package extends all Archetypes of a Plonesite with a boolean field, which will be set to true if a workingcopy was created for an item via plone.app.iterate.
This way, it is possible to have this information available as a criterion in a collection (old-style collections, a.k.a. topics).
Its friendly name is 'Has workingcopy'.
The field is set to hidden; we don't want it in the UI of an item.
Changelog
1.3 (2012-11-19)
Corrected namespace in MANIFEST.in [ida]
1.3 (2012-11-19)
Added MANIFEST.in [ida]
1.2 (2012-11-19)
try again with different pypirc-config [ida]
1.1 (2012-11-19)
configure.zcml and profiles are missing in pypi-release for unknown reason, let's try to make it work with this release [ida]
Better description [ida]
1.0 (2012-11-01)
Initial release [ida]
adix
Making Data Science Fun, One Color at a Time!
What is it?
ADIX is a free, open-source, color-customizable data analysis tool that simplifies Exploratory Data Analysis (EDA) with a single command: ix.eda(). Experience a streamlined approach to uncovering insights, empowering you to focus on your data without distraction.
Color customization is at your fingertips, allowing you to tailor your analysis to your exact needs. Explore your data with confidence and efficiency, knowing that adix (Automatic Data Inspection and eXploration) has your back every step of the way.
⭐️ if you like the project, please consider giving it a star, thank you :)
Main Features
Customizable Themes
Spruce up the adix environment with your own personal touch by playing with color schemes!
Efficient Cache Utilization
Experience faster load times through optimized caching mechanisms, enhancing overall system performance.
Rapid Data Insight
adix prioritizes swiftly showcasing crucial data insights, ensuring quick access to important information.
Automatic Type Detection
Detects numerical, categorical, and text features automatically, with the option for manual overrides when necessary.
Statistically Rich Summary Information:
Unveil the intricate details of your data with a comprehensive summary, encompassing type identification, unique values, missing values, duplicate rows, the most frequent values and more.
Delve deeper into numerical data, exploring properties like min-max range, quartiles, average, median, standard deviation, variance, sum, kurtosis, skewness and more.
Univariate and Bivariate Statistics Unveiled
Explore univariate and bivariate insights with adix's versatile visualization options. From bar charts to matrices and box plots, uncover a multitude of ways to interpret and analyze your data effectively.
Documentation
Docs
Installation
The best way to install adix (other than from source) is to use pip:
pip install adix
adix is still under development
If you encounter any data, compatibility, or installation issues, please don't hesitate to reach out!
Quick start
The system is designed for rapid visualization of target values and datasets, facilitating quick analysis of target characteristics with just one function: ix.eda(). Similar to pandas' df.describe() function, it provides extended analysis capabilities, accommodating time-series and text data for comprehensive insights.
import adix as ix
from adix.datasets import load_dataset

titanic = load_dataset('titanic')
10 minutes to adix
1. Rendering the whole dataframe
ix.eda(titanic)
using forest color theme
2. Accessing variables of a specific dtype
Render the DataFrame containing only categorical variables.
ix.eda(titanic, vars='categorical')
3. Accessing individual variables
ix.eda(titanic, 'Age')
using forest color theme
4. Pandas .loc & .iloc
An easy way to render only the part of the DataFrame you are interested in.
ix.eda(titanic.loc[:10:2, ['Age', 'Pclass', 'Fare']])
5. Changing theme colors
ix.Configs.get_theme()
...
ix.Configs.set_theme('FOREST')
6. Heatmap correlation
This visualization depicts the correlation between all numerical variables within the DataFrame, offering valuable insights into the magnitude and direction of their relationships.
# Show correlation for the entire DataFrame.
ix.eda(titanic, corr=True)
Furthermore, it is possible to use categorical variables, since they undergo one-hot encoding to enable their inclusion in correlation analysis. It's recommended to use ANOVA. You can choose whatever variables you want to explore and analyze.
# Show correlation for selected parts of the DataFrame
ix.eda(titanic.loc[:, ['Age', 'Fare', 'Sex', 'Survived']], vars=['categorical', 'continuous'], corr=True)
7. Bivariate relationships: numerical & numerical
ix.eda(titanic, 'Age', 'Fare')
8. Bivariate relationships: categorical & numerical
ix.eda(titanic, 'Sex', 'Age')
9. Bivariate relationships: categorical & categorical
ix.eda(titanic, 'Sex', 'Survived')
License
MIT
Free Software, Hell Yeah!
Development
Contributions are welcome, so feel free to contact, open an issue, or submit a pull request!
For accessing the codebase or reporting bugs, please visit the GitHub repository.
This program is provided WITHOUT ANY WARRANTY. ADIX is still under heavy development and there might be hidden bugs.
Acknowledgement
The goal for adix is to make valuable information and visualization readily available in a user-friendly environment at the click of a mouse, without reinventing the wheel. All of the libraries stated below are powerful and excellent alternatives to adix. Several functions of adix were inspired by the following:
Sweetviz: The inception of this project found inspiration from Sweetviz, particularly its concept of consolidating all data in one place and using the blocks for individual features.
Dataprep: Dataprep stands out as an excellent library for data preparation, and certain structural elements of adix have been inspired by it.
Pandas-Profiling: Alerts served as inspiration for a segment of the dashboard's design, contributing to its functionality and user-friendly features.
Kaggle: source of the Titanic dataset
adjacent
Centrifuge integration with Django framework
adjacent-attention-pytorch
No description available on PyPI.
adj-dataparrots-5
Copyright (c) 2021 The Python Packaging AuthorityPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
adjdatatools
No description available on PyPI.
adjectiveanimalnumber
AdjectiveAnimalNumber

This is a port in Python of the adjectiveadjectiveanimal project.

Installation

pip install adjectiveanimalnumber

Usage

CLI

usage: adjectiveanimalnumber [-h] [-a ADJS] [-s SEP] [-n NUM] [-i INF] [-u SUP]

Generate a random adjective with an animal name and a random integer.

optional arguments:
  -h, --help            show this help message and exit
  -a ADJS, --adjs ADJS  Number of adjectives to generate
  -s SEP, --sep SEP     Separator between adjectives and animal name
  -n NUM, --num NUM     Add a random integer to the end of the string
  -i INF, --inf INF     Inferior limit for random integer
  -u SUP, --sup SUP     Superior limit for random integer

Examples:

adjectiveanimalnumber -a 2 -n 1 -i 0 -u 100
adjectiveanimalnumber -a 3 -s "_" -i 0 -u 1024

Python

from adjectiveanimalnumber import generate

generate()
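Since the README shows only a bare generate() call, here is a slightly fuller usage sketch; it assumes nothing beyond the documented zero-argument generate():

from adjectiveanimalnumber import generate

# Print a handful of random names, e.g. for labelling throwaway test runs.
for _ in range(5):
    print(generate())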
adjointShapeOptimizationFlux
UNKNOWN
adjsim
Designed and developed by Sever Topan

Navigation

- Feature Abstract
- Installation Instructions
- Tutorial
- Class Documentation

Features

Engine

At its core, AdjSim is an agent-based modelling engine. It allows users to define simulation environments in which agents interact through ability casting and timestep iteration. The framework is targeted towards agents that behave intelligently, for example a bacterium chasing down food. However, the framework is extremely flexible, from enabling physics simulation to defining an environment in which Conway's Game of Life plays out! AdjSim aims to be a foundational architecture on top of which reinforcement learning can be built.

Graphical Simulation Representation

The simulation can be viewed in real time as it unfolds, with graphics rendered and animated using PyQt5. Below are four of the distinct examples packaged with AdjSim, ranging from bacteria to moon system simulation.

Post Simulation Analysis Tools

Agent properties can be marked for tracking during simulation, allowing the results of these values to be viewed once the simulation completes. For example, we can track the population of each different type of agent, or the efficacy of the agents' ability to meet their intelligence module-defined goals.

Intelligence Module

Perhaps the most computationally interesting aspect of AdjSim lies in its intelligence module. It allows agents to set goals (for example, the goal of a bacterium may be to maximize its calories) and assess their actions in terms of their ability to meet those goals. This allows the agents to learn which actions are best used in a given situation. Currently the intelligence module implements Q-Learning, but more advanced reinforcement learning techniques are coming soon!
adjsoned
Adjsoned

This Python library can help you easily load your program's configuration/properties from a JSON file into a Python runtime object, making key-value data accessible via Python's "." syntax, as if these key-value pairs were fields of that object.

Installation

pip install adjsoned

Get started

Instantiate a FileJsonProperties object, giving it your config file's path, and it will just work:

from adjsoned import FileJsonProperties

properties = FileJsonProperties(filepath="examples/some_properties.json")

# We can now access key-value/array data stored in our JSON file the following way:
# (NB: the ROOT element of your JSON file MUST be a DICT!)

if properties.debug_mode:
    print("We're in debug mode!")
    print("P.S. Properties told me that :)")

# 'properties' is an instance of FileJsonProperties (as well as JsonProperties, a parent class)
# 'properties.app_version' is also a JsonProperties instance, so we can access its fields like that:
print(properties.app_version.code)  # prints '4.1.2'

# arrays are interpreted as regular Python lists, you can just normally iterate through them:
for section in properties.project_settings.ignored_sections:
    print("Ignoring section", section)

# if an array element is a dictionary, it gets interpreted as a JsonProperties object as well:
print(properties.messages[1].title)  # prints: 'Hello again.'

The JSON file used in this example:

{
  "debugMode": true,
  "appVersion": {"code": "4.1.2", "build": 2},
  "projectSettings": {"frameRate": 60, "ignoredSections": [2, 6, 9]},
  "messages": [
    {"title": "Hello!", "description": "This is the first message."},
    {"title": "Hello again.", "description": "This is the second one. Nice to see you again!"}
  ]
}

Note that the camelCase keys in the file (debugMode, appVersion, ignoredSections) are exposed as snake_case attributes (debug_mode, app_version, ignored_sections) on the resulting object.
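A quick way to try this end to end is to write a config file with the standard library and load it back. A minimal sketch follows; the file name app_config.json is just an illustration, and the snake_case access relies on the key mapping described above:

import json
from adjsoned import FileJsonProperties

# Write a small config file using the standard library.
config = {"debugMode": True, "appVersion": {"code": "4.1.2", "build": 2}}
with open("app_config.json", "w") as f:
    json.dump(config, f)

# Load it back as dot-accessible properties.
properties = FileJsonProperties(filepath="app_config.json")
print(properties.debug_mode)         # True
print(properties.app_version.build)  # 2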
adjspecies
The adjspecies Python module generates random names formed from an animal and a descriptor.

Installation

$ pip install -U adjspecies

Usage

From the command line

$ adjspecies --help
usage: adjspecies.py [-h] [--maxlen MAXLEN] [--sep SEPARATOR] [--count COUNT] [--prevent-stutter]

Print the name of a random adjective/species, more or less…

optional arguments:
  -h, --help         show this help message and exit
  --maxlen MAXLEN    Maximum length for the name, excluding any separator. (default=8)
  --sep SEPARATOR    Separator between the adjective and species words. (default='')
  --count COUNT      Number of adjective/species combinations to print.
  --prevent-stutter  Prevent the same letter from appearing on an adjective/species boundary. (default=True)

$ python adjspecies.py --count 4
sillyfox
redpig
pinkdoge
lynxpaw

In Python code

>>> import adjspecies
>>> help(adjspecies.random_adjspecies)
Help on function random_adjspecies in module adjspecies:

random_adjspecies(sep='', maxlen=8, prevent_stutter=True)
    Return a random adjective/species, separated by `sep`.

    The keyword arguments `maxlen` and `prevent_stutter` are the same as for
    `random_adjspecies_pair`, but note that the maximum length argument is
    not affected by the separator.

>>> adjspecies.random_adjspecies('.', 7)
'wolf.toy'

About

While writing a deployment system targeting DigitalOcean Droplets, the author found the largest bottleneck was finding names for the transient test servers. The adjective/species contrivance comes from furry culture in general and more directly from the site [adjective][species]. It provides a wide namespace of easy-to-remember randomness. Everything up until the initial commit was an exercise in yak shaving and procrastinating getting out of bed.

Credits

The adjspecies module is written and maintained by Adam Wright, who plays a cheetah on Twitter under the guise of @chipikat, a Python developer called @pypikat and a human being named @hipikat.
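Given the stated use case of naming transient test servers, a small sketch built only on the documented random_adjspecies() signature might look like this; the make_server_name helper is hypothetical:

import adjspecies

# Hypothetical helper: give each throwaway test server a memorable name.
def make_server_name(prefix="test"):
    # sep and maxlen are documented keyword arguments of random_adjspecies().
    return "{}-{}".format(prefix, adjspecies.random_adjspecies(sep="-", maxlen=12))

print(make_server_name())  # e.g. 'test-silly-fox'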
adjspecies3
Print the name of a random adjective/species, more or less…

Home-page: http://github.com/tavallaie/adjspecies/
Author: Ali Tavallaie
License: BSD 2-Clause