activecache
UNKNOWN
activecampaign
Simple and Pythonic ActiveCampaign API client.

Free software: BSD license

## Features

- View & sync a contact
- Add a contact tag
- Remove a contact tag

## Basic usage

Initialize the client with your custom ActiveCampaign host name and API key:

```python
from activecampaign.client import ActiveCampaignClient

client = ActiveCampaignClient(ACTIVECAMPAIGN_HOST, ACTIVECAMPAIGN_KEY)
```

Sync contact information:

```python
client.contacts.sync(
    email=customer_data['email'],
    first_name=customer_data['first_name'],
    last_name=customer_data['last_name'],
    orgname=customer_data['orgname'],
    phone=customer_data['phone'],
)
```

Add and remove tags:

```python
client.contacts.tag_add("new-tag", email=customer_data['email'])
client.contacts.tag_remove("old-tag", email=customer_data['email'])
```

## History

0.2.0 (2019-07-09)

- Dropped support for Python 2.7

0.1.0 (2016-02-22)

- First release on PyPI.
activecampaign3
No description available on PyPI.
activecampaign-api
No description available on PyPI.
active-campaign-python
## Installation

You can install active-campaign-python from PyPI:

    pip install active-campaign-python

## Example Usage

```python
from activecampaign import ActiveCampaign

# url provided to you by ActiveCampaign
base_url = '<your url>'
# api key provided to you by ActiveCampaign
api_key = '<your api_key>'

ac = ActiveCampaign(base_url, api_key)
print(ac.api('account/view'))
```

Each of the endpoint subclasses has comments at the bottom.

## Prerequisites

A valid ActiveCampaign hosted account (trial or paid).
activecampaign-python
ActiveCampaign API wrapper written in Python.

This library supports the latest API version 3. If you are looking for API version 1, which is also supported by ActiveCampaign, see the note at the end.

## Installing (API v3)

    pip install activecampaign-python

Requirements:

- requests

## Usage

### Client instantiation

```python
from activecampaign.client import Client

client = Client(URL, API_KEY)
```

### Automations

```python
# List all automations
response = client.automations.list_all_automations()
```

### Contacts

```python
# Create a contact
data = {
    "contact": {
        "email": "[email protected]",
        "firstName": "John",
        "lastName": "Doe",
        "phone": "7223224241"
    }
}
response = client.contacts.create_a_contact(data)

# Create or update a contact (same payload shape)
response = client.contacts.create_or_update_contact(data)

# Retrieve a contact
response = client.contacts.retrieve_a_contact("contact_id")

# Update list status for a contact
data = {"contactList": {"list": 2, "contact": 1, "status": 1}}
response = client.contacts.update_list_status_for_a_contact(data)

# Update a contact
data = {"contact": {"email": "[email protected]", "firstName": "John", "lastName": "Doe"}}
response = client.contacts.update_a_contact("contact_id", data)

# Delete a contact
response = client.contacts.delete_a_contact("contact_id")

# List all contacts; additionally, you can filter:
response = client.contacts.list_all_contacts()
response = client.contacts.list_all_contacts(email="[email protected]")
# For more query params:
# https://developers.activecampaign.com/reference#list-all-contacts

# Automations and scores
response = client.contacts.list_all_automations_the_contacts_is_in("contact_id")
response = client.contacts.retrieve_a_contacts_score_value("contact_id")

# Add a contact to an automation
data = {"contactAutomation": {"contact": 1, "automation": 1}}
response = client.contacts.add_a_contact_to_an_automation(data)
response = client.contacts.retrieve_an_automation_a_contact_is_in("contact_automation_id")
response = client.contacts.remove_a_contact_from_an_automation("contact_automation_id")
response = client.contacts.list_all_automations_a_contact_is_in()

# Create a custom field
data = {
    "field": {
        "type": "textarea",
        "title": "Field Title",
        "descript": "Field description",
        "isrequired": 1,
        "perstag": "Personalized Tag",
        "defval": "Default Value",
        "visible": 1,
        "ordernum": 1
    }
}
response = client.contacts.create_a_custom_field(data)

# Retrieve / update / delete / list custom fields
response = client.contacts.retrieve_a_custom_field("field_id")
response = client.contacts.update_a_custom_field("field_id", data)  # same payload shape as create
response = client.contacts.delete_a_custom_field("field_id")
response = client.contacts.list_all_custom_fields()

# Create a custom field relationship to list(s)
data = {"fieldRel": {"field": 8, "relid": 2}}
response = client.contacts.create_a_custom_field_relationship_to_list(data)

# Create custom field options
data = {"field": 1, "label": "my custom label", "value": 1, "orderid": 1, "isdefault": True}
response = client.contacts.create_custom_field_options(data)

# Field options and custom field values
response = client.contacts.retrieve_field_options("field_id")
response = client.contacts.create_a_custom_field_value(data)
response = client.contacts.retrieve_a_custom_field_value(field_value_id="some-id")
response = client.contacts.update_a_custom_field_value_for_contact(data, field_value_id="some-id")
response = client.contacts.delete_a_custom_field_value(field_value_id="some-id")
response = client.contacts.list_all_custom_field_values()

# Per-contact lookups
response = client.contacts.retrieve_a_contacts_field_values(contact_id="some-id")
response = client.contacts.retrieve_a_contacts_tracking_logs(contact_id="some-id")
response = client.contacts.retrieve_a_contacts_data(contact_id="some-id")
response = client.contacts.retrieve_a_contacts_bounce_logs(contact_id="some-id")
response = client.contacts.retrieve_a_contacts_geo_ips(contact_id="some-id")
response = client.contacts.retrieve_a_contacts_organization(contact_id="some-id")
response = client.contacts.retrieve_a_contacts_account_contacts(contact_id="some-id")
response = client.contacts.retrieve_a_contacts_automation_entry_counts(contact_id="some-id")

# Tags on contacts
data = {"contactTag": {"contact": "1", "tag": "20"}}
response = client.contacts.add_a_tag_to_contact(data)
response = client.contacts.remove_a_tag_from_a_contact("contact_tag_id")
response = client.contacts.retrieve_contact_tags("contact_id")
```

### Deals

```python
# Create a deal
data = {
    "deal": {
        "contact": "51",
        "description": "This deal is an important deal",
        "currency": "usd",
        "group": "1",
        "owner": "1",
        "percent": None,
        "stage": "1",
        "status": 0,
        "title": "AC Deal",
        "value": 45600
    }
}
response = client.deals.create_a_deal(data)

# Retrieve / update / delete a deal (update takes the same payload shape as create)
response = client.deals.retrieve_a_deal("deal_id")
response = client.deals.update_a_deal(data)
response = client.deals.delete_a_deal("deal_id")

# List all deals; additionally, you can filter:
response = client.deals.list_all_deals()
query = {"filters[stage]": 1}
response = client.deals.list_all_deals(**query)
# For more query params:
# https://developers.activecampaign.com/reference#list-all-deals

# Deal notes
data = {"note": {"note": "Note for the deal"}}
response = client.deals.create_a_deal_note("deal_id", data)
data = {"note": {"note": "Update with more info"}}
response = client.deals.update_a_deal_note("deal_id", "note_id", data)

# List all pipelines; additionally, you can filter:
response = client.deals.list_all_pipelines()
query = {"filters[title]": "My pipeline"}
response = client.deals.list_all_pipelines(**query)
# https://developers.activecampaign.com/reference#list-all-pipelines

# List all stages; additionally, you can filter:
response = client.deals.list_all_stages()
query = {"filters[d_groupid]": 1}
response = client.deals.list_all_stages(**query)
# https://developers.activecampaign.com/reference#list-all-deal-stages
```

### Lists

```python
# Create a list
data = {
    "list": {
        "name": "Name of List",
        "stringid": "Name-of-list",
        "sender_url": "http://activecampaign.com",
        "sender_reminder": "You are receiving this email as you subscribed to a newsletter when making an order on our site.",
        "send_last_broadcast": 0,
        "carboncopy": "",
        "subscription_notify": "",
        "unsubscription_notify": "",
        "user": 1
    }
}
response = client.lists.create_a_list(data)

response = client.lists.retrieve_a_list("list_id")
response = client.lists.delete_a_list("list_id")
response = client.lists.retrieve_all_lists()

# Create a list group permission
data = {"listGroup": {"listid": 19, "groupid": 1}}
response = client.lists.create_a_list_group_permission(data)
```

### Notes

```python
data = {"note": {"note": "This is the text of the note", "relid": 2, "reltype": "Subscriber"}}
response = client.notes.create_a_note(data)
response = client.notes.retrieve_a_note("note_id")
response = client.notes.update_a_note("note_id", data)
response = client.notes.delete_a_note("note_id")
```

### Tasks

```python
data = {
    "dealTask": {
        "title": None,
        "ownerType": "contact",
        "relid": "7",
        "status": 0,
        "note": "Testing Task",
        "duedate": "2017-02-25T12:00:00-06:00",
        "edate": "2017-02-25T12:15:00-06:00",
        "dealTasktype": "1"
    }
}
response = client.tasks.create_a_task(data)
response = client.tasks.retrieve_a_task("task_id")
response = client.tasks.update_a_task("task_id", data)
response = client.tasks.delete_a_task("task_id")

# List all tasks; additionally, you can filter:
response = client.tasks.list_all_tasks()
query = {"filters[title]": "My task"}
response = client.tasks.list_all_tasks(**query)
# https://developers.activecampaign.com/reference#list-all-tasks
```

### Users

```python
response = client.users.create_a_user(data)
response = client.users.retrieve_a_user("user_id")
response = client.users.retrieve_a_user_by_email("email")
response = client.users.retrieve_a_user_by_username("username")
response = client.users.retrieve_logged_in_user()
response = client.users.update_a_user("user_id", data)
response = client.users.delete_a_user("user_id")
response = client.users.list_all_users()
```

### Webhooks

```python
data = {
    "webhook": {
        "name": "My Hook",
        "url": "http://example.com/my-hook",
        "events": ["subscribe", "unsubscribe", "sent"],
        "sources": ["public", "system"]
    }
}
response = client.webhooks.create_a_webhook(data)
response = client.webhooks.retrieve_a_webhook("webhook_id")
response = client.webhooks.delete_a_webhook("webhook_id")

# List all webhooks; additionally, you can filter:
response = client.webhooks.list_all_webhooks()
query = {"filters[name]": "My webhook"}
response = client.webhooks.list_all_webhooks(**query)
# https://developers.activecampaign.com/reference#get-a-list-of-webhooks

response = client.webhooks.list_all_webhook_events()
```

### Tags

```python
data = {"tag": {"tag": "My Tag", "tagType": "contact", "description": "Description"}}
response = client.tags.create_a_tag(data)
response = client.tags.retrieve_a_tag("tag_id")
response = client.tags.update_a_tag("tag_id", data)
response = client.tags.delete_a_tag("tag_id")
response = client.tags.list_all_tags(search='Tag Name')
```

### Custom Objects

```python
# Create a schema
data = {
    "schema": {
        "slug": "object-name",
        "labels": {"singular": "ObjectName", "plural": "ObjectNames"},
        "description": "Some Description",
        "fields": [
            {
                "id": "some-field-id",
                "labels": {"singular": "ID", "plural": "IDs"},
                "type": "text",
                "required": True,
            },
        ],
        "relationships": [
            {
                "id": "primary-contact",
                "labels": {"singular": "Primary Contact", "plural": "Primary Contacts"},
                "description": "Primary contact to this object",
                "namespace": "contacts",
                "hasMany": False
            }
        ]
    }
}
response = client.customobjects.create_a_schema(data=data)

# Retrieve / update / delete a schema (update takes the same payload shape,
# optionally with additional fields appended)
response = client.customobjects.retrieve_a_schema(schema_id="some-id", show_all_fields=False)
response = client.customobjects.update_a_schema(schema_id="some-schema-id", data=data, show_all_fields=False)
response = client.customobjects.delete_a_schema(schema_id="some-id")  # WARNING: this deletes all associated records

# List all schemas and delete a field in a schema
response = client.customobjects.list_all_schemas(schema_relationship="contact", limit=20, offset=0, ordering=None, show_all_fields=False)
response = client.customobjects.delete_a_field(schema_id="some-id", field_id="some-field-id", show_all_fields=False)

# Public and child schemas
data = {}
response = client.customobjects.create_a_public_schema(data=data)
response = client.customobjects.create_a_child_schema(parent_id="some-parent-schema-id")

# Upsert a custom object record
data = {
    "record": {
        "fields": [
            {"id": "some-field-id", "value": "asdf-1234"},
            {"id": "some-other-field-id", "value": "asdf-5678"},
        ]
    }
}
response = client.customobjects.create_or_update_record(schema_id="some-id", data=data)

# Records
response = client.customobjects.retrieve_a_record(schema_id="some-id", record_id="some-record-id")
response = client.customobjects.retrieve_a_record_by_external_id(schema_id="some-id", external_id="some-record-id")
response = client.customobjects.delete_a_record(schema_id="some-id", record_id="some-record-id")
response = client.customobjects.delete_a_record_by_external_id(schema_id="some-id", external_id="some-record-id")
response = client.customobjects.list_all_records(schema_id="some-id", contact_id="some-contact-id", deal_id=None, account_id=None, limit=20, offset=0)
```

### Addresses

```python
response = client.addresses.create_an_address(data)
response = client.addresses.retrieve_address(address_id="some-id")
response = client.addresses.update_address(data, address_id="some-id")
response = client.addresses.delete_address(address_id="some-id")
response = client.addresses.delete_address_associated_with_user_group(group_id="some-id")
response = client.addresses.delete_address_associated_with_list(list_id="some-id")
response = client.addresses.retrieve_all_addresses()
```

### Campaigns

```python
response = client.campaigns.list_all_campaigns()
response = client.campaigns.retrieve_a_link_associated_campaign(campaign_id="some-id")
response = client.campaigns.retrieve_a_campaign(campaign_id="some-id")
```

### Brandings

```python
response = client.brandings.retrieve_a_branding(branding_id="some-id")
response = client.brandings.update_a_branding(data, branding_id="some-id")
response = client.brandings.list_all_brandings()
```

## About API v1

You can clone and check out our tag v0.1.1:

    git clone https://github.com/GearPlug/activecampaign-python.git
    git checkout tags/v0.1.1 -b <branch_name>

You can also install this version with pip:

    pip install activecampaign-python==0.1.1
activeconfigparser
# ActiveConfigParser

The ActiveConfigParser package provides extended handling of .ini files beyond what ConfigParser provides, by adding an active syntax to embed operations with options.

For example, a standard .ini file is generally formatted like this:

```ini
[Section 1]
Foo: Bar
Baz: Bif

[Section 2]
Foo: Bar2
Bif: Baz
```

These files are used to organize sets of key-value pairs called "options" within groups called "sections". In the example above there are two sections, "Section 1" and "Section 2". Each of them contains two options, where Section 1 has the keys 'Foo' and 'Baz' which are assigned the values 'Bar' and 'Bif', respectively. For more details on .ini files please see the documentation for ConfigParser.

Internally, operations are handled by methods named according to a convention like `handler_<operation>()`. ActiveConfigParser only provides one pre-defined operation: `use`, which is formatted as `use TARGET:` (there is no value field for this one). The TARGET parameter takes the name of a target section that will be loaded in at this point. This works in the same way a `#include` would work in C++ and serves to insert the contents or processing of the target section into this location.

The `use` operation is useful for .ini files for complex systems by allowing developers to create a common section and then have specializations where they can customize options for a given project. For example:

```ini
[COMMON]
Key C1: Value C1
Key C2: Value C2
Key C3: Value C3

[Data 1]
Key D1: Value D1
use COMMON
Key D2: Value D2
```

In this example, processing section `Data 1` via ActiveConfigParser will result in the following options: `Key D1: Value D1`, `Key C1: Value C1`, `Key C2: Value C2`, `Key C3: Value C3`, `Key D2: Value D2`.

An alternative way of looking at this is that it's like having a .ini file that is effectively the following, where the `use` operations are replaced with the results of a depth-first expansion of the linked sections:

```ini
[COMMON]
Key C1: Value C1
Key C2: Value C2
Key C3: Value C3

[Data 1]
Key D1: Value D1
Key C1: Value C1
Key C2: Value C2
Key C3: Value C3
Key D2: Value D2
```

## Linked Projects

- SetProgramOptions (depends on ActiveConfigParser): RTD, GitHub

## Examples

Here we show some example usages of ActiveConfigParser. Additional examples can be found in the examples/ directory of the repository.

### Example 1

```ini
[SECTION-A]
key-A1: value-A1
key-A2: value-A2
key-A3: value-A3

[SECTION-B]
use SECTION-A
key-B1: value-B1
```

In this example, the entry `use SECTION-A` inside `[SECTION-B]` instructs the core parser to recurse into `[SECTION-A]` and process it before moving on with the rest of the entries in `[SECTION-B]`. The following code could be used to parse SECTION-B, and `ActiveConfigParser.activeconfigparserdata['SECTION-B']` would return the following result:

```python
>>> from activeconfigparser import ActiveConfigParser
>>> cpe = ActiveConfigParser(filename='config.ini')
>>> cpe.activeconfigparserdata['SECTION-B']
{'key-A1': 'value-A1', 'key-A2': 'value-A2', 'key-A3': 'value-A3', 'key-B1': 'value-B1'}
```

## Updates

See the CHANGELOG for information on changes.
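As a loose illustration of the `handler_<operation>()` naming convention described above, a subclass could add a new operation by defining a matching handler method. This is a hypothetical sketch only: the real handler signature and return contract are not documented here, so the arguments and return value below are assumptions.

```python
from activeconfigparser import ActiveConfigParser

class MyConfigParser(ActiveConfigParser):
    # Hypothetical 'greet NAME:' operation, following the handler_<operation>()
    # convention. The (section_name, handler_parameters) signature and the
    # 0-means-success return value are assumptions, not the documented API.
    def handler_greet(self, section_name, handler_parameters):
        print(f"[{section_name}] greet -> {handler_parameters}")
        return 0
```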
activeconfigprogramoptions
# ActiveConfigProgramOptions

The ActiveConfigProgramOptions package extends ActiveConfigParser to enable the processing of .ini files that specify command line program options.

As a subclass of ActiveConfigParser, ActiveConfigProgramOptions supports all the operations that ActiveConfigParser supports and adds some of its own. The following table notes the operations that ActiveConfigProgramOptions adds:

| Operation  | Format                                      | Defined By                 |
|------------|---------------------------------------------|----------------------------|
| use        | use <section>                               | ActiveConfigParser         |
| opt-set    | opt-set Param1 [Param2..ParamN] [: <VALUE>] | ActiveConfigProgramOptions |
| opt-remove | opt-remove Param [SUBSTR]                   | ActiveConfigProgramOptions |

Generally speaking, when we parse a section from a .ini file, the following steps are taken:

1. Parse the section, resolving any `use` operations to fully parse the DAG generated by the section and its dependents. This generates a list of actions that are specified in the configuration file. Any remove operations encountered will execute their removal search during parsing to ensure they only remove matches that are previously defined.
2. Once parsed, a generator can be invoked to process the actions list and generate the requested back-end format. Currently, ActiveConfigProgramOptions only generates output for bash scripts, but this can easily be extended to support other formats such as Windows batch or PowerShell files, or other kinds of shell commands. Subclasses can also be extended to include their own generator types.

## Supported Operations

### use

The `use` operation is inherited from ActiveConfigParser. Please see its documentation on this command and its use.

### opt-set

Sets a generic command line style option. The format of this is:

    opt-set Param1 [Param2] [Param3] ... [ParamN] : [VALUE]

In a bash context, this operation attempts to generate an option for some command that will be executed. ActiveConfigProgramOptions will concatenate the Params together and then append `=VALUE` if a VALUE field is present. For example, `opt-set Foo Bar : Baz` will become `FooBar=Baz`.

### opt-remove

Removes existing entries, processed up to the point the `opt-remove` is encountered, that match a pattern. The format of this is:

    opt-remove Param [SUBSTR]

When a remove is encountered, ActiveConfigProgramOptions will search through all processed options and will delete any that contain any Param-i that matches Param. By default the parameters must be an exact match of Param, but if the optional SUBSTR parameter is provided then ActiveConfigProgramOptions will treat Param as a substring and will remove all existing options if any parameter contains Param.

## ActiveConfigProgramOptions Config Files

A .ini file that can be processed by ActiveConfigProgramOptions can be formatted like this:

```ini
[COMMAND_LS]
opt-set ls
```

This is perhaps the most simple thing we could do. Using `gen_option_list('COMMAND_LS', generator="bash")`, this operation would generate the command `ls` when processed.

A more complex section which creates a CMake command call might look like this:

```ini
[COMMAND_CMAKE]
opt-set cmake
opt-set -G: Ninja
opt-set -D CMAKE_CXX_FLAGS: "-O3"
```

and this would generate the command `cmake -G=Ninja -DCMAKE_CXX_FLAGS="-O3"` when processed for bash output.

## Variable Expansion within VALUE fields

Variables can be added to the VALUE fields in handled instructions, but they have their own format that must be used: `${VARNAME|VARTYPE}`

- VARNAME is the variable name that you might expect for a bash-style environment variable that might be defined like this: `export VARNAME=VALUE`.
- VARTYPE is the type of the variable that is being declared. For ActiveConfigProgramOptions the only recognized type is ENV, which defines environment variables. Subclasses such as ActiveConfigProgramOptionsCMake define their own types.

We do not provide a default type for this because we wish it to be explicit that this is a pseudo-type, and we do not want it to be confused with some specific variable type since that meaning can change depending on the kind of generator being used. For example, `${VARNAME}` is an environment variable within a bash context, but in a CMake fragment file it would be an internal CMake variable and `$ENV{VARNAME}` would be an environment variable. By not providing a default we force type consideration to be made explicitly during the creation of the .ini file.

## Linked Projects

- ActiveConfigParser - prerequisite - Docs, Source, PyPi
- ExceptionControl - prerequisite - Docs, Source, PyPi
- StronglyTypedProperty - prerequisite - Docs, Source, PyPi

## ActiveConfigProgramOptions Examples

### Example 1

ActiveConfigProgramOptions-example-01.ini:

```ini
[BASH_VERSION]
opt-set bash
opt-set --version

[LS_COMMAND]
opt-set ls

[LS_LIST_TIME_REVERSED]
opt-set "-l -t -r"

[LS_CUSTOM_TIME_STYLE]
opt-set --time-style: "+%Y-%m-%d %H:%M:%S"

[MY_LS_COMMAND]
use LS_COMMAND
use LS_LIST_TIME_REVERSED
use LS_CUSTOM_TIME_STYLE
```

ActiveConfigProgramOptions-example-01.py:

```python
#!/usr/bin/env python3
import activeconfigprogramoptions

filename = "ActiveConfigProgramOptions-example-01.ini"
section = "MY_LS_COMMAND"

# Create ActiveConfigProgramOptions instance
popts = activeconfigprogramoptions.ActiveConfigProgramOptions(filename)

# Parse section
popts.parse_section(section)

# Generate the list of bash options for the command
bash_options = popts.gen_option_list(section, generator="bash")

# Print out the command
print(" ".join(bash_options))
```

generates the output:

    ls -l -t -r --time-style="+%Y-%m-%d %H:%M:%S"

### Example 2

We can utilize the `use` operation to create a more complex configuration file that provides some common sections and then point-of-use sections that generate customized configurations for a particular use:

```ini
[CMAKE_COMMAND]
opt-set cmake
opt-set -G: Ninja

[CMAKE_OPTIONS_COMMON]
opt-set -D CMAKE_CXX_FLAGS: "-fopenmp"

[CMAKE_OPTIONS_APPLICATION]
opt-set -D MYAPP_FLAG1: "foo"
opt-set -D MYAPP_FLAG2: "bar"

[APPLICATION_PATH_TO_SOURCE]
opt-set /path/to/source/.

[APPLICATION_CMAKE_PROFILE_01]
use CMAKE_COMMAND
use CMAKE_OPTIONS_COMMON
use CMAKE_OPTIONS_APPLICATION
use APPLICATION_PATH_TO_SOURCE

[APPLICATION_CMAKE_PROFILE_02]
use APPLICATION_PROFILE_01
opt-remove MYAPP_FLAG2
```

This example follows a pattern that larger projects might wish to use when there are many configurations that may be getting tested. Here, we set up some common option groups and then create aggregation sections that include the other sections to compose a full command line.

Using this .ini file, if we generate bash output for section APPLICATION_CMAKE_PROFILE_01 the resulting command generated would be:

    cmake -G=Ninja -DCMAKE_CXX_FLAGS="-fopenmp" -DMYAPP_FLAG1="foo" -DMYAPP_FLAG2="bar" /path/to/source/.

Alternatively, we can generate bash output for section APPLICATION_CMAKE_PROFILE_02, which first clones APPLICATION_CMAKE_PROFILE_01 and then removes all entries containing the parameter MYAPP_FLAG2 using the `opt-remove` operation. This will result in the generated command:

    cmake -G=Ninja -DCMAKE_CXX_FLAGS="-fopenmp" -DMYAPP_FLAG1="foo" /path/to/source/.

This example shows how the `opt-remove` operation fully removes occurrences that contain that substring from the list of actions, and some of the capabilities that ActiveConfigProgramOptions provides for managing many build configurations within a single .ini file.

## ActiveConfigProgramOptionsCMake

ActiveConfigProgramOptionsCMake is a subclass of ActiveConfigProgramOptions that adds additional operations and generators to handle processing CMake options:

- Adds `opt-set-cmake-var`.
- Adds a `cmake_fragment` generator.
- Adds a CMAKE type for variables.

New operations defined in ActiveConfigProgramOptionsCMake:

| Operation         | Format                                                         | Defined By                      |
|-------------------|----------------------------------------------------------------|---------------------------------|
| opt-set-cmake-var | opt-set-cmake-var VARNAME [TYPE] [FORCE] [PARENT_SCOPE]: VALUE | ActiveConfigProgramOptionsCMake |

### Supported Operations

#### opt-set-cmake-var

This adds a CMake variable program option. These have a special syntax in bash that looks like `-DVARNAME:TYPE=VALUE` where the `:TYPE` is an optional parameter. If the type is left out then CMake assumes the value is a STRING.

We may not wish to generate bash-only output, though. For CMake files, we might wish to generate a cmake fragment file, which is basically a snippet of CMake that can be loaded during a CMake call using the -S option: `cmake -S cmake_fragment.cmake`. The syntax within a CMake fragment file is the same as in a CMake script itself.

If the back-end generator is creating a CMake fragment file, the `set` command generated will use CMake set syntax. This looks something like `set(<variable> <value>)` but can also contain additional options. These extra options can be provided in the `opt-set-cmake-var` operation in the .ini file:

- FORCE - By default, a `set()` operation does not overwrite entries in a CMake file. This can be added to force the value to be saved. This is only applicable to generating cmake fragment files.
- PARENT_SCOPE - If provided, this option instructs CMake to set the variable in the scope that is above the current scope. This is only applicable to generating cmake fragment files.
- TYPE - Specifies the TYPE the variable can be. This is a positional argument and must always come after VARNAME. Valid options for this are STRING (default), BOOL, PATH, INTERNAL, FILEPATH. Adding a TYPE option implies that the CACHE and docstring parameters will be added to a `set()` command in a CMake fragment file according to the syntax `set(<variable> <value> CACHE <type> <docstring> [FORCE])`, as illustrated in the CMake `set()` documentation. This is applicable to both cmake fragment and bash generation.

### ActiveConfigProgramOptionsCMake Config Files

Here is an example of what a .ini file may look like using the CMake operations provided by this class:

```ini
[SECTION_A]
opt-set cmake
opt-set-cmake-var MYVARIABLENAME: VALUE
opt-set-cmake-var MYVARIABLENAME2 PARENT_SCOPE: VALUE
```

### Handling CMake Variables

A CMake variable in this context would be an internal variable that is known to CMake. Because this is not a variable that would be known outside of the context of .cmake files, this kind of variable is only applicable when generating CMake fragment files.

It is necessary to provide a CMake variant for variable expansions because the CMake syntax for variables is different from that used by bash, and CMake fragments have a specialized syntax for environment variables as well. In CMake fragment files:

- environment variables are written as `$ENV{VARNAME}`
- internal CMake variables are written as `${VARNAME}`

We saw that variables in ActiveConfigProgramOptions follow the syntax `${VARNAME|ENV}`, where ENV specifies the kind of variable we're declaring. We extend this in ActiveConfigProgramOptionsCMake by adding a `${VARNAME|CMAKE}` variation which indicates that the variable is expected to be an internal CMake variable, and is more suited to being used within a CMake fragment file since it has no meaning at the command line.

You can still use a CMake variable expansion entry when generating bash output, but there is a catch: the variable must be resolvable, through its transitive closure, to something that is not a CMake variable. This is achieved by caching the last known value of a variable as we process a .ini file; provided that the value ultimately resolves to either a string or an environment variable we can still use it. If it cannot be resolved to something that isn't a CMake variable then an exception should be generated.

For example, if we have a .ini file that sets up CMAKE_CXX_FLAGS to include -O0 in a common section like this:

```ini
[COMMON]
opt-set-cmake-var CMAKE_CXX_FLAGS STRING FORCE: "-O0"
```

and then we have a later section that adds an OpenMP flag to it like this:

```ini
[ADD_OPENMP]
use COMMON
opt-set-cmake-var CMAKE_CXX_FLAGS STRING FORCE: "${CMAKE_CXX_FLAGS|CMAKE} -fopenmp"
```

This is valid since `${CMAKE_CXX_FLAGS|CMAKE}` will get replaced with -O0, so the resulting CMAKE_CXX_FLAGS variable would be set to "-O0 -fopenmp" after processing. If we generate bash output for the ADD_OPENMP section we'll get a -D option that looks like `-DCMAKE_CXX_FLAGS:STRING="-O0 -fopenmp"`.

But what if we have a .ini file with a CMake variable that can't be resolved to something that is not a CMake flag, such as:

```ini
[COMMON]
opt-set-cmake-var FOO: ${SOME_CMAKE_VAR|CMAKE}
```

If we tried to process this and write out the resulting script using the bash generator, an exception should be raised citing that we don't know what to do with that unresolved CMake variable. This would be the equivalent of a bash option `-DFOO=<SOME_CMAKE_VAR>`, and bash can't handle that because it has no idea what it should put in that CMake variable field.

Note: if the same CMake option is provided in multiple lines, they will all be included in the generated output. In that case, the behaviour will match what occurs if one called cmake directly with the same option multiple times: the last one wins, since all -D options are treated as though they have both FORCE and CACHE flags set.

### ActiveConfigProgramOptionsCMake Examples

#### Example

This example shows a configuration file that can be used to generate build files using Ninja or Makefiles. In the .ini file we set up some common sections that contain the arguments, and then the point-of-use sections (MYPROJ_CONFIGURATION_NINJA and MYPROJ_CONFIGURATION_MAKEFILES) can compose their command lines by importing the argument definition sections via `use`.

ActiveConfigProgramOptions-example-02.ini:

```ini
#
# ActiveConfigProgramOptions-example-02.ini
#
[CMAKE_COMMAND]
opt-set cmake

[CMAKE_GENERATOR_NINJA]
opt-set -G: Ninja

[CMAKE_GENERATOR_MAKEFILES]
opt-set -G: "Unix Makefiles"

[MYPROJ_OPTIONS]
opt-set-cmake-var MYPROJ_CXX_FLAGS STRING: "-O0 -fopenmp"
opt-set-cmake-var MYPROJ_ENABLE_OPTION_A BOOL FORCE: ON
opt-set-cmake-var MYPROJ_ENABLE_OPTION_B BOOL: ON

[MYPROJ_SOURCE_DIR]
opt-set /path/to/source/dir

[MYPROJ_CONFIGURATION_NINJA]
use CMAKE_COMMAND
use CMAKE_GENERATOR_NINJA
use MYPROJ_OPTIONS
use MYPROJ_SOURCE_DIR

[MYPROJ_CONFIGURATION_MAKEFILES]
use CMAKE_COMMAND
use CMAKE_GENERATOR_MAKEFILES
use MYPROJ_OPTIONS
use MYPROJ_SOURCE_DIR
```

ActiveConfigProgramOptions-example-02.py

This Python code shows generating a bash script and a CMake fragment of the configuration specified in the .ini file:

```python
#!/usr/bin/env python3
# -*- mode: python; py-indent-offset: 4; py-continuation-offset: 4 -*-
from pathlib import Path

import activeconfigprogramoptions

print(80 * "-")
print(f"- {Path(__file__).name}")
print(80 * "-")

filename = "ActiveConfigProgramOptions-example-02.ini"
popts = activeconfigprogramoptions.ActiveConfigProgramOptionsCMake(filename)

section = "MYPROJ_CONFIGURATION_NINJA"
popts.parse_section(section)

# Generate BASH output
print("")
print("Bash output")
print("-----------")
bash_options = popts.gen_option_list(section, generator="bash")
print(" \\\n".join(bash_options))

# Generate a CMake fragment
print("")
print("CMake fragment output")
print("---------------------")
cmake_options = popts.gen_option_list(section, generator="cmake_fragment")
print("\n".join(cmake_options))

print("\nDone")
```

Output

Using the Ninja specialization from the above code, we generate the following output:

```
$ python3 ActiveConfigProgramOptions-example-02.py
--------------------------------------------------------------------------------
- ActiveConfigProgramOptions-example-02.py
--------------------------------------------------------------------------------

Bash output
-----------
cmake \
-G=Ninja \
-DMYPROJ_CXX_FLAGS:STRING="-O0 -fopenmp" \
-DMYPROJ_ENABLE_OPTION_A:BOOL=ON \
-DMYPROJ_ENABLE_OPTION_B:BOOL=ON \
/path/to/source/dir

CMake fragment output
---------------------
set(MYPROJ_CXX_FLAGS "-O0 -fopenmp" CACHE STRING "from .ini configuration")
set(MYPROJ_ENABLE_OPTION_A ON CACHE BOOL "from .ini configuration" FORCE)
set(MYPROJ_ENABLE_OPTION_B ON CACHE BOOL "from .ini configuration")

Done
```
activedetect
UNKNOWN
activedirectory
Easiest way to interact with ActiveDirectory / AD / LDAP servers from Python, using a pure-Python approach.
active_directory
Python Active Directory Module
==============================

What is it?
-----------

Active Directory (AD) is Microsoft's answer to LDAP, the industry-standard directory service holding information about users, computers and other resources in a tree structure, arranged by departments or geographical location, and optimized for searching.

The Python Active Directory module is a lightweight wrapper on top of the pywin32 extensions, and hides some of the plumbing needed to get Python to talk to the AD API. It's pure Python and should work with any version of Python from 2.2 onwards (generators) and any recent version of pywin32.

Where do I get it?
------------------

http://timgolden.me.uk/python/active_directory.html

How do I install it?
--------------------

When all's said and done, it's just a module. But for those who like setup programs::

    python setup.py install

Prerequisites
-------------

If you're running a recent Python (2.2+) on a recent Windows (2k, 2k3, XP) and you have Mark Hammond's pywin32 extensions installed, you're probably up and running already. Otherwise...

Windows: If you're running Win9x / NT4 you'll need to get AD support from Microsoft. Microsoft URLs change quite often, so I suggest you do this: http://www.google.com/search?q=site%3Amicrosoft.com+active+directory+downloads

Python: http://www.python.org/ (just in case you didn't know)

pywin32 (was win32all): http://pywin32.sf.net

How do I use it?
----------------

There are examples at http://timgolden.me.uk/python/ad_cookbook.html, but as a quick taster, try this, to list all users' display names::

    import active_directory

    for person in active_directory.search(objectCategory='Person'):
        print person.displayName

What License is it released under?
----------------------------------

(c) Tim Golden <[email protected]> 2004-2007

Licensed under the (GPL-compatible) MIT License: http://www.opensource.org/licenses/mit-license.php
activefires-pp
[![Build status](https://github.com/adybbroe/activefires-pp/workflows/CI/badge.svg?branch=main)](https://github.com/adybbroe/activefires-pp/workflows/CI/badge.svg?branch=main) [![Coverage Status](https://coveralls.io/repos/github/adybbroe/activefires-pp/badge.svg)](https://coveralls.io/github/adybbroe/activefires-pp)

Post-processing (including regional filtering) of satellite active fires, with notifications to end-users. Supports reading and processing VIIRS Active Fires EDR. There is support for filtering out fire detections at three different levels:

- National filtering, where all detections outside borders given by a shapefile are left out (see the sketch below)
- Filtering to exclude fire detections inside a set of polygons from a shapefile (meant to define populated areas and local industries known to be heat sources that can show up as detections)
- Regional filtering, where detections are localized to regions and output messages are treated accordingly
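As a generic illustration of the first (national) filtering level, and explicitly not this package's own API, the idea can be sketched with geopandas and shapely: keep only detections whose coordinates fall inside the border polygons of a shapefile. The shapefile name and detection coordinates are hypothetical.

```python
import geopandas as gpd
from shapely.geometry import Point

# Hypothetical border shapefile; any polygon layer would do.
borders = gpd.read_file("national_borders.shp")

# (lon, lat) fire detections; values are illustrative.
detections = [(16.3, 62.1), (25.0, 35.0)]

# National filtering: drop every detection outside the border polygons.
inside = [
    (lon, lat)
    for lon, lat in detections
    if borders.contains(Point(lon, lat)).any()
]
print(inside)
```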
activegit
No description available on PyPI.
activegraf-python
# ActiveGraf Python Integration

## Python

This library requires Python 3.10+.

## Building the package

Install the build tools with:

    python3 -m pip install --upgrade build

Build the package with:

    python3 -m build

The output will be in the `dist` folder.
activejson
# Activejson

A convenient library to deal with large json data. The purpose of this package is to help deal with complex json-like data, converting it into a more manageable data structure.

## Installation

OS X & Linux:

From PyPI:

```
$ pip3 install activejson
```

From the source:

```
$ git clone https://github.com/dany2691/activejson.git
$ cd activejson
$ python3 setup.py install
```

## Usage example

You can flatten a complex dict the next way:

```python
from activejson import flatten_json

complex_json = {
    'cat': {'grass': 'feline', 'mud': 'you never know', 'horse': 'my joke'},
    'dolphin': [
        {'tiger': [{'bird': 'blue jay'}, {'fish': 'dolphin'}]},
        {'cat2': 'feline'},
        {'dog2': 'canine'}
    ],
    'dog': 'canine'
}

flatten_complex_json = flatten_json(complex_json)
print(flatten_complex_json)
```

The result could be the next:

```python
{
    'cat_grass': 'feline',
    'cat_horse': 'my joke',
    'cat_mud': 'you never know',
    'dog': 'canine',
    'dolphin_0_tiger_0_bird': 'blue jay',
    'dolphin_0_tiger_1_fish': 'dolphin',
    'dolphin_1_cat2': 'feline',
    'dolphin_2_dog2': 'canine'
}
```

On the other hand, it is possible to convert that dict into an object with dynamic attributes:

```python
from activejson import FrozenJSON

frozen_complex_json = FrozenJSON(complex_json)
print(frozen_complex_json.cat.grass)
print(frozen_complex_json.cat.mud)
print(frozen_complex_json.dolphin[2].dog2)
```

The result could be the next:

```
'feline'
'you never know'
'canine'
```

To retrieve the underlying json, it is possible to use the json property:

```python
frozen_complex_json.json
```

## Development setup

This project uses Poetry for dependency resolution. It's a kind of mix between pip and virtualenv. Follow the next instructions to set up the development environment:

```
$ pip install poetry
$ git clone https://github.com/dany2691/activejson.git
$ cd activejson
$ poetry install
```

To run the test suite, inside the activejson directory:

```
$ poetry run pytest test/ -vv
```

## Meta

- Daniel Omar Vergara Pérez – @__danvergara__ – [email protected] – github.com/danvergara
- Valery Briz – @valerybriz – [email protected] – github.com/valerybriz

## Contributing

1. Fork it (https://github.com/BentoBox-Project/activejson)
2. Create your feature branch (`git checkout -b feature/fooBar`)
3. Commit your changes (`git commit -am 'Add some fooBar'`)
4. Push to the branch (`git push origin feature/fooBar`)
5. Create a new Pull Request
active-learn
# Deep Active Learning Library

## Introduction

Supervised machine learning models are trained to map input data to desired outputs. Often, unlabeled samples are abundant, but obtaining corresponding labels is costly. In particular, acquiring these labels may require human annotators, extensive compute, or a substantial amount of time. In these scenarios, it's best to think carefully about how we allocate these scarce resources, and to prioritize labeling samples that will, once labeled and trained on, facilitate the largest improvements in model quality. The problem of identifying these high-information samples, given a constraint on how much data we are willing to have labeled, is referred to as Active Learning.

Active learning problems are ubiquitous in practical, real-world machine learning deployments. Building on an array of state-of-the-art algorithms developed in our lab, this library provides a general-purpose tool for active learning with deep neural networks.

## Why Does it Matter?

Active learning matters because it allows us to train higher-performing models at a reduced cost. While the potential applications are abundant, the underlying technology is somewhat general, allowing us to build one tool that can handle a wide array of use cases.

One big active learning success at Microsoft involves a Bing language model called "RankLM." RankLM predicts the quality of a search query-result pair, and obtaining these labels for training is costly, requiring either human annotators or compute-intensive models. By using active learning to construct an information-dense training set, the Bing team was able to obtain a significant boost in the predictive quality of RankLM.

## Build & Installation

Users can build a wheel package and use it as follows:

```
python -m pip install --upgrade build
python -m build
pip install dist/active_learn-*.whl
```

## Example Usage

Users can use any torch.nn module with the library. See a demo here.

```
pip install -r examples/requirements.txt
python examples/demo.py
```

### Using a pretrained model (TBD)

```python
from active_learn import ActiveSampler
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from datasets import load_dataset

# 1) Load a pretrained sentiment classifier
model_name = "finiteautomata/bertweet-base-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# 2) Load a new sentiment dataset unlike what the model was trained on.
# Here we're pretending labels are not available, and we want to identify the
# most useful 'n' samples to send to an expert to have labeled before
# integrating them into our training data.
samples = load_dataset("glue", "sst2")  # dataset choice is illustrative
sentences = samples['train']['sentence']

# 3) Get the 100 most valuable samples
sampler = ActiveSampler('classification', (model, tokenizer), 100)
valuable_samples = sampler.select(sentences)

# 4) Now get these new samples labeled and update your model!
```

## API overview

See detailed API description here.

## Roadmap

See a list of future work items here.

## References

- Ash, Jordan T., et al. "Deep batch active learning by diverse, uncertain gradient lower bounds." International Conference on Learning Representations. 2020.
- Ash, Jordan T., et al. "Gone fishing: Neural active learning with fisher embeddings." Advances in Neural Information Processing Systems. 2021.
- Saran, Akanksha, et al. "Streaming Active Learning with Deep Neural Networks." International Conference on Machine Learning. 2023.
activelearner
# Active Learner

A Python package which runs an active learning process to determine whether to use text classification or topic modeling. To use this package, you need a dataset laid out as folders representing the classes for text classification, with text files inside each class folder.

## Usage

### Example

The following query finds thresholds for the classes available in the dataset, where `directory` is the directory of your dataset and `file` is the name of the file for which to compute similarity:

```python
from activelearner import similarity
from activelearner import threshold

threshold = threshold(directory)
similarity = similarity(file, directory, threshold)
```
active-learner
# activeLearner_autophrase

## Running Steps

1. First, run lsh_qutophrase.py to get the LSH similarity groups.
2. Next, run lsh_analyzer.py to start the active learning process.
active-learning
## Example

Sample code is in the AL_Notebook.ipynb notebook.

## Install

    pip install active_learning

OR

    python setup.py sdist
    python setup.py install

## Environment Setup

- Make sure that conda is installed.
- Run the following command in the root directory to build the conda environment "trews": `conda env create -f environment.yml`
- Run `source activate trews` before executing the Jupyter notebook.
- "trews" has a package nb_conda which allows you to specify the conda env you want as a Jupyter kernel.
- You must have "trews" activated for the Jupyter notebook server to manage conda environments.

## Objective

- Provide justification for a custom sepsis definition generated by soliciting rich feedback from physicians.
- Create an AL implementation that incorporates rich feedback from physicians to improve the TREWS tool.
- Long-term goal: create a library of active learning tools for any CDSS we design for new clinical problems.

## Code Organization

- General functions should be written in .py files under the folder 'python_scripts'.
- Experiments should load these Python files into iPython notebooks for visualization/output neatness and reproducibility.
- Large experimental datasets should be stored locally and tracked using plaintext files or logs.
- Datasets used for testing the implementation should be kept under the repository 'dev_data'.
active-learning-img-augmentation-utils
No description available on PyPI.
activeledgerPythonSDK
No description available on PyPI.
activeledgersdk
# Python SDK for Activeledger

Python SDK for Python developers who wish to use Activeledger.

## Getting Started

For detailed documentation please visit https://activeledgersdk.readthedocs.io/en/latest/

### Prerequisites

Currently only Python 3.5+ is supported.

### Installing

Simply download the SDK with the following command:

    pip install activeLedger-sdk

## Deployment

Add additional notes about how to deploy this on a live system.

## Built With

- [cryptography](https://github.com/pyca/cryptography) - Used for key generation

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
active-list-mc
This module is a Facade to simplify usage of the Gtk.TreeView with the Gtk.ListStore model. The module creates a list of Gtk.TreeViewColumn's and initialises it with the required column specifications and cell renderers. The information as to the required columns and cell renderers is specified as a list of tuples in the calling module. The following excerpts from my book "Programming Python with Gtk and SQLite" will illustrate.

4.21.2 The example data

Let's imagine we want to display some data selected from the Chinook example database (which will be discussed in Part 3). We want it to look as below.

The data will be presented in a separate file, employees.csv, to avoid any questions of database access at this stage. (Note: csv stands for "Comma Separated Variables".)

You will notice there are more data items in the file than are shown above. This is deliberate; the columns Address, City, State, Country etc. from the example database are present in the .csv file but are ignored for simplicity and to show that we aren't forced to display all the data.

Also note the fourth column "ReportsTo" is a numeric reference to the record for the named person's supervisor. This will come in handy to illustrate the TreeStore model later.

4.21.5 The ActiveList module

My ActiveList module was written to encapsulate the setting up of the tree model and treeview, to avoid having to re-write this code for every new application and repeat very similar code for each treeview column.

The ActiveList module (ActiveList.py) is provided under the MIT license.

If you import this module, you need only write a specification of the required columns, like this:

```python
from ActiveList import TVColumn, ActiveList

# TreeModel column ID's
COL_SEQ = 0
COL_LAST = 1    # LastName
COL_FIRST = 2   # FirstName
COL_TITLE = 3   # Title
COL_BOSS = 4    # ReportsTo
COL_BORN = 5    # Birth Date
COL_HIRED = 6   # Hire Date
COL_SORT = 7    # Sort Key - not used
COL_KEY = 8     # Database Key - not used

# ...or (easier to modify correctly)
COL_SEQ, \
COL_LAST, \
COL_FIRST, \
COL_TITLE, \
COL_BOSS, \
COL_BORN, \
COL_HIRED, \
COL_SORT, \
COL_KEY = range(9)


class TV(ActiveList):
    # The ActiveList defines the columns for the treeview
    _columns = [
        # We'll use column 0 of each row to specify a background colour for
        # the row. This is not compulsory but I found it a useful convention.
        # This column is not displayed.
        TVColumn(COL_SEQ, str),
        # The following columns are obtained from the data source
        # and displayed in the treeview.
        # column 1 (LastName)
        TVColumn(COL_LAST, str, "LastName", 75, COL_SEQ, gtk.CellRendererText),
        # column 2 (FirstName)
        TVColumn(COL_FIRST, str, "FirstName", 75, COL_SEQ, gtk.CellRendererText),
        # column 3 (Title)
        TVColumn(COL_TITLE, str, "Title", 93, COL_SEQ, gtk.CellRendererText),
        # column 4 (Reports To)
        TVColumn(COL_BOSS, str, "ReportsTo", 75, COL_SEQ, gtk.CellRendererText),
        # column 5 (BirthDate)
        TVColumn(COL_BORN, str, "Born", 70, COL_SEQ, gtk.CellRendererText),
        # column 6 (HireDate)
        TVColumn(COL_HIRED, str, "Hired", 70, COL_SEQ, gtk.CellRendererText),
        # The following column is used but not displayed,
        # e.g. a database key to identify a record for UPDATE etc.
        TVColumn(COL_KEY, int)
    ]
```

The parameters to TVColumn are:

1. tree model column ID
2. column data type
3. column heading
4. column width (pixels)
5. model column to specify this column's background colour
6. cell renderer type
7. column justification (0 = left (default), 0.5 = centre, 1 = right)

ActiveList sets up the treeview which you supply with the columns and cell renderers you specify, and returns the resulting "column_type_list" which can be used to create the treeview's model.

```python
my_TV = TV(self.treeview)  # an instance of our descendant of ActiveList
                           # which defines the columns of the treeview
my_TV.model = gtk.ListStore(*my_TV.column_type_list)
self.treeview.set_model(my_TV.model)
```

If the column should be present in the model but not displayed in the view, only the first two parameters should be given. This is useful, for example, to keep a database key for every record in the model so we could update or delete the record if required.

You can specify any number of columns which can be used and/or displayed as you wish. You can use this for "future-proofing", e.g. specify columns for which the display code is not written yet.

The parameter "model column to specify this column's background colour" may need further explanation. The intention is to allow rows to have alternating background colours to help with reading the data, as in the section on "The example data". This could have been implemented within ActiveList, but I have just provided the framework for doing it yourself, to give users the choice. I understand that some users will want this feature but prefer it to be provided by their desktop theme. I might have gone that route but couldn't find a theme which worked.
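As a hypothetical sketch of that row-colour convention (the row data, colour names, and `rows` variable are illustrative, not from the book), the calling code could alternate the value stored in the hidden COL_SEQ column as it fills the model:

```python
# Hypothetical: alternate the value stored in COL_SEQ (the hidden
# background-colour column) so rows render with alternating colours.
colours = ('white', 'lightgrey')
for i, row in enumerate(rows):   # rows: records read from employees.csv
    my_TV.model.append([colours[i % 2]] + list(row))
```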
activelog
# ActiveLog

A simple logging utility for Python.

## Usage

### Description of Constructor's Arguments

| name                | Description                                                      |
|---------------------|------------------------------------------------------------------|
| shift_freq          | Frequency of log file rotation. [sec]                            |
| shift_size          | File size of log file rotation. [byte]                           |
| level               | Logging severity threshold. Default value is logger.DEBUG.       |
| init                | Whether to clear log files. [True / False]                       |
| datetime_format     | Date and time format. Default value is '%Y-%m-%d %H:%M:%S.%f'.   |
| shift_period_suffix | The log file suffix format for rotation. Default is '_%Y-%m-%d'. |

### Simple Example

```python
from activelog import Logger

# Create a Logger that prints to STDOUT
log = Logger(Logger.STDOUT)
log.debug("Created Logger")
log.info("Program finished")

# Create a Logger that prints to a log file
log = Logger('log file path')
log.warn("This is WARN message")
log.error("This is ERROR message")
```

### Log File Shift Example

```python
# Switch log file daily
log = Logger('log file path', shift_freq=60 * 60 * 24)

# Switch log file weekly
log = Logger('log file path', shift_freq=60 * 60 * 24 * 7)

# Switch log file every 100 MB
log = Logger('log file path', shift_size=100000000)
```

### Output Example

```
[2020-01-01 07:11:09.887267 #sample.py line:4] INFO -- : Program Start!!
[2020-01-01 07:11:23.535936 #sample.py line:9] ERROR -- : Program End...
```

## Installation

    $ pip install activelog

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/AjxLab/ActiveLog.
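To tie the constructor arguments documented above together, here is a small sketch that uses only the documented keywords; the file name and the chosen values are illustrative.

```python
from activelog import Logger

# Illustrative values only; every keyword comes from the argument table above.
log = Logger('app.log',
             shift_freq=60 * 60 * 24,          # rotate once a day
             shift_period_suffix='_%Y-%m-%d',  # suffix appended to rotated files
             datetime_format='%Y-%m-%d %H:%M:%S.%f',
             init=True)                        # clear the log file on startup
log.info("Logger configured with daily rotation")
```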
activeml
🔮 ActiveML
activemq-api-client
ActiveMQ API Client

An auxiliary Python client for the ActiveMQ REST API, designed to facilitate administrative and monitoring tasks rather than to replace the native messaging protocols (STOMP, MQTT) for sending and receiving messages.

Overview

ActiveMQ is a powerful open-source message broker that supports various messaging protocols like AMQP, MQTT, OpenWire, and WebSocket. While these protocols are recommended for regular message processing due to their efficiency, reliability, and feature set, certain administrative and monitoring tasks can be handled efficiently using the ActiveMQ REST API. This Python client acts as a wrapper around the ActiveMQ REST API, providing developers and system administrators with a simple and convenient way to manage and monitor their ActiveMQ broker, queues, topics, and connections.

Features

This package is designed to assist with the following administrative and monitoring tasks:

Broker Status: Check if the broker is running, its uptime, etc.
Queue and Topic Monitoring: Monitor the number of messages enqueued, dequeued, and the number of consumers connected.
Connection Management: Monitor the clients connected, their IP addresses, etc.
Subscription Management: Monitor the clients subscribed to various topics.

Important Note

This package is not intended to replace the native messaging protocols (STOMP, MQTT, etc.) supported by ActiveMQ for sending and receiving messages. The REST API, and by extension this package, may not provide the same level of performance, scalability, reliability, and real-time updates as the native protocols. Therefore, it is recommended to use this package only for administrative and monitoring purposes, and to use the native messaging protocols for regular message processing in your application.

Installation

Install the package via pip:

pip install activemq-api-client

Usage

After installing the package, you can use it to interact with the ActiveMQ REST API.

from activemq_api_client.client import ActiveMQClient

# Create an ActiveMQClient instance
client = ActiveMQClient('http://localhost:8161', 'admin', 'admin')

# Get details about all queues
queues = client.get_queues_details()
print(queues)

# Get the number of consumers connected to a specific queue
consumer_count = client.get_queue_consumer_count('exampleQueue')
print(consumer_count)

# Close the connection to the broker
client.close()

Methods

The ActiveMQClient class provides the following methods to interact with the ActiveMQ REST API:

get_queues_details() -> List[dict]
Returns a list of dictionaries containing details about all the queues. Each dictionary contains the following keys:
name: The name of the queue.
consumerCount: The number of consumers connected to the queue.
enqueueCount: The number of messages enqueued in the queue.
dequeueCount: The number of messages dequeued from the queue.

get_queue_consumer_count(queue_name: str) -> Union[int, None]
Returns the number of consumers connected to the specified queue, or None if the queue does not exist.

get_queue_enqueue_count(queue_name: str) -> Union[int, None]
Returns the number of messages enqueued in the specified queue, or None if the queue does not exist.

get_queue_dequeue_count(queue_name: str) -> Union[int, None]
Returns the number of messages dequeued from the specified queue, or None if the queue does not exist.

get_connections_details() -> List[dict]
Returns a list of dictionaries containing details about all the connections. Each dictionary contains the following keys:
clientId: The client ID of the connection.
remoteAddress: The remote address of the connection.

get_topics_details() -> List[dict]
Returns a list of dictionaries containing details about all the topics. Each dictionary contains the following keys:
name: The name of the topic.
consumerCount: The number of consumers connected to the topic.
enqueueCount: The number of messages enqueued in the topic.

get_subscribers_details() -> List[dict]
Returns a list of dictionaries containing details about all the subscribers. Each dictionary contains the following keys:
clientId: The client ID of the subscriber.
subscriptionName: The subscription name of the subscriber.
destinationName: The destination name of the subscriber.

close()
Closes the connection to the ActiveMQ broker.

All methods except close send a GET request to the ActiveMQ REST API and return the response in the specified format. The close method does not send any request; it simply closes the connection to the broker.

License

This package is released under the MIT License. See the LICENSE file for more details.
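Combining the documented methods gives a simple one-shot health report for a broker. A minimal sketch using only the keys documented above (the backlog heuristic and thresholds are illustrative, not part of the package):

from activemq_api_client.client import ActiveMQClient

client = ActiveMQClient('http://localhost:8161', 'admin', 'admin')

# Flag queues whose cumulative enqueue count has outrun the dequeue count
# while no consumers are attached.
for queue in client.get_queues_details():
    backlog = queue['enqueueCount'] - queue['dequeueCount']
    if queue['consumerCount'] == 0 and backlog > 0:
        print(f"queue {queue['name']}: backlog {backlog}, no consumers")

# Quick audit of connected clients and topic subscriptions.
for conn in client.get_connections_details():
    print(conn['clientId'], conn['remoteAddress'])
for sub in client.get_subscribers_details():
    print(sub['clientId'], '->', sub['destinationName'])

client.close()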
active-pr
active_pr

An active report of PRs on GitHub.

install

pip install active_pr

access token

You need a read-only GitHub token for access (get one here) and set it in your env.

export GITHUB_TOKEN=<your token>

usage

export | grep "GITHUB_TOKEN"
GITHUB_TOKEN=<github token>

# searches created_at='begin..end'; state is the current state.
active_pr \
  --author kyoto7250 \
  --begin 2023-04-01 \
  --end 2023-04-30 \
  --type both \
  --state closed
  # --github-token <github token>

| repo_name | type | title | created_at | closed_at |
|-----------|------|-------|------------|-----------|
| astral-sh/ruff | PR | [[`flake8-simplify`] Implement `dict-get-with-none-default` (`SIM910`)](https://github.com/astral-sh/ruff/pull/3874) | 2023-04-04 | 2023-04-04 |
| astral-sh/ruff | PR | [Supports more cases in `SIM112`](https://github.com/astral-sh/ruff/pull/3876) | 2023-04-04 | 2023-04-04 |
| dondongwon/LPMDataset | ISSUE | [Where is the `scrape_scope` column at `raw_video_links.csv`?](https://github.com/dondongwon/LPMDataset/issues/3) | 2023-04-25 | 2023-06-07 |

contribute

If you have suggestions for features or improvements to the code, please feel free to create an issue or PR.
active-pre-train-ppg
Unsupervised On-Policy Reinforcement Learning

This work combines Active Pre-Training with an on-policy algorithm, Phasic Policy Gradient.

Active Pre-Training

Used to pre-train a model-free algorithm before defining a downstream task. It calculates the reward based on an estimate of the particle-based entropy of states. This reduces the training time if you want to define various tasks, e.g. robots for a warehouse.

Phasic Policy Gradient

An improved version of Proximal Policy Optimization that uses auxiliary epochs to train shared representations between the policy and a value network.
activereader
activereader

Python library for reading Garmin TCX and GPX running activity files.

Example

activereader provides the Tcx and Gpx file reader classes. TCX and GPX files can be exported from Garmin Connect.

Use Tcx to read and access data from a TCX file:

import pandas as pd

from activereader import Tcx

reader = Tcx.from_file('tests/testdata.tcx')

# Build a DataFrame using only trackpoints (as records).
initial_time = reader.activity_start_time
records = [
    {
        'time': int((tp.time - initial_time).total_seconds()),
        'lat': tp.lat,
        'lon': tp.lon,
        'distance': tp.distance_m,
        'elevation': tp.altitude_m,
        'heart_rate': tp.hr,
        'speed': tp.speed_ms,
        'cadence': tp.cadence_rpm,
    }
    for tp in reader.trackpoints
]
df = pd.DataFrame.from_records(records)

Background

This project originated as the file-reading part of my heartandsole package. Lately, I've been interested in keeping my work in more self-contained modules with lighter dependencies, so I split it out.

The idea is to provide a simple API for accessing data from Garmin files, similar to the way python-fitparse provides access to Garmin's impenetrable .fit files. I don't aim to do everything, though; I want to just focus on activity files that represent runs (and maybe walks/hikes) for now. When I try to cover all cases, the schemas and profiles quickly grow out of control. Garmin seems to have a reputation for making their files indecipherable, and I like solving puzzles, so I will focus on translating Garmin's language into human language. This is in opposition to waiting for Garmin to document all the features of all its files.

Tangent time: when I was working on picking apart Garmin's .fit file structure with my own device's files, there were a number of undocumented, indecipherable fields. Add to that, Garmin does not seem to keep documentation online for its older .fit SDKs, so if your device uses an older one, you might just be out of luck trying to decipher it. I would rather keep my own separate source of truth than count on Garmin being forthcoming with info.

Dependencies and Installation

lxml and python-dateutil are required.

The package is available on PyPi and can be installed with pip:

$ pip install activereader

License

This project is licensed under the MIT License. See LICENSE file for details.

Project Status

The project has reached a stable point and I don't expect to be changing much for now - future versions will likely build on what's here. But sometimes I change my mind and tear everything apart, so who knows. This package will remain focused on extracting data from GPX and TCX files...of that I feel sure.

Complete

Develop capability to read running tcx and gpx files.

Current Activities

Handle pauses and laps in files (things I avoid in my own workouts because they complicate and obscure the DATA!). The body keeps the score, but the watch keeps the stats.

Future Work

Expand capability to read running activity files: pwx (is this Garmin?)

Contact

Reach out to me at one of the following places!

Website: trailzealot.com
LinkedIn: linkedin.com/in/aarondschroeder
Twitter: @trailzealot
Instagram: @trailzealot
GitHub: github.com/aaron-schroeder
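The README only demonstrates Tcx. A minimal sketch of reading a GPX export the same way, assuming Gpx mirrors Tcx's from_file constructor and trackpoint iteration (the attribute set on GPX trackpoints is an assumption; only lat, lon, and time are core GPX fields):

from activereader import Gpx

# Assuming the Gpx reader parallels the Tcx interface shown above.
reader = Gpx.from_file('tests/testdata.gpx')

for tp in reader.trackpoints:
    # lat/lon/time are core GPX trackpoint fields; other attributes may
    # differ from the TCX reader, so treat this as illustrative.
    print(tp.time, tp.lat, tp.lon)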
active-record-mc
This module implements a simple ORM for basic operations in SQLite databases.

The module is derived from a working example of the active record pattern created by Chris Mitchell to supplement a talk given at the Oregon Academy of Sciences meeting on January 26, 2011.

The example is published on GitHub as https://github.com/ChrisTM/Active-Record-Example-for-a-Gradebook and the code is understood to be freely available under the MIT license as above.

The original code has been modified so that:

- The column names in the selected table are obtained automatically by introspection of the database.
- The primary key column is no longer required to be 'pk'.
- Errors are reported via a dialog box.
active_redis
# Example:

from active_redis import ActiveRedis, ActiveRedisModel, UUIDGenerator


class Movie(ActiveRedisModel):
    stored_attrs = ['title', 'year', 'author']

    def __init__(self, uuid, title, year, author):
        self.uuid = uuid
        self.title, self.year, self.author = title, year, author


if __name__ == '__main__':
    ActiveRedis.config = {
        'connexion': {'db': 3},
        'namespace_prefix': 'mycinema',
    }

    assert Movie.delete_all()
    assert Movie.count() == 0

    topgun = Movie(UUIDGenerator.generate(), 'TopGun', 1987, 'Tony S.')
    assert topgun.save()
    titanic = Movie(UUIDGenerator.generate(), 'Titanic', 1997, 'James C.')
    assert titanic.save()

    topgun.update_attr('title', 'Top Gun')
    assert Movie.find(topgun.uuid).title == 'Top Gun'

    assert Movie.count() == 2
    assert len(Movie.find_all()) == 2

    m = Movie.find(topgun.uuid)
    m.delete()
    assert Movie.count() == 1
    assert len(Movie.find_all()) == 1
activerest
Python REST resource client, modeled on Ruby on Rails' ActiveResource.

Installation

To install and upgrade to the latest release:

pip install --upgrade activerest

To install this package from source:

python setup.py install

To test this package:

python setup.py test
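The README documents no API beyond the ActiveResource inspiration, so the following is only a sketch of the ActiveResource pattern that description implies; the class attribute and method names are assumptions based on the Rails convention, not the package's documented interface:

import activerest

class Todo(activerest.Resource):
    # ActiveResource-style configuration: the base URL of the REST service.
    # The attribute name is an assumption, not confirmed by the docs.
    _site = 'http://localhost:5000'

# In the ActiveResource pattern, class methods map to REST verbs,
# e.g. find() -> GET, save() -> POST/PUT, destroy() -> DELETE.
todos = Todo.find()  # hypothetical GET of the /todos collection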
active-sample
Active Sample

This is an implementation of an active learning sampler.
active-semi-supervised-clustering
active-semi-supervised-clustering

Active semi-supervised clustering algorithms for scikit-learn.

Algorithms

Semi-supervised clustering:

- Seeded-KMeans
- Constrained-KMeans
- COP-KMeans
- Pairwise constrained K-Means (PCK-Means)
- Metric K-Means (MK-Means)
- Metric pairwise constrained K-Means (MPCK-Means)

Active learning of pairwise clustering:

- Explore & Consolidate
- Min-max
- Normalized point-based uncertainty (NPU) method

Installation

pip install active-semi-supervised-clustering

Usage

from sklearn import datasets, metrics

from active_semi_clustering.semi_supervised.pairwise_constraints import PCKMeans
from active_semi_clustering.active.pairwise_constraints import ExampleOracle, ExploreConsolidate, MinMax

X, y = datasets.load_iris(return_X_y=True)

First, obtain some pairwise constraints from an oracle.

# TODO implement your own oracle that will, for example, query a domain expert via GUI or CLI
oracle = ExampleOracle(y, max_queries_cnt=10)

active_learner = MinMax(n_clusters=3)
active_learner.fit(X, oracle=oracle)
pairwise_constraints = active_learner.pairwise_constraints_

Then, use the constraints to do the clustering.

clusterer = PCKMeans(n_clusters=3)
clusterer.fit(X, ml=pairwise_constraints[0], cl=pairwise_constraints[1])

Evaluate the clustering using Adjusted Rand Score.

metrics.adjusted_rand_score(y, clusterer.labels_)
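Any of the listed semi-supervised algorithms can be swapped in the same way. A sketch substituting MPCK-Means for PCK-Means, assuming it lives in the same pairwise_constraints module as PCKMeans (the import path is an assumption based on the import shown above):

# Assumed import path, parallel to PCKMeans above.
from active_semi_clustering.semi_supervised.pairwise_constraints import MPCKMeans

clusterer = MPCKMeans(n_clusters=3)
# ml/cl are the must-link and cannot-link constraint lists, as above.
clusterer.fit(X, ml=pairwise_constraints[0], cl=pairwise_constraints[1])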
active-sessions
Shows the online users in the system using Django.

You just need to add this app to INSTALLED_APPS. When you run the command python manage.py runserver, the app prints the count of online users and their names to the shell.

You can get a queryset of online users using get_current_users(), as sketched below.
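A minimal sketch of consuming that queryset; the README only names the function, so the import path here is an assumption:

# Import path is an assumption; the README only names get_current_users().
from active_sessions.utils import get_current_users

for user in get_current_users():
    print(user.username)  # standard Django User queryset usage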
activesoup
A simple library for interacting with the web from Python.

Description

activesoup combines familiar python web capabilities for convenient headless "browsing" functionality:

- Modern HTTP support with requests - connection pooling, sessions, ...
- Convenient access to the web page with an interface inspired by beautifulsoup - convenient HTML navigation.
- Robust HTML parsing with html5lib - parse the web like browsers do.

Full documentation can be found at https://activesoup.dev.

Use cases

activesoup aims to provide just enough functionality for basic web automation / crawler tasks. Consider using activesoup when:

- You've already checked out requests-html
- You need to actively interact with some web-page from Python (e.g. submitting forms, downloading files)
- You don't control the site you need to interact with (if you do, just make an API).
- You don't need javascript support (you'll need selenium or phantomjs).

Usage examples

In the example below, we'll load a page with a simple form, enumerate the fields, and make a submission:

>>> import activesoup
>>> # Start a session
>>> d = activesoup.Driver()
>>> page = d.get("https://httpbin.org/forms/post")
>>> # conveniently access elements, inspired by BeautifulSoup
>>> form = page.form
>>> # get the power of raw xpath search too
>>> form.find('.//input[@name="size"]')
BoundTag<input>
>>> # any element, searching by attribute
>>> form.find('.//*', name="size")
BoundTag<input>
>>> # or just search by attribute
>>> form.find(name="size")
BoundTag<input>
>>> # inspect element attributes
>>> print([i['name'] for i in form.find_all('input')])
['custname', 'custtel', 'custemail', 'size', 'size', 'size', 'topping', 'topping', 'topping', 'topping', 'delivery']
>>> # work actively with objects on the page
>>> r = form.submit({"custname": "john", "size": "small"})
>>> # responses parsed and ready based on content type
>>> r.keys()
dict_keys(['args', 'data', 'files', 'form', 'headers', 'json', 'origin', 'url'])
>>> r['form']
{'custname': 'john', 'size': 'small', 'topping': 'mushroom'}
>>> # access the underlying requests.Session too
>>> d.session
<requests.sessions.Session object at 0x7f283dc95700>
>>> # log in with cookie support
>>> d.get('https://httpbin.org/cookies/set/foo/bar')
>>> d.session.cookies['foo']
'bar'
active-subspaces
Python utilities for working with active subspaces.

WARNING: Development is very active right now, so the interfaces are far from stable. It should settle down by mid-March, when the Active Subspaces book comes out.

Right now I'm using Enthought's Python Distribution and Canopy for development. You'll need numpy and scipy for these tools.

You also need a linear program and quadratic program solver. I'm using Gurobi; you can see the gurobi_wrapper.py that solves the problems I need. To get Gurobi's Python interface working with Enthought/Canopy, take a look at this thread: https://groups.google.com/forum/#!searchin/gurobi/canopy/gurobi/ArCkf4a40uU/R9U1XFuMJEkJ

Questions? Contact Paul Constantine at Colorado School of Mines. Google me for contact info.
activeSVC
activeSVC

ActiveSVC selects features for large matrix data with reduced computational complexity or limited data acquisition. It approaches sequential feature selection through an active learning strategy with a support vector machine classifier. At each iteration, the procedure analyzes only the samples that classify poorly with the current feature set, and it extends the feature set by identifying features within the misclassified samples that will maximally shift the classification margin. There are two strategies, min_complexity and min_acquisition. The min_complexity strategy tends to use fewer samples in each iteration, while the min_acquisition strategy tends to re-use samples from previous iterations so as to minimize the total number of samples acquired during the procedure.

Why is activeSVC better than other feature selection methods?

- Easy to use
- Good for large datasets
- Reduces memory usage
- Reduces computational complexity
- Minimizes the amount of data needed

Usage

ActiveSVC processes a dataset with a training set and a test set, and returns the features selected, training accuracy, test accuracy, training mean squared error, test mean squared error, and the number of samples acquired after each feature is selected. We highly recommend l2-normalizing each sample before running activeSVC to improve accuracy and speed up model training. A usage sketch follows the function reference below.

Requires

numpy, random, math, os, time, multiprocessing, sklearn, scipy

Import

from activeSVC import min_complexity
from activeSVC import min_acquisition
from activeSVC import min_complexity_cv
from activeSVC import min_acquisition_cv
from activeSVC import min_complexity_h5py
from activeSVC import min_acquisition_h5py

Function

- min_complexity: fixed SVM parameters for each loop.
- min_acquisition: fixed SVM parameters for each loop.
- min_complexity_cv: every SVM trained during the procedure is the best estimator found by grid-search cross validation on the parameters "C" and "tol".
- min_acquisition_cv: every SVM trained during the procedure is the best estimator found by grid-search cross validation on the parameters "C" and "tol".
- min_complexity_h5py: loads only the part of the data matrix for the selected features and samples into memory instead of the entire dataset, to save memory. The data should be an h5py file.
- min_acquisition_h5py: loads only the part of the data matrix for the selected features and samples into memory instead of the entire dataset, to save memory. The data should be an h5py file.

min_complexity

Parameters

X_train: {ndarray, sparse matrix} of shape {n_samples_X, n_features}. Input data of the training set.
y_train: ndarray of shape {n_samples_X,}. Input classification labels of the training set.
X_test: {ndarray, sparse matrix} of shape {n_samples_X, n_features}. Input data of the test set.
y_test: ndarray of shape {n_samples_X,}. Input classification labels of the test set.
num_features: integer. The total number of features to select.
num_samples: integer. The number of samples to use in each iteration (for each feature).
init_features: integer, default=1. The number of features to select in the first iteration.
init_samples: integer, default=None. The number of samples to use in the first iteration.
balance: bool, default=False. Whether to balance the classes when sampling misclassified samples at each iteration, or to sample misclassified samples at random.
penalty: {'l1', 'l2'}, default='l2'. Specifies the norm used in the penalization. The 'l2' penalty is the standard used in SVC; 'l1' leads to sparse weights.
loss: {'hinge', 'squared_hinge'}, default='squared_hinge'. Specifies the loss function for each SVC. 'hinge' is the standard SVM loss, while 'squared_hinge' is the square of the hinge loss. The combination of penalty='l1' and loss='hinge' is not supported.
dual: bool, default=True. Selects the algorithm to solve either the dual or the primal optimization problem for each SVC. Prefer dual=False when n_samples > n_features.
tol: float, default=1e-4. Tolerance for the stopping criteria of each SVC.
C: float, default=1.0. Regularization parameter for each SVC. The strength of the regularization is inversely proportional to C. Must be strictly positive.
fit_intercept: bool, default=True. Whether to calculate the intercept for each SVC. If set to False, no intercept will be used in the calculations (i.e. the data is expected to be already centered).
intercept_scaling: float, default=1. When fit_intercept is True, the instance vector x becomes [x, intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note that the synthetic feature weight is subject to l1/l2 regularization like all other features; to lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.
class_weight: dict or 'balanced', default=None. Sets the parameter C of class i to class_weight[i]*C for each SVC. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data, as n_samples / (n_classes * np.bincount(y)).
random_state: int, RandomState instance or None, default=None. Controls the pseudo random number generation for shuffling the data for the dual coordinate descent (if dual=True). When dual=False the underlying implementation of LinearSVC is not random and random_state has no effect on the results. Pass an int for reproducible output across multiple function calls.
max_iter: int, default=1000. The maximum number of iterations to be run for each SVC.

Return

feature_selected: list of integer. The sequence of features selected.
num_samples_list: list of integer. The total number of unique samples acquired after each feature is selected.
train_errors: list of float. Mean squared error on the training set after each feature is selected.
test_errors: list of float. Mean squared error on the test set after each feature is selected.
train_accuracy: list of float. Classification accuracy on the training set after each feature is selected.
test_accuracy: list of float. Classification accuracy on the test set after each feature is selected.
step_times: list of float. The run time of each iteration, in seconds.

min_acquisition

Parameters are the same as for min_complexity, except:

num_samples: integer. The number of misclassified samples randomly sampled at each iteration; these are taken in union with the samples already acquired, and the union is used for the next iteration.

Return values are the same as for min_complexity, plus:

samples_global: list of integer. The indices of the samples that were acquired.

min_complexity_cv

Parameters are the same as for min_complexity, except that tol and C take lists of candidate values for grid-search cross validation, and n_splits is added:

tol: list of float, default=[1e-4]. Tolerances for the stopping criteria of each SVC; the list of candidate values for grid-search cross validation.
C: list of float, default=[1.0]. Regularization parameters for each SVC (must be strictly positive); the list of candidate values for grid-search cross validation.
n_splits: integer, default=5. The number of folds for grid-search cross validation.

Return values are the same as for min_complexity, plus:

Paras: list of dictionary. The best-estimator parameters of every SVM trained.

min_acquisition_cv

Parameters are the same as for min_complexity, with num_samples behaving as in min_acquisition and tol, C, and n_splits behaving as in min_complexity_cv.

Return values are the same as for min_complexity, plus samples_global (as in min_acquisition) and Paras (as in min_complexity_cv).

min_complexity_h5py

Instead of X_train and X_test, the data is passed as h5py datasets together with index lists:

data_cell: {h5py._hl.dataset.Dataset} of a {scipy.sparse.csc matrix} with shape {n_features, n_samples}. The h5py data as a csc matrix.
indices_cell: {h5py._hl.dataset.Dataset}. The indices array of the csc matrix.
indptr_cell: {h5py._hl.dataset.Dataset}. The indptr array of the csc matrix.
data_gene: {h5py._hl.dataset.Dataset} of a {scipy.sparse.csr matrix} with shape {n_samples, n_features}. The h5py data as a csr matrix.
indices_gene: {h5py._hl.dataset.Dataset}. The indices array of the csr matrix.
indptr_gene: {h5py._hl.dataset.Dataset}. The indptr array of the csr matrix.
y: ndarray of shape {n_samples,}. Input classification labels of the entire dataset.
shape: {h5py._hl.dataset.Dataset}. The shape {n_features, n_samples}.
idx_train: list of integer. The indices of the samples from the entire dataset that form the training set.
idx_test: list of integer. The indices of the samples from the entire dataset that form the test set.

The remaining parameters (num_features, num_samples, init_features, init_samples, balance, and the SVC options) are the same as for min_complexity. Return values are the same as for min_complexity.

min_acquisition_h5py

Parameters are the same as for min_complexity_h5py, except that num_samples behaves as in min_acquisition. Return values are the same as for min_complexity, plus samples_global (as in min_acquisition).
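A minimal end-to-end sketch of the min_complexity entry point on a toy dataset. The parameter names follow the reference above; treating the return values as a single tuple in the documented order is an assumption about the call convention:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize
from activeSVC import min_complexity

# Toy data; l2-normalize each sample as the README recommends.
X, y = make_classification(n_samples=1000, n_features=200, n_informative=20)
X = normalize(X, norm='l2')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Select 10 features, using 50 poorly classified samples per iteration.
result = min_complexity(
    X_train=X_train, y_train=y_train,
    X_test=X_test, y_test=y_test,
    num_features=10, num_samples=50,
)
# Assumed tuple unpacking in the documented return order.
(feature_selected, num_samples_list, train_errors, test_errors,
 train_accuracy, test_accuracy, step_times) = result
print(feature_selected, test_accuracy[-1])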
activeSVM
No description available on PyPI.
activetick-http
activetick_http

Python module that connects to the ActiveTick HTTP proxy and supplies Pandas DataFrames. Requires requests for the quoteStream, and redis for caching.

Currently unstable; the methods may end up changing from camelCase to pep8 snake_case.

tests run using pytest

How to use:

Run the HTTP proxy supplied by ActiveTick and instantiate ActiveTick; the defaults are shown with a Redis cache enabled:

from activetick_http import ActiveTick
# Import the StrictRedis client to enable local persistent caching
from redis import StrictRedis

# ActiveTick initialized with Redis caching enabled (requires Redis)
at = ActiveTick(host='127.0.0.1', port=5000, cache=StrictRedis(host='127.0.0.1'))

From the ActiveTick instance we have access to all the functionality provided by the HTTP proxy with the following methods:

quoteData

quoteData(symbols, fields)

Returns instantaneous quote information (fields) on symbols; check quote_fields.py for the available options:

fields = ['LastPrice', 'BidPrice', 'AskPrice']
df = at.quoteData(('SPY', 'TLT', 'TVIX'), fields)
print(df[fields].head())

|      | LastPrice | BidPrice | AskPrice |
|------|-----------|----------|----------|
| SPY  | 216.3     | 216.46   | 216.55   |
| TLT  | 137.51    | 137.02   | 137.5    |
| TVIX | 18.15     | 18.2     | 18.25    |

quoteStream

quoteStream(symbols)

Returns a live updated quote stream iterator:

stream = at.quoteStream(('NUGT', 'DUST'))
for tick in stream:
    print(tick)

TODO: example df

barData

barData(symbol, historyType='I', intradayMinutes=60, beginTime=datetime, endTime=datetime)

Returns OHLCV data for a single symbol:

df = at.barData('INTC', historyType='I', beginTime=datetime(datetime.now().year, 9, 27))
print(df.head())

|                     | open   | high  | low   | close  | volume      |
|---------------------|--------|-------|-------|--------|-------------|
| 2016-09-28 09:00:00 | 37.52  | 37.52 | 37.25 | 37.395 | 1.79294e+06 |
| 2016-09-28 10:00:00 | 37.4   | 37.46 | 37.27 | 37.31  | 1.59818e+06 |
| 2016-09-28 11:00:00 | 37.31  | 37.32 | 37.15 | 37.28  | 1.32702e+06 |
| 2016-09-28 12:00:00 | 37.28  | 37.32 | 37.2  | 37.27  | 2.39398e+06 |
| 2016-09-28 13:00:00 | 37.275 | 37.39 | 37.22 | 37.37  | 1.23249e+06 |

tickData

tickData(symbol, trades=False, quotes=True, beginTime=datetime, endTime=datetime)

Returns historical tick level quote and trade data for a symbol:

df = at.tickData('GDX', trades=True, quotes=False)
print(df.head())

|                            | type | last   | lastz | lastx | cond1 | cond2 | cond3 | cond4 |
|----------------------------|------|--------|-------|-------|-------|-------|-------|-------|
| 2016-09-28 09:30:00.091000 | T    | 26.275 | 2073  | P     | 0     | 0     | 17    | 0     |
| 2016-09-28 09:30:00.091000 | T    | 26.275 | 2073  | P     | 16    | 0     | 0     | 0     |
| 2016-09-28 09:30:00.182000 | T    | 26.25  | 211   | T     | 0     | 12    | 0     | 0     |
| 2016-09-28 09:30:00.184000 | T    | 26.25  | 89    | T     | 37    | 12    | 14    | 0     |
| 2016-09-28 09:30:00.185000 | T    | 26.25  | 500   | T     | 0     | 12    | 14    | 0     |

optionChain

optionChain(symbol)

Returns the symbols making up the option chain for the underlying:

df = at.optionChain('SPY')
print(df.head())

0    OPTION:SPY—161014P00186000
1    OPTION:SPY—161012C00197000
2    OPTION:SPY—161014C00187000
3    OPTION:SPY—161014P00192000
4    OPTION:SPY—161012P00193000
activetigger
(py) Active Tigger

Python refactoring of ActiveTigger (Julien Boelaert & Etienne Ollion) https://gitlab.univ-lille.fr/julien.boelaert/activetigger

Work in progress ...

Current roadmap:

- To add: history of actions
- To add: multiuser support
- To add: queue management (rq)
- To add: unit tests

Technical specifications:

- REST-like client/server architecture
- Mixed data storage: sqlite + files
- Backend: Python, FastAPI, with independent processes for CPU-bound tasks (embeddings/bertmodels)
- Frontend: IPython widget; to do: Javascript (React? Vue?)
active-time-scheduling
Description

This package comprises various algorithms for solving the Active Time Problem developed to this date. The following table lists the implemented algorithms available in the schedulers subpackage and the corresponding articles where these algorithms were developed.

| Scheduler | Reference |
|-----------|-----------|
| LazyActivationSchedulerNLogN | "A model for minimizing active processor time" (Chang et al., 2012) |
| LazyActivationSchedulerT | "A model for minimizing active processor time" (Chang et al., 2012) |
| MatchingScheduler | "A model for minimizing active processor time" (Chang et al., 2012) |
| DegreeConstrainedSubgraphScheduler | "A model for minimizing active processor time" (Chang et al., 2012) |
| GreedyScheduler | "Brief announcement: A greedy 2 approximation for the active time problem" (Kumar and Khuller, 2018) |
| GreedyIntervalsScheduler | -- |
| MinFeasScheduler | "LP rounding and combinatorial algorithms for minimizing active and busy time" (Chang et al., 2017) |
| GreedyLocalSearchScheduler | "Brief announcement: A greedy 2 approximation for the active time problem" (Kumar and Khuller, 2018) |
| GreedyLowestDensityFirstScheduler | -- |
| BruteForceScheduler | Used for testing |
| LinearProgrammingScheduler | "A model for minimizing active processor time" (Chang et al., 2012) |
| LinearProgrammingRoundedScheduler | "LP rounding and combinatorial algorithms for minimizing active and busy time" (Chang et al., 2017) |
| BatchScheduler | "Optimal batch schedules for parallel machines" (Koehler and Khuller, 2013) |

Usage Examples

To create a job set, the subclasses of AbstractJobPool from the models subpackage are used. Different subclasses can be used to represent different properties of the jobs in them; for example, FixedLengthJobPool demands a fixed length from its jobs. The following example demonstrates the process of creating a job pool and adding a job to it:

from models import JobPool

job_pool = JobPool()
job_pool.add_job(release_time=5, deadline=8, duration=2)

To process the job pool, the subclasses of AbstractScheduler from the schedulers subpackage are used. To perform the processing, the job pool should be passed into the process function. The result of the function is the computed job schedule, which, if the problem instance is feasible, contains the information regarding the active time slots as well as the individual schedules:

from schedulers import FlowMethod, GreedyScheduler

scheduler = GreedyScheduler(FlowMethod.PREFLOW_PUSH)
schedule = scheduler.process(job_pool, max_concurrency=2)
active-transformers
Active Transformers
activetune
No description available on PyPI.
activewatch
No description available on PyPI.
activewindow
Gives you information about the active window on Windows / X11.
activeworkflow-agent
ActiveWorkflow Agent for Python

This library helps you to write your own ActiveWorkflow agents in Python using ActiveWorkflow's remote agent API.

Installation

To install this library run:

python -m pip install activeworkflow_agent

Python >= 3.7 is supported.

Documentation

For full documentation please see ActiveWorkflow Agent Python on ActiveWorkflow's documentation website.

License

The library is available as open source under the terms of the MIT License.
active-wrapper
The Orator ORM provides a simple yet beautiful ActiveRecord implementation.It is inspired by the database part of theLaravel framework, but largely modified to be more pythonic.The full documentation is available here:http://orator-orm.com/docsInstallationYou can install Orator in 2 different ways:The easier and more straightforward is to use pippipinstalloratorInstall from source using the official repository (https://github.com/sdispater/orator)The different dbapi packages are not part of the package dependencies, so you must install them in order to connect to corresponding databases:Postgres:psycopg2MySQL:PyMySQLormysqlclientSqlite: Thesqlite3module is bundled with Python by defaultBasic UsageConfigurationAll you need to get you started is the configuration describing your database connections and passing it to aDatabaseManagerinstance.fromoratorimportDatabaseManager,Modelconfig={'mysql':{'driver':'mysql','host':'localhost','database':'database','user':'root','password':'','prefix':''}}db=DatabaseManager(config)Model.set_connection_resolver(db)Defining a modelclassUser(Model):passNote that we did not tell the ORM which table to use for theUsermodel. The plural “snake case” name of the class name will be used as the table name unless another name is explicitly specified. In this case, the ORM will assume theUsermodel stores records in theuserstable. You can specify a custom table by defining a__table__property on your model:classUser(Model):__table__='my_users'The ORM will also assume that each table has a primary key column namedid. You can define a__primary_key__property to override this convention. Likewise, you can define a__connection__property to override the name of the database connection that should be used when using the model.Once a model is defined, you are ready to start retrieving and creating records in your table. Note that you will need to placeupdated_atandcreated_atcolumns on your table by default. If you do not wish to have these columns automatically maintained, set the__timestamps__property on your model toFalse.Retrieving all modelsusers=User.all()Retrieving a record by primary keyuser=User.find(1)print(user.name)Querying using modelsusers=User.where('votes','>',100).take(10).get()foruserinusers:print(user.name)AggregatesYou can also use the query builder aggregate functions:count=User.where('votes','>',100).count()If you feel limited by the builder’s fluent interface, you can use thewhere_rawmethod:users=User.where_raw('age > ? and votes = 100',[25]).get()Chunking ResultsIf you need to process a lot of records, you can use thechunkmethod to avoid consuming a lot of RAM:forusersinUser.chunk(100):foruserinusers:# ...Specifying the query connectionYou can specify which database connection to use when querying a model by using theonmethod:user=User.on('connection-name').find(1)If you are using read / write connections, you can force the query to use the “write” connection with the following method:user=User.on_write_connection().find(1)Mass assignmentWhen creating a new model, you pass attributes to the model constructor. These attributes are then assigned to the model via mass-assignment. Though convenient, this can be a serious security concern when passing user input into a model, since the user is then free to modifyanyandallof the model’s attributes. 
For this reason, all models protect against mass-assignment by default.To get started, set the__fillable__or__guarded__properties on your model.Defining fillable attributes on a modelThe__fillable__property specifies which attributes can be mass-assigned.classUser(Model):__fillable__=['first_name','last_name','email']Defining guarded attributes on a modelThe__guarded__is the inverse and acts as “blacklist”.classUser(Model):__guarded__=['id','password']You can also blockallattributes from mass-assignment:__guarded__=['*']Insert, update and deleteSaving a new modelTo create a new record in the database, simply create a new model instance and call thesavemethod.user=User()user.name='John'user.save()You can also use thecreatemethod to save a model in a single line, but you will need to specify either the__fillable__or__guarded__property on the model since all models are protected against mass-assignment by default.After saving or creating a new model with auto-incrementing IDs, you can retrieve the ID by accessing the object’sidattribute:inserted_id=user.idUsing the create method# Create a new user in the databaseuser=User.create(name='John')# Retrieve the user by attributes, or create it if it does not existuser=User.first_or_create(name='John')# Retrieve the user by attributes, or instantiate it if it does not existuser=User.first_or_new(name='John')Updating a retrieved modeluser=User.find(1)user.name='Foo'user.save()You can also run updates as queries against a set of models:affected_rows=User.where('votes','>',100).update(status=2)Deleting an existing modelTo delete a model, simply call thedeletemodel:user=User.find(1)user.delete()Deleting an existing model by keyUser.destroy(1)User.destroy(1,2,3)You can also run a delete query on a set of models:affected_rows=User.where('votes','>'100).delete()Updating only the model’s timestampsIf you want to only update the timestamps on a model, you can use thetouchmethod:user.touch()TimestampsBy default, the ORM will maintain thecreated_atandupdated_atcolumns on your database table automatically. Simply add thesetimestampcolumns to your table. If you do not wish for the ORM to maintain these columns, just add the__timestamps__property:classUser(Model):__timestamps__=FalseProviding a custom timestamp formatIf you wish to customize the format of your timestamps (the default is the ISO Format) that will be returned when using theto_dictor theto_jsonmethods, you can override theget_date_formatmethod:classUser(Model):defget_date_format():return'DD-MM-YY'Converting to dictionaries / JSONConverting a model to a dictionaryWhen building JSON APIs, you may often need to convert your models and relationships to dictionaries or JSON. So, Orator includes methods for doing so. To convert a model and its loaded relationship to a dictionary, you may use theto_dictmethod:user=User.with_('roles').first()returnuser.to_dict()Note that entire collections of models can also be converted to dictionaries:returnUser.all().serialize()Converting a model to JSONTo convert a model to JSON, you can use theto_jsonmethod!returnUser.find(1).to_json()Query BuilderIntroductionThe database query builder provides a fluent interface to create and run database queries. 
It can be used to perform most database operations in your application, and works on all supported database systems.SelectsRetrieving all row from a tableusers=db.table('users').get()foruserinusers:print(user['name'])Chunking results from a tableforusersindb.table('users').chunk(100):foruserinusers:# ...Retrieving a single row from a tableuser=db.table('users').where('name','John').first()print(user['name'])Retrieving a single column from a rowuser=db.table('users').where('name','John').pluck('name')Retrieving a list of column valuesroles=db.table('roles').lists('title')This method will return a list of role titles. It can return a dictionary if you pass an extra key parameter.roles=db.table('roles').lists('title','name')Specifying a select clauseusers=db.table('users').select('name','email').get()users=db.table('users').distinct().get()users=db.table('users').select('name as user_name').get()Adding a select clause to an existing queryquery=db.table('users').select('name')users=query.add_select('age').get()Using where operatorsusers=db.table('users').where('age','>',25).get()Or statementsusers=db.table('users').where('age','>',25).or_where('name','John').get()Using Where Betweenusers=db.table('users').where_between('age',[25,35]).get()Using Where Not Betweenusers=db.table('users').where_not_between('age',[25,35]).get()Using Where Inusers=db.table('users').where_in('id',[1,2,3]).get()users=db.table('users').where_not_in('id',[1,2,3]).get()Using Where Null to find records with null valuesusers=db.table('users').where_null('updated_at').get()Order by, group by and havingquery=db.table('users').order_by('name','desc')query=query.group_by('count')query=query.having('count','>',100)users=query.get()Offset and limitusers=db.table('users').skip(10).take(5).get()users=db.table('users').offset(10).limit(5).get()JoinsThe query builder can also be used to write join statements.Basic join statementdb.table('users')\.join('contacts','users.id','=','contacts.user_id')\.join('orders','users.id','=','orders.user_id')\.select('users.id','contacts.phone','orders.price')\.get()Left join statementdb.table('users').left_join('posts','users.id','=','posts.user_id').get()You can also specify more advance join clauses:clause=JoinClause('contacts').on('users.id','=','contacts.user_id').or_on(...)db.table('users').join(clause).get()If you would like to use a “where” style clause on your joins, you may use thewhereandor_wheremethods on a join. Instead of comparing two columns, these methods will compare the column against a value:clause=JoinClause('contacts').on('users.id','=','contacts.user_id').where('contacts.user_id','>',5)db.table('users').join(clause).get()Advanced whereSometimes you may need to create more advanced where clauses such as “where exists” or nested parameter groupings. 
It is pretty easy to do with the Orator query builderParameter groupingdb.table('users')\.where('name','=','John')\.or_where(db.query().where('votes','>',100).where('title','!=','admin')).get()The query above will produce the following SQL:SELECT*FROMusersWHEREname='John'OR(votes>100ANDtitle!='Admin')Exists statementdb.table('users').where_exists(db.table('orders').select(db.raw(1)).where_raw('order.user_id = users.id'))The query above will produce the following SQL:SELECT*FROMusersWHEREEXISTS(SELECT1FROMordersWHEREorders.user_id=users.id)AggregatesThe query builder also provides a variety of aggregate methods, ` such ascount,max,min,avg, andsum.users=db.table('users').count()price=db.table('orders').max('price')price=db.table('orders').min('price')price=db.table('orders').avg('price')total=db.table('users').sum('votes')Raw expressionsSometimes you may need to use a raw expression in a query. These expressions will be injected into the query as strings, so be careful not to create any SQL injection points! To create a raw expression, you may use theraw()method:db.table('users')\.select(db.raw('count(*) as user_count, status'))\.where('status','!=',1)\.group_by('status')\.get()InsertsInsert records into a tabledb.table('users').insert(email='[email protected]',votes=0)db.table('users').insert({'email':'[email protected]','votes':0})It is important to note that there is two notations available. The reason is quite simple: the dictionary notation, though a little less practical, is here to handle columns names which cannot be passed as keywords arguments.Inserting records into a table with an auto-incrementing IDIf the table has an auto-incrementing id, useinsert_get_idto insert a record and retrieve the id:id=db.table('users').insert_get_id({'email':'[email protected]','votes':0})Inserting multiple record into a tabledb.table('users').insert([{'email':'[email protected]','votes':0},{'email':'[email protected]','votes':0}])UpdatesUpdating recordsdb.table('users').where('id',1).update(votes=1)db.table('users').where('id',1).update({'votes':1})Like theinsertstatement, there is two notations available. The reason is quite simple: the dictionary notation, though a little less practical, is here to handle columns names which cannot be passed as keywords arguments.Incrementing or decrementing the value of a columndb.table('users').increment('votes')# Increment the value by 1db.table('users').increment('votes',5)# Increment the value by 5db.table('users').decrement('votes')# Decrement the value by 1db.table('users').decrement('votes',5)# Decrement the value by 5You can also specify additional columns to update:db.table('users').increment('votes',1,name='John')DeletesDeleting recordsdb.table('users').where('age','<',25).delete()Delete all recordsdb.table('users').delete()Truncatedb.table('users').truncate()UnionsThe query builder provides a quick and easy way to “union” two queries:first=db.table('users').where_null('first_name')users=db.table('users').where_null('last_name').union(first).get()Theunion_allmethod is also available.Read / Write connectionsSometimes you may wish to use one database connection for SELECT statements, and another for INSERT, UPDATE, and DELETE statements. 
Orator makes this easy, and the proper connections will always be used whether you use raw queries, the query builder or the actual ORM.

Here is an example of how read / write connections should be configured:

```python
config = {
    'mysql': {
        'read': {
            'host': '192.168.1.1'
        },
        'write': {
            'host': '192.168.1.2'
        },
        'driver': 'mysql',
        'database': 'database',
        'user': 'root',
        'password': '',
        'prefix': ''
    }
}
```

Note that two keys have been added to the configuration dictionary: `read` and `write`. Both of these keys have dictionary values containing a single key: `host`. The rest of the database options for the `read` and `write` connections will be merged from the main `mysql` dictionary. So, you only need to place items in the `read` and `write` dictionaries if you wish to override the values in the main dictionary. In this case, `192.168.1.1` will be used as the "read" connection, while `192.168.1.2` will be used as the "write" connection. The database credentials, prefix, character set, and all other options in the main `mysql` dictionary will be shared across both connections.

Database transactions

To run a set of operations within a database transaction, you can use the `transaction` method, which is a context manager:

```python
with db.transaction():
    db.table('users').update({'votes': 1})
    db.table('posts').delete()
```

Note: any exception thrown within a transaction block will cause the transaction to be rolled back automatically.

Sometimes you may need to start a transaction yourself:

```python
db.begin_transaction()
```

You can rollback a transaction with the `rollback` method:

```python
db.rollback()
```

You can also commit a transaction via the `commit` method:

```python
db.commit()
```

By default, all underlying DBAPI connections are set to autocommit mode, meaning that you don't need to explicitly commit after each operation.

Accessing connections

When using multiple connections, you can access them via the `connection()` method:

```python
users = db.connection('foo').table('users').get()
```

You can also access the raw, underlying DBAPI connection instance:

```python
db.connection().get_connection()
```

Sometimes, you may need to reconnect to a given database:

```python
db.reconnect('foo')
```

If you need to disconnect from the given database, use the `disconnect` method:

```python
db.disconnect('foo')
```
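Putting the manual transaction methods together, a typical pattern is to wrap the writes in a try/except: a minimal sketch, assuming the same `db` manager as in the examples above, that mirrors what the `transaction` context manager does for you.

```python
# Minimal sketch of a manual transaction, assuming `db` is the
# database manager configured earlier: commit on success,
# rollback (and re-raise) on error.
db.begin_transaction()
try:
    db.table('users').where('id', 1).update({'votes': 1})
    db.table('posts').where('user_id', 1).delete()
except Exception:
    db.rollback()  # undo everything since begin_transaction()
    raise
else:
    db.commit()    # make the changes permanent
```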
actividadnumerosprimos
This is a simple example of a Python package. The package contains a set of functions for performing mathematical operations on integers.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
activipy
An ActivityStreams 2.0 implementation for Python. It provides an easy API for building ActivityStreams 2.0 based applications, as well as a test suite that other ActivityStreams 2.0 libraries can be tested against.
activipy-pgsql
An activipy environment to use PostgreSQL as the data store for ActivityStream objects.

The activipy module enables the use of ActivityStreams in your applications and includes the support of different "environments" to extend functionality.

This package provides a "pgsql" environment that maps storage methods to PostgreSQL queries using psycopg2. Object data is stored using the jsonb data type to simplify the schema and provide maximum performance.

Example Code

Open a database:

```python
>>> from activipy import core, vocab
>>> from activipy.pgsql import pgsql
>>> db = pgsql.JsonPgSQL.open(
...     host="<db_server>", dbname="<db_name>",
...     user="<db_user>", password="<db_user_pass>")
>>> env = pgsql.PgSQLNormalizedEnv
```

Create a new record and save it to the database:

```python
>>> post_this = core.ASObj({
...     "@type": "Create",
...     "@id": "http://tsyesika.co.uk/act/foo-id-here/",
...     "actor": {
...         "@type": "Person",
...         "@id": "https://tsyesika.co.uk/",
...         "displayName": "Jessica Tallon"},
...     "to": ["acct:[email protected]",
...            "acct:[email protected]",
...            "acct:[email protected]"],
...     "object": {
...         "@type": "Note",
...         "@id": "https://tsyesika.co.uk/chat/sup-yo/",
...         "content": "Up for some root beer floats?"}},
...     pgsql.PgSQLNormalizedEnv)
>>> post_this.m.save(db)
{'@type': 'Create',
 '@id': 'http://tsyesika.co.uk/act/foo-id-here/',
 'actor': 'https://tsyesika.co.uk/',
 'to': ['acct:[email protected]',
        'acct:[email protected]',
        'acct:[email protected]'],
 'object': 'https://tsyesika.co.uk/chat/sup-yo/'}
```

Note how in this example the record has been normalized. In this environment the actor and object are created in separate records and made into references in the parent record. To retrieve the original denormalized form:

```python
>>> normalized_post = pgsql.pgsql_fetch(
...     "http://tsyesika.co.uk/act/foo-id-here/", db, env)
>>> normalized_post.m.denormalize(db)
<ASObj Create "http://tsyesika.co.uk/act/foo-id-here/">
>>> normalized_post.m.denormalize(db).json()
{'to': ['acct:[email protected]',
        'acct:[email protected]',
        'acct:[email protected]'],
 '@id': 'http://tsyesika.co.uk/act/foo-id-here/',
 '@type': 'Create',
 'actor': {'@id': 'https://tsyesika.co.uk/',
           '@type': 'Person',
           'displayName': 'Jessica Tallon'},
 'object': {'@id': 'https://tsyesika.co.uk/chat/sup-yo/',
            '@type': 'Note',
            'content': 'Up for some root beer floats?'}}
```
activiti
UNKNOWN
activitisdk
UNKNOWN
activity
No description available on PyPI.
activity-client
No description available on PyPI.
activity-detection-evaluation
Activity Detection Evaluation Framework

The purpose of this library is to provide an easy and standard way of evaluating activity detection comprehensively. In this context, activity detection is associated with any kind of detection involving the identification of time segments (or intervals) that contain relevant activity inside a timed sample. Given a signal that contains an activity occurring from time X to Y, this framework will evaluate how precisely predictions match the occurrence of this activity.

A task that can be evaluated using this framework is voice activity detection (VAD). The purpose of this task is to identify which parts of an audio clip contain voiced activity. By providing the time intervals that your detector predicted as having voice activity and those given by ground truth, this module will output several metrics that reflect how well the detector is performing.

Expected formats

Given the vast amount of annotation and model prediction schemes out there, it is necessary to standardize annotations and predictions to make evaluation easier.

Annotation scheme

The library expects annotations to be formatted according to the following example:

```json
{
    "sample1.wav": [
        {"category": "activity_1", "s_time": 0, "e_time": 2300},
        {"category": "activity_1", "s_time": 5200, "e_time": 7800},
        ...
    ],
    "sample2.wav": [],
    "sample3.wav": [
        {"category": "activity_2", "s_time": 152, "e_time": 3000}
    ]
    ...
}
```

For each file, `s_time` is the time at which the activity started and `e_time` the time at which it ended, both in ms. The remaining time intervals inside the sample are automatically considered as a "background" class.

A `category` label is also included to identify the kind of activity being labeled. Though activity detection most often means a binary classification of samples (has activity or not), we find that annotations carrying richer information about possible sources of error or truth for those activities can help illustrate the weaknesses and strengths of the detection system.

For example, consider the case in which we wish to detect the times when a flock of birds crosses the sky. In our annotations, we may also include the time intervals when other animal species are moving, to identify cases when the system gets tricked by these situations.

Predictions scheme

For predictions, the library follows a similar structure to the annotation scheme:

```json
{
    "sample1.wav": [[100, 4000], [6000, 10000], ...],
    "sample2.wav": [[1000, 10000]],
    "sample3.wav": []
    ...
}
```

In this case, each file sample has an array containing the time intervals in which activity was predicted. So, considering the file "sample1.wav" shown above, the predictions indicate the model predicted activity occurring between 100-4000 ms and 6000-10000 ms.
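To make the matching concrete, here is a small self-contained sketch of the idea behind interval-based evaluation. This is not this library's API; `interval_iou` is a hypothetical helper that scores each predicted interval against the ground-truth annotations using intersection-over-union:

```python
# Hypothetical illustration of interval matching; NOT this library's
# own API, just the concept behind activity-detection scoring.

def interval_iou(pred, truth):
    """Intersection-over-union of two [start, end] intervals in ms."""
    inter = max(0, min(pred[1], truth[1]) - max(pred[0], truth[0]))
    union = (pred[1] - pred[0]) + (truth[1] - truth[0]) - inter
    return inter / union if union else 0.0

annotations = [
    {"category": "activity_1", "s_time": 0, "e_time": 2300},
    {"category": "activity_1", "s_time": 5200, "e_time": 7800},
]
predictions = [[100, 4000], [6000, 10000]]

for pred in predictions:
    best = max(interval_iou(pred, [a["s_time"], a["e_time"]]) for a in annotations)
    print(pred, f"best IoU against ground truth: {best:.2f}")
```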
activity-feed
Activity Feed: an activity feed for Python, backed by Redis.
activityinfo_python
No description available on PyPI.
activityio
Exercise/activity data has become a prolific resource, but applying any kind of sophisticated analysis is made difficult by the variety of file formats. This Python library is intended to munge a number of these formats and present the data in a predictable and usable form. Moreover, the API is both closely intertwined with, and an extension of, the awesome Pandas library.

Stability

Please note this package is still very much an alpha release, so breaking changes are likely.

Installation

The package is available on PyPI:

```
$ pip install activityio
```

Example Usage

There is a `read` function at the top level of `activityio` that dispatches the appropriate reader based on file extension:

```python
>>> import activityio as aio
>>> data = aio.read('example.srm')
```

NOTE: substitute 'example.srm' with a path to your own activity file.

But you can also call sub-packages directly:

```python
>>> from activityio import srm
>>> data = srm.read('example.srm')
```

`data` in the above example is a subclass of the `pandas.DataFrame` and provides some neat additional functionality. Most notably, certain columns are "magic" in that they return specific `pandas.Series` subclasses. These subclasses make unit-switching easy, and provide other useful methods:

```python
>>> type(data)
<class 'activityio._types.activitydata.ActivityData'>

>>> data.head(5)
          temp  lap   dist  alt  cad  pwr  speed  hr
time
00:00:00  26.1    1  1.027   67    0    0  1.027  71
00:00:01  26.1    1  2.721   67    0    0  1.694  71
00:00:02  26.2    1  4.415   67    0    0  1.694  71
00:00:03  26.2    1  6.331   67    0    0  1.916  71
00:00:04  26.2    1  8.469   67    0    0  2.138  75

>>> data.normpwr()
249.54104255943844

>>> type(data.speed)
<class 'activityio._types.columns.Speed'>

>>> data.speed.base_unit
'm/s'
>>> data.speed.kph.mean()   # use a different unit
38.485063801685477

>>> data.dist.base_unit
'm'
>>> data.dist.miles[-1]
134.78580023361226

>>> data.alt.base_unit
'm'
>>> data.alt.ascent.sum()
1898.0
```

But NOTE: you lose this functionality if you go changing column names:

```python
>>> data = data.rename(columns={'alt': 'altitude'})
>>> type(data.altitude)
<class 'pandas.core.series.Series'>
```

API Notes

The main package is composed of sub-packages that contain the reading logic for the file format after which they're named (e.g. `activityio.fit` is for parsing ANT/Garmin FIT files).

The ultimate logic is defined in a `_reading` module, which provides two functions: `gen_records` and `read_and_format`. `gen_records` is a generator function for iterating over the data-points in a file (the rows of the data table, if you like). A "record" is a dictionary object. `read_and_format` uses the above generator to return an `ActivityData` object.

`read_and_format` is available at the top level of a sub-package aliased as `read`; so reading in a file looks like `srm.read('path_to_file.srm')`. `gen_records` is imported under the same name.

There are also some useful `tools` provided in a module by the same name.
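Since `gen_records` is exposed alongside `read` in each sub-package, you can stream records lazily instead of building a whole DataFrame. A minimal sketch, assuming an SRM file as in the examples above; the keys present in each record dictionary depend on your file:

```python
# Stream data points lazily with gen_records instead of loading a
# whole DataFrame; each record is a plain dictionary. 'example.srm'
# is a placeholder path.
from activityio import srm

for i, record in enumerate(srm.gen_records('example.srm')):
    print(record)   # e.g. a dict of the row's fields (time, power, speed, ...)
    if i == 4:      # just peek at the first five records
        break
```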
activity-monitor
# Activity monitor

[![Build Status](https://travis-ci.org/tBaxter/activity-monitor.svg?branch=master)](https://travis-ci.org/tBaxter/activity-monitor)

A sort of tumblelog-y activity feed thing heavily based on other people's work. I'd credit them if I could remember where I got this stuff.

--------

Models you wish to monitor should be registered in your settings.py.

Example:

```python
ACTIVITY_MONITOR_MODELS = (
    {
        'model': 'comments.comment',     # Required: the model to watch, in app_label.model format.
        'date_field': 'post_date',       # Optional: the datetime field to watch. Defaults to "created"
        'user_field': 'submitted_by',    # Optional: a related user to watch
        'verb': "commented on",          # The default verb string to be recorded.
        'check': 'approved',             # Optional: a boolean field that must be true for the activity to register
        'manager': 'SoftDeleteManager'   # Optional: if there is a custom manager, you can use it. If not, defaults to "objects"
    },
    {
        'model': 'djangoratings.vote',
        'date_field': 'date_added',
    },
    {
        'model': 'auth.user',
        'date_field': 'date_joined',
        'check': 'is_active',
    },
)
```

In the absence of 'user_field', it will assume whatever is defined in AUTH_USER_MODEL is the user. That defaults to auth.user, just in case you didn't set it either.

--------

### About the settings

`Model` is required: it lets Activity Monitor know which models to watch. All models should be registered as `app_label.model`.

`date_field` says when the activity happened -- when the new thing was created or updated. If undefined, Activity Monitor will look for a "created" field. Failing that, it will use the current time.

`user_field` tells what field the actor can be found in. If undefined, Activity Monitor will look for a 'user' field. If no user field is found at all, Activity Monitor will fall back to request.user. The result is stored as "actor" on the activity.

`verb` is the verb string to use. By default, strings will be output as "{actor} {verb} {model.__unicode__()}", or "Joe Cool commented on '10 reasons beagles are awesome.'"

`override_string` overrides the normal output altogether.

`check` allows you to say, "Make sure this boolean is true on the object before adding the activity." For example, you wouldn't want any activities registering on unpublished blog posts, so you would check against the "published" field. If this field is false for the activity, no activity is registered.

`manager` allows passing a custom manager to be used. Defaults to `objects`.

`filter_superuser` suppresses registering activities if a superuser performed them. Useful if the superuser's changes should go unnoted, particularly if you're watching for updates.

`filter_staff` suppresses registering activities if a staff member performed them. Like `filter_superuser`, this is useful if the changes should go unnoted, particularly if you're watching for updates.

### What happens when the settings are defined

Once the settings are defined, the models are passed to follow_model() in activity_monitor.managers, which will send a signal on object creation or deletion.

When an object is created or deleted, the signal is sent and an activity object is created with the object, the user and the time of the event.

This is done in activity_monitor.signals.create_or_update, which does the bulk of the work. Among other things, it:

* Uses the "check" field to determine if an object should or shouldn't be shown.
* Checks if the user is a superuser. If so, you don't want to show it in the user activity monitor.
* Sorts out the user field and determines a valid user object.
* Checks if the object has a future timestamp (such as future-published blog entries) before adding.
* Throws away activities if the related object is deleted or otherwise removed.
* Makes sure the activity does not already exist.
* Saves who did it (actor), what they did, what they did it to, and when.

To minimize queries, you can access the related user via 'actor', or just a unicode representation of their name with 'actor_name'. Similarly, the target object is available as 'content_object', but a simple unicode representation is available as "target" (see the sketch at the end of this section).

### Simple Output

Activity monitor supports several ways to output the activities:

* You can have a simple chronological list
* You can group activities by the target being acted on. In this case, output would be something like "Joe Cool and Conrad commented on Woodstock."

### Customizing output

You can also define custom template snippets for the target content object. In this case, the template should live in `/templates/activity_monitor/includes/models/applabel_modelname.html`. An example is included.

You do not have to define all your content types. If you do not, activity monitor will safely fall back on a default output.

**NOTE**: Loading these custom templates can lead to more database queries than you'd like. Custom templates should be used sparingly, and if you have a lot of them, you should at least cache the results.
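As a rough illustration of those query-saving convenience fields, rendering recent activity might look like the following. This is a sketch, not documented API: the `Activity` model import path and the `timestamp` field name are assumptions based on the description above.

```python
# Sketch only: the import path, queryset, and 'timestamp' field are
# assumptions; 'actor_name', 'verb', and 'target' come from the README.
from activity_monitor.models import Activity

for activity in Activity.objects.order_by('-timestamp')[:20]:
    # actor_name and target are plain unicode representations, so
    # printing them avoids extra queries for the related rows.
    print(f"{activity.actor_name} {activity.verb} {activity.target}")
```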
activitypub
This is a Python library to use with ActivityPub. ActivityPub is an API for an open, distributed, social network.

Install

You can install the development version of activitypub with:

```
pip install git+git://github.com/dsblank/activitypub
```

or the last packaged version with:

```
pip install activitypub
```

To use with redis:

```
pip install redis redis_collections
```

OR to use with mongodb:

```
pip install pymongo
```

OR to use with SQLAlchemy:

```
pip install sqlalchemy
```

Abstractions

This module is designed to be a generally useful ActivityPub library in Python. It targets three different levels of use:

* ActivityPub object API
* ActivityPub database API
* Webserver API

These levels can be used independently, or together. They can best be used together using a Manager:

```python
>>> from activitypub.manager import Manager
>>> from activitypub.database import ListDatabase
>>> db = ListDatabase()
>>> manager = Manager(database=db)
>>> p = manager.Person(id="alyssa")
>>> p.to_dict()
{'@context': 'https://www.w3.org/ns/activitystreams',
 'endpoints': {},
 'followers': 'https://example.com/alyssa/followers',
 'following': 'https://example.com/alyssa/following',
 'id': 'https://example.com/alyssa',
 'inbox': 'https://example.com/alyssa/inbox',
 'liked': 'https://example.com/alyssa/liked',
 'likes': 'https://example.com/alyssa/likes',
 'outbox': 'https://example.com/alyssa/outbox',
 'type': 'Person',
 'url': 'https://example.com/alyssa'}
>>> db.actors.insert_one(p.to_dict())
>>> db.actors.find_one({"id": 'https://example.com/alyssa'})
{'@context': 'https://www.w3.org/ns/activitystreams',
 'endpoints': {},
 'followers': 'https://example.com/alyssa/followers',
 'following': 'https://example.com/alyssa/following',
 'id': 'https://example.com/alyssa',
 'inbox': 'https://example.com/alyssa/inbox',
 'liked': 'https://example.com/alyssa/liked',
 'likes': 'https://example.com/alyssa/likes',
 'outbox': 'https://example.com/alyssa/outbox',
 'type': 'Person',
 'url': 'https://example.com/alyssa',
 '_id': ObjectId('5b579aee1342a3230c18fbf7')}
```

activitypub supports the following databases:

* MongoDB
* SQL dialects (any that sqlalchemy supports), including:
  * SQLite (including in-memory)
  * Firebird
  * Microsoft SQL Server
  * MySQL
  * Oracle
  * PostgreSQL
  * Sybase
  * ... and many more!
* An in-memory, JSON-based database for testing
* Redis

The activitypub database API is a subset of MongoDB's.

activitypub is targeting the following web frameworks:

* Flask
* Tornado

Others can be supported. Please ask!

The activitypub webservice API is based on Flask's.
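Because the database API mirrors a subset of MongoDB's, the same `insert_one` / `find_one` calls from the example above work unchanged across backends. A small sketch using the in-memory `ListDatabase`:

```python
# The database API is a subset of MongoDB's, so the in-memory test
# database accepts the same insert_one/find_one calls pymongo would.
from activitypub.manager import Manager
from activitypub.database import ListDatabase

db = ListDatabase()
manager = Manager(database=db)

alyssa = manager.Person(id="alyssa")
db.actors.insert_one(alyssa.to_dict())

# Query it back the way you would query a MongoDB collection:
found = db.actors.find_one({"id": "https://example.com/alyssa"})
print(found["type"])  # 'Person'
```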
activitypubdantic
# ActivityPubdantic

Validate and Interact with ActivityPub JSON

* GitHub Repository
* ActivityPubdantic Documentation
* ActivityPub Protocol
* ActivityStreams Specification

## What Is ActivityPubdantic?

ActivityPubdantic is a suite of tools for validating ActivityPub JSON and constructing consistent representations of ActivityPub notifications and content. Pydantic models enable the validation logic and can be imported for use in custom-coded classes or FastAPI routes.

## Why Does ActivityPub JSON Require Validation?

ActivityPub is a protocol for decentralized social networking. It defines client-to-server and server-to-server interactions and relies on ActivityStreams for its vocabulary. Many of the protocol's specifications are purposefully unrestrictive, giving developers the freedom to implement only the features relevant to their products or to adjust to meet their particular requirements.

However, that flexibility presents challenges for assessing data validity and simplifying developers' code. ActivityPubdantic helps developers overcome those challenges by using ActivityPub's `type` field to identify proper checks for other fields and standardize their structures. Examples are available in the sections below.

Mastodon supports ActivityPub, and Meta's Threads app plans to conform to the protocol sometime in the near future. ActivityPubdantic includes a pytest script, which uses examples from ActivityPub, ActivityStreams, and Mastodon to test its parsing and validation. As Threads and other platforms implement ActivityPub, those tests (and more broadly, this package) will be updated to stay current.

## Installation

Install ActivityPubdantic with pip:

```
pip install activitypubdantic
```

Most developer use cases will require one or both of the following import statements, which serve different purposes:

```python
# Use classes for validation and common operations
import activitypubdantic as ap

# Use models in FastAPI routes
from activitypubdantic.models import *
```

## Examples

The following examples include simple use cases and code snippets for ActivityPubdantic. For a more thorough listing of ActivityPubdantic's classes, functions, and models, check out its documentation.

### Parsing Activity, Collection, Link, and Object JSON

Activities, Collections, Links, and Objects are the core concepts around which ActivityPub and ActivityStreams are built. By reducing their complexity and standardizing their representation, ActivityPubdantic helps resolve potential pain points for developers.

ActivityPub's protocol includes an example of a Like activity. The example's `to` field is a list, while its `cc` field is a string. Both formats are valid, but they require slightly different handling in subsequent lines of code. To resolve that difference, ActivityPubdantic copies and rewrites the JSON, so those fields are always represented as lists of dictionaries.

```python
import activitypubdantic as ap

# Example JSON from ActivityPub documentation
example_json = {
    "@context": ["https://www.w3.org/ns/activitystreams", {"@language": "en"}],
    "type": "Like",
    "actor": "https://dustycloud.org/chris/",
    "name": "Chris liked 'Minimal ActivityPub update client'",
    "object": "https://rhiaro.co.uk/2016/05/minimal-activitypub",
    "to": [
        "https://rhiaro.co.uk/#amy",
        "https://dustycloud.org/followers",
        "https://rhiaro.co.uk/followers/",
    ],
    "cc": "https://e14n.com/evan",
}

# Get the appropriate class, which is determined by the type field
output_class = ap.get_class(example_json)

# Produce the parsed and validated JSON string
output_json = output_class.json()
print(output_json)  # See JSON below
```

`get_class()` reads the `example_json` and uses its type to select the applicable Pydantic model.
That model then uses validators for each field to assert they comply with the protocol, and then restructures them.

The `output_json` is longer and, at first glance, more difficult to read. But because it contains types for each item in its fields and it standardizes the structures of similar fields, like `to` and `cc`, it is more descriptive and easier to consistently manipulate.

```json
{
  "@context": ["https://www.w3.org/ns/activitystreams", {"@language": "en"}],
  "type": "Like",
  "name": "Chris liked 'Minimal ActivityPub update client'",
  "to": [
    {"@context": "https://www.w3.org/ns/activitystreams", "type": "Object", "id": "https://rhiaro.co.uk/#amy"},
    {"@context": "https://www.w3.org/ns/activitystreams", "type": "Object", "id": "https://dustycloud.org/followers"},
    {"@context": "https://www.w3.org/ns/activitystreams", "type": "Object", "id": "https://rhiaro.co.uk/followers/"}
  ],
  "cc": [
    {"@context": "https://www.w3.org/ns/activitystreams", "type": "Object", "id": "https://e14n.com/evan"}
  ],
  "actor": [
    {"@context": "https://www.w3.org/ns/activitystreams", "type": "Object", "id": "https://dustycloud.org/chris/"}
  ],
  "object": {"@context": "https://www.w3.org/ns/activitystreams", "type": "Object", "id": "https://rhiaro.co.uk/2016/05/minimal-activitypub"}
}
```

However, not every project requires that degree of granularity. For example, some servers may already have logic that ignores additional fields and only iterates through `id` URLs in the JSON.

```python
# Use the verbose keyword argument
short_output_json = output_class.json(verbose=False)
print(short_output_json)  # See JSON below
```

Setting `verbose=False` shortens the output, retaining consistency but eliminating unneeded data for more concise tasks.

```json
{
  "@context": ["https://www.w3.org/ns/activitystreams", {"@language": "en"}],
  "type": "Like",
  "name": "Chris liked 'Minimal ActivityPub update client'",
  "to": ["https://rhiaro.co.uk/#amy", "https://dustycloud.org/followers", "https://rhiaro.co.uk/followers/"],
  "cc": ["https://e14n.com/evan"],
  "actor": ["https://dustycloud.org/chris/"],
  "object": "https://rhiaro.co.uk/2016/05/minimal-activitypub"
}
```

### Validating FastAPI Request Bodies

FastAPI uses Pydantic models to validate request bodies. After importing ActivityPubdantic models directly, developers can automatically validate requests and then use the `get_class_from_model()` function to smoothly interact with ActivityPub JSON.

When the same Like activity is sent in the POST request to `/outbox`, the request body is validated by FastAPI and loaded into an ActivityPubdantic class to produce clean JSON.

```python
import activitypubdantic as ap
from activitypubdantic.models import *
from fastapi import FastAPI, Response

app = FastAPI()

# Route for an ActivityPub outbox
@app.post("/outbox", status_code=201)
async def outbox(activity: ActivityModel, response: Response):
    # Initialize the class and perform relevant data manipulations
    activity_class = ap.get_class_from_model(activity)
    activity_class.make_public()

    # Save the JSON in the outbox in the database
    print(activity_class.json())

    # Use the type to set the header
    response.headers["Location"] = "https://example.com/{0}/{1}".format(
        activity_class.type.lower(),
        1,  # ID should come from the database
    )

    # Return with header and status code
    return
```

Methods like `make_public()` perform common operations on the data. In this case, `make_public()` removes the `bto` and `bcc` attributes from the class instance, if they exist. Additionally, the `type` attribute specifies a location in the response header, per the ActivityPub documentation for client-to-server interactions.

## Contributing

ActivityPubdantic is still a work in progress.
If you find it valuable for your project but notice bugs, need changes, or require additional features or support for other ActivityPub platforms, open an issue or fork to start a PR.

The developer_requirements.txt file includes all of the packages your virtual environment needs, including pdoc3 for generating new documentation, black for formatting, and pytest for unit tests.

Keep in mind, all PRs require a successful run of the GitHub Workflow for testing, so if you significantly change ActivityPubdantic's structure, be sure to add, alter, or remove relevant tests.

Thank you for your interest!
activity.py
activity.py

Load running activities, output into a neat JSON format.

Installation

Install this library using pip:

```
pip install activity.py
```

Usage

Import `Activity` and use the `load_fit` and `load_gpx` functions to load your activities. You can access attributes on the activity at that point, or alternatively use `as_json` to dump your activity as a JSON object.

```python
from activity_py import Activity

activity = Activity.load_fit('fitfile.fit')
print(activity.duration, activity.distance, activity.pace)
```

Development

To contribute to this library, first checkout the code. Then create a new virtual environment:

```
cd activity.py
python -m venv venv
source venv/bin/activate
```

Now install the dependencies and test dependencies:

```
pip install -e '.[test]'
```

To run the tests:

```
pytest
```

Using pipenv, this looks like:

```
pipenv install .[test]
pipenv run pytest
```

Activity JSON

Documentation TBC.
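To round out the usage example, here is a hedged sketch of the GPX and JSON paths mentioned above; 'run.gpx' is a placeholder filename, and the exact shape of the JSON output is not documented here:

```python
# Sketch of the GPX/JSON path described above; 'run.gpx' is a
# placeholder and the JSON structure is still documented as TBC.
from activity_py import Activity

activity = Activity.load_gpx('run.gpx')  # load_gpx mirrors load_fit
print(activity.as_json())                # dump the activity as JSON
```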
activitysim
[![Build Status](https://travis-ci.com/ActivitySim/activitysim.svg?branch=main)](https://travis-ci.org/github/ActivitySim/activitysim)
[![Coverage Status](https://coveralls.io/repos/github/ActivitySim/activitysim/badge.svg?branch=main)](https://coveralls.io/github/ActivitySim/activitysim?branch=main)

The mission of the ActivitySim project is to create and maintain advanced, open-source, activity-based travel behavior modeling software based on best software development practices for distribution at no charge to the public.

The ActivitySim project is led by a consortium of Metropolitan Planning Organizations (MPOs) and other transportation planning agencies, which provides technical direction and resources to support project development. New member agencies are welcome to join the consortium. All member agencies help make decisions about development priorities and benefit from contributions of other agency partners.

## Documentation

https://activitysim.github.io/activitysim
activitystreampython
activitystreampython

A Python 3 module for interacting with the Activity Stream Data Service API v1.

To find out more about the Activity Stream Data Service API, browse the Swagger documentation.

Usage

Installation

```
pip install activitystreampython
```

Sample usage

```python
from activitystreampython import ActivityStreamAPI
from datetime import datetime, timezone, timedelta

tenant = "demo"  # Your Activity Stream tenant (this will be the same as the Activity Stream subdomain)
username = "[email protected]"  # The email address for your API user
password = "correct-horse-battery-staple"  # The password for your API user - NB: don't store passwords in code!

end_datetime = datetime.now(timezone.utc)
start_datetime = end_datetime - timedelta(days=7)

activity_stream = ActivityStreamAPI(tenant=tenant, username=username, password=password)

# Retrieve customers
# This method initialises a generator which can be iterated over to retrieve customer data
customers = activity_stream.ticketing_data(
    data_type="customers",
    start_datetime=start_datetime,
    end_datetime=end_datetime,
    filter_type="updatedate",
)

for customer in customers:
    print(customer)

# Retrieve tickets
# This method initialises a generator which can be iterated over to retrieve ticket data
tickets = activity_stream.ticketing_data(
    data_type="tickets",
    start_datetime=start_datetime,
    end_datetime=end_datetime,
    filter_type="updatedate",
)

for ticket in tickets:
    print(ticket)

# Retrieve marketing permissions
# This method initialises a generator which can be iterated over to retrieve marketing permission data
marketing_permissions = activity_stream.marketing_data(start_datetime=start_datetime)

for marketing_permission in marketing_permissions:
    print(marketing_permission)
```

Notes

We recommend using the `updatedate` filter_type if you are using this package to keep an external database up-to-date with Activity Stream data. That way you can be sure that you will never miss out on data even if historic records are backfilled in Activity Stream. For the initial import, using the `eventdate` filter_type may be more predictable in terms of data volumes for each time period.

This package is not officially supported, although we do use it internally and will endeavour to avoid breaking changes.

License

Copyright 2023 crowdEngage Limited

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
activitystreams
Activity Streams 2.0
activitystreams2
activitystreams2 is a Python library for producing Activity Streams 2.0 content. It doesn't have a lot of features (no extensions) but aims to give correct results and be easy to understand.

At the moment, it only supports writing activity streams.

Installing

The recommended way of manually installing activitystreams2 is via pip:

```
pip3 install activitystreams2
```

Examples

```python
import activitystreams2

martin = activitystreams2.Actor(id='http://www.test.example/martin')
activity = activitystreams2.Create(
    actor=martin,
    summary='Martin created an image',
    object='http://example.org/foo.jpg',
)

# do this to serialize it
json_string = str(activity)
```

Caveats

We completely don't support extension types at the moment.

Alternatives

The only python Activity Streams 2 library I know of is activipy. It supports extension types, but it's still pre-alpha and seems to have been forgotten.

Changes

0.5
* Rename to activitystreams2
* Add api documentation

0.4.1
* Document upcoming name change
* Drop PBR and switch to using a package

0.4
* Add rudimentary support for extension properties

0.3
* Fix bug printing collections with only 1 item

0.2.0
* Add support for ActivityPub extension

0.1.0
* Initial version
activityTasks-udacity-inventrohyder
Activity Tasks module

As part of the Udacity Machine Learning nanodegree, creating and uploading a package is one of the projects to carry out. This package contains code that was part of a Minerva Schools at KGI algorithms class.

The package allows creation of Activities and linking them with Tasks.

To see an example of usage, check: https://github.com/Inventrohyder/CS110/blob/master/Assignments/cs110_assignment_2/cs110_assignment_2.ipynb
activity-tools
activity-tools

This project contains building blocks that can be used to create ActivityPub applications.

Project status: experimental, work in progress.

Documentation

Feel free to browse the code, or read the generated documentation. If you prefer an example instead, I have included a run.py which runs a small sample instance.

Install

Install it from PyPI. Versions after 1.0.0 will follow semantic versioning with a major.minor.patch scheme, where breaking changes should only occur with a major version bump.

```
pip install activity-tools
```
activity-trace
Example Package

This is a simple example package. You can use Github-flavored Markdown to write your content.
activity-tracker
A library to perform daily-active-user (and similar) tracking
activizer
Activizer

Interface for Active Learning

About Active Learning

Active learning is the process by which a learning algorithm can query a user interactively to label data points which are close to the decision boundary formed during classification.

The primary objective of this project is to build an interface for active learning which simplifies the process of choosing algorithms, query strategies and labels. This eliminates the task of writing programs for each task. The interface helps annotators of various domains to label data in an interactive manner and also provides features for saving the final model and results.

Interface for Active Learning supports image datasets, where the user can upload data in Zip format. It supports 3 classifiers and 7 query strategies.

Classifiers

* KNN Classifier
* Random Forest Classifier
* Decision Tree Classifier

Query Strategies

* Uncertainty Sampling
* Random Sampling
* Entropy Sampling
* Query By Committee (Uncertainty Sampling)
* Query By Committee (Vote Entropy Sampling)
* Query By Committee (Max Disagreement Sampling)
* Query By Committee (Consensus Entropy Sampling)

This project is implemented with the Active Learning package modAL.

How to Run

This project requires Python 3.x installed on your machine.

Installation

The package can be installed using the command:

```
pip install activizer
```

Open a Python console and run the following commands:

```python
from activizer import app
app.run()
```

Copy the URL into your browser.

* Select the classifier algorithm, the query strategy, and give the number of samples you wish to label. Then select the training / testing dataset in Zip format.
* For each iteration an image will be shown, along with a dropdown to label that image. Below them, a graph with the current accuracy will be shown.
* After all the iterations are over, the final accuracy will be shown with a graph.
* The user can see the images along with the labels provided by the algorithm selected during training. The trained model can be downloaded in pickle format (.pickle file) and can be used for prediction by clicking "Already have a model" on the main page. The user can then upload the pickle file and use the model to classify images (a sketch of loading such a file follows below).
* The interface can be used for prediction by uploading the validation dataset in Zip format. The result will be shown in a table consisting of image name and label predicted by the model.
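Outside the interface, a downloaded model file can also be loaded directly with the standard library. A minimal sketch, assuming the saved object is a scikit-learn-style classifier; the 'model.pickle' filename and the feature preparation are placeholders:

```python
# Minimal sketch: load a model downloaded from the Activizer interface.
# 'model.pickle' is a placeholder filename; the object is assumed to be
# a scikit-learn-style classifier exposing predict().
import pickle

with open('model.pickle', 'rb') as f:
    model = pickle.load(f)

# 'features' must be prepared the same way the interface prepared them:
# predictions = model.predict(features)
```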
actiwatch
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>Everyone is permitted to copy and distribute verbatim copiesof this license document, but changing it is not allowed.PreambleThe GNU General Public License is a free, copyleft license forsoftware and other kinds of works.The licenses for most software and other practical works are designedto take away your freedom to share and change the works. By contrast,the GNU General Public License is intended to guarantee your freedom toshare and change all versions of a program--to make sure it remains freesoftware for all its users. We, the Free Software Foundation, use theGNU General Public License for most of our software; it applies also toany other work released this way by its authors. You can apply it toyour programs, too.When we speak of free software, we are referring to freedom, notprice. Our General Public Licenses are designed to make sure that youhave the freedom to distribute copies of free software (and charge forthem if you wish), that you receive source code or can get it if youwant it, that you can change the software or use pieces of it in newfree programs, and that you know you can do these things.To protect your rights, we need to prevent others from denying youthese rights or asking you to surrender the rights. Therefore, you havecertain responsibilities if you distribute copies of the software, or ifyou modify it: responsibilities to respect the freedom of others.For example, if you distribute copies of such a program, whethergratis or for a fee, you must pass on to the recipients the samefreedoms that you received. You must make sure that they, too, receiveor can get the source code. And you must show them these terms so theyknow their rights.Developers that use the GNU GPL protect your rights with two steps:(1) assert copyright on the software, and (2) offer you this Licensegiving you legal permission to copy, distribute and/or modify it.For the developers' and authors' protection, the GPL clearly explainsthat there is no warranty for this free software. For both users' andauthors' sake, the GPL requires that modified versions be marked aschanged, so that their problems will not be attributed erroneously toauthors of previous versions.Some devices are designed to deny users access to install or runmodified versions of the software inside them, although the manufacturercan do so. This is fundamentally incompatible with the aim ofprotecting users' freedom to change the software. The systematicpattern of such abuse occurs in the area of products for individuals touse, which is precisely where it is most unacceptable. Therefore, wehave designed this version of the GPL to prohibit the practice for thoseproducts. If such problems arise substantially in other domains, westand ready to extend this provision to those domains in future versionsof the GPL, as needed to protect the freedom of users.Finally, every program is threatened constantly by software patents.States should not allow patents to restrict development and use ofsoftware on general-purpose computers, but in those that do, we wish toavoid the special danger that patents applied to a free program couldmake it effectively proprietary. To prevent this, the GPL assures thatpatents cannot be used to render the program non-free.The precise terms and conditions for copying, distribution andmodification follow.TERMS AND CONDITIONS0. 
Definitions."This License" refers to version 3 of the GNU General Public License."Copyright" also means copyright-like laws that apply to other kinds ofworks, such as semiconductor masks."The Program" refers to any copyrightable work licensed under thisLicense. Each licensee is addressed as "you". "Licensees" and"recipients" may be individuals or organizations.To "modify" a work means to copy from or adapt all or part of the workin a fashion requiring copyright permission, other than the making of anexact copy. The resulting work is called a "modified version" of theearlier work or a work "based on" the earlier work.A "covered work" means either the unmodified Program or a work basedon the Program.To "propagate" a work means to do anything with it that, withoutpermission, would make you directly or secondarily liable forinfringement under applicable copyright law, except executing it on acomputer or modifying a private copy. Propagation includes copying,distribution (with or without modification), making available to thepublic, and in some countries other activities as well.To "convey" a work means any kind of propagation that enables otherparties to make or receive copies. Mere interaction with a user througha computer network, with no transfer of a copy, is not conveying.An interactive user interface displays "Appropriate Legal Notices"to the extent that it includes a convenient and prominently visiblefeature that (1) displays an appropriate copyright notice, and (2)tells the user that there is no warranty for the work (except to theextent that warranties are provided), that licensees may convey thework under this License, and how to view a copy of this License. Ifthe interface presents a list of user commands or options, such as amenu, a prominent item in the list meets this criterion.1. Source Code.The "source code" for a work means the preferred form of the workfor making modifications to it. "Object code" means any non-sourceform of a work.A "Standard Interface" means an interface that either is an officialstandard defined by a recognized standards body, or, in the case ofinterfaces specified for a particular programming language, one thatis widely used among developers working in that language.The "System Libraries" of an executable work include anything, otherthan the work as a whole, that (a) is included in the normal form ofpackaging a Major Component, but which is not part of that MajorComponent, and (b) serves only to enable use of the work with thatMajor Component, or to implement a Standard Interface for which animplementation is available to the public in source code form. A"Major Component", in this context, means a major essential component(kernel, window system, and so on) of the specific operating system(if any) on which the executable work runs, or a compiler used toproduce the work, or an object code interpreter used to run it.The "Corresponding Source" for a work in object code form means allthe source code needed to generate, install, and (for an executablework) run the object code and to modify the work, including scripts tocontrol those activities. However, it does not include the work'sSystem Libraries, or general-purpose tools or generally available freeprograms which are used unmodified in performing those activities butwhich are not part of the work. 
For example, Corresponding Sourceincludes interface definition files associated with source files forthe work, and the source code for shared libraries and dynamicallylinked subprograms that the work is specifically designed to require,such as by intimate data communication or control flow between thosesubprograms and other parts of the work.The Corresponding Source need not include anything that userscan regenerate automatically from other parts of the CorrespondingSource.The Corresponding Source for a work in source code form is thatsame work.2. Basic Permissions.All rights granted under this License are granted for the term ofcopyright on the Program, and are irrevocable provided the statedconditions are met. This License explicitly affirms your unlimitedpermission to run the unmodified Program. The output from running acovered work is covered by this License only if the output, given itscontent, constitutes a covered work. This License acknowledges yourrights of fair use or other equivalent, as provided by copyright law.You may make, run and propagate covered works that you do notconvey, without conditions so long as your license otherwise remainsin force. You may convey covered works to others for the sole purposeof having them make modifications exclusively for you, or provide youwith facilities for running those works, provided that you comply withthe terms of this License in conveying all material for which you donot control copyright. Those thus making or running the covered worksfor you must do so exclusively on your behalf, under your directionand control, on terms that prohibit them from making any copies ofyour copyrighted material outside their relationship with you.Conveying under any other circumstances is permitted solely underthe conditions stated below. Sublicensing is not allowed; section 10makes it unnecessary.3. Protecting Users' Legal Rights From Anti-Circumvention Law.No covered work shall be deemed part of an effective technologicalmeasure under any applicable law fulfilling obligations under article11 of the WIPO copyright treaty adopted on 20 December 1996, orsimilar laws prohibiting or restricting circumvention of suchmeasures.When you convey a covered work, you waive any legal power to forbidcircumvention of technological measures to the extent such circumventionis effected by exercising rights under this License with respect tothe covered work, and you disclaim any intention to limit operation ormodification of the work as a means of enforcing, against the work'susers, your or third parties' legal rights to forbid circumvention oftechnological measures.4. Conveying Verbatim Copies.You may convey verbatim copies of the Program's source code as youreceive it, in any medium, provided that you conspicuously andappropriately publish on each copy an appropriate copyright notice;keep intact all notices stating that this License and anynon-permissive terms added in accord with section 7 apply to the code;keep intact all notices of the absence of any warranty; and give allrecipients a copy of this License along with the Program.You may charge any price or no price for each copy that you convey,and you may offer support or warranty protection for a fee.5. 
Conveying Modified Source Versions.You may convey a work based on the Program, or the modifications toproduce it from the Program, in the form of source code under theterms of section 4, provided that you also meet all of these conditions:a) The work must carry prominent notices stating that you modifiedit, and giving a relevant date.b) The work must carry prominent notices stating that it isreleased under this License and any conditions added under section7. This requirement modifies the requirement in section 4 to"keep intact all notices".c) You must license the entire work, as a whole, under thisLicense to anyone who comes into possession of a copy. ThisLicense will therefore apply, along with any applicable section 7additional terms, to the whole of the work, and all its parts,regardless of how they are packaged. This License gives nopermission to license the work in any other way, but it does notinvalidate such permission if you have separately received it.d) If the work has interactive user interfaces, each must displayAppropriate Legal Notices; however, if the Program has interactiveinterfaces that do not display Appropriate Legal Notices, yourwork need not make them do so.A compilation of a covered work with other separate and independentworks, which are not by their nature extensions of the covered work,and which are not combined with it such as to form a larger program,in or on a volume of a storage or distribution medium, is called an"aggregate" if the compilation and its resulting copyright are notused to limit the access or legal rights of the compilation's usersbeyond what the individual works permit. Inclusion of a covered workin an aggregate does not cause this License to apply to the otherparts of the aggregate.6. Conveying Non-Source Forms.You may convey a covered work in object code form under the termsof sections 4 and 5, provided that you also convey themachine-readable Corresponding Source under the terms of this License,in one of these ways:a) Convey the object code in, or embodied in, a physical product(including a physical distribution medium), accompanied by theCorresponding Source fixed on a durable physical mediumcustomarily used for software interchange.b) Convey the object code in, or embodied in, a physical product(including a physical distribution medium), accompanied by awritten offer, valid for at least three years and valid for aslong as you offer spare parts or customer support for that productmodel, to give anyone who possesses the object code either (1) acopy of the Corresponding Source for all the software in theproduct that is covered by this License, on a durable physicalmedium customarily used for software interchange, for a price nomore than your reasonable cost of physically performing thisconveying of source, or (2) access to copy theCorresponding Source from a network server at no charge.c) Convey individual copies of the object code with a copy of thewritten offer to provide the Corresponding Source. Thisalternative is allowed only occasionally and noncommercially, andonly if you received the object code with such an offer, in accordwith subsection 6b.d) Convey the object code by offering access from a designatedplace (gratis or for a charge), and offer equivalent access to theCorresponding Source in the same way through the same place at nofurther charge. You need not require recipients to copy theCorresponding Source along with the object code. 
If the place tocopy the object code is a network server, the Corresponding Sourcemay be on a different server (operated by you or a third party)that supports equivalent copying facilities, provided you maintainclear directions next to the object code saying where to find theCorresponding Source. Regardless of what server hosts theCorresponding Source, you remain obligated to ensure that it isavailable for as long as needed to satisfy these requirements.e) Convey the object code using peer-to-peer transmission, providedyou inform other peers where the object code and CorrespondingSource of the work are being offered to the general public at nocharge under subsection 6d.A separable portion of the object code, whose source code is excludedfrom the Corresponding Source as a System Library, need not beincluded in conveying the object code work.A "User Product" is either (1) a "consumer product", which means anytangible personal property which is normally used for personal, family,or household purposes, or (2) anything designed or sold for incorporationinto a dwelling. In determining whether a product is a consumer product,doubtful cases shall be resolved in favor of coverage. For a particularproduct received by a particular user, "normally used" refers to atypical or common use of that class of product, regardless of the statusof the particular user or of the way in which the particular useractually uses, or expects or is expected to use, the product. A productis a consumer product regardless of whether the product has substantialcommercial, industrial or non-consumer uses, unless such uses representthe only significant mode of use of the product."Installation Information" for a User Product means any methods,procedures, authorization keys, or other information required to installand execute modified versions of a covered work in that User Product froma modified version of its Corresponding Source. The information mustsuffice to ensure that the continued functioning of the modified objectcode is in no case prevented or interfered with solely becausemodification has been made.If you convey an object code work under this section in, or with, orspecifically for use in, a User Product, and the conveying occurs aspart of a transaction in which the right of possession and use of theUser Product is transferred to the recipient in perpetuity or for afixed term (regardless of how the transaction is characterized), theCorresponding Source conveyed under this section must be accompaniedby the Installation Information. But this requirement does not applyif neither you nor any third party retains the ability to installmodified object code on the User Product (for example, the work hasbeen installed in ROM).The requirement to provide Installation Information does not include arequirement to continue to provide support service, warranty, or updatesfor a work that has been modified or installed by the recipient, or forthe User Product in which it has been modified or installed. Access to anetwork may be denied when the modification itself materially andadversely affects the operation of the network or violates the rules andprotocols for communication across the network.Corresponding Source conveyed, and Installation Information provided,in accord with this section must be in a format that is publiclydocumented (and with an implementation available to the public insource code form), and must require no special password or key forunpacking, reading or copying.7. 
[Remainder of the standard GNU General Public License text omitted.]
Of course, your program's commandsmight be different; for a GUI interface, you would use an "about box".You should also get your employer (if you work as a programmer) or school,if any, to sign a "copyright disclaimer" for the program, if necessary.For more information on this, and how to apply and follow the GNU GPL, see<https://www.gnu.org/licenses/>.The GNU General Public License does not permit incorporating your programinto proprietary programs. If your program is a subroutine library, youmay consider it more useful to permit linking proprietary applications withthe library. If this is what you want to do, use the GNU Lesser GeneralPublic License instead of this License. But first, please read<https://www.gnu.org/licenses/why-not-lgpl.html>.Description: # ActiwatchActiwatch is a Python module built for interacting with files from [Philips Actiwatch actigraphy devices](https://www.usa.philips.com/healthcare/product/HC1044809/actiwatch-2-activity-monitor).Philips Actiwatches are wrist-worn accelerometers/luxometers that allow clinicians and researchers to track activity, light-exposure, and sleep-patterns of their patients.## Table of Contents- [Getting Started](#getting-started)- [Usage](#usage)- [Development](#development)- [Credits](#credits)- [License](#license)## Getting Started```$ pip install actiwatch```### PrerequisitesCurrently requires Python 3.6 with `pipenv` installed.## Usage```python>>> import actiwatch>>> watch = actiwatch.Actiwatch(path="/path/to/file/.../example.csv",start_time=16,sleep_threshold=40,manually_scored=False)>>> watch<Actiwatch [A12345]>>>> watch.header{'watch_ID': 'A12345', 'patient_sex': 'Male', ..., 'threshold_light': 1000.0}>>> watch.patient_id'A12345'>>> watch.sleep_metricsSplit_Day Interval Sleep Wake TST_Min WASO_Min SE watch_ID0 1 Rest 132 59 264.0 118.0 69.109948 A123451 2 Rest 135 23 270.0 46.0 85.443038 A123452 3 Rest 143 48 286.0 96.0 74.869110 A12345... ... ... ... ... ... ... ... ...>>> watch.sleep_latencySplit_Day Sleep_Latency_Min Wake_Latency_Min watch_ID0 1 42.0 38.0 A123451 2 24.0 80.0 A123452 3 44.0 38.0 A12345... ... ... ... ...>>> watch.bedtimeSplit_Day Rest Waking_Up Time_Bed Time_Wake watch_ID0 0 NaT NaT NaN NaN A123451 1 2018-02-16 01:36:00 2018-02-16 07:58:00 1.600000 7.966667 A123452 2 2018-02-17 01:48:00 2018-02-17 07:04:00 1.800000 7.066667 A12345... ... ... ... ... ... ...>>> watch.relative_amplitudeSplit_Day RA watch_ID0 0 NaN A123451 1 0.896660 A123452 2 0.852598 A12345... ... ... ...>>> watch.total_valuesSplit_Day Activity Light watch_ID0 0 97430 1591.58 A123451 1 186100 2077.75 A123452 2 112832 1364.61 A12345... ... ... ... ...```## Development### ContributingPlease checkout a development branch for whatever features you want to work on.### Running Tests#### Functional Tests`make dev-tests`#### Style Tests`make dev-format`### VersioningOur group uses [Semantic Versioning](http://semver.org/) for versioning.## Credits### Authors- **Ryan Opel**## LicenseThis project is licensed under the GNU General Public License v2.0 - see [LICENSE](LICENSE) for details.Platform: UNKNOWN
actk
# actk

Automated Cell Toolkit

A pipeline to process field-of-view (FOV) microscopy images and generate data and render-ready products for the cells in each field. Of note, the data produced by this pipeline is used for the Cell Feature Explorer.

## Features

All steps and functionality in this package can be run as single steps or all together by using the command line.

In general, all commands for this package will follow the format: `actk {step} {command}`

- `step` is the name of the step, such as "StandardizeFOVArray" or "SingleCellFeatures"
- `command` is what you want that step to do, such as "run" or "push"

Each step will check that the dataset provided contains the required fields prior to processing. For details and definitions on each field, see our dataset fields documentation. An example dataset can be seen here.

## Pipeline

To run the entire pipeline from start to finish you can simply run:

```
actk all run --dataset {path to dataset}
```

Step-specific parameters can additionally be passed by simply appending them. For example, the step `SingleCellFeatures` has a parameter for `cell_ceiling_adjustment`, and this can be set on both the individual step run level and also for the entire pipeline with:

```
actk all run --dataset {path to dataset} --cell_ceiling_adjustment {integer}
```

See the steps module in our documentation for a full list of parameters for each step.

### Pipeline Config

A configuration file can be provided to the underlying `datastep` library that manages the data storage and upload of the steps in this workflow. The config file should simply be called `workflow_config.json` and be available from whichever directory you run `actk` from. If this config is not found in the current working directory, defaults are selected by the `datastep` package.

Here is an example of our production config:

```json
{
    "quilt_storage_bucket": "s3://allencell",
    "project_local_staging_dir": "/allen/aics/modeling/jacksonb/results/actk"
}
```

You can even additionally attach step-specific configuration in this file by using the name of the step like so:

```json
{
    "quilt_storage_bucket": "s3://example_config_7",
    "project_local_staging_dir": "example/config/7",
    "example": {
        "step_local_staging_dir": "example/step/local/staging/"
    }
}
```

### AICS Distributed Computing

For members of the AICS team, to run in distributed mode across the SLURM cluster add the `--distributed` flag to the pipeline call. To set distributed cluster and worker parameters you can additionally add the flags (a combined invocation is sketched at the end of this entry):

- `--n_workers {int}` (i.e. `--n_workers 100`)
- `--worker_cpu {int}` (i.e. `--worker_cpu 2`)
- `--worker_mem {str}` (i.e. `--worker_mem 100GB`)

### Individual Steps

- `actk standardizefovarray run --dataset {path to dataset}`: Generate standardized, ordered, and normalized FOV images as OME-Tiffs.
- `actk singlecellfeatures run --dataset {path to dataset}`: Generate a features JSON file for each cell in the dataset.
- `actk singlecellimages run --dataset {path to dataset}`: Generate bounded 3D images and 2D projections for each cell in the dataset.
- `actk diagnosticsheets run --dataset {path to dataset}`: Generate diagnostic sheets for single cell images. Useful for quality control.

## Installation

**Install Requires:** The python package `numpy` must be installed prior to the installation of this package: `pip install numpy`

**Stable Release:** `pip install actk`

**Development Head:** `pip install git+https://github.com/AllenCellModeling/actk.git`

## Documentation

For full package documentation please visit allencellmodeling.github.io/actk.

## Published Data

For a large-scale example of what this library is capable of, please see the data produced by this pipeline after running our largest cell dataset through it. The data from the Allen Institute for Cell Science created from this pipeline can be found here. This package contains the source microscopy images, segmentation files, pre-processed single cell images and features, and diagnostic sheets.

Our source images are of endogenously-tagged hiPSC, grown for 4 days on Matrigel-coated 96-well, glass bottom imaging plates. Each field of view (FOV) includes 4 channels (BF, EGFP, DNA, Cell membrane) collected either interwoven with one camera (workflow Pipeline 4.0 - 4.2) or simultaneously with two cameras (Workflow Pipeline 4.4). You can use the file metadata of each image to target the specific channel you are interested in. FOVs were either selected randomly (mode A), enriched for mitotic events (mode B), or sampled from 3 different areas of a colony (edge, ridge, center) using a photo-protective cocktail (mode C). The images cataloged in this dataset come in several flavors:

Field of view (FOV) images with channels*:

- Brightfield
- EGFP
- DNA
- Cell Membrane

Segmentation files with channels:

- Nucleus Segmentation
- Nucleus Contour
- Membrane Segmentation
- Membrane Contour

\* Some FOV images contain seven channels rather than four. The extra three channels are "dummy" channels added during acquisition that can be ignored.

The full details of the Allen Institute cell workflow are available on our website here. The full details of the Allen Institute microscopy workflow are available on our website here.

The following is provided for each cell:

- Cell Id
- Cell Index (from within the FOV's segmentation)
- Metadata (Cell line, Labeled protein name, segmented region index, gene, etc.)
- 3D cell and nuclear segmentation, and DNA, membrane, and structure channels
- 2D max projections for dimension pairs (XY, ZX, and ZY) of the above 3D images
- A whole bunch of features for each cell

For the 3D single cell images the channel ordering is:

1. Segmented DNA
2. Segmented Membrane
3. DNA (Hoechst)
4. Membrane (CellMask)
5. Labeled Structure (GFP)
6. Transmitted Light

To interact with this dataset please see the Quilt Documentation.

## Development

See CONTRIBUTING.md for information related to developing the code. For more details on how this pipeline is constructed please see cookiecutter-stepworkflow and datastep. To add new steps to this pipeline, run `make_new_step` and follow the instructions in CONTRIBUTING.md.

### Developer Installation

The following two commands will install the package with dev dependencies in editable mode and download all resources required for testing.

```
pip install -e .[dev]
python scripts/download_test_data.py
```

### AICS Developer Instructions

If you want to run this pipeline with the Pipeline Integrated Cell dataset (pipeline 4.*) run the following commands:

```
pip install -e .[all]
python scripts/download_aics_dataset.py
```

Options for this script are available and can be viewed with: `python scripts/download_aics_dataset.py --help`

## Acknowledgments

A previous iteration of this pipeline was created and managed by Gregory Johnson for work with PyTorch Integrated Cell. This version of this pipeline is more generalized and, while still used for the Integrated Cell model, can be used to pre-process a variety of microscopy image datasets. The previous version of this pipeline produced the pipeline_integrated_single_cell dataset.

Free software: Allen Institute Software License
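Putting the AICS distributed flags from above together into one invocation (the dataset path is a placeholder):

```
actk all run --dataset /path/to/dataset.csv --distributed --n_workers 100 --worker_cpu 2 --worker_mem 100GB
```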
actl
## Introduction

actl is a command line application engine for Python. It provides a utility and library for the development of modular command line applications.

## Features

- 2 lines of code for the `main` function
- Uses Click for modular command line definitions
- Automatically loaded command modules (from `commands/*.py`)

## Installation

actl is available from the official Python Package Index (PyPI); you can install it from the terminal:

```
pip install actl
```

## Hello World

main.py

```python
import actl

actl.main(__name__, __file__)
```

commands/hello.py

```python
import click


@click.command()
def hello():
    print("Hello")
```

Test the app:

```
python main.py
```
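Since the auto-loaded modules are plain Click commands, richer commands drop into `commands/` the same way. A minimal sketch (the `greet` command and its option are hypothetical, not part of actl's own examples):

```python
# commands/greet.py (hypothetical example command)
import click


@click.command()
@click.argument("name")
@click.option("--shout", is_flag=True, help="Uppercase the greeting.")
def greet(name, shout):
    """Greet NAME on the command line."""
    message = f"Hello {name}"
    print(message.upper() if shout else message)
```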
actlog
# ActLog

ActLog is an activity log that shows you what you did when. It logs when you were active on your desktop and takes periodic screenshots so you can later see what you did. This helps with mental health (I did actually do something!) and with e.g. filling out time sheets.

It runs under Xorg. It does not (yet?) run under Windows, Mac or under Wayland.

## Features

- Track active time using an "active detector". Use the defaults or roll your own.
- Periodically store screenshots so you can see what you did when. They can be pruned/expired so we don't use massive disk space.
- A web interface and CLI for seeing your activity.
- Open Source MIT license.

## User documentation

### Quick version: Installation and running from PyPI

First you'll need to install some prerequisites. We need these Debian/Ubuntu packages:

```
sudo apt install scrot imagemagick pngquant xprintidle
```

Either `scrot` or `imagemagick` is needed to create screenshots. `imagemagick` and `pngquant` are optional and reduce screenshot size. `xprintidle` is only required if you're not running under a new GNOME.

For other distros, you may need to modify the package names. (Weirdly, there is no `xprintidle` package for Fedora, but it can be compiled manually.)

```
$ python3 -m venv /path/to/my/virtual/environment
$ /path/to/my/virtual/environment/bin/pip install actlog

# This starts the monitor (that stores logs and creates screenshots)
# and the web server.
$ /path/to/my/virtual/environment/bin/actlog daemon
```

Now wait 15-20 minutes while you're using your computer and then open your browser at http://localhost:1248 to see some activity.

Instead of using the web interface, you can also use the CLI:

```
$ /path/to/my/virtual/environment/bin/actlog log --help

# This will show you a log of your activity from last Thursday
$ /path/to/my/virtual/environment/bin/actlog log --from "last Thursday"

$ /path/to/my/virtual/environment/bin/actlog view --help

# This will show screenshots from yesterday and today
# in `eog`, GNOME's image viewer
$ /path/to/my/virtual/environment/bin/actlog view --viewer eog
```

### Starting with systemd

1. Stop any running `actlog` instance.
2. Copy `actlog.service` to `~/.config/systemd/user/actlog.service` and modify the `ExecStart` path to be `/path/to/my/virtual/environment/bin/actlog daemon`.
3. `systemctl --user daemon-reload`
4. `systemctl --user enable actlog`
5. `systemctl --user start actlog`

And to check it is running: `systemctl --user status actlog`

### Config file

Actlog uses a config file, `~/.actlog/config.yaml`, or you can specify one with `actlog --config-file FILE`. It lets you set all the same options as you see with `--help`. E.g.:

```yaml
global:
  storage: /my/custom/actlog-storage
view:
  viewer: eog
```

will set the global `--storage` option and the `--viewer` option for the `view` subcommand. See `actlog --help` and help for the subcommands, e.g. `actlog view --help`.

### Future enhancements

- Top priority: password protection of the web UI. Otherwise screenshots are available to all on http://localhost:1250.
  - scrypt password generation, checking, editing
  - sessions and gunicorn
  - https://pypi.org/project/python-pam/
- Reducing the number of external/non-python dependencies, to make it easier to install.
  - dbus-python on Fedora: `sudo dnf install @development-tools dbus-devel cmake glib2-devel python3-devel`
- Visualize everything on a timeline (vis.js - a dynamic, browser based visualization library).
- Also store the title of the active window and the title/URL of the current browser tab, e.g. re-using the ActivityWatch browser plugin?
- Use screenshot metadata for more features?
- Support more desktop environments than Debian/Ubuntu GNOME.
  - Wayland support: tricky since there is no API for taking screenshots without the desktop blinking, but it appears to be possible in principle at least.
- Somehow pin python dependencies (like would be done with `package-lock.json` with `npm`). As a start, the dependencies that we know work are stored in `requirements-that-work.txt`, created with `pip-compile` in `Makefile`. But if I manually downgrade a package, `pip-compile` still shows the newest package, not the one actually installed, so I don't really trust it.

### Related projects

Similar ideas:

- ActivityWatch - Open-source time tracker (see its "Getting started")
- GitHub - selfspy/selfspy: Log everything you do on the computer, for statistics, future reference and all-around fun!
- GitHub - karpathy/ulogme: Automatically collect and visualize usage statistics in Ubuntu/OSX environments. Relies on the user to input activity.
- Kimai - Open Source time-tracker

Closed source:

- RescueTime: Fully Automated Time Tracking Software
- Time Tracker Management Tracking Software

## Developer documentation

### Project organization and how to run with a git checkout

The project consists of a python "backend" and a JavaScript/svelte/nodejs frontend with the web user interface (under `./frontend/`). The python backend contains the monitor and the web server. The monitor does the actual logging and creating of screenshots, and the webserver serves the svelte frontend and provides APIs for it to get log and screenshot information.

The python code by default runs under the assumption that there is a `venv` environment under `./venv`. `make` will build such a `venv` and build both the python backend and the frontend so they're ready to use. `./dev` will set up some environment variables that allow you to easily run from a non-installed directory. So:

```
$ make
$ ./dev actlog daemon
```

And then open your browser at http://localhost:1248. Allow for 15-20 minutes to see some data. Again, you can:

```
$ ./dev actlog log --from "last Thursday"
$ ./dev actlog view --viewer eog
```

### Running developer versions of monitor and webserver(s)

By default `actlog daemon` runs the monitor and a production webserver (under `gunicorn`). Using this setup, if you change any code in either the frontend or backend, you need to stop the daemon, perhaps rebuild the frontend, and then restart the daemon. Thankfully there are better ways.

Start the monitor separately from everything else with:

```
$ ./dev actlog daemon --monitor-start-only
```

Now start the python webserver in a development version with hot reloading so code changes are visible immediately:

```
$ ./dev actlog daemon --dev-web-server-start-only
```

This still serves a built version of the frontend. If you want to modify the frontend as well, run the development frontend webserver with frontend hot reloading:

```
$ cd frontend
$ npm run dev
```

Now use the web user interface at http://localhost:5173/ (notice the different port number). This only works with the dev web server (`--dev-web-server-start-only`), since it also disables CORS, a requirement for the `npm run dev` web server to access the backend API.

(I use `tmux` for keeping these three running so I don't need three terminals, but that is up to you.)

### Data: SQL and screenshot files

```
$ sqlite3 ~/.actlog/storage/actlog.db '.tables'
activity_log
$ sqlite3 ~/.actlog/storage/actlog.db '.schema activity_log'
CREATE TABLE activity_log(time REAL, activity TEXT, details TEXT);
$ cat select.sql
SELECT id, DATETIME(time, 'unixepoch', 'localtime') as t, time, activity, details
FROM activity_log
LIMIT 2
$ sqlite3 ~/.actlog/storage/actlog.db < select.sql
2023-10-01 07:47:51|1696160871.11847|startup|{"pid": 279556}
2023-10-01 07:47:51|1696160871.12261|active|
$ ls -1 ~/.actlog/storage/screenshots | head -n 4
2023-10-05_10:00:19+0200.png
2023-10-05_10:15:22+0200.png
2023-10-05_10:30:24+0200.png
2023-10-05_10:45:28+0200.png
```

### Extension points

#### screenshot_metadata.py

Here is a sample `~/.actlog/screenshot_metadata.py` that checks to see if there is a window running with a secret title, and if so, creates screenshot metadata `{"secret": true}`.

```python
import subprocess

def metadata():
    result = subprocess.run("wmctrl -l | grep -q 'Secret Window Title'", shell=True)
    if result.returncode == 0:
        return {"secret": True}
    else:
        return {}
```

#### Activity detection

Detecting user activity can be done in multiple ways. Whether the screensaver is active is one reliable signal, but `xprintidle` is another.

By default, we first see if we can detect the screensaver status. This is currently done for GNOME (tested on GNOME 43.6). But you can also add your own activity detector. See help for the `--active-detector` option of the daemon subcommand, and see an example in `examples/detect_active.py` (a minimal sketch follows at the end of this entry). Bug reports and PRs for other (tested) detection mechanisms, or even just testing for other desktop environments, are welcome. This discussion could be a starting point.

So first we try to see if the user has a custom active detector; if not, we try to detect the GNOME screensaver, and fall back to using `xprintidle`, declaring inactivity after `--inactivity-time` minutes (default: 5) of inactivity.

### Releasing

1. Bump version: `vi pyproject.toml frontend/package.json frontend/package-lock.json`
2. Commit everything
3. `make package`
4. Upload to testpypi: `./venv/bin/python3 -m twine upload --repository testpypi dist/*`
   - Username: `__token__`
   - Password: pypi-.* (for testpypi token)
5. Test install:
   ```
   rm -rf /tmp/venv-test && python3 -m venv /tmp/venv-test
   /tmp/venv-test/bin/pip install -r $w/actlog/requirements-that-work.txt
   /tmp/venv-test/bin/pip install --index-url https://test.pypi.org/simple/ --no-deps actlog
   ```
6. `git tag $version`
7. `git push --tags origin main`
8. Upload to pypi: `./venv/bin/python3 -m twine upload dist/*`
   - Username: `__token__`
   - Password: pypi-.* (for pypi token)
9. Test install:
   ```
   rm -rf /tmp/venv-test && python3 -m venv /tmp/venv-test
   /tmp/venv-test/bin/pip install actlog
   ```

### TODO

- Use logger instead of print()
- Try running this on Windows
- Try Wayland with hacked gnome-screenshot
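The custom-detector contract isn't spelled out in this README, so the following is only a minimal sketch of what an active detector might look like, modeled on the `xprintidle` fallback described above (the `is_active` entry point and the threshold argument are assumptions; check `examples/detect_active.py` and `actlog daemon --help` for the real interface):

```python
# Hypothetical custom detector; the function name and calling convention are
# assumptions -- the real contract is shown in examples/detect_active.py.
import subprocess

def is_active(max_idle_ms: int = 5 * 60 * 1000) -> bool:
    """Report activity if the X11 idle time (via xprintidle) is under the threshold."""
    idle_ms = int(subprocess.run(
        ["xprintidle"], capture_output=True, text=True, check=True
    ).stdout.strip())
    return idle_ms < max_idle_ms
```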
actmon
This package provides a user idle timer for X11.
act-neuron
# Automatic Cell Tuner (act)

`act` provides tools for optimization-based parameter selection for biologically realistic cell models developed in NEURON. The project is inspired by the ASCT library.

`act` relies on simulation-based optimization, i.e., for a pipeline

Parameters -> Black-box simulator -> Simulated data

it tries to obtain parameter estimates indirectly by working with simulated data.

## Installation

Currently, `act` can be installed from GitHub using pip or locally with the standard pip installation process.

```
pip install act-neuron

git clone https://github.com/V-Marco/ACT.git
cd ACT
pip install .
```

## Usage

### Prerequisites

Conceptually, `act` requires three components:

1. A `.hoc` file which declares the cell's properties.
2. Mod files for this `.hoc` file.
3. Target voltage data of shape (num_cur_inj, ...) to predict on, OR parameters to simulate target data with.

### Pipeline

`act` operates in original and segregated modes. Original mode runs in the following steps:

1. Generate a parameter set uniformly at random from a (lower; upper) interval for each current injection.
2. Simulate a voltage trace for each current injection and respective parameter set.
3. Extract key summary features (e.g., inter-spike time), and keep parameter sets for those voltage traces which match the target voltage trace in these summary features.
4. Repeat steps 1-3 until the specified number of current injections is matched.
5. Train a neural network model to predict conductance values from a voltage trace using the saved sets as targets.
6. Predict conductance values by applying the trained model to the target voltage data. Take the maximum of each predicted value across all current injections.

Segregated mode changes step 5 so that the model is trained on regions of a voltage trace. The regions can be specified in terms of time (X-axis) or voltage (Y-axis) bounds.

### Setting up a simulation

Simulations' parameters are defined as python classes in `simulation/simulation_constants.py` (a minimal sketch follows this entry).

- Names of parameters to optimize for are defined in the `params` property. The names must match the hoc file. Lower and upper bounds are specified in the `lows` and `highs` properties.
- Segregated parameters and respective time/voltage bounds are specified as lists-of-lists in the respective `segr_...` properties.

### Running a simulation

- `simulation/run_simulation.py` is an example script of running `act` on Pospischil's cells.
- `simulation/analyze_res.py` is an example script which gives a summary of the model's quality.

### Examples (Jupyter Notebook)

`examples/Pospischil_sPYr/main.ipynb`: an example of running `act` on Pospischil's cells. On Google Colab: Pospischil_sPYr, LA Type A.
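Only the property names (`params`, `lows`, `highs`, `segr_...`) are documented above, so here is a minimal sketch of how such a constants class might be laid out. The conductance names, base class, and exact attribute types are assumptions for illustration; the real structure lives in `simulation/simulation_constants.py`:

```python
# Hypothetical entry for simulation/simulation_constants.py; attribute names
# follow the README, the values and conductance names are illustrative only.
class PospischilConstants:
    # Conductance names to optimize -- these must match the .hoc file.
    params = ["gnabar_hh2", "gkbar_hh2", "gl_hh2"]

    # Per-parameter lower and upper bounds for uniform random sampling.
    lows = [0.01, 0.001, 1e-5]
    highs = [0.1, 0.01, 1e-4]

    # Segregated mode: lists-of-lists of parameters and their time bounds (ms).
    segr_params = [["gnabar_hh2"], ["gkbar_hh2", "gl_hh2"]]
    segr_time_bounds = [[0, 100], [100, 400]]
```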
actomyosin-analyser
No description available on PyPI.
acton
Acton is a modular Python library for active learning. "Acton" is a suburb in Canberra, where the Australian National University is located.

## Dependencies

Most dependencies will be installed by pip. You will need to manually install:

- Python 3.4+
- Protobuf

## Setup

Install Acton using `pip3`:

```
pip install git+https://github.com/chengsoonong/acton.git
```

This provides access to a command-line tool `acton` as well as the `acton` Python library.

## Acton CLI

The command-line interface to Acton is available through the `acton` command. This takes a dataset of features and labels and simulates an active learning experiment on that dataset.

### Input

Acton supports three formats of dataset: ASCII, pandas, and HDF5. ASCII tables can be any file read by `astropy.io.ascii.read`, including many common plain-text table formats like CSV. pandas tables are supported if dumped to a file from `DataFrame.to_hdf`. HDF5 tables are either an HDF5 file with datasets for each feature and a dataset for labels, or an HDF5 file with one multidimensional dataset for features and one dataset for labels.

### Output

Acton outputs a file containing predictions for each epoch of the simulation. These are encoded as specified in this notebook.

### Quickstart

You will need a dataset. Acton currently supports ASCII tables (anything that can be read by `astropy.io.ascii.read`), HDF5 tables, and pandas tables saved as HDF5. Here's a simple classification dataset that you can use.

To run Acton to generate a passive learning curve with logistic regression:

```
acton --data classification.txt --label col20 --feature col10 --feature col11 -o passive.pb --recommender RandomRecommender --predictor LogisticRegression
```

This command uses columns `col10` and `col11` as features and `col20` as labels, a logistic regression predictor, and random recommendations. It outputs all predictions for test data points selected randomly from the input data to `passive.pb`, which can then be used to construct a plot. To output an active learning curve using uncertainty sampling, change `RandomRecommender` to `UncertaintyRecommender`.

To show the learning curve, use `acton.plot`:

```
python3 -m acton.plot passive.pb
```

Look at the directory `examples` for more examples.
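The quickstart assumes a `classification.txt` with feature columns `col10`/`col11` and a label column `col20`. If you want to generate a toy file of that shape yourself, a sketch (the column layout is inferred from the flags above, and the label rule is arbitrary):

```python
# Sketch: write a toy whitespace-delimited table that astropy.io.ascii.read
# can parse, with columns col0..col20 where col20 holds binary labels.
import numpy as np
from astropy.table import Table

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 20))
labels = (features[:, 10] + features[:, 11] > 0).astype(int)

columns = {f"col{i}": features[:, i] for i in range(20)}
columns["col20"] = labels
Table(columns).write("classification.txt", format="ascii", overwrite=True)
```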
actonet
No description available on PyPI.
actontext
actontext

This module provides functions to load a COM DLL on a Windows machine without registering it in the Windows registry. You can find more about how this can be done in articles on the web:

- Registration-Free Activation of COM Components: A Walkthrough;
- Create side-by-side registrationless COM manifest with VS;
- etc. (just Google it)

Possibly the main use case for the package is lightweight testing of COM objects created from built in-proc COM servers.
actoolkit
# NetApp Astra Control Python SDK

The NetApp Astra Control Python SDK is designed to provide guidance for working with the NetApp Astra Control API.

You can use the `astraSDK/` library out of the box, and as a set of example recommended code and processes, "cookbook" style. The `toolkit.py` script provides a command line interface to interact with Astra Control with built-in guardrails, and since it utilizes `astraSDK/` it can provide additional context around the requirements of the astraSDK classes.

When using `toolkit.py`/`actoolkit` in automation, it is *highly* recommended to tie your workflows to a specific tag or release (as functionality may change over time), and be sure to thoroughly test all workflows to ensure expected behavior.

Note: Support for all components of the Astra Control Python SDK is exclusively handled in a best effort fashion by the community via GitHub issues, and is *not* supported by NetApp Support. Use of this SDK is entirely at your own risk.

## Installation

The NetApp Astra Control SDK can be utilized three different ways, depending upon your use case:

1. **Administrator**: if you want to use the toolkit as quickly as possible without modifications, it is recommended to utilize the prepared Docker image, as it has all of the required dependencies and binaries configured and ready to go (including `actoolkit`).
2. **DevOps / GitOps**: if utilizing the toolkit in a software pipeline, the python package (`actoolkit`) is *typically* the most straightforward method of consumption. A simple `pip install` command results in toolkit.py (as `actoolkit`) being available in the user's PATH and all python-related dependencies installed. It also installs the `astraSDK/` library for use in custom scripts.
3. **Developer**: if you plan to modify the SDK for internal consumption, manual installation is recommended by cloning (or forking) this repository and working in your local development environment. Ensure that all dependencies mentioned below are met.

This Python SDK Installation video walks through all three use cases / installation methods.

### Prerequisites

For the **administrator** use case with the prepared Docker image:

- Docker 20.10.7+

For the **DevOps / GitOps** use case with the python package (`actoolkit`):

- Python 3.8+
- Pip 21.1.2+

For the **developer** use case or to manually install the NetApp Astra Control SDK:

- Python 3.8+
- Pip 21.1.2+
- Git 2.30.2+
- Kubectl 1.23+
- Azure CLI (az) 2.25.0+ or Google Cloud SDK (gcloud) 345.0.0+ or AWS CLI (aws) 1.22.0+
- Helm 3.2.1+

### Authentication

No matter the method of installation, the SDK authenticates by reading in the `config.yaml` file from the following locations (in order):

1. The current working directory that the executed function is located in
2. `~/.config/astra-toolkits/`
3. `/etc/astra-toolkits/`
4. The directory pointed to by the shell env var `ASTRATOOLKITS_CONF`

Again, no matter the method of installation, the `config.yaml` file should have the following syntax:

```yaml
headers:
  Authorization: Bearer <Bearer-Token-From-API-Access-Page>
uid: <Account-ID-From-API-Access-Page>
astra_project: <Shortname-or-FQDN>
verifySSL: <True-or-False>
```

This Astra Control API Credentials video walks through creating the `config.yaml` file, or follow the instructions below.

Create (if using `actoolkit`) or edit (if using the git repo) the `config.yaml` file in one of the above mentioned locations with your NetApp Astra Control account information:

- `Authorization: Bearer`: Your API token
- `uid`: Your Astra Control Account ID
- `astra_project`: Your Astra Control instance (shortnames get astra.netapp.io appended to them; FQDNs [anything with a `.`] are used unchanged)
- `verifySSL`: True or False, useful for self-signed certs (if this field isn't included it's treated as True)

You can find this information in your NetApp Astra Control account profile. Click the user icon in the upper right-hand corner, then choose **API Access** from the drop-down menu which appears. Copy and paste your Astra Control account ID into the `config.yaml` file.

To get your API token, click **+ Generate API token**. Generate a new API token, then copy and paste the token into the `config.yaml`.

When you are done, the `config.yaml` looks like:

```yaml
headers:
  Authorization: Bearer thisIsJustAnExample_token-replaceWithYours==
uid: 12345678-abcd-4efg-1234-567890abcdef
astra_project: astra.netapp.io
verifySSL: True
```

### 1. Docker Installation

Launch the prepared Docker image. Docker will automatically download the image if you don't already have it on your system.

```
docker run -it netapp/astra-toolkits:latest /bin/bash
```

NOTE: From this point forward, you will be working in the Docker container you just launched.

Set up your kubeconfig to successfully run kubectl commands against your cluster with the appropriate command (e.g. `export KUBECONFIG=/path/to/kubeconfig`, `gcloud container clusters get-credentials`, `az aks get-credentials`, or `aws eks update-kubeconfig`).

Configure your `config.yaml` as detailed in the authentication section.

Since the `actoolkit` python package is bundled with the Docker image, you can immediately use it to interact with Astra Control:

```
actoolkit list clusters
```

Alternatively, you can also follow the manual installation steps to clone the git repo and optionally make modifications to the code base, all while not having to worry about software dependencies.

### 2. Python Package Installation

Install `actoolkit` with the following command:

```
python3 -m pip install actoolkit
```

Configure your `config.yaml` as detailed in the authentication section.

You can now use `actoolkit` to invoke the NetApp Astra Control SDK. For example, list your Astra Control Kubernetes clusters with the command:

```
actoolkit list clusters
```

Additionally, the `astraSDK/` library is available for import for use when creating custom scripts:

```
>>> import astraSDK
>>> print(astraSDK.clusters.getClusters(output="table").main())
+----------------------+--------------------------------------+---------------+----------------+
| clusterName          | clusterID                            | clusterType   | managedState   |
+======================+======================================+===============+================+
| uscentral1-cluster   | 0412fd41-51b8-478a-b055-0bd50e34b1fe | gke           | managed        |
+----------------------+--------------------------------------+---------------+----------------+
| prod-cluster         | c69d8281-d4ea-4902-b03e-0c39c7da4543 | gke           | managed        |
+----------------------+--------------------------------------+---------------+----------------+
```

### 3. Manual Installation

Clone the NetApp Astra Control SDK repo.

```
git clone https://github.com/NetApp/netapp-astra-toolkits.git
```

Move into the repo directory.

```
cd netapp-astra-toolkits
```

Run the following commands to add the required Python elements:

```
python3 -m venv toolkit
source toolkit/bin/activate
pip install -r requirements.txt
```

Configure your `config.yaml` as detailed in the authentication section.

You can now use `./toolkit.py` to invoke the NetApp Astra Control SDK. For example, list your Astra Control Kubernetes clusters with the command:

```
./toolkit.py list clusters
```

## Additional Resources

See the documentation for more information.
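Beyond the table output shown above, a short sketch of calling the same class from a script. Note that the shape of the default (non-table) return value is an assumption here; check the astraSDK documentation for your version before depending on it:

```python
import astraSDK

# output="table" returns a printable string, as in the REPL example above.
table = astraSDK.clusters.getClusters(output="table").main()
print(table)

# Without output="table", main() is assumed to return a parsed structure
# (e.g. a dict from the JSON API response); verify against the astraSDK docs.
clusters = astraSDK.clusters.getClusters().main()
print(type(clusters))
```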
actools
UNKNOWN
actor
A simple actor framework with dynamic parallelism

Free software: GNU Lesser General Public License v2.1 or later (LGPLv2+)

## Installation

```
pip install actor
```

You can also install the in-development version with:

```
pip install https://github.com/yasserfarouk/actor/archive/master.zip
```

## Documentation

https://actor.readthedocs.io/

## Development

To run all the tests run:

```
tox
```

Note, to combine the coverage data from all the tox environments run:

Windows:

```
set PYTEST_ADDOPTS=--cov-append
tox
```

Other:

```
PYTEST_ADDOPTS=--cov-append tox
```

## Changelog

0.0.0 (2020-06-16)

- First release on PyPI.
actor-au
A very simple actor simulation framework, intended for use with theatre_au.
actorch
Welcome to `actorch`, a deep reinforcement learning framework for fast prototyping based on PyTorch. The following algorithms have been implemented so far:

- REINFORCE
- Advantage Actor-Critic (A2C)
- Actor-Critic Kronecker-Factored Trust Region (ACKTR)
- Trust Region Policy Optimization (TRPO)
- Proximal Policy Optimization (PPO)
- Advantage-Weighted Regression (AWR)
- Deep Deterministic Policy Gradient (DDPG)
- Distributional Deep Deterministic Policy Gradient (D3PG)
- Twin Delayed Deep Deterministic Policy Gradient (TD3)
- Soft Actor-Critic (SAC)

## 💡 Key features

- Support for OpenAI Gymnasium environments
- Support for custom observation/action spaces
- Support for custom multimodal input multimodal output models
- Support for recurrent models (e.g. RNNs, LSTMs, GRUs, etc.)
- Support for custom policy/value distributions
- Support for custom preprocessing/postprocessing pipelines
- Support for custom exploration strategies
- Support for normalizing flows
- Batched environments (both for training and evaluation)
- Batched trajectory replay
- Batched and distributional value estimation (e.g. batched and distributional Retrace and V-trace)
- Data parallel and distributed data parallel multi-GPU training and evaluation
- Automatic mixed precision training
- Integration with Ray Tune for experiment execution and hyperparameter tuning at any scale
- Effortless experiment definition through Python-based configuration files
- Built-in visualization tool to plot performance metrics
- Modular object-oriented design
- Detailed API documentation

## 🛠️️ Installation

For Windows, make sure the latest Visual C++ runtime is installed.

### Using Pip

First of all, install Python 3.6 or later. Open a terminal and run:

```
pip install actorch
```

### Using Conda virtual environment

Clone or download and extract the repository, navigate to `<path-to-repository>/bin` and run the installation script (`install.sh` for Linux/macOS, `install.bat` for Windows). `actorch` and its dependencies (pinned to a specific version) will be installed in a Conda virtual environment named `actorch-env`.

NOTE: you can directly use `actorch-env` and the `actorch` package in the local project directory for development (see "For development").

### Using Docker (Linux/macOS only)

First of all, install Docker and NVIDIA Container Runtime. Clone or download and extract the repository, navigate to `<path-to-repository>`, open a terminal and run:

```
docker build -t <desired-image-name> .                 # Build image
docker run -it --runtime=nvidia <desired-image-name>   # Run container from image
```

`actorch` and its dependencies (pinned to a specific version) will be installed in the specified Docker image.

NOTE: you can directly use the `actorch` package in the local project directory inside a Docker container run from the specified Docker image for development (see "For development").

### From source

First of all, install Python 3.6 or later. Clone or download and extract the repository, navigate to `<path-to-repository>`, open a terminal and run:

```
pip install .
```

### For development

First of all, install Python 3.6 or later and Git. Clone or download and extract the repository, navigate to `<path-to-repository>`, open a terminal and run:

```
pip install -e .[all]
pre-commit install -f
```

This will install the package in editable mode (any change to the package in the local project directory will automatically reflect on the environment-wide package installed in the `site-packages` directory of your environment) along with its development, test and optional dependencies. Additionally, it installs a git commit hook. Each time you commit, unit tests, static type checkers, code formatters and linters are run automatically. Run `pre-commit run --all-files` to check that the hook was successfully installed. For more details, see pre-commit's documentation.

## ▶️ Quickstart

In this example we will solve the OpenAI Gymnasium environment CartPole-v1 using REINFORCE. Copy the following configuration in a file named `REINFORCE_CartPole-v1.py` (with the same indentation):

```python
import gymnasium as gym
from torch.optim import Adam
from actorch import *

experiment_params = ExperimentParams(
    run_or_experiment=REINFORCE,
    stop={"training_iteration": 50},
    resources_per_trial={"cpu": 1, "gpu": 0},
    checkpoint_freq=10,
    checkpoint_at_end=True,
    log_to_file=True,
    export_formats=["checkpoint", "model"],
    config=REINFORCE.Config(
        train_env_builder=lambda **config: ParallelBatchedEnv(
            lambda **kwargs: gym.make("CartPole-v1", **kwargs),
            config,
            num_workers=2,
        ),
        train_num_episodes_per_iter=5,
        eval_freq=10,
        eval_env_config={"render_mode": None},
        eval_num_episodes_per_iter=10,
        policy_network_model_builder=FCNet,
        policy_network_model_config={
            "torso_fc_configs": [{"out_features": 64, "bias": True}],
        },
        policy_network_optimizer_builder=Adam,
        policy_network_optimizer_config={"lr": 1e-1},
        discount=0.99,
        entropy_coeff=0.001,
        max_grad_l2_norm=0.5,
        seed=0,
        enable_amp=False,
        enable_reproducibility=True,
        log_sys_usage=True,
        suppress_warnings=True,
    ),
)
```

Open a terminal in the directory where you saved the configuration file and run (if you installed `actorch` in a virtual environment, you first need to activate it, e.g. `conda activate actorch-env` if you installed `actorch` using Conda):

```
pip install gymnasium[classic_control]   # Install dependencies for CartPole-v1
actorch run REINFORCE_CartPole-v1.py     # Run experiment
```

NOTE: training artifacts (e.g. checkpoints, metrics, etc.) are saved in nested subdirectories. This might cause issues on Windows, since the maximum path length is 260 characters. In that case, move the configuration file (or set `local_dir`) to an upper level directory (e.g. `Desktop`), shorten the configuration file name, and/or shorten the algorithm name (e.g. `DistributedDataParallelREINFORCE.rename("DDPR")`).

Wait for a few minutes until the training ends. The mean cumulative reward over the last 100 episodes should exceed 475, which means that the environment was successfully solved. You can now plot the performance metrics saved in the auto-generated TensorBoard (or CSV) log files using Plotly (or Matplotlib):

```
pip install actorch[vistool]   # Install dependencies for VisTool
cd experiments/REINFORCE_CartPole-v1/<auto-generated-experiment-name>
actorch vistool plotly tensorboard
```

You can find the generated plots in `plots`.

Congratulations, you ran your first experiment!

See `examples` for additional configuration file examples.

HINT: since a configuration file is a regular Python script, you can use all the features of the language (e.g. inheritance); a small illustration follows at the end of this entry.

## 🔗 Useful links

- Introduction to deep reinforcement learning
- Hyperparameter tuning with Ray Tune and Optuna integration
- Logging with Ray Tune
- Monitoring jobs with Ray Dashboard
- Setting up a cluster with Ray Cluster

## @ Citation

```
@misc{DellaLibera2022ACTorch,
  author = {Luca Della Libera},
  title = {{ACTorch}: a Deep Reinforcement Learning Framework for Fast Prototyping},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/lucadellalib/actorch}},
}
```

## 📧 Contact

[email protected]
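As a small illustration of that hint: because configuration files are ordinary Python scripts, shared settings can be factored into a plain module and reused across experiments (the `common.py` module name below is hypothetical, not part of the repository):

```python
# common.py (hypothetical shared module imported by several config files)
SMALL_FC_NET = {"torso_fc_configs": [{"out_features": 64, "bias": True}]}
DEFAULT_TRAINING = {"discount": 0.99, "max_grad_l2_norm": 0.5, "seed": 0}

# In a configuration file you could then write:
#
#   from common import SMALL_FC_NET, DEFAULT_TRAINING
#   config = REINFORCE.Config(
#       policy_network_model_config=SMALL_FC_NET,
#       **DEFAULT_TRAINING,
#       # ...remaining experiment-specific settings...
#   )
```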
actorio
# Actorio - a simple actor framework for asyncio

Actorio is a Python asyncio implementation of the actor model.

There already are Python actor model implementations such as Thespian or Pykka, but they're currently lacking asyncio support.

The main goal of the actor model is to cleanly define the critical section without having to deal with locks or any other synchronization mechanism. It also helps with the Single Responsibility Principle: each `Actor` class should deal with one part of the functional requirements of the application, and its API can be properly defined (i.e., what kind of message does this actor accept and what kind of message does it produce), hence making actor testing easier, since you don't have to mock the surrounding systems and just check if the message sequence is correct.

The current API is crude and provisional and is likely to change as syntax and concepts evolve.

## Rules of the actor model

- An actor is an execution unit that executes concurrently with other actors.
- An actor does not share state with anybody else, but it can have its own state.
- An actor can only communicate with other actors by sending and receiving messages. It can only send messages to actors whose address it has.
- When an actor receives a message it may take actions like:
  - altering its own state, e.g. so that it can react differently to a future message
  - sending messages to other actors
  - starting new actors

  None of the actions are required, and they may be applied in any order.
- An actor only processes one message at a time. In other words, a single actor does not give you any concurrency, and it does not need to use locks internally to protect its own state.

## Hello World

Let's start with the typical Hello World example. In this case, we'll create an Actor, send it a message, wait for a response message, and print that response:

```python
from actorio import Actor, Message, DataMessage, ask
import asyncio

class Greeter(Actor):
    # Here we override the handle_message method to send a `DataMessage` with the data "Hello World!".
    async def handle_message(self, message: Message):
        await message.sender.tell(DataMessage(data="Hello World!", sender=self))

async def main():
    # Let's create an instance of a Greeter actor and start it.
    async with Greeter() as greeter:
        # Then we'll just send it an empty message and wait for a response
        reply: DataMessage = await ask(greeter, Message())
    print(reply.data)

asyncio.get_event_loop().run_until_complete(main())
```

## Actor spawning actors

Actor instances can spawn other actors during their execution, be it at startup, in the `mainloop_setup` async method, or as a reaction to an event in any handler. The actor should first be instantiated (with an `__init__` call) then be registered on its parent with a `register_child` async call.

Whenever a child dies, the parent's `handle_child_stopped` async method is called with the child actor object and the `asyncio.Task` object for its execution loop. This way, the parent can take any action required.

For example, if we had to handle blocking operations, we could do something like:

```python
import random
import asyncio
from actorio import Actor, Message, DataMessage, EndMainLoop, Reference

async def blocking_operation():
    # for the sake of this example, the blocking operation is a sleep
    sleep_time = random.randint(0, 10)
    await asyncio.sleep(sleep_time)
    return sleep_time

class RequestMessage(Message):
    """A request to do some computation or blocking call"""

class ResponseMessage(DataMessage):
    """The result of a computation"""

class Worker(Actor):
    # Here we override the handle_message method to send a `ResponseMessage`
    # with the result of the blocking operation as its data.
    async def handle_message(self, message: Message):
        sleep_time = await blocking_operation()
        await message.sender.tell(ResponseMessage(data=sleep_time, sender=self))
        # This actor only deals with one message then stops;
        # raising an EndMainLoop exception here will properly stop the actor
        raise EndMainLoop()

class Manager(Actor):
    async def handle_message(self, message: Message):
        if isinstance(message, RequestMessage):
            # We spawn and register the new child, we get its reference back
            child = await self.register_child(Worker())
            # We just transfer the message to the child; that way,
            # we won't have to process its response
            await child.tell(message)

async def main():
    # We create an inbox for us; this is not an actor,
    # just somewhere actors can send messages
    me = Reference()
    async with Manager() as manager:
        # Then we start 10 long computations
        for _ in range(10):
            await manager.tell(RequestMessage(sender=me))
        # Then we'll listen to our inbox
        # to get the responses as they come by
        for _ in range(10):
            message = await me.inbox.get()
            print("Got a response with result {}".format(message.data))

asyncio.get_event_loop().run_until_complete(main())
```

## Handling external blocking events

Using messages is great but, most of the time, we also need to use other APIs that don't provide a message-based interface. It's possible to register an external blocking event and a handler for it with the `register_input_task` method. This method takes a factory function for a coroutine and an async function to handle the task result.

The handler will not be called if the actor is currently busy processing anything else (like a message or any other task); this way, there is no concurrency issue and each handler is called with a clean actor state.

For now, the order in which those events will be handled is not necessarily the order in which they happened.

For example, to use an aiohttp websocket:

```python
import aiohttp
from aiohttp import web
from actorio import Actor, EndMainLoop
import asyncio

class Client(Actor):
    def __init__(self, websocket: web.WebSocketResponse):
        self.websocket = websocket
        super().__init__()

    # Here we define an input task handler.
    # It will be called each time its registered event happens.
    # The resulting `asyncio.Task` object will be passed as argument;
    # this way, the handler can deal with any exception raised during event collection
    async def handle_websocket_event(self, task: asyncio.Task):
        try:
            msg = task.result()
        except Exception as e:
            # In case of any exception, we just stop the Actor
            self.logger.exception(e)
            raise EndMainLoop()
        if msg.type == aiohttp.WSMsgType.TEXT:
            # if we receive text, we just send it back
            # We could also just send a message to our inbox
            await self.websocket.send_str(msg.data)
        else:
            # any other request just stops the Actor
            raise EndMainLoop()

    async def mainloop_setup(self):
        self.register_input_task(self.websocket.receive, self.handle_websocket_event)
        await super().mainloop_setup()

async def websocket_handler(request):
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    client_actor = Client(websocket=ws)
    async with client_actor:
        # We wait until the client's mainloop ends
        await client_actor
    return ws

app = web.Application()
app.add_routes([web.get('/', websocket_handler)])
web.run_app(app)
```

You can then connect to the websocket and send it some messages; it will act as an echo server.
actor-loader
# Actor Loader - Vanilla Actors with Collision

An automation tool for quickly putting the right Collision Actors (C-Actors, Vanilla Actors with Collision) in your mod.

## What is ActorLoader for?

To understand this, you must first be familiar with C-Actors. C-Actors was originally a mod including every applicable actor in the game with attached collision. These actors could be used in Ice-Spear to make landscape scenes quickly without having to worry about manually adding collision to them, a time-consuming process that was easy to mess up.

The original C-Actors mod would be required as a prerequisite in every mod using it. This wasn't terrible, but because it had almost every actor in it, it slowed down merging times in BCML quite a lot. There was no real need for all 2044 actors to be loaded when only 5 were used in the mod.

My alternative at the time was to take the actors used and put them into my mod. This was an okay solution, but it got difficult to keep track of which actors were used as the mod expanded.

So this is what ActorLoader is for: it gets every C-Actor used by reading the mod's smubin files, then copies them into the mod's content folder.

If you have any questions, feel free to ask them in the comments or on my Discord server.

## Usage

In the root folder of your mod (the folder containing aoc, content, etc.), type actor-loader.exe in the file path.

Note: in previous versions it was required to append 'C' to new actors; this is no longer required.
actorname
No description available on PyPI.
actors
A python actor framework.

## Features

- Easy to build concurrency with the actor model.
- Lightweight. Can run millions of actors on a single thread.
- Integrated supervision for managing actor lifetime and faults.
- Extensible with new executors and dispatchers.
- An Akka-like API.

## Installation

Install from PyPI using pip:

```
$ pip install actors
```

## Obligatory greeter

```python
from actors import Actor, ActorSystem

class Greeter(Actor):
    def receive(self, message):
        print("Hello %s" % message)

system = ActorSystem()
greeter = system.actor_of(Greeter)
greeter.tell("world")
system.terminate()
```

## Documentation

Documentation is available at http://pythonhosted.org/actors/.
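Building on the greeter, actors can hold and relay references to other actors. A minimal sketch using only the API shown above (passing the target reference inside the message tuple is this sketch's own convention, not something the library mandates):

```python
from actors import Actor, ActorSystem

class Greeter(Actor):
    def receive(self, message):
        print("Hello %s" % message)

class Forwarder(Actor):
    # Expects (target_ref, payload) tuples and relays the payload onward.
    def receive(self, message):
        target, payload = message
        target.tell(payload)

system = ActorSystem()
greeter = system.actor_of(Greeter)
forwarder = system.actor_of(Forwarder)
forwarder.tell((greeter, "world"))  # prints "Hello world" via the Greeter
system.terminate()
```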
actpdf
actpdf

installation

For the Windows operating system:

- install Python: https://www.python.org/downloads/windows/
- create a virtualenv: python -m venv env
- activate it: .\env\Scripts\activate
- install GTK3 to fix the libcairo-2.dll error; see the weasyprint installation guide.

install

We use weasyprint 52.4, since WeasyPrint 53 requires at least Pango 1.44.

$ pip install actpdf
$ actpdf

packaging: Update Version

Down the road, after you've made updates to your distribution and wish to make a new release:

pip install build
pip install twine

- delete the files in the dist folder
- increment the version number in your setup.cfg file

$ python3 -m build

First upload to TestPyPI:

$ twine upload --repository testpypi dist/*  # to get a password see https://test.pypi.org
$ pip install --index-url https://test.pypi.org/simple/ --no-deps actpdf

Upload the package to the Python Package Index:

$ twine upload dist/*

see: https://packaging.python.org/tutorials/packaging-projects/

completed templates: allergy, nutrition, health, fitness_qua, nutrition, nutrition_qua, nutritionfitness, detox, skin, pgx, personality, carrier, sleep

tools: pdf to html converter: https://idrsolutions.com/online-pdf-to-html-converter
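As a quick sanity check that the pinned WeasyPrint 52.4 install works (this uses WeasyPrint's public API directly and is not actpdf-specific; the HTML string and output file name are arbitrary):

```python
from weasyprint import HTML

# If GTK3/libcairo is set up correctly on Windows, this renders
# a trivial HTML string to PDF without the libcairo-2.dll error.
HTML(string="<h1>Hello, PDF</h1>").write_pdf("hello.pdf")
```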
act-police-archiver
No description available on PyPI.
actproxy
actproxy

Python package providing actproxy.com API access and proxy rotation methods for requests (synchronous) and aiohttp (asyncio). Can also be used independently. Supports socks5, http/https, and ipv4/ipv6, as per actproxy's services.

Quick-Start (AIOHTTP)

```python
import actproxy
from aiohttp import ClientSession


async def main():
    actproxy_api_keys = [
        "xxxxxxxxxxxxxxxxxxxxxxxx",
        "xxxxxxxxxxxxxxxxxxxxxxxx",
    ]
    # Initialize API. Also returns your proxies.
    await actproxy.aioinit(actproxy_api_keys)
    # Use a new AIOHTTP connector which rotates & uses the next proxy.
    async with ClientSession(connector=actproxy.aiohttp_rotate()) as session:
        url = "http://dummy.restapiexample.com/api/v1/employees"
        async with session.get(url) as resp:
            if resp.status == 200:
                resp_json = await resp.json()
                print(resp_json)
```

Quick-Start (Requests)

```python
import actproxy
import requests

actproxy_api_keys = [
    "xxxxxxxxxxxxxxxxxxxxxxxx",
    "xxxxxxxxxxxxxxxxxxxxxxxx",
]
# Initialize API. Also returns your proxies.
actproxy.init(actproxy_api_keys)
url = "http://dummy.restapiexample.com/api/v1/employees"
resp = requests.get(url, proxies=actproxy.rotate())
if resp.status_code == 200:
    resp_json = resp.json()
    print(resp_json)
```

Methods

actproxy.aioinit(api_keys: List = None, output_format: DumpFormat = 'json', get_userpass: Boolean = True) -> Union[FlatList, str, None]
Fetches your proxies from ActProxy & returns them. Must be run before the other aiohttp functions.

actproxy.init(api_keys: List[str], output_format: DumpFormat = 'json', get_userpass: Any = True) -> Union[FlatList, str, None]
Fetches your proxies from ActProxy & returns them. Must be run before the other synchronous functions.

actproxy.aiohttp_rotate(protocol: ProxyProto/str, return_proxy: Boolean = False) -> Union[ProxyConnector, Tuple[Data, ProxyConnector]]
Returns an aiohttp connector which uses the next proxy from your list.

actproxy.async_rotate_fetch(url: str, protocol: ProxyProto/str = 'socks5', return_proxy: Boolean = False) -> Data
Rotates proxies and performs a GET request. Returns a Data object with response.status_code, response.text, and response.headers. (See the sketch after the changelog.)

actproxy.rotate(protocol: ProxyProto = 'socks5') -> Data
Returns the next proxy from your list. The return value is suitable for use with requests[socks].

actproxy.random_proxy(protocol: ProxyProto = 'socks5') -> Data
Returns a random proxy from your list. The return value is suitable for use with requests[socks].

actproxy.aiohttp_random(protocol: ProxyProto = 'socks5', return_proxy: Boolean = False) -> Union[ProxyConnector, Tuple[Data, ProxyConnector]]
Returns an aiohttp connector which uses a random proxy from your list.

actproxy.one_hot_proxy() -> Data
Similar to rotate() but returns a single proxy dict/object for use in places other than aiohttp or requests.

Changelog

- 0.1.9 - 11/09/2020: New asyncio rotation methods based on python_socks.
- 0.1.8 - 10/28/2020: Fixed versioning typo.
- 0.1.7 - 10/28/2020: Relax Python version constraint (3.8-4.0).
- 0.1.6 - 10/24/2020: Lock aiohttp version, fixing aiohttp #5112.
- 0.1.5 - 10/24/2020: Rotator bug fix. CSV fix. Better type-hints & coverage.
- 0.1.4 - 10/23/2020: Support multiple API keys. Unit tests. Fixes.
- 0.1.3 - 9/29/2020: Minor fixes and addition of docstrings.
- 0.1.2 - 9/28/2020: Initial release version.
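A short sketch of async_rotate_fetch from the method list above (illustrative; the response fields follow the Data object described there, and the asyncio.run wrapper plus placeholder key are my additions):

```python
import asyncio

import actproxy


async def main():
    # Placeholder key; use your real ActProxy API key(s).
    await actproxy.aioinit(["xxxxxxxxxxxxxxxxxxxxxxxx"])
    # Rotate to the next proxy and perform the GET in one call.
    resp = await actproxy.async_rotate_fetch(
        "http://dummy.restapiexample.com/api/v1/employees",
        protocol="socks5",
    )
    print(resp.status_code)
    print(resp.headers)
    print(resp.text[:200])


asyncio.run(main())
```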
act_python
UNKNOWN
actr
Python ACT-R (fork)

A Python implementation of the ACT-R cognitive architecture developed at the Carleton Cognitive Modelling (CCM) Lab. The original pip package is called python_actr.

Fork

This fork of python_actr by Andy Maloney is not affiliated with the CCM lab. I created it because the main repository isn't being updated and I need a stable, running version via pip for my gactar project. Changes are noted in the CHANGELOG. The pip package for this fork is named actr.

Compatibility

Although this pip package is named actr, it still uses python_actr as its package name. This keeps this pip package compatible with the official python_actr one; however, you should only have one of them installed at a time. As far as I can tell, there's no way to enforce this using pip, so if the python_actr package is already installed, run:

pip uninstall python_actr

Install

(Before installing, please see the note above about compatibility with the python_actr package.)

pip3 install actr

Use

When writing code, you use it the same way you use the python_actr package:

from python_actr import *

To run a file, make sure the package is installed with pip (see above), then you can just run it like this:

python3 tutorials/hello_world.py

Run Tests

make test

I know there are a bunch of failures - these exist in python_actr as well & I am not planning to investigate at this time...
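For a feel of the API, here is a minimal model sketch in the style of the upstream python_actr tutorials (hedged: ACTR, Buffer, the string-pattern productions, and run() are assumptions based on that tutorial API, e.g. tutorials/hello_world.py, not guarantees about this fork):

```python
from python_actr import *


class HelloWorld(ACTR):
    goal = Buffer()

    def init():
        # Seed the goal buffer; productions match on its contents
        goal.set("say hello")

    def say_hello(goal="say hello"):
        print("hello world")
        goal.set("done")


model = HelloWorld()
model.run()
```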
actr6_jni
TODO!
actrecipemagicfile
actrecipemagicfile (A CFFI fork of python-magic)

actrecipemagicfile is a Python interface to the libmagic file type identification library. libmagic identifies file types by checking their headers against a predefined list. This functionality is accessible from the command line using the Unix command file.

Usage

```python
>>> import magicfile as magic
>>> magic.from_file("testdata/test.pdf")
'PDF document, version 1.2'
>>> magic.from_buffer(open("testdata/test.pdf").read(1024))
'PDF document, version 1.2'
>>> magic.from_file("testdata/test.pdf", mime=True)
'application/pdf'
```

There is also a Magic class that provides more direct control, including overriding the magic database file and turning on character encoding detection. This is not recommended for general use. In particular, it's not safe for sharing across multiple threads and will throw if this is attempted.

```python
>>> f = magic.Magic(uncompress=True)
>>> f.from_file('testdata/test.gz')
'ASCII text (gzip compressed data, was "test", last modified: Sat Jun 28 21:32:52 2008, from Unix)'
```

You can also combine the flag options:

```python
>>> f = magic.Magic(mime=True, uncompress=True)
>>> f.from_file('testdata/test.gz')
'text/plain'
```

License

actrecipemagicfile is distributed under the MIT license. See the included LICENSE file for details.

Note: This package is mostly used in an internal project.
actrie
No description available on PyPI.
acts
No description available on PyPI.