package (string, lengths 1-122)
package-description (string, lengths 0-1.3M)
act
No description available on PyPI.
act4
Act

Library for dynamic functional programming and more. Enter this command and go to the documentation:

pip install act4

Overview

```python
from typing import Callable, Optional

from act import *


def division_between(a: int, b: int) -> int | bad[str]:
    if a == 0 or b == 0:
        return bad("division by zero")

    return a / b


WithNumber = type(number=N)
WithMultiplication = type(multiplication=M)
WithDivision = type(division=D)

Result = WithMultiplication[N] & WithDivision[N]


@fbind_by(... | then >> on(None, bad("something is missing")))
@do(maybe, optionally, for_input=optionally)
def func(
    do: Do,
    a: WithNumber[Optional[int]],
    b: WithNumber[Optional[int]],
) -> Result[int]:
    maybe, optionally = do

    first_number = optionally.same(a.number)
    second_number = optionally.same(b.number)

    division = maybe(division_between)(first_number, second_number)
    multiplication = first_number * second_number

    return Result(multiplication, division)


# As a result, `func` has this type.
func: Callable[
    [Optional[WithNumber[Optional[int]]], Optional[WithNumber[Optional[int]]]],
    Result[int] | bad[str],
]

assert func(WithNumber(16), WithNumber(2)) == obj(multiplication=32, division=8)
assert func(WithNumber(16), WithNumber(0)) == bad("division by zero")
assert func(WithNumber(16), WithNumber(None)) == bad("something is missing")
assert func(WithNumber(16), None) == bad("something is missing")


class RawResult:
    def __init__(self, multiplication: int, division: int) -> None:
        self.multiplication = multiplication
        self.division = division


assert (
    Result(32, 8)
    == RawResult(32, 8)
    == WithMultiplication(32) & WithDivision(8)
    == obj(multiplication=32, division=8)
)
```
actable
NOTE: Presently only supports Python 3.5+ and Django 1.9+ (see issue #1)

Activity stream for Python Django. Unlike other activity streams, it is much more flexible: every event is designed to support an arbitrary number of associated objects. It is also designed to be unobtrusive: any of your models can be registered as an activity generator; all you need to do is generate a data structure for context, or an HTML fragment.

Features

- Very easily / magically integrated into an existing system, with signals being auto-generated based on principal objects
- Arbitrary number of objects can be associated with every event
- Fast look-ups with denormalized events (no joins)
- Looking up streams for particular actors or objects
- Decent test coverage
- Handy Paginator helper class to page through a stream
- Example project

Not yet implemented:

- Follow

Quick start

Overview:

1. Install actable and put it in your requirements file
2. Add to INSTALLED_APPS
3. Pick several important models to implement the actable interface so that every save or update generates an event
4. Add those models to ACTABLE_MODELS
5. Use helper classes to add streams to your views

Install:

```
pip install actable
```

Add it to your INSTALLED_APPS:

```python
INSTALLED_APPS = (
    ...
    'actable.apps.ActableConfig',
    ...
)
```

Pick one or more models to be your actable models. Whenever these models are updated or created, they will generate events. These events can involve any number of other objects.

You must implement at least 2 methods on your actable models. The first method is get_actable_relations, which must return a dictionary where all the values are model instances that are related to this action. Instead of limiting yourself to "Actor, Verb, Object", this allows you to have any number of relations. Each one of these model instances will receive a copy of this event in its activity stream.

Example:

```python
class ProjectBlogPost:
    def get_actable_relations(self, event):
        return {
            'subject': self.user,
            'object': self,
            'project': self.project,
        }
```

Now you must choose one of 2 other methods to implement. These constitute the data to cache for each event.

The most versatile of the two is one that returns a dictionary containing entirely simple (serializable) data types. This will be stored in serialized form in your database.

Example:

```python
class ProjectBlogPost:
    def get_actable_json(self, event):
        verb = 'posted' if event.is_creation else 'updated'
        return {
            'subject': self.user.username,
            'subject_url': self.user.get_absolute_url(),
            'object': self.title,
            'object_url': self.get_absolute_url(),
            'project': self.project.title,
            'verb': verb,
        }
```

The other option is caching an HTML snippet (string) that can be generated any way you see fit.

Example:

```python
class ProjectBlogPost:
    def get_actable_html(self, event):
        return '<a href="%s">%s</a> wrote %s' % (
            self.user.get_absolute_url(),
            self.user.username,
            self.title,
        )
```

Finally, you should list your newly improved model as an ACTABLE_MODEL, as such:

```python
ACTABLE_MODELS = ['myapp.ProjectBlogPost']
```

Credits

Tools used in creating this package:

- Cookiecutter
- cookiecutter-djangopackage

History

0.1.0 (2017-11-10)

- First release on PyPI.
actadiurna
UNKNOWN
act-admin
ACT Admin

Introduction

This package should only be used with act-api, act-workers, act-types at version 2.x.x.

This package contains management utilities for the ACT Platform.

Changelog

2.1.0

- Support for indexOption for Daily/TimeGlobal indices in the platform. Use --no-index-option as an argument to act-types to bootstrap legacy platforms without this feature.

Installation

This project requires that you have a running installation of the act-platform.

Install from pip:

```
pip install act-admin
```

act-origin usage

```
$ act-origin --act-baseurl <BASEURL> --user-id <USERID> --add
Origin name: myorigin
Origin description: My Test Origin
Origin trust (float 0.0-1.0. Default=0.8):
Origin organization (UUID):
[2019-11-11 10:46:22] app=origin-client level=INFO msg=Created origin: myorigin
Origin added:
Origin(name='myorigin', id='e5a9792e-78c7-4190-9275-27616be47ca8', organization=Organization(), description='My Test Origin', trust=0.8)
```

act-types usage

To bootstrap the type system with default types (user-id/act-baseurl must point to an ACT installation):

```
act-types \
    --user-id 1 \
    --act-baseurl http://localhost:8888 \
    --loglevel ERROR \
    --default-object-types \
    --default-fact-types \
    --default-meta-fact-types \
    --add
```

It is safe to rerun the command above after new types have been added to the data model.

You can also add types from your own files, using --object-types-file, --fact-types-file and --meta-fact-types-file that point to a JSON file in the same format as the default types.

To show default types (replace with fact/meta-fact for other types):

```
act-types --default-object-types list
```

Local development

Use pip to install in local development mode. act-types (and act-api) uses namespacing, so it is not compatible with using setup.py install or setup.py develop.

In the repository, run:

```
pip3 install --user -e .
```

It is also necessary to install in local development mode to correctly resolve the files that are read by the --default-* options when doing local changes. These are read from etc under act.types, and if the package is installed with "pip install act-types" it will always read the files from the installed package, even though you make changes in a locally checked-out repository.
actag
AcTag: Opti-Acoustic Fiducial Markers for Underwater Localization and Mapping

AcTag is a novel opti-acoustic fiducial marker that can be detected in both optical and acoustic images. When seen through an imaging sonar, AcTags provide the unique identification of four individual landmarks per tag, and provide relative range and azimuth values to each landmark. When seen with a camera, AcTags provide a valid 6-DOF pose estimate through the use of the apriltag repository. In addition, AcTags are cheap, easy to manufacture, and can be easily detected with this open-source repository. Therefore, we believe that AcTag has significant potential to improve underwater robotic localization and mapping by enabling accurate tracking of objects in underwater environments.

Documentation

Documentation on this repository can be found on Read the Docs. Please see the Quickstart guide for a quick overview of detecting AcTags.

Contact Us

For questions or inquiries, feel free to reach out to the team:

- Kalin Norman
- Daniel Butterfield
- Frost Lab
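As a rough illustration of the detection flow described above, here is a hedged sketch. The class name, constructor parameters, and run_detection method are assumptions for illustration only; consult the Quickstart guide for the actual API.

```python
# Hypothetical usage sketch -- the AcTag class, its parameters, and
# run_detection() are assumptions, not confirmed API; see the Quickstart.
import numpy as np
from actag import AcTag  # assumed entry point

# Configure a detector for the sonar's geometry (all values are placeholders).
detector = AcTag(
    min_range=0.1,                        # sonar minimum range, meters (assumed)
    max_range=1.5,                        # sonar maximum range, meters (assumed)
    horizontal_aperture=np.deg2rad(70),   # sonar horizontal field of view (assumed)
    tag_family="AcTag24h10",              # tag family string (assumed)
)

# A 2D intensity image from an imaging sonar (hypothetical file).
sonar_image = np.load("sonar_frame.npy")

# Each detection would expose the tag ID plus the four landmarks'
# range/azimuth values mentioned in the description above.
detections = detector.run_detection(sonar_image)
for tag in detections:
    print(tag)
```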
actagm
AGM library
act-api
python-act

python-act is a library used to connect to the ACT platform. The source code for this API is available on github and on PyPi.

Changelog

2.1.3

- Added support for loading Schema-based config in act.api.lib.cli with load_config(). The old method of loading the configuration, using handle_args(), is now deprecated. We will add a deprecation warning in version 2.1.4, and will remove handle_args() in version 2.2.0, no earlier than June 1st, 2023.

2.1.0

- Support for indexOption for Daily/TimeGlobal indices in the platform

1.0.27

- Facts created with act.api.Act.fact() will now have "RoleBased" as default access_mode. You can initialize with act.api.Act(access_mode="Public") to get the old defaults.
- Facts created with act.api.fact.Fact() now require the user to specify access_mode.

Setup

Install from PyPi:

```
$ pip3 install act-api
```

The platform has a REST API, and the goal of this library is to expose all functionality in the API.

Objects and Facts

The ACT platform is built on two basic types, the object and the fact.

Objects are universal elements that can be referenced uniquely by their value. An example of an object can be an IP address.

Facts are assertions or observations that tie objects together. A fact may or may not have a value describing the fact further.

Facts can be linked to one or more objects. Below, the mentions fact is linked to both an ipv4 object and a report object, but the name fact is only linked to a report.

| Object type | Object value | Fact type | Fact value           | Object type | Object value |
|-------------|--------------|-----------|----------------------|-------------|--------------|
| report      | cbc80bb(...) | mentions  | n/a                  | ipv4        | 127.0.0.1    |
| report      | cbc80bb(...) | name      | Threat Intel Summary | n/a         | n/a          |

Design principles of the Python API:

- Most functions return an object that can be chained.
- Attributes can be accessed using dot notation (e.g. fact.name and fact.type.name).

Example usage

Connect to the API

Connect to the API using a URL where the API is exposed and a user ID:

```
>>> import act.api
>>> c = act.api.Act("https://act-eu1.mnemonic.no", user_id = 1, log_level = "warning")
```

The returned object exposes most of the API in the ACT platform:

- fact - Manage facts
- fact_search - Search facts
- fact_type - Instantiate a fact type
- get_fact_types - Get fact types
- object - Manage objects
- object_search - Search objects
- origin - Manage origins
- get_object_types - Get object types

Additional arguments to act.api.Act can be passed on to requests using requests_common_kwargs, which means you can for instance add auth if the instance is behind a reverse proxy with HTTP authentication:

```
>>> c = act.api.Act("https://act-eu1.mnemonic.no", user_id = 1, log_level = "warning", requests_common_kwargs = {"auth": ("act", "<PASSWORD>")})
```

Create fact

Create a fact by calling fact(). The result can be chained using one or more of source(), destination() or bidirectional() to add linked objects.

```
>>> f = c.fact("mentions").source("report", "87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7").destination("ipv4", "127.0.0.1")
>>> f
Fact(type='mentions', access_mode='RoleBased', source_object=Object(type='report', value='87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7'), destination_object=Object(type='ipv4', value='127.0.0.1'))
```

The fact is not yet added to the platform. Use serialize() or json() to see the parameters that will be sent to the platform when the fact is added.

```
>>> f.serialize()
{'type': 'mentions', 'value': '', 'accessMode': 'RoleBased', 'sourceObject': {'type': 'report', 'value': '87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7'}, 'destinationObject': {'type': 'ipv4', 'value': '127.0.0.1'}, 'bidirectionalBinding': False}
>>> f.json()
'{"type": "mentions", "value": "", "accessMode": "RoleBased", "sourceObject": {"type": "report", "value": "87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7"}, "destinationObject": {"type": "ipv4", "value": "127.0.0.1"}, "bidirectionalBinding": false}'
```

Since the fact is not yet added, it does not have an id.

```
>>> print(f.id)
None
```

Use add() to add the fact to the platform.

```
>>> f.add()
Fact(type='mentions', origin=Origin(name='John Doe', id='00000000-0000-0000-0000-000000000001'), confidence=1.0, organization=Organization(name='Test Organization 1', id='00000000-0000-0000-0000-000000000001'), access_mode='RoleBased', source_object=Object(type='report', value='87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7', id='3eb92445-c88f-4128-8bd1-1cd27a95a088'), destination_object=Object(type='ipv4', value='127.0.0.1', id='95d200cf-89e9-4e6f-9e4f-973f2f88dd11'))
```

The fact will be replaced with the fact added to the platform, and it will now have an id.

```
>>> print(f.id)
'5e533787-e71d-4ba4-9208-531f9baf8437'
```

A string representation of the fact will show a human-readable version of the fact.

```
>>> str(f)
'(report/87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7) -[mentions/ipv4]-> (ipv4/127.0.0.1)'
```

Specifying origins when creating facts

You can specify origins when creating facts:

```
>>> act.api.base.origin_map(c.config)
{'John Doe': '00000000-0000-0000-0000-000000000001', 'Test origin': '5da8b157-5129-4f2f-9b90-6d624d62eebe'}
>>> f = c.fact("mentions", origin=c.origin(name="Test origin")).source("report", "87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7").destination("ipv4", "127.0.0.1")
>>> f.serialize()
{'type': 'mentions', 'value': '', 'origin': '5da8b157-5129-4f2f-9b90-6d624d62eebe', 'accessMode': 'RoleBased', 'sourceObject': {'type': 'report', 'value': '87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7'}, 'destinationObject': {'type': 'ipv4', 'value': '127.0.0.1'}, 'bidirectionalBinding': False}
```

You can use origin_name or origin_id when connecting to the API to apply an origin to all facts:

```
>>> c = act.api.Act("", user_id = 1, log_level="warn", origin_name="Test-origin")
>>> f = c.fact("mentions").source("report", "87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7").destination("ipv4", "127.0.0.1")
>>> f.origin
Origin(name='Test-origin')
```

Specifying access_mode when creating facts

The default access mode when creating facts is "RoleBased". This means that facts belong to an organization, and only users with access to that organization have access to the fact.

To create Public facts, available to everyone, you can use access_mode = "Public":

```
>>> f = c.fact("mentions", access_mode="Public").source("report", "87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7").destination("ipv4", "127.0.0.1")
>>> f
Fact(type='mentions', access_mode='Public', source_object=Object(type='report', value='87428fc522803d31065e7bce3cf03fe475096631e5e07bbd7a0fde60c4cf25c7'), destination_object=Object(type='ipv4', value='127.0.0.1'))
```

Get fact

Use get() to get a fact by its id.

```
>>> f = c.fact(id='4dc14f42-f175-4695-8ddb-d372b3138ec8').get()
```

Properties on objects can be retrieved by dot notation.

```
>>> f.type.name
'name'
>>> f.value
'Threat Intel Summary'
```

Add meta facts

Use meta() to create meta facts (facts about facts).

```
>>> f = c.fact(id='f994810d-3e4e-4f08-b1c4-a0b67cd1b8fc').get()
>>> import time
>>> meta = f.meta("observationTime", int(time.time()))
>>> meta
Fact(type='observationTime', value=1605100652, in_reference_to=Fact(type='mentions', id='f994810d-3e4e-4f08-b1c4-a0b67cd1b8fc'), access_mode='RoleBased')
```

As with facts, the meta fact is not sent to the backend; you must use add() to submit it to the platform.

```
>>> meta.add()
MetaFact(type='observationTime', value='1605100652', origin=Origin(name='John Doe', id='00000000-0000-0000-0000-000000000001'), confidence=1.0, in_reference_to=Fact(type='mentions', id='f994810d-3e4e-4f08-b1c4-a0b67cd1b8fc'), organization=Organization(name='Test Organization 1', id='00000000-0000-0000-0000-000000000001'), access_mode='RoleBased')
```

Get meta facts

Use get_meta() to get meta facts (facts about facts).

```
>>> f = c.fact(id='6d80469f-bc73-4520-a82a-7667a6526362').get()
>>> meta = f.get_meta()
>>> print(meta[0])
[observationTime/2018-12-12T13:42:17.526912]
```

Retract fact

Use retract() to retract a fact. The fact must have an id, either by specifying it directly, or by retrieving the fact from a search.

```
>>> f = c.fact(id='1ba6c36a-8300-4ea1-aded-03ee80083dff')
>>> f.retract()
```

Search objects

```
>>> objects = c.object_search(object_value="127.0.0.1", after="2016-09-28T21:26:22Z")
>>> len(objects)
1
>>> objects[0].type.name
'ipv4'
>>> objects[0].value
'127.0.0.1'
>>> objects[0].statistics[0].type.name
'DNSRecord'
>>> objects[0].statistics[0].count
131
>>> objects[0].statistics[1].type.name
'mentions'
>>> objects[0].statistics[1].count
114
```

Create object type

```
>>> object_type = c.object_type("fqdn").add()
```

Search facts

Search facts and limit the search by using the parameters.

```
>>> help(c.fact_search)
Help on method fact_search in module act.helpers:

fact_search(keywords='', object_type=[], fact_type=[], object_value=[], fact_value=[], organization=[], source=[], include_retracted=None, before=None, after=None, limit=None) method of act.helpers.Act instance
    Search objects
    Args:
        keywords (str):              Only return Facts matching a keyword query
        object_type (str[] | str):   Only return Facts with objects having a specific ObjectType
        fact_type (str[] | str):     Only return Facts having a specific FactType
        object_value (str[] | str):  Only return Facts with objects matching a specific value
        fact_value (str[] | str):    Only return Facts matching a specific value
        organization (str[] | str):  Only return Facts belonging to a specific Organization
        origin (str[] | str):        Only return Facts coming from a specific Origin
        include_retracted (bool):    Include retracted Facts (default=False)
        before (timestamp):          Only return Facts added before a specific timestamp. Timestamp is on this format: 2016-09-28T21:26:22Z
        after (timestamp):           Only return Facts added after a specific timestamp. Timestamp is on this format: 2016-09-28T21:26:22Z
        limit (integer):             Limit the number of returned Objects (default 25). Limit must be <= 10000.

    All arguments are optional.

    Returns ActResultSet of Facts.
```

By default the search will return an ActResultSet with 25 items.

```
>>> facts = c.fact_search(fact_type="mentions", fact_value="ipv4")
>>> len(facts)
25
>>> facts.size
25
>>> facts.count
820304
```

The complete property can be used to check whether the result returned all available items.

```
>>> facts.complete
False
```

Use the limit parameter to get more items.

```
>>> facts = c.fact_search(fact_type="mentions", object_value="127.0.0.1", limit=2000)
>>> facts.size
119
>>> facts.complete
True
```

Get object types

Get all object types.

```
>>> object_types = c.get_object_types()
>>> len(object_types)
46
>>> object_types[0].name
'technique'
>>> object_types[0].validator
'RegexValidator'
>>> object_types[0].validator_parameter
'(.|\\n)*'
```

Graph queries

The ACT platform has support for graph queries using the Gremlin query language. Use the traverse() function from an object to perform a graph query.

```
>>> path = c.object("ipv4", "127.0.0.220").traverse('g.bothE("mentions").bothV().path().unfold()')
>>> type(path[0])
<class 'act.obj.Object'>
>>> type(path[1])
<class 'act.fact.Fact'>
```

You will normally want to use unfold() in the gremlin query to make sure you receive objects and facts.

Here is an example querying for threat actor aliases. The graph of this will look like the screen shot below.

```
>>> aliases = c.object("threatActor", "APT 29").traverse('g.repeat(outE("threatActorAlias").outV()).until(cyclicPath()).path().unfold()')
>>> obj = [obj.value for obj in aliases if isinstance(obj, act.obj.Object)]
>>> obj
['APT 29', 'OfficeMonkeys', 'APT 29', 'APT 29', 'The Dukes', 'APT 29', 'APT 29', 'Hammer Toss', 'APT 29', 'APT 29', 'EuroAPT', 'APT 29', 'APT 29', 'CozyDuke', 'APT 29', 'APT 29', 'Office Monkeys', 'APT 29', 'APT 29', 'CozyCar', 'APT 29', 'APT 29', 'APT29', 'APT 29', 'APT 29', 'Dukes', 'APT 29', 'APT 29', 'Cozy Duke', 'APT 29', 'APT 29', 'Cozer', 'APT 29', 'APT 29', 'CozyBear', 'APT 29', 'APT 29', 'Cozy Bear', 'APT 29', 'APT 29', 'SeaDuke', 'APT 29', 'APT 29', 'Group 100', 'APT 29', 'APT 29', 'Minidionis', 'APT 29', 'APT 29', 'The Dukes', 'APT29', 'APT 29', 'APT 29', 'The Dukes', 'APT29', 'The Dukes', 'APT 29', 'CozyDuke', 'APT29', 'APT 29', 'APT 29', 'CozyDuke', 'APT29', 'CozyDuke', 'APT 29', 'APT29', 'The Dukes', 'APT29', 'APT 29', 'APT29', 'The Dukes', 'APT 29', 'APT 29', 'APT29', 'Cozy Bear', 'APT 29', 'APT 29', 'APT29', 'Cozy Bear', 'APT29', 'APT 29', 'APT29', 'CozyDuke', 'APT 29', 'APT 29', 'APT29', 'CozyDuke', 'APT29', 'APT 29', 'Cozy Bear', 'APT29', 'APT 29', 'APT 29', 'Cozy Bear', 'APT29', 'Cozy Bear', 'APT 29', 'The Dukes', 'APT29', 'Cozy Bear', 'APT 29', 'APT 29', 'The Dukes', 'APT29', 'Cozy Bear', 'APT29', 'APT 29', 'The Dukes', 'APT29', 'CozyDuke', 'APT 29', 'APT 29', 'The Dukes', 'APT29', 'CozyDuke', 'APT29', 'APT 29', 'CozyDuke', 'APT29', 'The Dukes', 'APT29', 'APT 29', 'CozyDuke', 'APT29', 'The Dukes', 'APT 29', 'APT 29', 'CozyDuke', 'APT29', 'Cozy Bear', 'APT 29', 'APT 29', 'CozyDuke', 'APT29', 'Cozy Bear', 'APT29', 'APT 29', 'Cozy Bear', 'APT29', 'The Dukes', 'APT29', 'APT 29', 'Cozy Bear', 'APT29', 'The Dukes', 'APT 29', 'APT 29', 'Cozy Bear', 'APT29', 'CozyDuke', 'APT 29', 'APT 29', 'Cozy Bear', 'APT29', 'CozyDuke', 'APT29']
>>> set(obj)
{'Office Monkeys', 'EuroAPT', 'Minidionis', 'APT29', 'OfficeMonkeys', 'Hammer Toss', 'CozyCar', 'The Dukes', 'Cozer', 'CozyBear', 'Cozy Bear', 'SeaDuke', 'Group 100', 'Dukes', 'CozyDuke', 'Cozy Duke', 'APT 29'}
```

Type system

Most instances are bootstrapped with a type system that will cover most use cases. However, it is also possible to extend the system with additional objectTypes / factTypes.

Add object types

You can add object types by creating an ObjectType object and executing add(). There is also a shortcut available on the client (object_type) which can be used like this:

```
>>> c.object_type(name="filename", validator_parameter='.+').add()
ObjectType(name='filename', id='432c6d8a-542c-4374-94d1-b14e95139877', validator_parameter='.+', namespace=NameSpace(name='Global', id='00000000-0000-0000-0000-000000000000'))
```

The validator_parameter specifies what values are allowed on this object. In this example, any non-empty value is allowed.

Add fact types

Facts specify a relation to one or two objects, and to add facts there must be FactTypes that specify these bindings. There is a helper function that will create a fact type with bindings between all existing object types in the system:

```
>>> c.create_fact_type_all_bindings("droppedBy", '.*')
(...)
```

However, on production systems it is advisable to only create bindings between objects that make sense for the given fact type, like this:

```
>>> object_bindings = [{
    "destinationObjectType": "hash",
    "sourceObjectType": "filename"
}]
>>> c.create_fact_type("droppedBy", '.*', object_bindings = object_bindings)
FactType(name='droppedBy', id='cbc49137-3c52-4655-8b47-386d31de231a', validator_parameter='.*', relevant_object_bindings=[RelevantObjectBindings(source_object_type='432c6d8a-542c-4374-94d1-b14e95139877', destination_object_type='e4b673b6-7a59-4fca-b8eb-ff4489501cf5')], namespace=NameSpace(name='Global', id='00000000-0000-0000-0000-000000000000'))
```

The bindings will be created using a combination of all source/destination objects for each entry.

It is also possible to specify bidirectional bindings like this:

```
>>> object_bindings = [{
    "bidirectional": True,
    "destinationObjectType": "threatActor",
    "sourceObjectType": "threatActor"
}]
```

Update fact types

Facts are immutable, so it is not possible to update the ObjectType and FactType validators, as this might lead to an inconsistent state. However, it is possible to add object bindings to existing fact types. This function requires the objects to be retrieved first:

```
>>> dropped_by = [ft for ft in c.get_fact_types() if ft.name == "droppedBy"][0]
>>> hash = [ot for ot in c.get_object_types() if ot.name == "hash"][0]
>>> filename = [ot for ot in c.get_object_types() if ot.name == "filename"][0]
>>> dropped_by.id
'18b0f70e-82dc-4904-b745-d20b0ac54adf'
>>> dropped_by.add_binding(source_object_type=filename, destination_object_type=hash)
```

Origins

The platform supports origins to record where a fact originates from. If no origin is given when creating a fact, the origin will be the user itself.

List origins

You can list origins using get_origins():

```
>>> c.get_origins()
[Origin(name='John Doe', id='00000000-0000-0000-0000-000000000001', namespace=NameSpace(name='Global', id='00000000-0000-0000-0000-000000000000'), organization=Organization(name='Test Organization 1', id='00000000-0000-0000-0000-000000000001'), trust=0.8)]
```

Add origin

```
>>> o = c.origin("Test origin", trust=0.5, description="My test origin")
>>> o.add()
Origin(name='Test origin', id='5da8b157-5129-4f2f-9b90-6d624d62eebe', namespace=NameSpace(name='Global', id='00000000-0000-0000-0000-000000000000'), organization=Organization(), description='My test origin', trust=0.5)
```

Get origin

```
>>> o = c.origin(id="5da8b157-5129-4f2f-9b90-6d624d62eebe")
>>> o.get()
Origin(name='Test origin', id='5da8b157-5129-4f2f-9b90-6d624d62eebe', namespace=NameSpace(name='Global', id='00000000-0000-0000-0000-000000000000'), organization=Organization(), description='My test origin', trust=0.5)
```

Fact chains

Fact chains are currently an experimental concept that supports chains of facts, where some of the objects in the chain can be unknowns / placeholders.

The unknowns are marked using the value "*". After the chain is created they will get a special value "[placeholder[<HASH>]]", where the hash is calculated based on the incoming/outgoing paths from the placeholder.

```
>>> facts = (
    c.fact("observedIn").source("uri", "http://uri.no").destination("incident", "*"),
    c.fact("targets").source("incident", "*").destination("organization", "*"),
    c.fact("memberOf").source("organization", "*").destination("sector", "energy"),
)
>>> chain = act.api.fact.fact_chain(*facts)
>>> for fact in chain:
...     fact.add()
```

This feature should be considered experimental and is subject to change. It is implemented client side, and the backend currently has no notion of what a fact chain is, but the frontend will show the value in a more user-friendly way.

Also note that adding facts in a chain as shown above is NOT atomic, and might lead to inconsistencies if some of the facts do not pass validation in the backend.

Tests

Tests (written in pytest) are contained in the test/ folder. Mock objects are available for most API requests in the test/data/ folder.

This command will execute the tests using both python2 and python3 (requires pytest, python2 and python3):

```
test/run-tests.sh
```
actappliance
# actappliance

### Use case ###

This repo abstracts the type of connection you are making to an actifio appliance. You can write a test or use case one way and execute it over SSH or RESTful connections.

The primary idea is that all sent commands can look like CLI, as it is shorter and more people are familiar with it, while the responses look like the RESTful API's JSON returns, as they are easier to parse.

It also allows direct commands using either connection, with the same contract of CLI-like requests and RESTful-like responses, for the case where the call is unreliable or unusable for whatever reason (CLI permissions, arbitrary outputs).

# Functionality of Library #

First create your appliance/sky/uds object:

```
> a = Appliance(ip_address=<sky or cds ip>, hostname=<sky or cds dns name>)  # hostname or ip_address required
> ex. a = Appliance(ip_address=8.8.8.8)
```

With default settings it will try to send RESTful calls for all cmd methods.

```
>>> a.cmd('udsinfo lsversion')
{u'status': 0, u'result': [{u'version': u'7.0.0.68595', u'component': u'CDS', u'installed': u'2016-03-07 12:14:37'}, {u'version': u'7.0.0.68595', u'component': u'psrv-revision', u'installed': u'2016-03-07 12:14:37'}]}
```

Note: You will likely see debug messages if your log levels aren't set!

If you store the return, the object has additional methods like parse and raise_for_error.

```
>>> act_response = a.cmd('udsinfo lsversion')
>>> act_response.parse()
{u'version': u'7.0.0.68595', u'component': u'CDS', u'installed': u'2016-03-07 12:14:37'}
```

### Parse ###

The parse method tries to simplify interactions with our RESTful responses. It only returns dictionaries and strings; it will never return a list! In the case above you can see it returned the first relevant dictionary it found. If the info you desire was the version of the psrv-revision component, you would use m_k='component' (search key is component) and m_v='psrv-revision' (matching value is psrv-revision). Those two inputs in action:

```
>>> act_response.parse(m_k='component', m_v='psrv-revision')
{u'version': u'7.0.0.68595', u'component': u'psrv-revision', u'installed': u'2016-03-07 12:14:37'}
```

However, we wanted the version, not the whole dictionary, so we would add k='version' (search for key version in the dict and return the corresponding value).

The full command and result:

```
>>> act_response
{u'status': 0, 'errorcode': 8675309, 'errormessage': 'Something went wrong', u'result': [{u'version': u'7.0.0.68595', u'component': u'CDS', u'installed': u'2016-03-07 12:14:37'}, {u'version': u'7.0.0.68595', u'component': u'psrv-revision', u'installed': u'2016-03-07 12:14:37'}]}
>>> act_response.parse(m_k='component', m_v='psrv-revision', k='version')
u'7.0.0.68595'
```

Here we can see the use of parse is to simplify basic parsing of appliance responses.

* Advanced example

If you have used parse for a while, you probably have come to understand how it functions. Overreliance on parse may lead to writing code like the following:

```
ids = [act_response.parse(backups, k='id', index=backup) for backup in range(len(backups))]
```

The above is considered ugly. When doing something like the above, rewrite it to avoid using parse and instead perform its action directly. The following has an identical result to the above line:

```
ids = [data['id'] for data in backups['result']]
```

If you want to avoid list comprehensions you could do the following:

```
ids = []
for data in backups['result']:
    ids.append(data['id'])
```

### Raise_for_error ###

The raise_for_error method inspects the response dictionary to determine if an Actifio-related error occurred. These errors do not include connection errors like failing to authenticate and get a valid REST sessionid. They are specifically for errors that are bubbled up to the user when interacting with an Actifio appliance. The response objects have two attributes, "errormessage" and "errorcode", which you can use to handle errors that should not end the test.

* Basic example

```
>>> r = self.a.cmd('udsinfo lsversion -l')
>>> r.raise_for_error()
Response: {u'errorcode': 10010, u'errormessage': u'invalid option: l'}
```

This raised an error because -l is not a valid option for "udsinfo lsversion". The error object itself has direct access to errorcode and errormessage. You can handle these exceptions as needed:

```
>>> from actappliance.act_errors import ACTError
>>> try:
...     r.raise_for_error()
... except ACTError as e:
...     if e.errorcode == 10010:
...         # handle or allow this error
...         print("I am allowing this error")
...     else:
...         raise
```

An alternative way to handle this would be to catch the specific error:

```
>>> from actappliance.act_errors import act_errors
>>> try:
...     r.raise_for_error()
... except act_errors[10010]:
...     # handle or allow this error
...     print("I am allowing this error")
```

Note: If your command needs to specifically be REST or SSH, and cannot function or would be an inaccurate test if sent the other way, use the specific methods instead of cmd.

### Have fun!

![Lots of fun](http://i.imgur.com/fzhEnP0.png)
act-appliance
This repo abstracts the type of connection you are making to an actifio appliance. You can write a test or use case one way and execute it over SSH or RESTful connections.
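Since this description matches the actappliance package above, usage presumably follows the same pattern; a minimal hedged sketch, assuming the same Appliance class and cmd() contract documented there (the import path is a guess):

```python
# Hedged sketch: assumes act-appliance exposes the same Appliance(...).cmd(...)
# contract as actappliance above; the import path below is an assumption.
from actappliance import Appliance

a = Appliance(ip_address="8.8.8.8")   # hostname= is also accepted per the docs
resp = a.cmd("udsinfo lsversion")     # CLI-style request, sent over REST or SSH
print(resp["result"][0]["version"])   # RESTful-JSON-style response
```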
actarius
Opinionated wrappers for the mlflow tracking API.

from actarius import NewExperimentRun

Contents

1. Features
2. Installation
3. Use
4. Configuration
5. Contributing
   5.1 Installing for development
   5.2 Running the tests
   5.3 Adding documentation
6. Credits

1 Features

actarius is meant to facilitate the way we log mlflow experiments in BigPanda, which means the following additions over the mlflow tracking API:

- Automatically logging stdout and stderr to file (without hiding them from the terminal/console) and logging this file as an easily readable artifact of the experiment. This supports nested experiment run contexts.
- Adding a bunch of default tags (currently focused around git).
- Convenience logging methods for dataframes as CSVs, and for arbitrary Python objects as either pickle or text files (the latter using their inherent text representation).
- Warning, but not erroring, when mlflow is badly configured or not configured.

2 Installation

```
pip install actarius
```

3 Use

actarius provides a custom context manager that wraps around MLflow code to help you run and track experiments using BigPanda's conventions. This context manager should be provided with some basic parameters that configure which experiment is being run:

```python
from actarius import ExperimentRunContext, log_df

expr_databricks_path = 'Shared/experiments/pattern_generation/run_org'
with ExperimentRunContext(expr_databricks_path):
    mlflow.set_tags({'some_tag': 45})
    mlflow.log_params({'alpha': 0.5, 'beta': 0.2})
    # run experiment code...
    mlflow.log_metrics({'auc': 0.71, 'stability': 33.43})
    log_df(my_df)
```

actarius also provides an experiment object that needs to be closed explicitly:

```python
from actarius import ExperimentRun

expr_databricks_path = 'Shared/experiments/pattern_generation/run_org'
exp_obj = ExperimentRun(expr_databricks_path)
exp_obj.set_tags({'some_tag': 45})
exp_obj.log_params({'alpha': 0.5, 'beta': 0.2})
# run experiment code...
exp_obj.log_df(my_df)
exp_obj.end_run(
    tags={'another_tag': 'test'},
    params={'log_param_here': 4},
    metrics={'auc': 0.71, 'stability': 33.43},
)
```

4 Configuration

actarius will fail silently if either mlflow or the databricks CLI is not correctly configured. It will issue a small warning on each experiment logging attempt, however (each closing of an experiment context, and each explicit call to an end_run() method of an actarius.ExperimentRun object).

Additionally, in this case experiment results will be logged into the ./mlruns/ directory (probably to the ./mlruns/0/ subdirectory), with random run ids determined and used to create per-run sub-directories.

To have the stack trace of the underlying error printed after the warning, simply set the value of the ACTARIUS__PRINT_STACKTRACE environment variable to True. Running will then commence regularly.

5 Contributing

5.1 Installing for development

Clone:

```
git clone [email protected]:bigpandaio/actarius.git
```

Install in development mode, including test dependencies:

```
cd actarius
pip install -e '.[test]'
```

5.2 Running the tests

To run the tests use:

```
cd actarius
pytest
```

5.3 Adding documentation

The project is documented using the numpy docstring conventions, which were chosen as they are perhaps the most widely-spread conventions that are both supported by common tools such as Sphinx and result in human-readable docstrings. When documenting code you add to this project, follow these conventions.

Additionally, if you update this README.rst file, use python setup.py checkdocs to validate it compiles.

6 Credits

Created by Shay Palachy ([email protected]).
act_as_executable
UNKNOWN
act-atmos
The Atmospheric data Community Toolkit (ACT) is an open source Python toolkit for working with atmospheric time-series datasets of varying dimensions. The toolkit has functions for every part of the scientific process: discovery, IO, quality control, corrections, retrievals, visualization, and analysis. It is a community platform for sharing code with the goal of reducing duplication of effort and better connecting the science community with programs such as the Atmospheric Radiation Measurement (ARM) User Facility. Overarching development goals will be updated on a regular basis as part of the Roadmap.

Please report any issues or feature requests by submitting an Issue. Additionally, our discussion boards are open for ideas, general discussions or questions, and show and tell!

Version 2.0

ACT now has a version 2.0 release. This release contains many function naming changes, such as IO and Discovery module function naming changes. To prepare for this release, a v2.0 guide has been provided that explains the changes and how to work with the new syntax. The new release is available on both PyPI and conda-forge. Please report any bugs of the 2.0 release to the Issue Tracker mentioned in the Important Links section below.

Important Links

- Documentation: https://arm-doe.github.io/ACT/
- Examples: https://arm-doe.github.io/ACT/source/auto_examples/index.html
- Issue Tracker: https://github.com/ARM-DOE/ACT/issues

Citing

If you use ACT to prepare a publication, please cite the DOI listed in the badge above, which is updated with every version release to ensure that contributors get appropriate credit. The DOI is provided through Zenodo.

Dependencies

- xarray
- NumPy
- SciPy
- matplotlib
- skyfield
- pandas
- dask
- Pint
- PyProj
- Six
- Requests
- MetPy
- fsspec
- lazy_loader
- cmweather

Optional Dependencies

- MPL2NC: reading binary MPL data
- Cartopy: mapping and geoplots
- Py-ART: reading radar files, plotting and corrections
- scikit-posthocs: using interquartile range or generalized Extreme Studentized Deviate quality control tests
- icartt: an ICARTT file format reader and writer for Python
- PySP2: a Python package for reading and processing Single Particle Soot Photometer (SP2) datasets
- MoviePy: a Python package for creating movies from images

Installation

ACT can be installed a few different ways. One way is to install using pip. When installing with pip, the ACT dependencies found in requirements.txt will also be installed. To install using pip:

```
pip install act-atmos
```

The easiest method for installing ACT is to use the conda packages from the latest release. To do this you must download and install Anaconda or Miniconda. With Anaconda or Miniconda installed, it is recommended to create a new conda environment when using ACT or even other packages. To create a new environment based on the environment.yml:

```
conda env create -f environment.yml
```

Or for a basic environment, downloading optional dependencies as needed:

```
conda create -n act_env -c conda-forge python=3.12 act-atmos
```

Basic command in a terminal or command prompt to install the latest version of ACT:

```
conda install -c conda-forge act-atmos
```

To update an older version of ACT to the latest release use:

```
conda update -c conda-forge act-atmos
```

If you are using mamba:

```
mamba install -c conda-forge act-atmos
```

If you do not wish to use Anaconda or Miniconda as a Python environment, or want to use the latest, unreleased version of ACT, see the section below on installing from source.

Installing from Source

Installing ACT from source is the only way to get the latest updates and enhancements to the software that have not yet made it into a release. The latest source code for ACT can be obtained from the GitHub repository, https://github.com/ARM-DOE/ACT. Either download and unpack the zip file of the source code or use git to checkout the repository:

```
git clone https://github.com/ARM-DOE/ACT.git
```

Once you have the directory locally, you can install ACT in development mode using:

```
pip install -e .
```

If you want to install the repository directly, you can use:

```
pip install git+https://github.com/ARM-DOE/ACT.git
```

Contributing

ACT is an open source, community software project. Contributions to the package are welcomed from all users.

The latest source code can be obtained with the command:

```
git clone https://github.com/ARM-DOE/ACT.git
```

If you are planning on making changes that you would like included in ACT, forking the repository is highly recommended.

We welcome contributions for all uses of ACT, provided the code can be distributed under the BSD 3-clause license. A copy of this license is available in the LICENSE.txt file in this directory. For more on contributing, see the contributor's guide.

Testing

For testing, we use pytest. To install pytest:

```
$ conda install -c conda-forge pytest
```

And for matplotlib image testing with pytest:

```
$ conda install -c conda-forge pytest-mpl
```

After installation, you can launch the test suite from outside the source directory (you will need to have pytest installed, and for the --mpl argument you need pytest-mpl):

```
$ pytest --mpl --pyargs act
```

In-place installs can be tested using the pytest command from within the source directory.
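As a quick orientation, here is a minimal usage sketch. It assumes the ACT 2.0 renamed IO function (act.io.arm.read_arm_netcdf) and uses a hypothetical ARM data file and variable name; check the v2.0 guide and documentation linked above for the authoritative API.

```python
# A minimal sketch, assuming ACT 2.0 naming; the file and variable names below
# are hypothetical examples, not shipped sample data.
import act
import matplotlib.pyplot as plt

# Read an ARM-standard netCDF file into an xarray Dataset.
ds = act.io.arm.read_arm_netcdf("sgpmetE13.b1.20190101.000000.cdf")

# Plot one variable as a time series.
display = act.plotting.TimeSeriesDisplay(ds)
display.plot("temp_mean", subplot_index=(0,))  # variable name is an assumption
plt.show()
```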
act-bb-usage
UNKNOWN
actchain
actchain: Asynchronous Crypto Trading Chain

Installation

```
pip install actchain
```

Requires Python 3.11 or higher.

Usage

```python
# An example of using actchain to compute the weighted mid price of the order book
from typing import AsyncGenerator

import asyncio

import actchain
import ccxt.pro as ccxt_pro


async def loop_binance_orderbook() -> AsyncGenerator[dict, None]:
    exchange = ccxt_pro.binance()
    while True:
        yield await exchange.watch_order_book("BTC/USDT")


async def order_book_feature_computation(event: actchain.Event) -> dict:
    best_ask = event.data["asks"][0]
    best_bid = event.data["bids"][0]
    w_mid = (best_ask[0] * best_ask[1] + best_bid[0] * best_bid[1]) / (
        best_ask[1] + best_bid[1]
    )
    mid = (event.data["asks"][0][0] + event.data["bids"][0][0]) / 2
    return {"w_mid": w_mid, "mid": mid}


async def main() -> None:
    flow = (
        actchain.Flow("main")
        .add(actchain.Loop(loop_binance_orderbook).as_chain())
        .add(actchain.Function(order_book_feature_computation).as_chain())
        .add(actchain.Function(lambda event: print(event.data)).as_chain())
    )
    await flow.run()


if __name__ == "__main__":
    asyncio.run(main())
```
actchainkit
actchainkit
actcrm-python
actcrm-python

Act! CRM API wrapper written in Python.

Batchbook-python

Batchbook API wrapper written in Python.

Installing

```
pip install actcrm-python
```

Usage

Simple access with API KEY:

```python
from actcrm.client import Client

client = Client('API_KEY', 'DEVELOPER_KEY')
```

Create a contact:

```python
client.create_contact()
```

Or you can specify what you want to send. Every field must be a string.

```python
client.create_contact(firstName='Jhon', lastName='Snow', mobilePhone='0000000')
```

Get contacts:

```python
client.get_contacts()
```

You can use OData query parameters to filter the contacts:

```python
client.get_contacts(top=1, order_by='created desc', filter='. . .')
```

Get a specific contact:

```python
client.get_contact(contact_id)
```

Delete a specific contact:

```python
client.delete_contact(contact_id)
```

You can do the same with opportunities:

```python
client.create_opportunity()
```

Get metadata:

```python
client.get_metadata()
```

Requirements

- Requests
- Urllib

TODO

- Calendar
- Campaigns
- ContactActivities
- Emarketing
- Groups
- Interactions
- Todos
- UserInfos
actdiag
actdiag

actdiag generates activity-diagram image files from spec-text files.

Features

- Generate activity-diagrams from dot-like text (basic feature).
- Multilingualization for node labels (utf-8 only).

You can get some examples and generated images on blockdiag.com.

Setup

Use easy_install or pip:

```
$ sudo easy_install actdiag
Or
$ sudo pip install actdiag
```

spec-text setting sample

A few examples are available. You can get more examples at blockdiag.com.

simple.diag

simple.diag simply defines nodes and transitions in a dot-like text format:

```
diagram {
  A -> B -> C;
  lane you {
    A; B;
  }
  lane me {
    C;
  }
}
```

Usage

Execute the actdiag command:

```
$ actdiag simple.diag
$ ls simple.png
simple.png
```

Requirements

- Python 3.7 or later
- blockdiag 1.5.0 or later
- funcparserlib 0.3.6 or later
- reportlab (optional)
- wand and imagemagick (optional)
- setuptools

License

Apache License 2.0
act_dr6_lenslike
ACT DR6 Lensing Likelihood

This repository contains likelihood software for the ACT DR6 CMB lensing analysis. If you use this software and/or the associated data, please cite both of the following papers:

- Madhavacheril, Qu, Sherwin, MacCrann, Li et al ACT Collaboration (2023), arxiv:2304.05203
- Qu, Sherwin, Madhavacheril, Han, Crowley et al ACT Collaboration (2023), arxiv:2304.05202

In addition, if you use the ACT+Planck lensing combination variant from the likelihood, please also cite:

- Carron, Mirmelstein, Lewis (2022), arxiv:2206.07773, JCAP09(2022)039

Chains

A pre-release version of the chains from Madhavacheril et al is available here. Please make sure to read the README file.

Step 1: Install

Option 1: Install from PyPI

You can install the likelihood directly with:

```
pip install act_dr6_lenslike
```

Option 2: Install from Github

If you wish to be able to make changes to the likelihood for development, first clone this repository. Then install with symbolic links:

```
pip install -e . --user
```

Tests can be run using:

```
python setup.py test
```

Step 2: Download and unpack data

This can be performed automatically with the supplied get-act-data.sh script. Otherwise follow the steps below.

Download the likelihood data tarball for ACT DR6 lensing from NASA's LAMBDA archive. Extract the tarball into the act_dr6_lenslike/data/ directory in the cloned repository such that the directory v1.2 is directly inside it. Only then should you proceed with the next steps.

Step 3: Use in Python codes

Generic Python likelihood

```python
import act_dr6_lenslike as alike

variant = 'act_baseline'
lens_only = False # use True if not combining with any primary CMB data
like_corrections = True # should be False if lens_only is True

# Do this once
data_dict = alike.load_data(variant,lens_only=lens_only,like_corrections=like_corrections)
# This dict will now have entries like `data_binned_clkk` (binned data vector), `cov`
# (covariance matrix) and `binmat_act` (binning matrix to be applied to a theory
# curve starting at ell=0).

# Get cl_kk, cl_tt, cl_ee, cl_te, cl_bb predictions from your Boltzmann code.
# These are the CMB lensing convergence spectra (not potential or deflection)
# as well as the TT, EE, TE, BB CMB spectra (needed for likelihood corrections)
# in uK^2 units. All of these are C_ell (not D_ell), no ell or 2pi factors.
# Then call

lnlike = alike.generic_lnlike(data_dict,ell_kk,cl_kk,ell_cmb,cl_tt,cl_ee,cl_te,cl_bb)
```

Cobaya likelihood

Your Cobaya YAML or dictionary should have an entry of this form:

```
likelihood:
  act_dr6_lenslike.ACTDR6LensLike:
    lens_only: False
    stop_at_error: True
    lmax: 4000
    variant: act_baseline
```

No other parameters need to be set (e.g. do not manually set like_corrections or no_like_corrections here). An example is provided in ACTDR6LensLike-example.yaml. If, however, you are combining with the ACT DR4 CMB 2-point power spectrum likelihood, you should also set no_actlike_cmb_corrections: True (in addition to lens_only: True as described below). You do not need to do this if you are combining with Planck CMB 2-point power spectrum likelihoods.

Important parameters

variant should be:

- act_baseline for the ACT-only lensing power spectrum with the baseline multipole range
- act_extended for the ACT-only lensing power spectrum with the extended multipole range (L<1250)
- actplanck_baseline for the ACT+Planck lensing power spectrum with the baseline multipole range
- actplanck_extended for the ACT+Planck lensing power spectrum with the extended multipole range (L<1250)

lens_only should be:

- False when combining with any primary CMB measurement
- True when not combining with any primary CMB measurement

Recommended theory accuracy

For CAMB calls, we recommend the following (or higher accuracy):

- lmax: 4000
- lens_margin: 1250
- lens_potential_accuracy: 4
- AccuracyBoost: 1
- lSampleBoost: 1
- lAccuracyBoost: 1
- halofit_version: mead2016
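In a Cobaya run, one natural place for these accuracy settings is the CAMB theory block's extra_args; a minimal sketch of the dictionary form under that assumption, combining the likelihood entry above with the recommended values (params and sampler entries are omitted):

```python
# A minimal sketch of a Cobaya input dictionary; the accuracy settings are the
# recommended values listed above, passed via CAMB's extra_args mechanism.
info = {
    "likelihood": {
        "act_dr6_lenslike.ACTDR6LensLike": {
            "lens_only": False,
            "stop_at_error": True,
            "lmax": 4000,
            "variant": "act_baseline",
        },
    },
    "theory": {
        "camb": {
            "extra_args": {
                "lmax": 4000,
                "lens_margin": 1250,
                "lens_potential_accuracy": 4,
                "AccuracyBoost": 1,
                "lSampleBoost": 1,
                "lAccuracyBoost": 1,
                "halofit_version": "mead2016",
            },
        },
    },
    # "params" and "sampler" entries omitted; this only shows where the
    # recommended accuracy settings go.
}
```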
acted.projects
UNKNOWN
actelink-computation
No description available on PyPI.
actelink-variables
Actelink helpers library for variables management
actflow
ActflowToolbox

The Brain Activity Flow ("Actflow") Toolbox

A toolbox to facilitate discovery of how cognition & behavior are generated via brain network interactions.

Version 0.3.0

Visit https://colelab.github.io/ActflowToolbox/ for more information.

Version info:

- Version 0.3.0: Added Glasso FC to the set of connectivity methods (new recommended best practice for activity flow mapping); see the updated HCP_example Jupyter notebook for a demo [2023-09-24]
- Version 0.2.6: Added combinedFC to the set of connectivity methods (current recommended best practice for activity flow mapping); see the updated HCP_example Jupyter notebook for a demo
- Version 0.2.5: Fixed a minor bug related to applying the parcel-level non-circular code to subcortical data.
- Version 0.2.4: Updated the non-circular code to be more efficient. Also created an easier and faster version of the non-circular approach that is at the parcel level (excluding all parcels within 10mm of the target parcel).

Cite as:

1. Cole MW, Ito T, Bassett DS, Schultz DH (2016). "Activity flow over resting-state networks shapes cognitive task activations". Nature Neuroscience. 19:1718–1726. http://dx.doi.org/10.1038/nn.4406
2. https://github.com/ColeLab/ActflowToolbox/
3. The article that describes the specific toolbox functions being used in most detail

How to install

Option 1: Within an Anaconda environment:

```
conda install -c conda-forge actflow
```

Option 2:

```
pip install actflow
```

Option 3:

```
git clone --recurse-submodules https://github.com/ColeLab/ActflowToolbox.git
```

How to use

See this paper for an overview of how to use the Brain Activity Flow Toolbox: Cocuzza CV, Sanchez-Romero R, Cole MW (2022). "Protocol for activity flow mapping of neurocognitive computations using the Brain Activity Flow Toolbox". STAR Protocols. 3, 1. doi:10.1016/j.xpro.2021.101094

Example notebook: https://colelab.github.io/ActflowToolbox/HCP_example.html

Email list/forum

We strongly encourage you to join the ColeNeuroLab Users Group (https://groups.google.com/forum/#!forum/coleneurolab_users), so you can be informed about major updates in this repository and others hosted by the Cole Neurocognition Lab.

Software development guidelines

- Primary language: Python 3
- Secondary language (for select functions, minimally maintained/updated): MATLAB
- Versioning guidelines: Semantic Versioning 2.0.0 (https://semver.org/); used loosely prior to v1.0.0, strictly after
- Using GitHub for version control
  - Those new to Git should go through a tutorial for branching, etc.: https://www.youtube.com/watch?v=oFYyTZwMyAg and https://guides.github.com/activities/hello-world/
  - Use branching for adding new features, making sure code isn't broken by changes
  - Considering using unit tests and Travis CI (https://travis-ci.org) in future
- Style specifications:
  - PEP8 style as general guidelines (loosely applied for now): https://www.python.org/dev/peps/pep-0008/
  - Soft tabs (4 spaces) for indentations [ideally set "soft tabs" setting in editor, so pressing tab key produces 4 spaces]
  - Use intuitive variable and function names
  - Add detailed comments to explain what code does (especially when not obvious)

Contents

Note: only a subset of files are listed and described.

- Directory: actflowcomp - Calculating activity flow mapping
  - actflowcalc.py - Main function for calculating activity flow mapping predictions
  - actflowtest.py - A convenience function for calculating activity-flow-based predictions and testing prediction accuracies (across multiple subjects)
  - noiseceilingcalc.py - A convenience function for calculating the theoretical limit on activity-flow-based prediction accuracies (based on noise in the data being used)
- Directory: connectivity_estimation - Connectivity estimation methods
  - calcactivity_parcelwise_noncircular_surface.py: High-level function for calculating parcelwise actflow with parcels that are touching (e.g., the Glasser 2016 parcellation), focusing on task activations. This can create circularity in the actflow predictions due to spatial autocorrelation. This function excludes vertices within X mm (10 mm by default) of each to-be-predicted parcel.
  - calcconn_parcelwise_noncircular_surface.py: High-level function for calculating parcelwise actflow with parcels that are touching (e.g., the Glasser 2016 parcellation), focusing on connectivity estimation. This can create circularity in the actflow predictions due to spatial autocorrelation. This function excludes vertices within X mm (10 mm by default) of each to-be-predicted parcel.
  - corrcoefconn.py: Calculation of Pearson correlation functional connectivity
  - multregconn.py: Calculation of multiple-regression functional connectivity
  - partial_corrconn.py: Calculation of partial-correlation functional connectivity
  - pc_multregconn.py: Calculation of regularized multiple-regression functional connectivity using principal components regression (PCR). Useful when there are fewer time points than nodes, for instance.
- Directory: dependencies - Other packages Actflow Toolbox depends on
- Directory: examples - Example analyses that use the Actflow Toolbox (Jupyter notebook)
- Directory: images - Example images generated by the Actflow Toolbox
- Directory: matlab_code - Limited functions for activity flow mapping in MATLAB
  - PCmultregressionconnectivity.m - Compute multiple regression-based functional connectivity; PC allows for more regions/voxels than time points.
  - actflowmapping.m - MATLAB version of actflowcalc.py; main function for computing activity flow mapping predictions
  - multregressionconnectivity.m - Compute multiple regression-based functional connectivity
- Directory: model_compare - Comparing prediction accuracies across models
  - model_compare_predicted_to_actual.py - Calculation of predictive model performance
  - model_compare.py - Reporting of model prediction performance, and comparison of prediction performance across models
- Directory: network_definitions - Data supporting parcel/region sets and network definitions
  - dilateParcels.py - Dilate individual parcels (cortex and subcortex) and produce masks to exclude vertices within 10 mm; requires Connectome Workbench
- Directory: simulations - Simulations used for validating methods
- Directory: tools - Miscellaneous tools
  - addNetColors.py - Generates a heatmap figure with The Cole-Anticevic Brain-wide Network Partition (CAB-NP) colors along axes
  - addNetColors_Seaborn.py - Generates a Seaborn heatmap figure with The Cole-Anticevic Brain-wide Network Partition (CAB-NP) colors along axes
  - map_to_surface.py - Maps 2D matrix data onto a dscalar surface file (64k vertices); uses the Glasser et al. 2016 ROI parcellation
  - max_r.py - Permutation testing to control for FWE (as in Nichols & Holmes, 2002 max-t); individual difference correlations (r)
  - max_t.py - Permutation testing to control for FWE (as in Nichols & Holmes, 2002); t-test variants (t)
  - regression.py - Compute multiple linear regression (with L2 regularization option)
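To make the core idea concrete, here is a minimal NumPy sketch of the activity flow mapping computation that actflowcalc.py implements (per Cole et al. 2016): each region's predicted task activation is the connectivity-weighted sum of all other regions' activations. This illustrates the math only, with random data, and is not the toolbox's API.

```python
# Activity flow mapping in plain NumPy: pred[j] = sum_i act[i] * fc[i, j], i != j.
# Random data stands in for real activations/connectivity; use the toolbox
# functions (e.g. actflowcomp) for real analyses.
import numpy as np

n_regions = 360
rng = np.random.default_rng(0)
act_vect = rng.standard_normal(n_regions)             # observed task activations
fc_mat = rng.standard_normal((n_regions, n_regions))  # functional connectivity estimate
np.fill_diagonal(fc_mat, 0)                           # exclude self-connections

# Predicted activation for each region from all other regions' activity.
pred_vect = fc_mat.T @ act_vect

# Prediction accuracy: correlation between predicted and actual activations
# (near 0 here because the data are random).
r = np.corrcoef(pred_vect, act_vect)[0, 1]
print(f"prediction-to-actual r = {r:.3f}")
```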
actfw
DEPRECATED

Actcast Application Framework for Python

This package provides a Python API for developing Actcast apps.

This framework has moved into actfw-core & actfw-raspberrypi since v1.4.0. This package only provides the actfw module name, which internally binds submodules in actfw_core and actfw_raspberrypi.

Document

- API References

Usage

Construct your application with a task parallel model.

- Application
  - actfw.Application: Main application
- Workers
  - actfw.task.Producer: Task generator
    - actfw.capture.PiCameraCapture: Generate CSI camera capture image
    - actfw.capture.V4LCameraCapture: Generate UVC camera capture image
  - actfw.task.Pipe: Task to Task converter
  - actfw.task.Consumer: Task terminator

Each worker is executed in parallel.

User should:

1. Define subclasses of Producer/Pipe/Consumer

```python
class MyPipe(actfw.task.Pipe):
    def proc(self, i):
        ...
```

2. Connect defined worker objects

```python
p = MyProducer()
f1 = MyPipe()
f2 = MyPipe()
c = MyConsumer()
p.connect(f1)
f1.connect(f2)
f2.connect(c)
```

3. Register to Application

```python
app = actfw.Application()
app.register_task(p)
app.register_task(f1)
app.register_task(f2)
app.register_task(c)
```

4. Execute application

```python
app.run()
```
actfw-core
actfw-core

Core components of actfw, a framework for Actcast Application written in Python. actfw-core is intended to be independent of any specific device.

Installation

```
sudo apt-get update
sudo apt-get install -y python3-pip python3-pil
sudo apt-get install -y libv4l-0 libv4lconvert0  # if using `V4LCameraCapture`
pip3 install actfw-core
```

Document

- API References

Usage

Construct your application with a task parallel model.

- Application
  - actfw_core.Application: Main application
- Workers
  - actfw_core.task.Producer: Task generator
    - actfw_core.capture.V4LCameraCapture: Generate UVC camera capture image
  - actfw_core.task.Pipe: Task to Task converter
  - actfw_core.task.Consumer: Task terminator

Each worker is executed in parallel.

User should:

1. Define subclasses of Producer/Pipe/Consumer

```python
class MyPipe(actfw_core.task.Pipe):
    def proc(self, i):
        ...
```

2. Connect defined worker objects

```python
p = MyProducer()
f1 = MyPipe()
f2 = MyPipe()
c = MyConsumer()
p.connect(f1)
f1.connect(f2)
f2.connect(c)
```

3. Register to Application

```python
app = actfw_core.Application()
app.register_task(p)
app.register_task(f1)
app.register_task(f2)
app.register_task(c)
```

4. Execute application

```python
app.run()
```

Development Guide

Installation of dev requirements:

```
pip3 install poetry
poetry install
```

Running tests:

```
poetry run pytest -v
```

Releasing package & API doc

CI will do this automatically. Follow the following branch/tag rules.

1. Make changes for the next version in master branch (via pull requests).
2. Make a PR that updates the version in pyproject.toml and merge it to master branch.
3. Create a GitHub release from master branch's HEAD.
   - Draft a new release.
   - Create a new tag named release-<New version> (e.g. release-1.4.0) from the "Choose a tag" pull-down menu.
   - Write title and description.
   - Publish release.
4. Then CI will build/upload the package to PyPI & the API doc to GitHub Pages.
actfw-gstreamer
actfw-gstreamer

actfw's components using GStreamer for implementation. actfw is a framework for Actcast Application written in Python.

Installation

```
sudo apt-get update
sudo apt-get install -y python3-pip python3-pil
sudo apt-get install libgstreamer1.0-dev libgirepository1.0-dev libgstreamer-plugins-base1.0-dev libglib2.0-dev
pip3 install actfw-gstreamer
```

Document

- API References

Usage

See actfw-core for basic usage of the actfw framework.

Initialization

An application using actfw-gstreamer has to initialize the GStreamer library before using actfw-gstreamer's components.

```python
if __name__ == '__main__':
    import gi

    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    main()
```

videotestsrc

You can learn basic usage of actfw-gstreamer by using videotestsrc.

```python
import actfw_core

from actfw_gstreamer.capture import GstreamerCapture
from actfw_gstreamer.gstreamer.converter import ConverterPIL
from actfw_gstreamer.gstreamer.stream import GstStreamBuilder
from actfw_gstreamer.gstreamer import preconfigured_pipeline  # import path assumed
from actfw_gstreamer.restart_handler import SimpleRestartHandler


def videotestsrc_capture() -> GstreamerCapture:
    pipeline_generator = preconfigured_pipeline.videotestsrc()
    builder = GstStreamBuilder(pipeline_generator, ConverterPIL())
    restart_handler = SimpleRestartHandler(10, 5)
    return GstreamerCapture(builder, restart_handler)


def main():
    app = actfw_core.Application()

    capture = videotestsrc_capture()
    app.register_task(capture)

    consumer = YourConsumer()
    app.register_task(consumer)

    capture.connect(consumer)

    app.run()
```

This generates Frames using videotestsrc.

- GstreamerCapture is a Producer. It generates Frames consisting of an output of ConverterBase. In this case, the converter class is ConverterPIL and the output is PIL.Image.Image.
- GstStreamBuilder and PipelineGenerator determine how to build gstreamer pipelines. preconfigured_pipeline provides preconfigured PipelineGenerators.
- SimpleRestartHandler is a simple implementation of RestartHandlerBase, which determines the "restart strategy".

For more details, see tests.

rtspsrc

You can use rtspsrc using preconfigured_pipeline.rtsp_h264().

Note that, as of now (2021-04), Actcast applications cannot use multicast UDP with dynamic address and unicast UDP. (An RTSP client communicates with the RTSP server in RTP and determines the address of multicast UDP.) Therefore, you can use only the option protocols = "tcp". See also https://gstreamer.freedesktop.org/documentation/rtsp/rtspsrc.html#rtspsrc:protocols.

You should also pay attention to decoders. Available decoders are below:

| decoder (package) \ device | Raspberry Pi 3 | Raspberry Pi 4 | Jetson Nano |
|----------------------------|----------------|----------------|-------------|
| omxh264 (from gstreamer1.0-omx and gstreamer1.0-omx-rpi) | o | x | ? |
| v4l2h264dec (from gstreamer1.0-plugins-good) | very slow | o | ? |

If your application supports various devices, you should branch by hardware type and select an appropriate decoder_type. For example, it is recommended to use decoder_type omx for Raspberry Pi 3 and v4l2 for Raspberry Pi 4. Currently, this library does not provide auto determination.

Development Guide

Installation of dev requirements:

```
pip3 install poetry
poetry install
```

Running tests:

```
poetry run nose2 -v
```

Releasing package & API doc

CI will do this automatically. Follow the following branch/tag rules.

1. Make changes for the next version in master branch (via pull requests).
2. Make a PR that updates the version in pyproject.toml and merge it to master branch.
3. Create a GitHub release from master branch's HEAD.
   - Draft a new release.
   - Create a new tag named release-<New version> (e.g. release-1.4.0) from the "Choose a tag" pull-down menu.
   - Write title and description.
   - Publish release.
4. Then CI will build/upload the package to PyPI & the API doc to GitHub Pages.
actfw-jetson
actfw-jetson

actfw's components for the Jetson series. actfw is a framework for Actcast Application written in Python.

Installation

sudo apt-get update
sudo apt-get install -y python3-pip python3-pil
# Install GStreamer dependencies (some components in actfw-jetson use GStreamer in their implementation)
sudo apt-get install -y libgstreamer1.0-dev libgirepository1.0-dev libgstreamer-plugins-base1.0-dev libglib2.0-dev libcairo2-dev
pip3 install actfw-jetson

Document

API References

Usage

See actfw-core for basic usage.

Since actfw-jetson uses GStreamer to implement some components, an application using actfw-jetson may have to initialize the GStreamer library before using actfw-jetson's components:

if __name__ == '__main__':
    import gi

    gi.require_version('Gst', '1.0')
    from gi.repository import Gst

    Gst.init(None)

    main()

actfw-jetson provides:

- actfw_jetson.Display: Display using the nvoverlaysink element in NVIDIA's Accelerated GStreamer.

Example

example/hello_jetson: The simplest application example for Jetson
- Use HDMI display as 1280x720 area
- Generate 1280x720 single-colored image
- Draw "Hello, Actcast!" text
- Display it as 1280x720 image
- Notice message for each frame
- Support application heartbeat
- Support "Take Photo" command
- Depends: fonts-dejavu-core

Development Guide

Installation of dev requirements:

curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
poetry install

Running tests:

poetry run nose2 -v

Running examples:

hello_jetson: displays a red rectangle with greeting text on it on an HDMI display. Run on a Jetson Nano connected to an HDMI display:

apt-get install fonts-dejavu-core
poetry run python example/hello_jetson

camera_display: displays camera input on an HDMI display. Run on a Jetson Nano with a CSI camera and an HDMI display:

poetry run python example/camera_display

Releasing package & API doc

CI will do this automatically. Follow these branch/tag rules:

1. Make changes for the next version in the master branch (via pull requests).
2. Update the version field in pyproject.toml with the new version in the master branch.
3. Create a GitHub release from the master branch's HEAD: draft a new release; create a new tag named release-<New version> (e.g. release-1.4.0) from the "Choose a tag" pull-down menu; write a title and description; publish the release.

Then CI will build/upload the package to PyPI & the API doc to GitHub Pages.
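A minimal sketch of wiring a capture task into actfw_jetson.Display follows; the Display constructor arguments (here a size tuple) and the make_capture() helper are assumptions for illustration only, so consult the API references above for the actual signature.

import actfw_core
from actfw_jetson import Display  # class named in the list above; import path assumed


def main():
    app = actfw_core.Application()

    capture = make_capture()  # hypothetical helper returning a Producer task
    app.register_task(capture)

    # Constructor arguments are an assumption, not the documented signature.
    display = Display((1280, 720))
    app.register_task(display)

    capture.connect(display)
    app.run()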
actfw-raspberrypi
actfw-raspberrypi

actfw's components for Raspberry Pi. actfw is a framework for Actcast Application written in Python.

Installation

sudo apt-get update
sudo apt-get install -y python3-pip python3-pil
pip3 install actfw-raspberrypi

Document

API References

Usage

See actfw-core for basic usage.

actfw-raspberrypi provides:

- actfw_raspberrypi.capture.PiCameraCapture: Generate CSI camera capture image
- actfw_raspberrypi.Display: Display using PiCamera Overlay
- actfw_raspberrypi.vc4.Display: Display using VideoCore IV
- actfw_raspberrypi.vc4.Window: Double buffered window

Example

example/hello: The simplest application example
- Use HDMI display as 640x480 area
- Capture 320x240 RGB image from CSI camera
- Draw "Hello, Actcast!" text
- Display it as 640x480 image (with x2 scaling)
- Notice message for each frame
- Support application setting
- Support application heartbeat
- Support "Take Photo" command
- Depends: python3-picamera fonts-dejavu-core

example/grayscale: Next-level application example
- Use HDMI display as 640x480 area
- Capture 320x240 RGB image from CSI camera
- Convert it to grayscale
- Display it as 640x480 image (with x2 scaling)
- Notice message for each frame
- Support application setting
- Support application heartbeat
- Support "Take Photo" command
- Depends: python3-picamera

example/parallel_grayscale: Parallel processing application example
- Use HDMI display as 640x480 area
- Capture 320x240 RGB image from CSI camera
- Convert it to grayscale (there are 2 converter tasks, with round-robin task scheduling)
- Display it as 640x480 image (with x2 scaling)
- Notice message for each frame, showing which converter processed the image
- Support application setting
- Support application heartbeat
- Support "Take Photo" command
- Depends: python3-picamera

example/uvccamera: UVC camera capture example
- picamera is unnecessary
- Use HDMI display center 640x480 area
- Capture 320x240 RGB image from UVC camera
- Convert it to grayscale
- Display it as 640x480 image (with x2 scaling)
- Notice grayscale pixel data histogram
- Support application setting
- Support application heartbeat
- Support "Take Photo" command
- Depends: libv4l-0 libv4lconvert0

Development Guide

Installation of dev requirements:

pip3 install poetry
poetry install

Running tests:

poetry run nose2 -v

Running examples (on a Raspberry Pi connected to an HDMI display):

poetry run python example/hello

Releasing package & API doc

CI will do this automatically; follow the same branch/tag rules as described for actfw-core above. Then CI will build/upload the package to PyPI & the API doc to GitHub Pages.
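As a rough sketch of the CSI capture path described above (the PiCameraCapture constructor argument shown here is an assumption; the bundled example/hello is the authoritative reference):

import actfw_core
from picamera import PiCamera
from actfw_raspberrypi.capture import PiCameraCapture


def main():
    app = actfw_core.Application()

    with PiCamera() as camera:
        # Mirror the 320x240 capture used by the examples above.
        camera.resolution = (320, 240)
        camera.framerate = 30

        capture = PiCameraCapture(camera)  # constructor signature assumed
        app.register_task(capture)

        consumer = MyConsumer()  # a Consumer subclass as sketched for actfw-core
        app.register_task(consumer)

        capture.connect(consumer)
        app.run()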
actg
No description available on PyPI.
actiapi
This library is only meant to serve as an example of how to use ActiGraph's API to retrieve data from ActiGraph devices.

Please refer to the official API documentation (https://github.com/actigraph/StudyAdminAPIDocumentation and https://github.com/actigraph/CentrePoint3APIDocumentation).

Please do not contact ActiGraph support for questions related to this library. You can create issues on this repository and we'll do our best to help you.

Example

Metadata:

>>> from actiapi.v3 import ActiGraphClientV3
api_client = ActiGraphClientV3(<api_access_key>, <api_secret_key>)
metadata = api_client.get_study_metadata(<study_id>)
metadata = {x["id"]: x for x in metadata}

Raw data:

>>> from actiapi.v3 import ActiGraphClientV3
api_client = ActiGraphClientV3(<api_access_key>, <api_secret_key>)
results: List[str] = api_client.get_files(
    user=<user_id>, study_id=<study_id>
)
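If the strings returned by get_files are download URLs (an assumption; verify against the official API docs linked above), fetching them could look like this:

import os

import requests
from actiapi.v3 import ActiGraphClientV3

api_client = ActiGraphClientV3("<api_access_key>", "<api_secret_key>")
results = api_client.get_files(user="<user_id>", study_id="<study_id>")

for url in results:
    # Derive a local filename from the URL path; adjust as needed.
    filename = os.path.basename(url.split("?")[0])
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    with open(filename, "wb") as f:
        f.write(response.content)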
actigamma
A package for producing gamma spectra from nuclide activities.
actigraph
No description available on PyPI.
actihealth
No description available on PyPI.
actility
No description available on PyPI.
actin
No description available on PyPI.
actinet
actinet

A tool to extract meaningful health information from large accelerometer datasets. The software generates time-series and summary metrics useful for answering key questions such as how much time is spent in sleep, sedentary behaviour, or doing physical activity. The backbone of this repository is a self-supervised ResNet-18 model.

Install

Minimum requirements: Python >= 3.9, Java 8 (1.8)

The following instructions make use of Anaconda to meet the minimum requirements:

1. Download & install Miniconda (light-weight version of Anaconda).
2. (Windows) Once installed, launch the Anaconda Prompt.
3. Create a virtual environment:

conda create -n actinet python=3.9 openjdk pip

This creates a virtual environment called actinet with Python version 3.9, OpenJDK, and Pip.

4. Activate the environment:

conda activate actinet

You should now see (actinet) written in front of your prompt.

5. Install actinet:

pip install actinet

You are all set! The next time that you want to use actinet, open the Anaconda Prompt and activate the environment (step 4). If you see (actinet) in front of your prompt, you are ready to go!

Usage

# Process an AX3 file
$ actinet -f sample.cwa

# Or an ActiGraph file
$ actinet -f sample.gt3x

# Or a GENEActiv file
$ actinet -f sample.bin

# Or a CSV file (see data format below)
$ actinet -f sample.csv

Troubleshooting

Some systems may face issues with Java when running the script. If this is your case, try pinning OpenJDK to version 8:

conda install -n actinet openjdk=8

Offline usage

To use this package offline, one must first download and install the relevant classifier file and model modules. This repository offers two ways of doing this.

Run the following command when you have internet access:

actinet --cache-classifier

Following this, the actinet classifier can be used as standard without internet access, without needing to specify the flags relating to the model repository.

Alternatively, you can download or git clone the ssl modules from the ssl-wearables repository. In addition, you can download/prepare a custom classifier file. Once this is downloaded to an appropriate location, you can run the actinet model using:

actinet -f sample.cwa -c /path/to/classifier.joblib.lzma -m /path/to/ssl-wearables

Output files

By default, output files will be stored in a folder named after the input file, outputs/{filename}/, created in the current working directory. You can change the output path with the -o flag:

$ actinet -f sample.cwa -o /path/to/some/folder/
<Output summary written to: /path/to/some/folder/sample-outputSummary.json>
<Time series output written to: /path/to/some/folder/sample-timeSeries.csv.gz>

The following output files are created:

- Info.json: Summary info, as shown above.
- timeSeries.csv: Raw time-series of activity levels

See the Data Dictionary for the list of output variables.

Plotting activity profiles

To plot the activity profiles, you can use the -p flag:

$ actinet -f sample.cwa -p
<Output plot written to: data/sample-timeSeries-plot.png>

Crude vs. Adjusted Estimates

Adjusted estimates are provided that account for missing data. Missing values in the time-series are imputed with the mean of the same timepoint of other available days. For adjusted totals and daily statistics, 24h multiples are needed and will be imputed if necessary. Estimates will be NaN where data is still missing after imputation.

Processing CSV files

If a CSV file is provided, it must have the following header: time,x,y,z.

Example:

time,x,y,z
2013-10-21 10:00:08.000,-0.078923,0.396706,0.917759
2013-10-21 10:00:08.010,-0.094370,0.381479,0.933580
2013-10-21 10:00:08.020,-0.094370,0.366252,0.901938
2013-10-21 10:00:08.030,-0.078923,0.411933,0.901938
...

Processing multiple files

Windows: To process multiple files, create a text file in Notepad which includes one line for each file you wish to process, as shown below for file1.cwa, file2.cwa, and file3.cwa.

Example text file, commands.txt:

actinet -f file1.cwa &
actinet -f file2.cwa &
actinet -f file3.cwa
:END

Once this file is created, run cmd < commands.txt from the terminal.

Linux: Create a file command.sh with:

actinet -f file1.cwa
actinet -f file2.cwa
actinet -f file3.cwa

Then, run bash command.sh from the terminal.

Collating outputs

A utility script is provided to collate outputs from multiple runs:

actinet-collate-outputs outputs/

This will collate all *-Info.json files found in outputs/ and generate a CSV file.

Citing our work

When using this tool, please consider citing the works listed in CITATION.md.

Licence

See LICENSE.md.

Acknowledgements

We would like to thank all our code contributors, manuscript co-authors, and research participants for their help in making this work possible.
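The compressed time-series output can be loaded directly with pandas. A short sketch (the activity column names are not documented here, so inspect the frame after loading; see the Data Dictionary referenced above):

import pandas as pd

# Path follows the default output layout described above for an input named sample.cwa.
ts = pd.read_csv(
    "outputs/sample/sample-timeSeries.csv.gz",
    index_col="time",       # column name assumed from the CSV input format above
    parse_dates=["time"],
)
print(ts.head())
print(ts.columns.tolist())  # check the available activity columns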
actinfrictionpy
Supporting Python modules for the dynamics analysis in "Constriction of actin rings by passive crosslinkers".

A Python package for the dynamics analysis of Ref. 1. This package is primarily for creating plots of the solutions of differential equations that are solved with another package (ActinFriction.jl).

Installation

This package was developed and used on Linux. It is available on the PyPI repository and can be installed by running:

pip install actinfrictionpy

If you are not using a virtual environment, the --user flag may be used instead to install it locally for the user. To install directly from this repository, run:

python -m build
pip install dist/actinfrictionpy-[current version]-py3-none-any.whl

To run the above, it may be necessary to update a few packages:

python3 -m pip install --upgrade pip setuptools wheel

For more information on building and installing Python packages, see the documentation from the Python Packaging Authority.

References

[1] A. Cumberworth and P. R. ten Wolde, Constriction of actin rings by passive crosslinkers, arXiv:2203.04260 [physics.bio-ph].

Links

- Python Packaging Authority
- ActinFriction.jl
- Replication package for Ref. 1
actingweb
This is a Python library implementation showcasing the REST-based ActingWeb distributed micro-services model. A typical use case is bot-to-bot communication on a peer-to-peer level. It serves as the reference implementation for the ActingWeb REST protocol specification for how such micro-services interact.

Repository and documentation

The library is available as a PyPI library and installed with pip install actingweb. Project home is at https://pypi.org/project/actingweb/. The git repository for this library can be found at https://github.com/gregertw/actingweb.

The latest documentation for the released version (release branch) of this library can be found at http://actingweb.readthedocs.io/. The master branch of the library has the latest features and bug fixes, and its updated documentation can be found at http://actingweb.readthedocs.io/en/master.

Why use actingweb?

ActingWeb is well suited for applications where each individual user's data and functionality need both a high degree of security and privacy AND a high degree of interaction with the outside world. Typical use cases are Internet of Things, where each user's "Thing" becomes a bot that interacts with the outside world, and bot-to-bot communication, where each user can get a dedicated, controllable bot talking to other users' bots.

As a developer, you get a set of out-of-the-box functionality from the ActingWeb library:

- an out-of-the-box REST bot representing each user's thing, service, or functionality (your choice)
- a way to store and expose data over REST in a very granular way using properties
- a trust system that allows creation of relationships to the user's bot on the user level
- a subscription system that allows one bot (user) to subscribe to another bot's (user's) changes
- an OAuth framework to tie the bot to any other API service and thus allow user-to-user communication using individual users' data from the API service

There is a high degree of configurability in what to expose, and although the ActingWeb specification specifies a protocol set to allow bots from different developers to talk to each other, not all functionality needs to be exposed.

Each user's individual bot is called an actor, and this actor has its own root URL where its data and services are exposed. See below for further details.

Features of the actingweb library

The latest code in master is at all times deployed to https://actingwebdemo.greger.io/. It implements a simple sign-up page as a front-end to a REST-based factory URL that will instantiate a new actor with a guid to identify the actor. The guid is then embedded in the actor's root URL, e.g. https://actingwebdemo.greger.io/9f1c331a3e3b5cf38d4c3600a2ab5d54.

If you try to create an actor, you will get to a simple web front-end where you can set the actor's data (properties) and delete the actor.
You can later access the actor (both /www and REST) by using the Creator you set as username and the passphrase you get when creating the actor to log in.

acting-web-gae-library is a close-to-complete implementation of the full ActingWeb specification, where all functionality can be accessed through the actor's root URL (e.g. https://actingwebdemo.greger.io/9f1c331a3e3b5cf38d4c3600a2ab5d54):

- /properties: attribute/value pairs as flat or nested JSON can be set, accessed, and deleted to store this actor's data
- /meta: a publicly available JSON structure allowing actors to discover each other's capabilities
- /trust: access to requesting, approving, and managing trust relationships with other actors of either the same type or any other actor "talking actingweb"
- /subscriptions: once a trust relationship is set up, this path allows access to establishing, retrieving, and managing subscriptions that are based on paths and identified with target, sub-target, and resource, e.g. /resources/folders/12345
- /callbacks: used for verification when establishing trust/subscriptions, to receive callbacks on subscriptions, and as a programming hook to process webhooks from 3rd party services
- /resources: a skeleton to simplify exposure of any type of resource (where /properties is not suited)
- /oauth: used to initiate a www-based OAuth flow to tie the actor to a specific OAuth user and service. Available if OAuth is turned on and a 3rd party OAuth service has been configured in config.py. /www will also be redirected to /oauth (OAuth is not enabled in the online actingwebdemo mini-application)

Sidenote: the actingweb library also implements a simple mechanism for protecting the /www path with OAuth (not in the specification). On successful OAuth authorisation, it will set a browser cookie to the OAuth token. This is not used in the inline demo and also requires that the identity of the user authorising OAuth access is the same user already tied to the instantiated actor. There is a programming hook that allows such verification as part of the OAuth flow, but it is not enabled in the actingwebdemo mini-application.

Other applications using the actingweb library

There is also another demo application available for Cisco Webex Teams. It uses the actingweb library to implement a Webex Teams bot and integration. If you have signed up as a Cisco Webex Teams user, you can try it out by sending a message to [email protected]. More details about the Army Knife can be found on this blog.

The ActingWeb Model

The ActingWeb micro-services model and protocol defines bot-to-bot and micro-service-to-micro-service communication that allows extreme distribution of data and functionality. This makes it very suitable for holding small pieces of sensitive data on behalf of a user or "things" (as in Internet of Things). These sensitive data can then be used and shared in a very granular and controlled way through the secure and distributed ActingWeb REST protocol. This allows you to expose e.g. your location data from your phone directly on the Internet (protected by a security framework), to be used by other services of your choosing. You can at any time revoke access to your data for one particular service without influencing anything else.

The ActingWeb Micro-Services Model

The programming model in ActingWeb is based on an extreme focus on representing only one small set of functionality and only one user or entity.
This is achieved by not allowing any other way of calling the service (in ActingWeb called a "mini-application") than through a user and the mini-app's REST interface (a user's instance of a mini-application is called an actor in ActingWeb). From a practical point of view, getting xyz's location through the REST protocol is as simple as doing a GET on http://mini-app-url/xyz/properties/location.

There is absolutely no way of getting xyz's and yyz's location information in one request, and the security model enforces access based on user (i.e. actor), so even if you have access to http://mini-app-url/xyz/properties/location, you may not have access to http://mini-app-url/yyz/properties/location.

Any functionality desired across actors, for example xyz sharing location information with yyz, MUST be done through the ActingWeb REST protocol. However, since the ActingWeb service-to-service REST protocol is standardised, any service implementing the protocol can easily share data with other services.

The ActingWeb REST Protocol

The ActingWeb REST protocol specifies a set of default endpoints (like /properties, /trust, /subscriptions, etc.) that are used to implement the service-to-service communication, as well as a set of suggested endpoints (like /resources, /actions, etc.) where the mini-applications can expose their own functionality. All exchanges are based on REST principles, and a set of flows is built into the protocol to support exchanging data, establishing trust between actors (per actor, not per mini-application), and subscribing to changes.

The ActingWeb Security Model

The security model is based on trust between actors, not mini-applications. This means that each instance of the mini-application holding the sensitive data for one particular person or thing MUST be connected through a trust relationship to another ActingWeb actor. It doesn't have to be a mini-application of the same type (like location sharing): it could, for example, be a location-sharing actor establishing a trust relationship with 911 authorities to allow emergency services to always be able to look you up.

There are currently two ways of establishing trust between actors: either through an explicit OAuth flow where an actor is tied to somebody's account somewhere else (like Google, Box.com, etc.), or through a flow where one actor requests a trust relationship with another, which then needs to be approved either interactively by a user or programmatically through the REST interface.

See http://actingweb.org/ for more information.

Building and installing

# Build source and binary distributions:
python setup.py sdist bdist_wheel --universal

# Upload to test server:
python setup.py sdist upload -r pypitest
twine upload --repository pypitest dist/actingweb-a.b.c.*

# Upload to production server:
twine upload dist/actingweb-a.b.c.*
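To make the REST model above concrete, a small sketch using the hypothetical actor URL from the prose (credentials are placeholders; the demo protects actor resources with basic auth using the creator name and passphrase, as described above):

import requests

# Hypothetical actor root URL, as used in the examples above.
actor_root = 'http://mini-app-url/xyz'

# Properties are exposed as JSON over plain HTTP verbs.
response = requests.get(
    actor_root + '/properties/location',
    auth=('creator-username', 'passphrase'),  # placeholder credentials
    timeout=10,
)
response.raise_for_status()
print(response.json())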
actinia-api
API docs for actinia-core and plugins!

OpenAPI is a project used to describe and document RESTful APIs. actinia is using Swagger 2; a version upgrade is on the agenda.

All API docs should be collected in this repository. This way, the release of the API is no longer tied to the source code repositories, and independent versioning is used.

Currently, the migration is still in progress, and the API docs can be found either here or still in the corresponding source code repository.
actinia-core
Actinia — The GRASS GIS REST API

Software | Article

Actinia is an open source REST API for scalable, distributed, high-performance processing of geographical data that mainly uses GRASS GIS for computational tasks. It provides a REST API to process satellite images, time series of satellite images, arbitrary raster data with geographical relations, and vector data.

The REST interface allows to access, manage and manipulate the GRASS GIS database via HTTP GET, PUT, POST and DELETE requests and to process raster, vector and time series data located in a persistent GRASS GIS database. Actinia allows the processing of cloud-based data, for example all Landsat 4-8 scenes as well as all Sentinel-2 scenes, in ephemeral databases. The computational results of ephemeral processing are available via object storage as GeoTIFF files.

API documentation

The full API documentation is available here.

actinia command execution - actinia shell

There is also an option to control actinia interactively. For details, see here.

Installation

Required system packages:

alpine:

apk add python3 py3-pip

As not all Python packages are pre-built for alpine, they need to be built during installation. For this, some system packages are required:

apk add python3 py3-pip python3-dev gcc musl-dev linux-headers build-base gdal gdal-tools gdal-dev proj proj-util proj-dev geos-dev py3-numpy-dev

ubuntu:

apt install -y python3 python3-pip

And then install from PyPI:

pip install actinia-core

Installation with Docker

Docker images are available from https://hub.docker.com/r/mundialis/actinia-core

docker pull mundialis/actinia-core

For your own deployments or a local dev setup, see the docker/ subfolder. Actinia is also available on OSGeoLive.

Examples

Data management

List all locations that are available in the actinia persistent database:

curl -u 'demouser:gu3st!pa55w0rd' -X GET "https://actinia.mundialis.de/api/v3/locations"

List all mapsets in the location latlong_wgs84:

curl -u 'demouser:gu3st!pa55w0rd' -X GET "https://actinia.mundialis.de/api/v3/locations/latlong_wgs84/mapsets"

List all space-time raster datasets (STRDS) in location latlong_wgs84 and mapset modis_ndvi_global:

curl -u 'demouser:gu3st!pa55w0rd' -X GET "https://actinia.mundialis.de/api/v3/locations/latlong_wgs84/mapsets/modis_ndvi_global/strds"

List all raster map layers of the STRDS:

curl -u 'demouser:gu3st!pa55w0rd' -X GET "https://actinia.mundialis.de/api/v3/locations/latlong_wgs84/mapsets/modis_ndvi_global/strds/ndvi_16_5600m/raster_layers"

Landsat and Sentinel-2 NDVI computation

Compute the NDVI of the top-of-atmosphere (TOAR) corrected Landsat-8 scene LC80440342016259LGN00:

curl -u 'demouser:gu3st!pa55w0rd' -X POST "https://actinia.mundialis.de/api/v3/landsat_process/LC80440342016259LGN00/TOAR/NDVI"

NDVI computation of Sentinel-2A scene S2A_MSIL1C_20170212T104141_N0204_R008_T31TGJ_20170212T104138:

curl -u 'demouser:gu3st!pa55w0rd' -X POST "https://actinia.mundialis.de/api/v3/sentinel2_process/ndvi/S2A_MSIL1C_20170212T104141_N0204_R008_T31TGJ_20170212T104138"

The results of the asynchronous computations are available as GeoTIFF files in a cloud storage for download.

List of available endpoints

To see a simple list of endpoints (and more), consult the "paths" section in the API JSON; or, to list the available endpoints on the command line, run:

# sudo npm install -g json
curl -u 'demouser:gu3st!pa55w0rd' -X GET https://actinia.mundialis.de/api/v3/swagger.json | json paths | json -ka

Development

Use pre-commit

It is highly recommended to install and use pre-commit before submitting any new or modified code or any other content. The pre-commit Git hooks set checks validity and executes automated formatting for a range of file formats, including Python. Pre-commit installs all necessary tools in a virtual environment upon first use.

If you have never used pre-commit before, you must start by installing it on your system. You only do it once:

python -m pip install pre-commit

Pre-commit must then be activated in the code repository. Change the directory to the root folder and use the install command:

cd <actinia-core_source_dir>

# once per repo
pre-commit install

Pre-commit will then be automatically triggered by the git commit command. If it finds any problem, it will abort the commit and try to solve it automatically. In that case, review the changes and run git add and git commit again.

It is also possible to run pre-commit manually, e.g.:

pre-commit run flake8 --all-files
pre-commit run black --all-files
# pre-commit run yamllint --all-files
# pre-commit run markdownlint --all-files

Or to target a specific set of files:

pre-commit run --files src/*

The pre-commit hooks set is defined in .pre-commit-config.yaml.

It is possible to temporarily disable the pre-commit hooks in the repo, e.g. while working on older branches:

pre-commit uninstall

And to reactivate pre-commit again:

git switch main
pre-commit install

Thanks to all contributors ❤
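The curl examples above translate directly to Python; for instance, listing locations with requests:

import requests

BASE = 'https://actinia.mundialis.de/api/v3'

# Same request as the first curl example, using the public demo credentials.
resp = requests.get(f'{BASE}/locations', auth=('demouser', 'gu3st!pa55w0rd'), timeout=30)
resp.raise_for_status()
print(resp.json())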
actinia-python-client
actinia-python-client

Python client library for actinia.

Installation

The client is available at https://pypi.org/project/actinia-python-client/

pip3 install actinia-python-client

For the newest version, see releases.

Conda

actinia-python-client is automatically packaged on conda-forge / actinia-python-client-feedstock.

Small Example

See examples.
actinis-django-storages
Installation

Installing from PyPI is as easy as doing:

pip install django-storages

If you'd prefer to install from source (maybe there is a bugfix in master that hasn't been released yet), then the magic incantation you are looking for is:

pip install -e 'git+https://github.com/jschneier/django-storages.git#egg=django-storages'

Once that is done, set DEFAULT_FILE_STORAGE to the backend of your choice. If, for example, you want to use the boto3 backend you would set:

DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

If you are using FileSystemStorage as your storage management class in your models' FileField fields, remove it and don't specify any storage parameter. That way, the DEFAULT_FILE_STORAGE class will be used by default in your field. For example, if you have a photo field defined as:

photo = models.FileField(
    storage=FileSystemStorage(location=settings.MEDIA_ROOT),
    upload_to='photos',
)

Set it to just:

photo = models.FileField(
    upload_to='photos',
)

There are also a number of settings available to control how each storage backend functions; please consult the documentation for a comprehensive list.

About

django-storages is a project to provide a variety of storage backends in a single library. This library is usually compatible with the currently supported versions of Django. Check the Trove classifiers in setup.py to be sure.

django-storages is backed in part by Tidelift. Check them out for all of your enterprise open source software commercial support needs.

Security

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure. Please do not post a public issue on the tracker.

History

This repo began as a fork of the original library under the package name of django-storages-redux and became the official successor (releasing under django-storages on PyPI) in February of 2016.

Found a Bug? Something Unsupported?

I suspect that a few of the storage engines in backends/ have been unsupported for quite a long time. I personally only really need the S3Storage backend, but I welcome bug reports (and especially patches and tests) for some of the other backends.

Issues are tracked via GitHub issues at the project issue page.

Documentation

Documentation for django-storages is located at https://django-storages.readthedocs.io/.

Contributing

1. Check for open issues at the project issue page or open a new issue to start a discussion about a feature or bug.
2. Fork the django-storages repository on GitHub to start making changes.
3. Add a test case to show that the bug is fixed or the feature is implemented correctly.
4. Bug me until I can merge your pull request.
Also, don’t forget to add yourself toAUTHORS.django-storages CHANGELOG1.12.3 (2021-10-29)GeneralAdd support for Python 3.10 (#1078)S3Re-raise non-404 errors in.exists()(#1084,#1085)AzureFix usingAZURE_CUSTOM_DOMAINwith an account key credential (#1082,#1083)SFTPCatchFileNotFoundErrorinstead ofOSerrorin.exists()to prevent swallowingsocket.timeoutexceptions (#1064,#1087)1.12.2 (2021-10-16)AzureAddparameterskwarg toAzureStorage.urlto configure blob properties in the SAS token (#1071)Fix regression whereAZURE_CUSTOM_DOMAINwas interpreted as a replacement ofblob.core.windows.netrather than as a full domain (#1073,#1076)1.12.1 (2021-10-11)S3Change gzip compression to use a streaming implementation (#1061)Fix saving files withS3ManifestStaticStorage(#1068,#1069)1.12 (2021-10-06)GeneralAdd support for Django 3.2 (#1046,#1042,#1005)Replace Travis CI with GitHub actions (#1051)S3Convert signing keys to bytes if necessary (#1003)Avoid a ListParts API call during multipart upload (#1041)Custom domains now use passed URL params (#1054)Allow the use of AWS profiles and clarify the options for passing credentials (fbe9538)Re-allow override of various access key names (#1026)Properly exclude empty folders duringlistdir(66f4f8e)Support saving file objects that are notseekable(#860,#1057)ReturnTruefor.exists()if a non-404 error is encountered (#938)AzureBreaking: This backend has been rewritten to use the newer versions ofazure-storage-blob, which now has a minimum required version of 12.0. The settingsAZURE_EMULATED_MODE,AZURE_ENDPOINT_SUFFIX, andAZURE_CUSTOM_CONNECTION_STRINGare now ignored. (#784,#805)Add support for user delegation keys (#1063)Google CloudBreaking: The minimum required version ofgoogle-cloud-storageis now 1.27.0 (#994)Breaking: Switch URL signing version from v2 to v4 (#994)Deprecated: Support forGS_CACHE_CONTROLwill be removed in 1.13. Please set thecache_controlparameter ofGS_OBJECT_PARAMETERSinstead. (#970)AddGS_OBJECT_PARAMETERSand overridableGoogleCloudStorage.get_object_parametersto customize blob parameters for all blobs and per-blob respectively. (#970)Catch theNotFoundexception raised when deleting a non-existent blob, this matches Django and other backends (#998,#999)Fix signing URLs with custom endpoints (#994)DropboxValidatewrite_modeparam (#1020)1.11.1 (2020-12-23)S3Revert fix forValueError: I/O operation on closed filewhen callingcollectstaticand introduceS3StaticStorageandS3ManifestStaticStoragefor use asSTATICFILES_STORAGEtargets (#968)1.11 (2020-12-16)GeneralTest against Python 3.9 (#964)S3FixValueError: I/O operation on closed filewhen callingcollectstatic(#382,#955)CalculateS3Boto3StorageFile.buffer_size(via settingAWS_S3_FILE_BUFFER_SIZE) at run-time rather than import-time. (#930)Fix writingbytearraycontent (#958,#965)Google CloudAdd settingGS_QUERYSTRING_AUTHto avoid signing URLs. This is useful for buckets with a policy of Uniform public read (#952)AzureAddAZURE_OBJECT_PARAMETERSand overridableAzureStorage.get_object_parametersto customizeContentSettingsparameters for all keys and per-key respectively. (#898)1.10.1 (2020-09-13)S3RestoreAWS_DEFAULT_ACLhandling. 
This setting is ignored ifACLis set inAWS_S3_OBJECT_PARAMETERS(#934)SFTPFix usingSFTP_STORAGE_HOST(#926)1.10 (2020-08-30)GeneralBreaking: Removed support for end-of-life Python 2.7 and 3.4 (#709)Breaking: Removed support for end-of-life Django 1.11 (#891)Add support for Django 3.1 (#916)Introduce a newBaseStorageclass with aget_default_settingsmethod and use it inS3Boto3Storage,AzureStorage,GoogleCloudStorage, andSFTPStorage. These backends now calculate their settings when instantiated, not imported. (#524,#852)S3Breaking: Automatic bucket creation has been removed. Doing so encourages using overly broad credentials. As a result, support for the correspondingAWS_BUCKET_ACLandAWS_AUTO_CREATE_BUCKETsettings have been removed. (#636)Breaking: Support for the undocumented settingAWS_PRELOAD_METADATAhas been removed (#636)Breaking: The constructor kwargaclis no longer accepted. Instead, use theACLkey in settingAWS_S3_OBJECT_PARAMETERS(#636)Breaking: The constructor kwargbucketis no longer accepted. Instead, usebucket_nameor theAWS_STORAGE_BUCKET_NAMEsetting (#636)Breaking: Support for settingAWS_REDUCED_REDUNDANCYhas been removed. Replace withStorageClass=REDUCED_REDUNDANCYinAWS_S3_OBJECT_PARAMETERS(#636)Breaking: Support for settingAWS_S3_ENCRYPTIONhas been removed. Replace withServerSideEncryption=AES256inAWS_S3_OBJECT_PARAMETERS(#636)Breaking: Support for settingAWS_DEFAULT_ACLhas been removed. Replace withACLinAWS_S3_OBJECT_PARAMETERS(#636)Addhttp_methodparameter to.urlmethod (#854)Add support for signing Cloudfront URLs to the.urlmethod. You must setAWS_CLOUDFRONT_KEY,AWS_CLOUDFRONT_KEY_IDand install eithercryptographyorrsa(#456,#587). See the docs for more info. URLs will only be signed ifAWS_QUERYSTRING_AUTHis set toTrue(#885)Google CloudBreaking: Automatic bucket creation has been removed. Doing so encourages using overly broad credentials. As a result, support for the correspondingGS_AUTO_CREATE_BUCKETandGS_AUTO_CREATE_ACLsettings have been removed. (#894)DropboxAddDROPBOX_WRITE_MODEsetting to control e.g. overwriting behavior. Check the docs for more info (#873,#138)SFTPRemove exception swallowing during ssh connection (#835,#838)FTPAddFTP_STORAGE_ENCODINGsetting to set the filesystem encoding (#803)Support multiple nested paths for files (#886)1.9.1 (2020-02-03)S3Fix reading files withS3Boto3StorageFile(#831,#833)1.9 (2020-02-02)GeneralBreaking: The long deprecated S3 backend based onbotohas been removed. 
(#825)Test against and support Python 3.8 (#810)S3Deprecated: Automatic bucket creation will be removed in version 1.10 (#826)Deprecated: The undocumentedAWS_PRELOAD_METADATAand associated functionality will be removed in version 1.10 (#829)Deprecated: Support forAWS_REDUCED_REDUNDANCYwill be removed in version 1.10 Replace withStorageClass=REDUCED_REDUNDANCYinAWS_S3_OBJECT_PARAMETERS(#829)Deprecated: Support forAWS_S3_ENCRYPTIONwill be removed in version 1.10 (#829) Replace withServerSideEncryption=AES256inAWS_S3_OBJECT_PARAMETERSA customContentEncodingis no longer overwritten automatically (note that specifying one will disable automaticgzip) (#391,#828).AddS3Boto3Storage.get_object_parameters, an overridable method for customizing upload parameters on a per-object basis (#819,#828)Opening and closing a file inwmode without writing anything will now create an empty file in S3, this mimics the builtinopenand Django’s ownFileSystemStorage(#435,#816)Fix reading a file in text mode (#404,#827)Google CloudDeprecated: Automatic bucket creation will be removed in version 1.10 (#826)DropboxFix crash onDropBoxStorage.listdir(#762)Settings can now additionally be specified at the class level to ease subclassing (#745)LibcloudAdd support for Backblaze B2 toLibCloudStorage.url(#807)FTPFix creating multiple intermediary directories on Windows (#823,#824)1.8 (2019-11-20)GeneralAdd support for Django 3.0 (#759)Update license identifier to unambiguousBSD-3-ClauseS3Include error message raised when missing library is imported (#776,#793)GoogleBreakingThe minimum supported version ofgoogle-cloud-storageis now1.15.0which enables…Add settingGS_CUSTOM_ENDPOINTto allow usage of custom domains (#775,#648)AzureFix extra installation by pinning version to < 12 (#785)Add support for settingAZURE_CACHE_CONTROLheader (#780,#674)1.7.2 (2019-09-10)S3Avoid misleadingAWS_DEFAULT_ACLwarning for insecuredefault_aclwhen overridden as a class variable (#591)Propagate file deletion to cache whenpreload_metadataisTrue, (not the default) (#743,#749)Fix exception raised on closed file (common if usingManifestFilesMixinorcollectstatic. (#382,#754)AzurePare down the required packages inextra_requireswhen installing theazureextra to onlyazure-storage-blob(#680,#684)Fix compatability withgenerate_blob_shared_access_signatureupdated signature (#705,#723)Fetching a file now uses the configured timeout rather than hardcoding one (#727)Add support for configuring all blobservice options:AZURE_ENDPOINT_SUFFIX,AZURE_CUSTOM_DOMAIN,AZURE_CONNECTION_STRING,AZURE_TOKEN_CREDENTIAL. See the docs for more info. Huge thanks once again to @nitely. (#750)Fix filename handling to not strip special characters (#609,#752)Google CloudSet the file acl in the same call that uploads it (#698)Reduce the number of queries and required permissions whenGS_AUTO_CREATE_BUCKETisFalse(the default) (#412,#718)Set thepredefined_aclwhen creating aGoogleCloudFileusing.write(#640,#756)AddGS_BLOB_CHUNK_SIZEsetting to enable efficient uploading of large files (#757)DropboxComplete migration to v2 api with file fetching and metadata fixes (#724)AddDROPBOX_TIMEOUTto configure client timeout defaulting to 100 seconds to match the underlying sdk. 
(#419,#747)SFTPFix reopening a file (#746)1.7.1 (2018-09-06)Fix off-by-1 error inget_available_namewheneverfile_overwriteoroverwrite_filesisTrue(#588,#589)ChangeS3Boto3Storage.listdir()to uselist_objectsinstead oflist_objects_v2to restore compatability with services implementing the S3 protocol that do not yet support the new method (#586,#590)1.7 (2018-09-03)SecurityTheS3BotoStorageandS3Boto3Storagebackends have an insecure default ACL ofpublic-read. It is recommended that all current users audit their bucket permissions. Support has been added for settingAWS_DEFAULT_ACL = NoneandAWS_BUCKET_ACL = Nonewhich causes all created files to inherit the bucket’s ACL (and created buckets to inherit the Amazon account’s default ACL). This will become the default in version 1.10 (forS3Boto3Storageonly sinceS3BotoStoragewill be removed in version 1.9, see below). Additionally, a warning is now raised ifAWS_DEFAULT_ACLorAWS_BUCKET_ACLis not explicitly set. (#381,#535,#579)BreakingTheAzureStoragebackend and documentation has been completely rewritten. It now depends onazureandazure-storage-bloband isvastlyimproved. Big thanks to @nitely and all other contributors along the way (#565)The.url()method ofGoogleCloudStoragehas been completely reworked. Many use cases should require no changes and will experience a massive speedup. The.url()method no longer hits the network for public urls and generates signed urls (with a default of 1-day expiration, configurable viaGS_EXPIRATION) for non-public buckets. Check out the docs for more information. (#570)Various backends will now raiseImproperlyConfiguredat runtime if their location (GS_LOCATION,AWS_LOCATION) begins with a leading/rather than silently stripping it. Verify yours does not. (#520)The long deprecatedGSBotoStoragebackend is removed. (#518)DeprecationThe insecure default ofpublic-readforAWS_DEFAULT_ACLandAWS_BUCKET_ACLinS3Boto3Storagewill change to inherit the bucket’s setting in version 1.10 (#579)The legacyS3BotoBackendis deprecated and will be removed in version 1.9. It is strongly recommended to move to theS3Boto3Storagebackend for performance, stability and bugfix reasons. See theboto migration docsfor step-by-step guidelines. (#578,#584)The long aliased arguments toS3Boto3Storageofaclandbucketare deprecated in favor ofbucket_nameanddefault_acl(#516)The minimum required version ofboto3will be increasing to1.4.4in the next major version ofdjango-storages. (#583)FeaturesAdd support for a file to inherit its bucket’s ACL by settingAWS_DEFAULT_ACL = None(#535)AddGS_CACHE_CONTROLsetting forGoogleCloudStoragebackend (#411,#505)Add documentation around using django-storages with Digital Ocean Spaces (#521)Add support for Django 2.1 and Python 3.7 (#530)MakeS3Boto3Storagepickleable (#551)Add automatic reconnection toSFTPStorage(#563,#564)Unconditionally set the security token in the boto backends (b13efd)Improve efficiency of.listdironS3Boto3Storage(#352)AddAWS_S3_VERIFYto support custom certificates and disabling certificate verification toS3Boto3Storage(#486,#580)AddAWS_S3_PROXIESsetting toS3Boto3Storage(#583)Add a snazzy new logo. 
Big thanks to @reallinfoBugfixesReset file read offset before passing toGoogleCloudStorageandAzureStorage(#481,#581,#582)Fix various issues with multipart uploads in the S3 backends (#169,#160,#364,#449,#504,#506,#546)FixS3Boto3Storageto stream down large files (also disallowr+wmode) (#383,#548)FixSFTPStorageFileto align with the coreFileabstraction (#487,#568)CatchIOErrorinSFTPStorage.delete(#568)AzureStorage,GoogleCloudStorage,S3Boto3StorageandS3BotoStoragenow respectmax_lengthwhenfile_overwrite = True(#513,#554)The S3 backends now consistently usecompresslevel=9(the Python stdlib default) for gzipped content (#572,#576)Improve error message ofS3Boto3Storageduring an unexpected exception when automatically creating a bucket (#574,#577)1.6.6 (2018-03-26)You can now specify the backend you are using to install the necessary dependencies usingextra_requires. For examplepip installdjango-storages[boto3](#417)Add additional content-type detection fallbacks (#406,#407)AddGS_LOCATIONsetting to specify subdirectory forGoogleCloudStorage(#355)Add support for uploading large files toDropBoxStorage, fix saving files (#379,#378,#301)Drop support for Django 1.8 and Django 1.10 (and hence Python 3.3) (#438)Implementget_created_timeforGoogleCloudStorage(#464)1.6.5 (2017-08-01)Fix Django 1.11 regression with gzipped content being saved twice resulting in empty files (#367,#371,#373)Fix themtimewhen gzipping content onS3Boto3Storage(#374)1.6.4 (2017-07-27)Files uploaded withGoogleCloudStoragewill now set their appropriate mimetype (#320)FixDropBoxStorage.urlto work. (#357)FixS3Boto3StoragewhenAWS_PRELOAD_METADATA = True(#366)FixS3Boto3Storageuploading file-like objects without names (#195,#368)S3Boto3Storageis now threadsafe - a separate session is created on a per-thread basis (#268,#358)1.6.3 (2017-06-23)Revert defaultAWS_S3_SIGNATURE_VERSIONto V2 to restore backwards compatability inS3Boto3. It’s recommended that all new projects set this to be's3v4'. (#344)1.6.2 (2017-06-22)Fix regression insafe_join()to handle a trailing slash in an intermediate path. (#341)Fix regression ings.GSBotoStoragegetting an unexpected kwarg. (#342)1.6.1 (2017-06-22)Drop support for Django 1.9 (e89db45)Fix regression insafe_join()to allow joining a base path with an empty string. (#336)1.6 (2017-06-21)Breaking:Remove backends deprecated in v1.5.1 (#280)Breaking:DropBoxStoragehas been upgrade to support v2 of the API, v1 will be shut off at the end of the month - upgrading is recommended (#273)Breaking:TheSFTPStoragebackend now checks for the existence of the fallback~/.ssh/known_hostsbefore attempting to load it. If you had previously been passing in a path to a non-existent file it will no longer attempt to load the fallback. (#118,#325)Breaking:The default version value forAWS_S3_SIGNATURE_VERSIONis now's3v4'. No changes should be required (#335)Deprecation:The undocumentedgs.GSBotoStoragebackend. See the newgcloud.GoogleCloudStorageorapache_libcloud.LibCloudStoragebackends instead. (#236)Add a new backend,gcloud.GoogleCloudStoragebased on thegoogle-cloudbindings. (#236)Pass in the location constraint when auto creating a bucket inS3Boto3Storage(#257,#258)Add support for readingAWS_SESSION_TOKENandAWS_SECURITY_TOKENfrom the environment toS3Boto3StorageandS3BotoStorage. 
(#283)Fix Boto3 non-ascii filenames on Python 2.7 (#216,#217)Fixcollectstatictimezone handling in and addget_modified_timetoS3BotoStorage(#290)Add support for Django 1.11 (#295)Addprojectkeyword support to GCS inLibCloudStoragebackend (#269)Files that have a guessable encoding (e.g. gzip or compress) will be uploaded with that Content-Encoding in thes3boto3backend (#263,#264)The Dropbox backend now properly translates backslashes in Windows paths into forward slashes (e52a127)The S3 backends now permit colons in the keys (#248,#322)1.5.2 (2017-01-13)Actually useSFTP_STORAGE_HOSTinSFTPStoragebackend (#204)FixS3Boto3Storageto avoid race conditions in a multi-threaded WSGI environment (#238)Fix trying to localize a naive datetime whensettings.USE_TZisFalseinS3Boto3Storage.modified_time. (#235,#234)Fix automatic bucket creation inS3Boto3StoragewhenAWS_AUTO_CREATE_BUCKETisTrue(#196)Improve the documentation for the S3 backends1.5.1 (2016-09-13)Breaking:Drop support for Django 1.7 (#185)Deprecation:hashpath, image, overwrite, mogile, symlinkorcopy, database, mogile, couchdb. See (#202) to discuss maintenance going forwardUse a fixedmtimeargument forGzipFileinS3BotoStorageandS3Boto3Storageto ensure a stable output for gzipped filesUse.putfileobjinstead of.putinS3Boto3Storageto use the transfer manager, allowing files greater than 5GB to be put on S3 (#194,#201)UpdateS3Boto3Storagefor Django 1.10 (#181) (get_modified_timeandget_accessed_time)Fix bad kwarg name inS3Boto3StoragewhenAWS_PRELOAD_METADATAisTrue(#189,#190)1.5.0 (2016-08-02)Add new backendS3Boto3Storage(#179)Add astrictoption toutils.setting(#176)Tests, documentation, fixing.closeforSFTPStorage(#177)Tests, documentation, add.readlinesforFTPStorage(#175)Tests and documentation forDropBoxStorage(#174)FixMANIFEST.into not ship.pycfiles. (#145)Enable CI testing of Python 3.5 and fix test failure from api change (#171)1.4.1 (2016-04-07)Files that have a guessable encoding (e.g. gzip or compress) will be uploaded with that Content-Encoding in thes3botobackend. Compressable types such asapplication/javascriptwill still be gzipped. PR#122FixDropBoxStorage.existscheck and addDropBoxStorage.url(#127)AddGS_HOSTsetting (with a default ofGSConnection.DefaultHost) to fixGSBotoStorage. (#124,#125)1.4 (2016-02-07)This package is now released on PyPI asdjango-storages. 
Please update your requirements files todjango-storages==1.4.1.3.2 (2016-01-26)Fix memory leak from not closing underlying temp file ins3botobackend (#106)Allow easily specifying a custom expiry time when generating a url forS3BotoStorage(#96)Check for bucket existence when the empty path (‘’) is passed tostorage.existsinS3BotoStorage- this prevents a crash when runningcollectstatic-con Django 1.9.1 (#112) fixed in#1161.3.1 (2016-01-12)A few Azure Storage fixes [pass the content-type to Azure, handle chunked content, fixurl] (#45)Add support for a Dropbox (dropbox) storage backendVarious fixes to theapache_libcloudbackend [return the number of bytes asked for by.read, make.namenon-private, don’t initialize to an emptyBytesIOobject] (#55)Fix multi-part uploads ins3botobackend not respectingAWS_S3_ENCRYPTION(#94)Automatically gzip svg files (#100)1.3 (2015-08-14)Breaking:Drop Support for Django 1.5 and Python 2.6Breaking:Remove previously deprecated mongodb backendBreaking:Remove previously deprecatedparse_ts_extendedfrom s3boto storageAdd support for Django 1.8+ (#36)AddAWS_S3_PROXY_HOSTandAWS_S3_PROXY_PORTsettings for s3boto backend (#41)Fix Python3K compat issue in apache_libcloud (#52)Fix Google Storage backend not respectingGS_IS_GZIPPEDsetting (#51,#60)Rename FTP_nameattribute tonamewhich is what the DjangoFileapi is expecting (#70)PutStorageMixinfirst in inheritance to maintain backwards compat with older versions of Django (#63)1.2.3 (2015-03-14)Variety of FTP backend fixes (fixexists, addmodified_time, remove call to non-existent function) (#26)Apparently the year changed to 20151.2.2 (2015-01-28)Remove always show all warnings filter (#21)Release package as a wheelAvoid resource warning during install (#20)MadeS3BotoStoragedeconstructible (previously onlyS3BotoStorageFilewas deconstructible) (#19)1.2.1 (2014-12-31)Deprecation:Issue warning aboutparse_ts_extendedDeprecation:mongodb backend - django-mongodb-engine now ships its own storage backendFixstorage.modified_timecrashing on new files whenAWS_PRELOAD_METADATA=True(#11,#12,#14)1.2 (2014-12-14)Breaking:Remove legacy S3 storage (#1)Breaking:Remove mosso files backend (#2)Add text/javascript mimetype to S3BotoStorage gzip allowed defaultsAdd support for Django 1.7 migrations in S3BotoStorage and ApacheLibCloudStorage (#5,#8)Python3K (3.3+) now available for S3Boto backend (#4)NOTE: Version 1.1.9 is the first release of django-storages after the fork. It represents the current (2014-12-08) state of the original django-storages in master with no additional changes. 
This is the first release of the code base since March 2013.1.1.9 (2014-12-08)Fix syntax for Python3 with pull-request#91Support pushing content type from File object to GridFS with pull-request#90Support passing a region to the libcloud driver with pull-request#86Handle trailing slash paths fixes#188fixed by pull-request#85Use a SpooledTemporaryFile to conserve memory in S3BotoFile pull-request#69Guess content-type for S3BotoStorageFile the same way that _save() in S3BotoStorage doesPass headers and response_headers through from url to generate_url in S3BotoStorage pull-request#65Added AWS_S3_HOST, AWS_S3_PORT and AWS_S3_USE_SSL settings to specify host, port and is_secure in pull-request#66Everything Below Here Was Previously Released on PyPI under django-storages1.1.8 (2013-03-31)Fixes#156regarding date parsing, ValueError when running collectstaticProper handling of boto dev version parsingMade SFTP URLs accessible, now uses settings.MEDIA_URL instead of sftp://1.1.7 (2013-03-20)Listing of huge buckets on S3 is now prevented by using the prefix argument to boto’s list() methodInitial support for Windows Azure StorageSwitched to useing boto’s parse_ts date parser getting last modified info when using S3boto backendFixed key handling in S3boto and Google Storage backendsAccount for lack of multipart upload in Google Storage backendFixed seek() issue when using AWS_IS_GZIPPED by darkness51 with pull-request#50Improvements to S3BotoStorage and GSBotoStorage1.1.6 (2013-01-06)Merged many changes from Jannis Leidel (mostly regarding gzipping)Fixed tests by Ian LewisAdded support for Google Cloud Storage backend by Jannis LeidelUpdated license file by Dan Loewenherz, fixes#133with pull-request#44Set Content-Type header for use in upload_part_from_file by Gerardo CurielPass the rewind parameter to Boto’s set_contents_from_file method by Jannis Leidel with pull-request#45Fix for FTPStorageFile close() method by Mathieu Comandon with pull-request#43Minor refactoring by Oktay Sancak with pull-request#48Ungzip on download based on Content-Encoding by Gavin Wahl with pull-request#46Add support for S3 server-side encryption by Tobias McNulty with pull-request#17Add an optional setting to the boto storage to produce protocol-relative URLs, fixes#1051.1.5 (2012-07-18)Merged pull request#36from freakboy3742 Keith-Magee, improvements to Apache Libcloud backend and docsMerged pull request#35from atodorov, allows more granular S3 access settingsAdd support for SSL in Rackspace Cloudfiles backendFixed the listdir() method in s3boto backend, fixes#57Added base url tests for safe_join in s3boto backendMerged pull request#20from alanjds, fixed SuspiciousOperation warning if AWS_LOCATION ends with ‘/’Added FILE_BUFFER_SIZE setting to s3boto backendMerged pull request#30from pendletongp, resolves#108,#109and#110Updated the modified_time() method so that it doesn’t require dateutil. 
fixes#111Merged pull request#16from chamal, adds Apache Libcloud backendWhen preloading the S3 metadata make sure we reset the files key during saving to prevent stale metadataMerged pull request#24from tobias.mcnulty, fixes bug where s3boto backend returns modified_time in wrong time zoneFixed HashPathStorage.location to no longer use settings.MEDIA_ROOTRemove download_url from setup file so PyPI dist is used1.1.4 (2012-01-06)Added PendingDeprecationWarning for mosso backendMerged pull request#13from marcoala, addsSFTP_KNOWN_HOST_FILEsetting to SFTP storage backendMerged pull request#12from ryankask, fixes HashPathStorage tests that delete remote mediaMerged pull request#10from key, adds support for django-mongodb-engine 0.4.0 or later, fixes GridFS file deletion bugFixed S3BotoStorage performance problem calling modified_time()Added deprecation warning for s3 backend, refs#40Fixed CLOUDFILES_CONNECTION_KWARGS import error, fixes#78Switched to sphinx documentation, set official docs up onhttps://django-storages.readthedocs.io/HashPathStorage uses self.exists now, fixes#831.1.3 (2011-08-15)Created this lovely change logFixed#89: broken StringIO import in CloudFiles backendMergedpull request #5: HashPathStorage path bug
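Tying the installation section and the settings named throughout the changelog together, a minimal, hedged settings.py fragment for the S3 backend might look like this (bucket name and region are placeholders; all setting names appear in the README or changelog above, or in the documentation):

# settings.py (fragment)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

AWS_STORAGE_BUCKET_NAME = 'my-bucket'    # placeholder
AWS_S3_REGION_NAME = 'eu-west-1'         # placeholder
AWS_DEFAULT_ACL = None                   # inherit the bucket ACL (see the 1.7+ notes above)
AWS_QUERYSTRING_AUTH = True              # generate signed URLs for private objects
AWS_S3_OBJECT_PARAMETERS = {
    'CacheControl': 'max-age=86400',
}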
actinrings
Supporting Python modules for the equilibrium analysis in "Constriction of actin rings by passive crosslinkers".

A Python package for the equilibrium analysis of Ref. 1. It includes a module implementing the equilibrium model, a module for creating plots with this model, plots from finite-element calculations output by a related package (elastic-rings), and plots from Monte Carlo simulations from another related package (ActinRingsMC.jl).

Some example scripts for creating plots are provided in the scripts directory.

Installation

This package was developed and used on Linux. It is available on the PyPI repository and can be installed by running:

pip install actinrings

If you are not using a virtual environment, the --user flag may be used instead to install it locally for the user. To install directly from this repository, run:

python -m build
pip install dist/actinrings-[current version]-py3-none-any.whl

To run the above, it may be necessary to update a few packages:

python3 -m pip install --upgrade pip setuptools wheel

For more information on building and installing Python packages, see the documentation from the Python Packaging Authority.

References

[1] A. Cumberworth and P. R. ten Wolde, Constriction of actin rings by passive crosslinkers, arXiv:2203.04260 [physics.bio-ph].

Links

- Python Packaging Authority
- elastic-rings
- ActinRingsMC.jl
- Replication package for Ref. 1
action
This library draws a parallel between command-line args and function args in Python. Positionals get passed to regular function arguments, and options or flags are mapped to keyword arguments.

For example, this command invocation:

$ package install -u ffmpeg -v

could be translated to this function call:

package.install('ffmpeg', upgrade=True, verbose=1)

This library does the bridging automatically using information supplied in the form of decorators and type annotations. To make a function accessible as a command-line action, decorate it with action:

import sys

import action


@action
def install(package_name, *, upgrade: action.Flag = False, verbose: action.Count = 0):
    """ Do the work """


if __name__ == '__main__':
    sys.exit(action.execute(sys.argv[1:]))

All other exported symbols are described below.

@action

The main decorator, used to make actions from functions. It takes a single function as input and inspects its signature. The name of the command being created is drawn from the name of the original function. All arguments before the splat (*) are counted as positionals, and those going after are options or flags.

Configuration through annotations

Client code can alter how certain arguments are treated and presented by annotating them. One way to do so is to supply a constructor as an annotation:

@action
def add(x: int, y: int):
    print(x + y)

That constructor shall be called upon execution to coerce types before passing arguments to the action invoked. Positionals only support this kind of annotation.

Options, on the other hand, use callable annotations differently. Each option or flag could occur many times, so that behaviour should be covered by the corresponding annotation. Some sane defaults come already packaged:

Flag

Denotes whether some condition is truthy. Could be specified any number of times on the command line. The first occurrence sets the value to True; subsequent occurrences have no effect:

@action
def add(x: int, y: int, *, pad: action.Flag = False):
    result = x + y
    format = '{}'
    if pad:
        format = '{:04}'
    print(format.format(result))

Count

Initially is None. The first occurrence sets it to one; each subsequent occurrence increments it by one:

@action
def add(x: int, y: int, *, verbose: action.Count = 0):
    result = x + y
    if verbose > 3:
        print('augend:', x)
        print('addend:', y)
        print('sum:   ', result)
        print()
    elif verbose > 0:
        print('{} + {} = {}'.format(x, y, result))
    else:
        print(result)

Key

A generic value specified as a command-line option:

@action
def walk(*, depth: action.Key('depth', type=int)):
    ...

The Key constructor has three arguments: short, long and type. One of short or long is required. type is str by default.

any callable

There is also a shorthand notation for specifying a Key:

@action
def walk(*, depth: int):
    ...

Short and long names shall be deduced from the argument name.

(short, long, type) triple

Another shorthand for Key allows specifying short and long names manually:

@action
def walk(*, depth: ('r', 'depth', int)):
    ...

Option abstract base

On a low level, to determine the value for an option, the command-line processor performs a folding operation over all occurrences of that option. Therefore, to have fine-grained control over the argument-parsing process, one could subclass action.Option and use it instead of the prepackaged annotations for options. The subclass should override the call method to take two arguments: the old value and an option body. That call method could either return a new value or throw an exception to stop command-line processing right away.
If the call method returns a value, that value is passed as the old value on the next invocation. (A minimal sketch of such a subclass appears at the end of this description.)

@action.default

The command line processor selects the action whose name matches the first positional. If there is no such action registered, the command line processor attempts to invoke the special action marked as default:

@action.default
@action
def install(package):
    ...

# `./prog.py install ffmpeg` shall invoke `install('ffmpeg')`
# and `./prog.py ffmpeg` shall still invoke `install('ffmpeg')`

This decorator can also be used if the program has a single action:

@action.default
def list_directory():
    ...

action.execute

Looks up a previously registered action whose name matches the first positional from the command line, matches the command-line arguments to the selected action's arguments, and invokes that action. The first positional argument is hidden from the command invoked. action.execute never calls sys.exit, so it can be used in an interactive prompt.

action.context

If you want an isolated argument parser that avoids modifying module-wide state, you can instantiate another Action with this method. Normally, an Action object is constructed in place of the action module when importing.

Coded with Love.
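As promised above, here is a minimal sketch of a custom Option subclass that folds every occurrence of an option into a list. It is illustrative only: the override point is shown as __call__, but the docs above only say to override the "call method", and the way the subclass is attached as an annotation is likewise an assumption modeled on Flag and Count.

import action

class Append(action.Option):
    """Fold each occurrence of the option into a growing list."""

    def __call__(self, old, body):  # assumed override point; docs only say "call method"
        # `old` is None on the first occurrence, per the folding semantics above
        values = [] if old is None else old
        values.append(body)  # keep the raw option body; coerce here if needed
        return values        # becomes `old` on the next occurrence

@action
def tag(*, label: Append = ()):
    for item in label:
        print(item)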
actionable-agile-extract
This utility helps extract data from JIRA for processing with the ActionableAgile™ Analytics tool (https://www.actionableagile.com/analytics-tools/), as well as for ad-hoc analysis using Excel.

It will produce a CSV file with one row for each JIRA issue matching a set of filter criteria, containing basic information about the issue as well as the date the issue entered each step in the main cycle workflow. This data can be used to produce a Cumulative Flow Diagram, a cycle time scatterplot, a cycle time histogram, and other analytics based on cycle time.

Installation

Install Python 2.7 and pip. See http://pip.readthedocs.org/en/stable/installing/. Install using pip:

$ pip install actionable-agile-extract

If you get errors, try installing numpy and pandas separately first:

$ pip install numpy pandas
$ pip install actionable-agile-extract

Configuration

Write a YAML configuration file like so, calling it e.g. config.yaml:

# How to connect to JIRA?
Connection:
    Domain: https://myserver.atlassian.net
    Username: myusername # If missing, you will be prompted at runtime
    Password: secret # If missing, you will be prompted at runtime

# What to search for?
Criteria:
    Project: ABC # JIRA project key to search
    Issue types: # Which issue types to include
        - Story
        - Defect
    Valid resolutions: # Which resolution statuses to include (unresolved is always included)
        - Done
        - Closed
    JQL: labels != "Spike" # Additional filter as raw JQL, optional

# Describe the workflow. Each step can be mapped to either a single JIRA
# status, or a list of statuses that will be treated as equivalent
Workflow:
    Open: Open
    Analysis IP: Analysis in Progress
    Analysis Done: Analysis Done
    Development IP: Development in Progress
    Development Done: Development Done
    Test IP: Test in Progress
    Test Done: Test Done
    Done:
        - Closed
        - Done

# Map field names to additional attributes to extract
Attributes:
    Components: Component/s
    Priority: Priority
    Release: Fix version/s

The sections for Connection, Criteria and Workflow are required.

Under Connection, only Domain is required. If not specified, the script will prompt for both or either of username and password when run.

Under Criteria, all fields are technically optional, but you should specify at least some of them to avoid an unbounded query.

Under Workflow, at least two steps are required. Specify the steps in order. You may either specify a single workflow value or a list (as shown for Done above), in which case multiple JIRA statuses will be collapsed into a single state for analytics purposes.

The file, and values for things like workflow statuses and attributes, are case insensitive.

When specifying attributes, use the name of the field (as rendered on screen in JIRA), not its id (as you might do in JQL), so e.g. use Component/s not components.

The attributes Type (issue type), Status and Resolution are always included.

When specifying fields like Component/s or Fix version/s that may have lists of values, only the first value set will be used.

Running

Run the binary with:

$ jira-cycle-extract config.yaml data.csv

This will extract a CSV file called data.csv with cycle data based on the configuration in config.yaml. Use the -v option to print more information during the extract process.
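As a quick illustration of the kind of ad-hoc analysis the extracted CSV enables, the sketch below computes a rough cycle time per issue with pandas. The column names are assumptions based on the workflow configured above (the tool writes one date column per workflow step); check your own CSV header before running.

import pandas as pd

# 'Analysis IP' and 'Done' are assumed column names taken from the Workflow section above
df = pd.read_csv('data.csv', parse_dates=['Analysis IP', 'Done'])

# Cycle time: days from entering the first in-progress step to reaching Done
df['cycle_time_days'] = (df['Done'] - df['Analysis IP']).dt.days

print(df['cycle_time_days'].describe())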
actionable-recourse
actionable-recourse is a python library for recourse verification and reporting.

Recourse in Machine Learning?

Recourse is the ability of a person to change the prediction of a machine learning model by altering actionable input variables – e.g., income and n_credit_cards as opposed to age or alma_mater.

Recourse is an essential aspect of procedural fairness in consumer-facing applications of machine learning. When a consumer is denied a loan by a machine learning model, for example, they should be able to change the input variables of the model in a way that guarantees approval. Otherwise, this person will be denied the loan so long as the model is deployed, and stripped of the ability to influence a decision that affects their livelihood.

Verification & Reporting

This library protects consumers against this harm through verification and reporting. These tools can be used to answer questions like:

- What can a person do to obtain a favorable prediction from a given model?
- How many people can change their prediction?
- How difficult is it for people to change their prediction?

Specific functionality includes:

- Customize the set of feasible actions for each input variable of a machine learning model.
- Produce a list of actionable changes for a person to flip the prediction of a model.
- Estimate the feasibility of recourse of a model on a population of interest.
- Evaluate the difficulty of recourse of a model on a population of interest.

The tools are currently designed to support linear classification models, and will be extended to cover other kinds of models over time.

Installation

You can install the library via pip:

$ pip install actionable-recourse

Requirements

- Python 3
- Python-MIP or CPLEX

CPLEX

CPLEX is a fast optimization solver with a Python API. It is commercial software, but free to download for students and faculty at accredited institutions. To obtain CPLEX:

1. Register for the IBM Academic Initiative
2. Download the IBM ILOG CPLEX Optimization Studio from the software catalog
3. Install the CPLEX Optimization Studio (on MacOS, run ./<cplex-path>/Contents/MacOS)
4. Set up the CPLEX Python API as described here

If you have problems installing CPLEX, please check the CPLEX user manual or the CPLEX forums.

Usage

import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
import recourse as rs

# import data
url = 'https://raw.githubusercontent.com/ustunb/actionable-recourse/master/examples/paper/data/credit_processed.csv'
df = pd.read_csv(url)
y, X = df.iloc[:, 0], df.iloc[:, 1:]

# train a classifier
clf = LogisticRegression(max_iter = 1000)
clf.fit(X, y)
yhat = clf.predict(X)

# customize the set of actions
A = rs.ActionSet(X)  ## matrix of features. ActionSet will set bounds and step sizes by default

# specify immutable variables
A['Married'].mutable = False

# can only specify properties for multiple variables using a list
A[['Age_lt_25', 'Age_in_25_to_40', 'Age_in_40_to_59', 'Age_geq_60']].mutable = False

# education level
A['EducationLevel'].step_direction = 1  ## force conditional immutability.
A['EducationLevel'].step_size = 1  ## set step-size to a custom value.
A['EducationLevel'].step_type = "absolute"  ## force conditional immutability.
A['EducationLevel'].bounds = (0, 3)

A['TotalMonthsOverdue'].step_size = 1  ## set step-size to a custom value.
A['TotalMonthsOverdue'].step_type = "absolute"  ## discretize on absolute values of feature rather than percentile values
A['TotalMonthsOverdue'].bounds = (0, 100)  ## set bounds to a custom value.

## get model coefficients and align
A.align(clf)  ## tells `ActionSet` which directions each feature should move in to produce positive change.

# Get one individual
i = np.flatnonzero(yhat <= 0).astype(int)[0]

# build a flipset for one individual
fs = rs.Flipset(x = X.values[i], action_set = A, clf = clf)
fs.populate(enumeration_type = 'distinct_subsets', total_items = 10)
fs.to_latex()
fs.to_html()

# Run Recourse Audit on Training Data
auditor = rs.RecourseAuditor(A, coefficients = clf.coef_, intercept = clf.intercept_)
audit_df = auditor.audit(X)  ## matrix of features over which we will perform the audit.

## print mean feasibility and cost of recourse
print(audit_df['feasible'].mean())
print(audit_df['cost'].mean())
print_recourse_audit_report(X, audit_df, y)

# or produce additional information of cost of recourse by other variables
# print_recourse_audit_report(X, audit_df, y, group_by = ['y', 'Married', 'EducationLevel'])

Contributing

We're actively working to improve this package and make it more useful. If you come across bugs, have comments, or want to help, let us know. We welcome any and all contributions! For more info on how to contribute, check out these guidelines. Thank you community!

Resources

For more about recourse, check out our paper: Actionable Recourse in Linear Classification.

If you use this library in your research, we would appreciate a citation!
actionai
ActionAI

A small library to run local functions using OpenAI function calling.

Warning: This library is still in its early stages, so please use it cautiously. If you find any bugs, please create a new issue.

Install

pip install actionai

Usage

Note: A function must be fully typed and must have a docstring.

# define a new function
def get_current_weather(location: str, unit: str = "fahrenheit"):
    """Function to get current weather for the given location"""
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return weather_info

import actionai

action = actionai.ActionAI()
action.register(get_current_weather)

response = action.prompt("What is the current weather in the north pole?")

print(response["choices"][0]["message"]["content"])
# The current weather at the North Pole is 72°F. It is sunny and windy.

The OpenAI API key will be read automatically from the OPENAI_API_KEY environment variable. You can pass it manually as:

import actionai

action = actionai.ActionAI(openai_api_key="YOUR_KEY")

Adding context

Sometimes your function will have variables that need to be set by the program:

def list_todos(user: str):
    """Function to list all todos"""
    return todos[user]

action = actionai.ActionAI(context={"user": "jason"})

The context keys are skipped when creating the JSON schema. The values are passed directly at the time of the function call.

Choosing a model

By default, completion runs on the gpt-3.5-turbo-0613 model. You can change the model using the model argument:

import actionai

action = actionai.ActionAI(model="gpt-4")

You can see the complete list of supported chat completion models here.

Demo

Running the todo example 👇🏻

For more examples, check out the examples directory.
actionbar.babble
Introduction

actionbar.babble is the Babble chat client integration package for the ActionBar, an extensible floating panel with links at the bottom of your Plone site.

For more information regarding Babble and ActionBar, please see their respective packages: babble.client, babble.server and actionbar.panel.

Additional info

For additional info, please read the documentation at http://opkode.net/babbledocs/actionbar.babble/index.html

Changelog

0.2 (2011-10-05)
- Add timestamp support. [jcbrand]

0.1b2 (2011-04-04)
- Fixed an IE javascript bug #2038 [deroiste]

0.1b1 (2011-01-18)
- Added support for babble.client 1.4 [jcbrand]

0.1a3 (2010-04-28)
- Fixed for ajax calls in the context of portal_factory [jcbrand]
- Tweak babblefeeder.css to be loaded after chat.css [jcbrand]
- Fixed AJAX call problems [jcbrand]

0.1a2 (2010-04-08)
- Tweaked jsregistry.xml to gain Plone3 compatibility [jcbrand]

0.1a1 (2010-04-08)
- Initial release [jcbrand]
actionbar.panel
Introduction

actionbar.panel provides an admin/action panel at the bottom of your Plone site, similar to the one used on the now old facebook style.

I took a lot of the css from Soh Tanaka's excellent tutorial: http://www.sohtanaka.com/web-design/facebook-style-footer-admin-panel-part-1/

Adding new actions

The actionbar is fully extendible. It is a viewlet manager, with the name 'actionbar.panel'. This means that you can add new actions and links to the panel by creating and registering viewlets for it in your own Plone add-ons. See actionbar/panel/browser/configure.zcml for widget registrations.

If you want to publish eggs with add-on actions for the actionbar, please consider releasing them under the actionbar.* namespace.

Installing

actionbar.panel should be a 'drop-in' installation. Just add 'actionbar.panel' to your eggs section in your buildout.cfg. Then use Plone's control panel, or the portal_quickinstaller in the Zope management interface, to install the package. Once you've done this, the panel and some default actions should be visible in Plone.

Configuring

You can hide or rearrange the actions on the panel. You can also hide the entire panel itself. To do this, go to the viewlet management screen by appending '/@@manage-viewlets' to the root URL of your Plone site. The panel viewlet manager and its actions will be near the bottom of the page.

Compatibility

actionbar.panel has been tested on Plone 4 and Plone 3.3.5. It should work on older versions of Plone 3 as well.

Icons

The icons used in this release were created by Liam McKay (http://wefunction.com/) and are released under the GPL. http://www.woothemes.com/2009/09/woofunction-178-amazing-web-design-icons/

Contact

Please contact me if you have any questions, compatibility problems or improvement suggestions. brand at syslab dot com

Changelog

1.2.1 (2010-10-01)
- Adding some padding so that the overlay won't hide lower borders of overlays [do3cc]

1.2 (2010-05-05)
- Fixed the 'User Profile' widget to point to the login_form when viewed anonymously [jcbrand]

1.1 (2010-03-23)
- Fixed the package name in the INSTALL.txt and LICENSE.txt files [jcbrand]

1.0 (2010-03-18)
- Initial release [jcbrand]
actioncable
No description available on PyPI.
actioncord
No description available on PyPI.
actiondb
# Welcome to ActionDB

*The database that doesn't wait for you, it **goes**.*

![MyPy](https://img.shields.io/badge/MyPy-Passing-blue.svg) ![Flake8](https://img.shields.io/badge/Flake8-Pep%208-brightgreen.svg) [![Build Status](https://travis-ci.org/Zwork101/actionDB.svg?branch=master)](https://travis-ci.org/Zwork101/actionDB)

Full docs [here](https://zwork101.github.io/action/)

## Introduction

Welcome to actionDB, the DB that comes to you. ActionDB is designed to keep events that need to be summoned at a later date persistent. Personally, this will be used when I make discord bots via https://github.com/b1naryth1ef/disco. However, this doesn't mean you can't use this with anything else; you can, and I highly encourage that you do. While this library was designed so that you could easily create and use your own backend, you're welcome to use the sqlite3 backend that comes with it. Lastly, I want to talk about what's planned. At some point, I want to do some major refactoring, and separate the client and server. This would allow you to connect to a remote database, something that's not essential but nice, but mainly to use whatever concurrency library you want. The backend will still be gevent, however the client can be asyncio, trio, threading, etc.

## Quick Start

To get started, we'll need to create our client; this will also create an action.db file, however the name can be changed when initializing:

```py
from action import Action

action = Action()
```

After this, you should add listeners to your heart's content:

```py
@action.on("new_msg")
def new_message(name: str, content: str, id: str = None):
    print(name + "\n\n" + content + "\n" + "(" + id + ")")
```

And if you want to do something, such as send a message in 5 minutes, you can do:

```py
action.trigger_in(5 * 60, "new_msg", "Hello World!", "My name is sam, and I live in a can", id="2323445")
```

Then, presuming you're also doing other things, or have utilized ``gevent.joinall``, in 5 minutes you'll see:

    Hello World!

    My name is sam, and I live in a can
    (2323445)

And that's all there is! If you want to see how you can trigger events at an exact time, or other fun stuff, see the documentation graciously provided above!

### Note:

You can import the main classes from action like so:

```py
from action import Action, ActionBackend, ActionEmitter, Event
```

Happy Coding!
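Putting the quick-start pieces together, here is a runnable sketch of a complete mini-program. Only `Action`, `on`, and `trigger_in` are documented above; the trailing `gevent.sleep` call is an assumption about one way to keep the process alive long enough for the trigger to fire.

```py
import gevent
from action import Action

action = Action()  # creates action.db in the working directory

@action.on("reminder")
def remind(text: str):
    print("Reminder:", text)

# Fire the "reminder" event 3 seconds from now
action.trigger_in(3, "reminder", "stand up and stretch")

# Keep the greenlet loop running so the trigger can fire (assumed pattern;
# gevent.joinall over the library's greenlets would be another option)
gevent.sleep(5)
```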
actionform
UNKNOWN
actionfreeze
Schedule a function to run repeatedly for a while, as long as it does not exceed 1 minute.
action-graph
ActionGraph

ActionGraph is a symbolic AI agent for generating action plans based on preconditions and effects. This is loosely based on the STRIPS approach (https://en.wikipedia.org/wiki/Stanford_Research_Institute_Problem_Solver). State variables are modeled as nodes; the actions represent edges/transitions from one state to another. Dijkstra's shortest path algorithm (A* but without the heuristic cost estimate) is used to generate a feasible, lowest-cost plan.

Source: https://github.com/bharathra/ACTION_GRAPH

Usage:

from action_graph.agent import Agent
from action_graph.action import Action


class Drive(Action):
    effects = {"driving": True}
    preconditions = {"has_drivers_license": True, "tank_has_gas": True}


class FillGas(Action):
    effects = {"tank_has_gas": True}
    preconditions = {"has_car": True}


class RentCar(Action):
    effects = {"has_car": True}

    cost = 100  # dollars


class BuyCar(Action):
    effects = {"has_car": True}
    preconditions = {}

    cost = 10_000  # dollars


if __name__ == "__main__":
    world_state = {"has_car": False, "has_drivers_license": True}
    goal_state = {"driving": True}

    ai = Agent()

    actions = [a(ai) for a in Action.__subclasses__()]
    ai.load_actions(actions)

    print("Initial State:", world_state)
    ai.update_state(world_state)

    print("Goal State:   ", goal_state)

    plan = ai.get_plan(goal_state)

    # ai.execute_plan(plan)
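To see what the planner chose before executing anything, you might inspect the returned plan as a follow-on to the example above. This is a hedged sketch: it assumes get_plan returns an ordered sequence of Action instances with a useful repr, which the README does not spell out.

# Inspect the plan before executing it (assumes `plan` is an ordered
# sequence of Action instances; check the repo for the exact return type)
for step in plan:
    print(step)

# Given the costs above, the agent should prefer RentCar (cost 100)
# over BuyCar (cost 10_000) when satisfying the `has_car` precondition.
ai.execute_plan(plan)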
action-hero
[action_hero_logo]: ./art/logo.svg

![Action Hero Logo][action_hero_logo]

![PyPI - Python Version](https://img.shields.io/pypi/pyversions/action-hero?style=flat-square) ![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg) [![codecov](https://codecov.io/gh/kadimisetty/action-hero/branch/master/graph/badge.svg)](https://codecov.io/gh/kadimisetty/action-hero) [![Build Status](https://travis-ci.org/kadimisetty/action-hero.svg?branch=master)](https://travis-ci.org/kadimisetty/action-hero) [![PEP8](https://img.shields.io/badge/code%20style-pep8-green.svg)](https://www.python.org/dev/peps/pep-0008/) ![PyPI - License](https://img.shields.io/pypi/l/action-hero?style=flat-square)

#### `action_hero` is a python package that helps you __manage user arguments in command line applications using `argparse`__

[Introduction](#introduction) · [Quick Usage](#quick-usage) · [Help & FAQ](#help-and-faq) · [Catalog](#catalog) · [Development](#development)

## Introduction

<dl>
<dt><code>argparse</code></dt>
<dd><code>argparse</code> is a python standard library module used to make command line applications. <code>argparse</code> provides <code>ArgumentParser</code> that parses user arguments and runs <code>argparse.Action</code>s on them. <a href="https://docs.python.org/3/library/argparse.html">⚓︎</a></dd>

<dt><code>action_hero</code> 🔥</dt>
<dd><code>action_hero</code> makes <code>argparse</code> more capable by providing a large number of custom actions. For example, the <strong><code>FileIsWritableAction</code> automatically verifies that file path(s) accepted as arguments are writable, informing the user if they aren't.</strong> This saves you the trouble of doing that check yourself. Nice, no? <a href="#catalog">Browse the catalog</a> for many such actions.</dd>
</dl>

## Quick Usage

__1. Installation__:

```python
pip install action_hero
```

__2. Quick Usage__: Import an action and specify it when adding an argument to your parser.

```python
from action_hero import FileIsReadableAction
...
parser.add_argument("--file", action=FileIsReadableAction)
...
```

__3. Full Example__: CLI program that counts number of lines of a file.

```python
# examples/line_counter.py
import argparse

from action_hero import FileIsReadableAction


if __name__ == "__main__":
    # Create parser
    parser = argparse.ArgumentParser()

    # Add user argument "--file" and assert that it will be readable
    parser.add_argument("--file", action=FileIsReadableAction)

    # Parse user arguments
    args = parser.parse_args()

    if args.file:
        # Print number of lines in file
        with open(args.file) as f:
            print("{} has {} lines".format(args.file, len(f.readlines())))
    else:
        # Print usage if no arguments were given
        parser.print_usage()
```

Run `line_counter.py` on the command line

```bash
$ ls
line_counter.py mary.md

$ python line_counter.py --file mary.md
mary.md has 39 lines

$ python line_counter.py
usage: line_counter.py [-h] [--file FILE]

$ python line_counter.py --file nofile.md
usage: line_counter.py [-h] [--file FILE]
line_counter.py: error: argument --file: File is not readable
```

**Note**: _Supported Python Versions >= 3.5_

## Help and FAQ

### Accepting `action_values`

There are times your action requires an additional value. For instance, when your argument accepts only filenames with `md` or `markdown` extensions. You can use the `FileHasExtensionAction` action for this and pass in the extensions to check for via `action_values`, like so —

```python
parser.add_argument(
    "--file",
    action=FileHasExtensionAction,
    action_values=["md", "markdown"]
)
```

Unless otherwise mentioned, `action_values` should be provided as a non-empty list of strings, e.g. `action_values = ["md", "markdown"]`.

### Pipelining multiple actions

The `PipelineAction` allows you to run multiple actions as a pipeline. Pass in your pipeline of actions as a list to `action_values`. If one of the actions you're passing in has its own `action_values`, put that one in as a tuple, like such: `(FileHasExtensionAction, ["md", "markdown"])`. Here's an example of pipelining actions for `--file`:

1. File has extensions `md` or `markdown`
2. File exists

```python
parser.add_argument(
    "--file",
    action=PipelineAction,
    action_values=[
        (FileHasExtensionAction, ["md", "markdown"]),
        FileExistsAction
    ]
)
```

Another helpful feature this action provides is the _order of error reporting_. In the above example, if the supplied argument file did not have the markdown extensions, the error message would reflect that and exit. After the user redoes the entry with a valid filename, the next action in the pipeline applies `FileExistsAction`, which checks for existence. If the file does not exist, an error message about the file not existing will be shown and it exits, allowing the user to try again.

This behavior can save you a lot of manual condition checks later on. For example, here's how to check for an _existing, writable, non-empty_, markdown file —

```python
parser.add_argument(
    "--file",
    action=PipelineAction,
    action_values=[
        FileExistsAction,
        FileIsWritableAction,
        FileIsNotEmptyAction,
        (FileHasExtensionAction, ["md", "markdown"])
    ]
)
```

### Exceptions in this module

You'll come across two different exceptions in `action_hero`.

1. __`ValueError`__: These are intended for you, the CLI developer. You'd want to fix any underlying issue that causes them before releasing your CLI, e.g. when `action_values` is an empty list.
2. __`argparse.ArgumentError`__: These are intended for your CLI's users, so they might use the messages as hints to provide correct command line options.

### Not capturing user argument exceptions

`argparse.ArgumentParser` has a slightly unconventional approach to handling `argparse.ArgumentError`s. Upon encountering one, it prints argument usage information, the error, and exits. I mention this so you don't set up a `try/except` around `parser.parse_args()` to capture that exception.

In order to maintain consistency with the rest of your `argparse` code, exceptions in `action_hero` are also of type `argparse.ArgumentError` and cause system exit as well. More information can be found in [PEP 389](https://www.python.org/dev/peps/pep-0389/#id46). Since this is expected behavior, I recommend you allow this exception and let it display usage information and exit.

### Arguments accepting multiple values

Just like any other `argparse.Action`, each `action_hero.Action` handles multiple values and provides relevant error messages.

### FAQ

<dl>
<dt>What do I need to know to use <code>action_hero</code> in my command line application?</dt>
<dd>Vanilla <code>argparse</code> knowledge should do it.</dd>

<dt>Where can I find information about <code>argparse</code>?</dt>
<dd><code>argparse</code> is part of the <a href="https://docs.python.org/3.7/library/argparse.html#module-argparse">Python standard library</a>.</dd>

<dt>Is <code>action_hero</code> tied to the <code>argparse</code> module?</dt>
<dd>Yes <em>(but technically no — any project that can use an <code>argparse.Action</code> should work as long as it handles the <code>argparse.ArgumentError</code> exception)</em></dd>

<dt>What type are the user argument exceptions?</dt>
<dd><code>argparse.ArgumentError{"helpful error message"}</code>, just like any other <code>argparse.Action</code></dd>

<dt>Why re-implement actions already provided by <code>argparse</code> like the <code>choices</code> action?</dt>
<dd>In order to include them in <code>PipelineAction</code>.</dd>

<dt>There was no mention of humans! Does this work for humans?</dt>
<dd>Yes, it works for humans :)</dd>
</dl>

## Catalog

1. __Special__ actions:

| Action | Description | `action_values` |
| --- | --- | --- |
| __`PipelineAction`__ | Run multiple actions as a pipeline | Actions to run as a pipeline. e.g. `[FileExistsAction, FileIsWritableAction]`. ([Read more](#pipelining-multiple-actions)) |
| __`DebugAction`__ | Print debug information. There can be multiple of these in a pipeline | |

2. __Path, Directory and File__ related actions:

| Action | Description | `action_values` |
| --- | --- | --- |
| __`DirectoryDoesNotExistAction`__ | Check if directory does not exist | |
| __`DirectoryExistsAction`__ | Check if directory exists | |
| __`DirectoryIsExecutableAction`__ | Check if directory is executable | |
| __`DirectoryIsNotExecutableAction`__ | Check if directory is not executable | |
| __`DirectoryIsNotReadableAction`__ | Check if directory is not readable | |
| __`DirectoryIsNotWritableAction`__ | Check if directory is not writable | |
| __`DirectoryIsReadableAction`__ | Check if directory is readable | |
| __`DirectoryIsValidAction`__ | Check directory is valid | |
| __`DirectoryIsWritableAction`__ | Check if directory is writable | |
| __`EnsureDirectoryAction`__<sup>*</sup> | Ensure directory exists and create it if it doesn't | |
| __`EnsureFileAction`__<sup>*</sup> | Ensure file exists and create it if it doesn't | |
| __`FileDoesNotExistAction`__ | Check if file doesn't exist | |
| __`FileExistsAction`__ | Check if file exists | |
| __`FileIsEmptyAction`__ | Check if file is empty | |
| __`FileIsExecutableAction`__ | Check if file is executable | |
| __`FileIsNotEmptyAction`__ | Check if file is not empty | |
| __`FileIsNotExecutableAction`__ | Check if file is not executable | |
| __`FileIsNotReadableAction`__ | Check if file is not readable | |
| __`FileIsNotWritableAction`__ | Check if file is not writable | |
| __`FileIsReadableAction`__ | Check if file is readable | |
| __`FileIsValidAction`__ | Check file is valid | |
| __`FileIsWritableAction`__ | Check if file is writable | |
| __`FileHasExtensionAction`__ | Check if file has specified extension | Extensions to check against. e.g. `["md", "markdown"]` |
| __`PathDoesNotExistsAction`__ | Check if path does not exist | |
| __`PathExistsAction`__ | Check if path exists | |
| __`PathIsExecutableAction`__ | Check if path is executable | |
| __`PathIsNotExecutableAction`__ | Check if path is not executable | |
| __`PathIsNotReadableAction`__ | Check if path is not readable | |
| __`PathIsNotWritableAction`__ | Check if path is not writable | |
| __`PathIsReadableAction`__ | Check if path is readable | |
| __`PathIsValidAction`__ | Check if path is valid | |
| __`PathIsWritableAction`__ | Check if path is writable | |
| __`ResolvePathAction`__<sup>†</sup> | Resolves path to canonical path removing symbolic links if present | |

3. __Net & Email__ related actions:

| Action | Description | `action_values` |
| --- | --- | --- |
| __`IPIsValidIPAddressAction`__ | Check if ip is valid ipv4 or ipv6 address | |
| __`IPIsValidIPv4AddressAction`__ | Check if ip address is valid ipv4 address | |
| __`IPIsValidIPv6AddressAction`__ | Check if ip address is valid ipv6 address | |
| __`URLIsNotReachableAction`__ | Check if URL is not reachable | |
| __`URLIsReachableAction`__ | Check if URL is reachable | |
| __`URLWithHTTPResponseStatusCodeAction`__ | Check if supplied URL responds with expected HTTP response status code | [Status codes](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status) to check against. e.g. `["200", "201", "202", "204"]` |
| __`EmailIsValidAction`__ | Checks if email address is valid | |

4. __Type__ related actions:

| Action | Description | `action_values` |
| --- | --- | --- |
| __`IsConvertibleToFloatAction`__ | Check if value is convertible to float | |
| __`IsConvertibleToIntAction`__ | Check if value is convertible to int | |
| __`IsConvertibleToUUIDAction`__ | Checks if value is convertible to UUID | |
| __`IsFalsyAction`__ | Checks if value is falsy | |
| __`IsTruthyAction`__ | Checks if value is truthy | |

5. __Range__ related actions:

| Action | Description | `action_values` |
| --- | --- | --- |

6. __Miscellaneous__ actions:

| Action | Description | `action_values` |
| --- | --- | --- |
| __`ChoicesAction`__ | Argument can only have values from provided choice(s) | Choices e.g. `["red", "blue", "green"]` |
| __`NotifyAndContinueAction`__ | Print provided notification message(s) | Message(s) e.g. `["This command will be deprecated in the next version."]` |
| __`NotifyAndExitAction`__ | Print provided notification message(s) and Exit | Message(s) e.g. `["This command has been deprecated", "Try --new-command"]` |
| __`ConfirmAction`__ | Print provided message and proceed with user confirmation _yes or no_. | Message(s) e.g. `["Proceed to Installation? (Y/n)"]` |
| __`GetInputAction`__<sup>†</sup> | Get user input and save to `self.dest` | Message(s) e.g. `["Favorite color"]` |
| __`GetSecretInputAction`__<sup>†</sup> | Get user input without displaying characters and save to `self.dest` | Message(s) e.g. `["Enter your Password"]` |
| __`LoadJSONFromFileAction`__<sup>†</sup> | Return loaded JSON file(s) | |
| __`LoadYAMLFromFileAction`__<sup>†</sup> | Return loaded YAML file(s) | |
| __`LoadPickleFromFileAction`__<sup>†</sup> | Return unpickled file(s) | |
| __`CollectIntoDictAction`__<sup>†</sup> | Collect values into a dict (see the sketch after this catalog) | Delimiter(s) to split value(s) into key:value pair(s) e.g. `[":", "="]` (If multiple delimiters exist inside a value, only the first match is used) |
| __`CollectIntoListAction`__<sup>†</sup> | Collect values into a list | |
| __`CollectIntoTupleAction`__<sup>†</sup> | Collect values into a tuple | |

<strong><sup>*</sup></strong> Actions that can make changes to disk

<strong><sup>†</sup></strong> Actions that can make changes to `self.dest`

## Development

### Notes

- __Formatting__: _PEP8 only. Please format with black using `black_linelength=79`_
- __License__: _The MIT License_
- __Image Attributions__: _Karate by Alex Auda Samora from the Noun Project_

Thank you for using `action_hero` — [@kadimisetty](https://github.com/kadimisetty) ⭐️✨
actionista-todoist
Actionista Action-Chain CLI for Todoist (actionista-todoist)

Actionista Action CLI for Todoist.

Manage your Todoist tasks from the command line, using powerful filters to select, print, reschedule, and complete tasks in a batch-wise fashion.

Do you have dozens or even hundreds of overdue tasks on your agenda? Clear up your task list in seconds, using the Actionista Action CLI for Todoist. You can now take the rest of the day off with a clear conscience.

This Action CLI for Todoist (todoist-action-cli) operates sequentially on a list of tasks. You start out with a list of all tasks, and then select tasks using one of the many filters available. You can then sort, print, and reschedule the selected tasks.

The Actionista Action-Chain CLI for Todoist (actionista-todoist) is inspired by the powerful find command line tool. It takes the "chain of actions" approach that find uses to find and select files on your hard disk, and applies it to managing your Todoist task list.

The successful find utility works by supplying a sequence of "actions" (also known as "expressions" in the find manual). Most actions are essentially filters, where you list criteria for the files to find. However, the real usability of find is that it can not only print the matching files, but also use the matching files for other actions, e.g. deleting the files, or sending them to other command line tools.

The actionista action-chain CLI takes a similar approach. Starting from the full list of tasks, you can apply filters to find exactly those tasks that you need. Together with other actions, you can print, reschedule, rename, mark-complete, or delete whatever tasks you need. You can invoke as many actions as you need, both filters and other actions, in any order. The actions are invoked in exactly the order you specify.

So if you want, you can filter tasks by e.g. project name, and print all tasks in a given project, then filter by due date, and print again, then reschedule the tasks that were just printed, then filter by exact name, then mark that (or those) remaining task(s) as complete, and finally commit the changes to the server. This example is mostly just to show what is possible, and I personally wouldn't recommend having such a complex list of actions, but you are basically free to list as many (or as few) actions as you want or need. For the record, doing the described sequence of actions would look something like this:

$ todoist-action-cli -project Wedding -print \
    -due before today \
    -print \
    -reschedule tomorrow \
    -name startswith "Pick up the rings" \
    -rename "Remind Tommy to pick up the rings" \
    -commit

Usually, for your own sanity, command line usage would be a little more simple, and have only a single "purpose" with each invocation:

# Basic example: Find tasks containing the string "rings":
$ todoist-action-cli -sync -name "*rings*" -sort -print

The generalized command line usage is:

$ todoist-action-cli [-action [args]] [-action [args]] [...]

You can also import the package from python:

>>> import actionista.todoist
>>> import actionista.todoist.action_cli

Note: This package was formerly imported as rsenv.rstodo.todoist_action_cli, but has now been separated into its own package, imported as: actionista.todoist_action_cli.

NOTE: This application is not created by, affiliated with, or supported by Doist. It is a third-party command line utility that makes use of the official Todoist API, as documented by https://developer.todoist.com/sync/v8/.

INSTALLATION:

Installation with pipx:

For regular end-users, I recommend using pipx to install the Actionista-Todoist command line apps:

$ pipx install actionista-todoist

If you don't have pipx installed, you can refer to the pipx installation guide. Briefly:

$ pip install pipx
$ pipx ensurepath

The last step will add ~/.local/bin to the PATH environment variable. Please make sure to close and restart your terminal/command prompt after installing pipx for the first time.

Known installation errors:

If you are using conda, there is a known issue where you receive an error, "Error: [Errno 2] No such file or directory:", when trying to install packages with pipx. If you get this error, please update your conda python and make sure you are only using the "defaults" channel, not "conda-forge".

Installation with pip:

To install the distribution release package from the Python Packaging Index (PyPI):

$ pip install -U actionista-todoist

Alternatively, install the latest git master source by fetching the git repository from github and installing the package in editable mode (development mode):

$ git clone git@github.com:scholer/actionista-todoist && cd actionista-todoist
$ pip install -U -e .

CONFIGURATION:

Once the actionista-todoist package is installed, you need to obtain a login token from the todoist website: Log into your todoist.com account, go to Settings -> Integrations, and copy the API token. (You can also go directly to the page: https://todoist.com/prefs/integrations).

Now run:

$ actionista-todoist-config

to set up the login token with your Actionista-Todoist installation. The API token is stored in ~/.todoist_token.txt. The actionista-todoist-config command will also create a default config file, ~/.todoist_config.yaml, which you can edit to change default sorting and printing format.

If you re-set your Todoist API token, you can update it either by running:

$ actionista-todoist-config --token <your-token-here>

or by manually editing the file ~/.todoist_token.txt and placing your token in there.

USAGE:

The actionista-todoist package contains several command line apps (CLIs):

- todoist-action-cli - also available as actionista-todoist.
- todoist-cli.
- actionista-todoist-config.

The todoist-action-cli CLI program uses the "action chain" approach, where you specify a sequence of "actions", which are used to filter/select tasks from Todoist and then sort, print, or reschedule the selected tasks in a batch-wise fashion.

The todoist-cli CLI program is used mostly for things that don't fit the "action chain" philosophy. For instance, if you want to add a new task, that doesn't really fit into the todoist-action-cli workflow.(*) Instead, you can use the todoist-cli add-task command to add a new task to Todoist. The todoist-cli is also used for other things, e.g. printing a list of your projects, etc. You can run todoist-cli --help to see all available commands.

Finally, the actionista-todoist-config CLI program is used to set up Actionista-Todoist, configuring your API login token and creating a default configuration file.

(*) The todoist-action-cli can technically be used to add tasks to Todoist, using the -add-task action command - however, this is not the recommended approach.

todoist-action-cli usage:

The general command line usage is:

$ todoist-action-cli [actions]
$ todoist-action-cli [-action [args]]

Where action is one of the following actions:

# Sorting and printing tasks:
  -print                 Print tasks, using a python format string.
  -sort                  Sort the list of tasks, by task attribute in ascending or descending order.

# Task selection (filtering):
  -filter                Generic task filtering method based on comparison with a specific task attribute.
  -has                   Generic task filtering method based on comparison with a specific task attribute.
  -is                    Special -is filter for ad-hoc or frequently-used cases, e.g. `-is not checked`, etc.
  -not                   Convenience `-not` action, just an alias for `-is not`. Can be used as e.g. `-not recurring`.
  -due                   Special `-due [when]` filter. Is just an alias for `-is due [when]`.
  -contains              Convenience filter action using taskkey="content", op_name="contains".
  -startswith            Convenience filter action using taskkey="content", op_name="startswith".
  -endswith              Convenience filter action using taskkey="content", op_name="endswith".
  -glob                  Convenience filter action using taskkey="content", op_name="glob".
  -iglob                 Convenience filter action using taskkey="content", op_name="iglob".
  -eq                    Convenience filter action using taskkey="content", op_name="eq".
  -ieq                   Convenience filter action using taskkey="content", op_name="ieq".
  -content               Convenience adaptor to filter tasks based on the 'content' attribute (default op_name 'iglob').
  -name                  Convenience adaptor to filter tasks based on the 'content' attribute (default op_name 'iglob').
  -project               Convenience adaptor for filter action using taskkey="project_name" (default op_name "iglob").
  -priority              Convenience adaptor for filter action using taskkey="priority" (default op_name "eq").
  -priority-eq           Convenience filter action using taskkey="priority", op_name="eq".
  -priority-ge           Convenience filter action using taskkey="priority", op_name="ge".
  -priority-str          Convenience adaptor for filter action using taskkey="priority_str" (default op_name "eq").
  -priority-str-eq       Convenience filter action using taskkey="priority_str", op_name="eq".
  -p1                    Filter tasks including only tasks with priority 'p1'.
  -p2                    Filter tasks including only tasks with priority 'p2'.
  -p3                    Filter tasks including only tasks with priority 'p3'.
  -p4                    Filter tasks including only tasks with priority 'p4'.

# Updating tasks:
  -reschedule            Reschedule tasks to a new date/time.
  -mark-completed        Mark tasks as completed using method='close'.
  -mark-as-done          Mark tasks as completed using method='close'.

# Synchronizing and committing changes with the server:
  -sync                  Pull task updates from the server to synchronize the local task data cache.
  -commit                Commit is a sync that includes local commands from the queue, emptying the queue. Raises SyncError.
  -show-queue            Show list of API commands in the POST queue.
  -delete-cache          Delete local todoist data cache.
  -print-queue           Show list of API commands in the POST queue.

# Program behavior:
  -verbose, -v           Increase program informational output verbosity.
  -yes, -y               Disable confirmation prompt before enacting irreversible commands, e.g. -commit.
  -help, -h              Print help messages. Use `-help <action>` to get help on a particular action.

To see how to use each filter, type:

$ todoist-action-cli -help <action_name>

E.g.:

$ todoist-action-cli -help project
$ todoist-action-cli -help filter
$ todoist-action-cli -help reschedule

As you can see, typical usage is:

$ todoist-action-cli -sync [one or more filter actions to select the tasks] -sort -print

The filter actions could be e.g. filtering by -name (same as -content), -project, due_date_local_iso, etc. The -sync action is optional; if you do not specify -sync, the program will just re-use the old cache from the last time you invoked -sync. You must invoke -sync at least once, when you first install this package, and you should always -sync if you have made any changes (e.g. from your phone) since your last sync. Finally, the -sort and -print commands will sort and print the selected tasks.

If you need to refine your filters, just run the command again. The data is cached locally, so if you omit the -sync action, commands can be executed in rapid succession.

Another example, to reschedule the due date for a bunch of tasks, would look like:

$ todoist-action-cli [-sync] [filter actions] [-sort] [-print] -reschedule "Apr 21" -commit

NOTE: I strongly recommend that you -print the filtered tasks before you -rename or -reschedule the tasks. When you invoke -commit, the changes cannot be undone automatically, so you may easily end up with a bunch of identically-named tasks with the same due date if you forgot to apply the correct selection filters before renaming or rescheduling the tasks! For this reason, the program will, by default, ask you for confirmation before every -commit.

Action arguments:

Each action can be given a set of arguments, listed sequentially and separated by spaces. If one argument contains spaces, e.g. you are filtering by tasks in the project "Meeting notes", then you need to quote the argument as such:

$ todoist-action-cli -sync -project "Meeting notes" -sort "project_name,content" ascending -print

Here, we provided one argument to the -project action ("Meeting notes"), and two arguments to the -sort action ("project_name,content" and ascending).

Some of the actions attempt to be "clever" when interpreting the arguments given. For instance, when filtering by project, you can do either:

$ todoist-action-cli -project "Wedding*"
$ todoist-action-cli -project glob "Wedding*"
$ todoist-action-cli -project startswith Wedding

The general signature for the -project action is:

$ todoist-action-cli -project [operator] value

Here, [operator] is the name of one of the many registered binary operators. These are used to compare the tasks against a given value. In the example above, if you do not specify any operator, then the "glob" operator is used. The "glob" operator allows you to use wild-cards for selecting tasks, the same way you select files on the command line. In our case, we "glob" against tasks with a project name starting with the string "Wedding*". We could also have used the "startswith" operator and omitted the asterisk: startswith Wedding.

For more info on how to use operators, see:

$ todoist-action-cli -help operators

Ad-hoc CLI:

Installing this project (actionista-todoist) with pip will also give you some "ad-hoc" command line interface entry points:

$ todoist-adhoc <command> <args>
$ todoist-adhoc print-query <query> [<print-fmt>]
$ todoist-adhoc print-completed-today [<print-fmt>]
$ todoist-adhoc print-today-or-overdue-items [<print-fmt>]

# And a couple of endpoints with convenient defaults, e.g.:
$ todoist_today_or_overdue

Prior art: Other python-based Todoist projects

Other Todoist CLI packages that I know about:

- todoist-cli - A command line interface for batch creating Todoist tasks from a file. Makes manual requests against the web API url (rather than using the official todoist-python package). No updates since January 2016 (except setup.py and requirements.txt). This probably doesn't work, given that it uses an old, obsolete API endpoint URL.
- todoicli - A rather new project (as of April 2018). Focuses on pre-defined queries for listing tasks, e.g. "today and overdue", "next 7 days", etc. Lots of other functionality, pretty extensive code base. Has integration with the toggl.com time tracking service. Uses the official todoist-python package, but still uses the "universal UTC" time format from the old v7 Sync API, and the v7 due-date structure. That means it probably won't work anymore, given that the v7 Sync API has been deprecated and the todoist-python package has switched to Sync API v8. For instance, the `list td` and `list n7` commands use `item['due_date_utc']`, which will raise a KeyError with the new v8 data structure. The CLI also expects IDs (task_id, label_id, project_id) rather than text names, making it rather difficult to use for the end user.
- pydoist - A basic CLI to add Todoist tasks from the command line. Uses the official todoist-python python API from Todoist. Latest release was November 2016, so it may not work with the new v8 Sync API.

Other general python Todoist packages:

- python-todoist - The official python 'Todoist' package from Doist (the company behind Todoist). Is currently using the version 8.0 "Sync" API.
- pytodoist - An alternative Todoist API package. Also uses the v7 Sync API. A rather different approach to API wrapping, perhaps more object oriented. Focused on modelling individual Users/Projects/Tasks/Notes, where the official todoist-python package has managers as the central unit (ItemsManager, ProjectsManager, NotesManager). Last update November 2016 - will be obsolete when the v7 Sync API is removed.
action-items-local
PyPI package for Circles action-items-local (Python).
actionkit
No description available on PyPI.
actionkit-templates
ActionKit Template Renderer
===========================

### What this is

If you use [ActionKit](http://actionkit.com/) and edit its templates, then you might want to see what they look like locally. If you install this (`pip install actionkit-templates`) then you can run

    aktemplates runserver

You can also run it on a different port than the default, like so:

    aktemplates runserver 0.0.0.0:1234

in a directory where you have a set of ActionKit templates (`wrapper.html`, etc), and then you can view them from a local port. This runs Django in a similar environment to the one ActionKit runs itself.

Environment
===========

You can set some environment variables that will help you develop locally (e.g. static file versions). This is a 0.1 codebase, so things might change across versions -- probably limiting the full Django manage.py context and exposing those things by command line instead. In the meantime, you can look at `actionkit_templates/settings.py` and search for `os.environ` for what it does.

TEMPLATE_DIR: By default we search the local directory and a directory called template_set. If you run:

    TEMPLATE_DIR=actionkittemplates aktemplates runserver

it will also look in the directory `actionkittemplates/`

STATIC_ROOT: By default we serve the `./static/` directory at /static/. This goes well with code in your wrapper.html template like this:

```
{% if args.env == "dev" or devenv.enabled or 'devdevdev' in page.custom_fields.layout_options %}
  <!-- development of stylesheets locally -->
  <link href="/static/stylesheets/site.css" rel="stylesheet" />
{% else %}
  <!-- production location of stylesheets -->
  <link href="https://EXAMPLE.COM/static/stylesheets/site.css" rel="stylesheet" />
{% endif %}
```

STATIC_FALLBACK: In the occasional moment when you are developing without an internet connection, this will fail to load:

```
{% load_js %}
//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js
{% end %}
```

In that situation, if you set STATIC_FALLBACK to a directory where, e.g. `jquery.min.js` is present, then it will look for all the internet-external files in that directory. Note that this only works with `load_js` and `load_css` template tags.

Contributing When You Run Into Something Not Covered
====================================================

Template Tags
-------------

Usually, these are easy to add here: `actionkit_templates/templatetags/actionkit_tags.py`

We should aim for support of all these: https://roboticdogs.actionkit.com/docs/manual/guide/customtags.html

Extra contexts
--------------

If you make a context that's not covered already, please contribute a patch to `actionkit_templates/contexts/`. Note that these are also useful to browse, to see what variables you can access from a particular page context.
actionlint-py
Note: for pre-commit hooks I recommend the officially supported hooks. See docs: https://github.com/rhysd/actionlint/blob/main/docs/usage.md#pre-commit

actionlint-py

A python wrapper to provide a pip-installable actionlint binary.

Internally this package provides a convenient way to download the pre-built actionlint binary for your particular platform.

Installation

pip install actionlint-py

Usage

After installation, the actionlint binary should be available in your environment (or actionlint.exe on windows). Remember to add your Scripts folder to PATH.

As a pre-commit hook

See pre-commit for an introduction.

I recommend using the officially supported pre-commit hooks from actionlint itself. See docs: https://github.com/rhysd/actionlint/blob/main/docs/usage.md#pre-commit

Use this repo if you can not use the officially supported hooks (docker, golang, system) and you are fine with the python pip wrapper.

Sample .pre-commit-config.yaml using pip as package manager:

- repo: https://github.com/Mateusz-Grzelinski/actionlint-py
  rev: v1.6.26.11
  hooks:
    - id: actionlint
      additional_dependencies: [pyflakes>=3.0.1, shellcheck-py>=0.9.0.5]
      # actionlint has built in support for pyflakes and shellcheck, sadly they will not be auto updated.
      # Check https://pypi.org/project/actionlint-py/ for latest version. Alternatively:
      # args: [-shellcheck=/path/shellcheck -pyflakes=/path/pyflakes]
      # note - invalid path in arguments will fail silently

Because actionlint-py is available as a source distribution, the pip build system will fetch the binary from (public) github. This might cause problems with a corporate proxy. In case of problems, try this semi-manual setup that respects your pip.ini:

- repo: local
  hooks:
    - id: actionlint
      name: actionlint
      description: Lint GitHub workflows with actionlint
      additional_dependencies: [actionlint-py]
      # additional_dependencies: [actionlint-py==1.6.26.11]  # safer, but pre-commit autoupdate will not work
      # note: the pip versioning scheme is different from the actionlint binary: not "v1.6.26" but "1.6.26.11" (last number is build system version)
      entry: actionlint
      # args: [-ignore "*.set-output. was deprecated.*"]
      language: python
      types: ["yaml"]
      files: "^.github/workflows/"

Alternative methods of running actionlint

As pre-commit hooks

See the official docs for pre-commit integration:

- repo: https://github.com/rhysd/actionlint
  rev: v1.6.26
  hooks:
    - id: actionlint
    # - id: actionlint-docker
    # - id: actionlint-system

Use as a github action step

Use directly in a github action, see the official docs for github action integration:

name: Lint GitHub Actions workflows
on: [push, pull_request]

jobs:
  actionlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Download actionlint
        id: get_actionlint
        run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
        shell: bash
      - name: Check workflow files
        run: ${{ steps.get_actionlint.outputs.executable }} -color
        shell: bash

Or using docker:

name: Lint GitHub Actions workflows
on: [push, pull_request]

jobs:
  actionlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Check workflow files
        uses: docker://rhysd/actionlint:latest
        with:
          args: -color

Development

Development of the wrapper and releasing a new version: see README-DEV.md

Roadmap

- Add actionlint hook as docker
  - support shellcheck-py in docker image
  - auto update docker version in .pre-commit-hooks.yaml when using _custom_build/auto_update_main.py
  - add shellcheck-py as dependency (or at least document it)
- Update tag in readme in github action when releasing new version
- Upload also binary distribution, not only source distribution
- Add unit tests to build system

See README-DEV.md for more TODOs.

Won't do unless asked:

- support all platforms that actionlint supports (like freebsd)
action-optimizer
No description available on PyPI.
actionpack
a lib for describing Actions and how they should be performed

Overview

Side effects are annoying. Verification of intended outcome is often difficult and can depend on the system's state at runtime. Questions like "Is the file going to be present when data is written?" or "Will that service be available?" come to mind. Keeping track of external system state is just impractical, but declaring intent and encapsulating its disposition is doable.

Usage

What are Actions for?

Action objects are used to declare intent:

>>> action = Read('path/to/some/file')

The action, above, represents the intent to Read the contents from the file at some path. An Action can be "performed" and the result is captured by a Result object:

>>> result = action.perform()  # produces a Result object

The result holds disposition information about the outcome of the action. That includes information like whether or not it was .successful, or that it was .produced_at some unix timestamp (microseconds by default). To gain access to the value of the result, check the .value attribute. If unsuccessful, there will be an Exception, otherwise there will be an instance of some non-Exception type.

Can Actions be connected?

A Result can be produced by performing an Action, and that value can be percolated through a collection of ActionTypes using the Pipeline abstraction:

>>> pipeline = Pipeline(ReadInput('Which file? '), Read)

The above is not the most helpful incantation, but toss the following in a while loop and witness some REPL-like behavior (bonus points for feeding it actual filenames/filepaths).

result = Pipeline(ReadInput('Which file? '), Read).perform()
print(result.value)

Sometimes ActionTypes in a Pipeline don't "fit" together. That's where the Pipeline.Fitting comes in:

listen = ReadInput('What should I record? ')
record = Pipeline.Fitting(
    action=Write,
    **{
        'prefix': f'[{datetime.now()}] ',
        'append': True,
        'filename': filename,
        'to_write': Pipeline.Receiver,
    },
)
Pipeline(listen, record).perform()

⚠️ NOTE: Writing to stdout is also possible using the Write.STDOUT object as a filename. How that works is an exercise left for the user.

Handling multiple Actions at a time

An Action collection can be used to describe a procedure:

actions = [
    action,
    Read('path/to/some/other/file'),
    ReadInput('>>> how goes? <<<\n> '),
    MakeRequest('GET', 'http://google.com'),
    RetryPolicy(
        MakeRequest('GET', 'http://bad-connectivity.com'),
        max_retries=2,
        delay_between_attempts=2,
    ),
    Write('path/to/yet/another/file', 'sup'),
]

procedure = Procedure(actions)

And a Procedure can be executed synchronously or otherwise:

results = procedure.execute()  # synchronously by default
_results = procedure.execute(synchronously=False)  # async; not thread safe
result = next(results)
print(result.value)

A KeyedProcedure is just a Procedure comprised of named Actions. The Action names are used as keys for convenient result lookup.

prompt = ">>> sure, I'll save it for ya.. <<<\n> "
saveme = ReadInput(prompt).set(name='saveme')
writeme = Write('path/to/yet/another/file', 'sup').set(name='writeme')
actions = [saveme, writeme]
keyed_procedure = KeyedProcedure(actions)
results = keyed_procedure.execute()
keyed_results = dict(results)
first, second = keyed_results.get('saveme'), keyed_results.get('writeme')

⚠️ NOTE: Procedure elements are evaluated independently, unlike with a Pipeline, in which the result of performing an Action is passed to the next ActionType.

For the honeybadgers

One can also create an Action from some arbitrary function:

>>> Call(closure=Closure(some_function, arg, kwarg=kwarg))

Development

Setup

Build scripting is managed via noxfile. Execute nox -l to see the available commands (set the USEVENV environment variable to view virtualenv-oriented commands). To get started, simply run nox. Doing so will install actionpack on your PYTHONPATH. Using the USEVENV environment variable, a virtualenv can be created in the local ".nox/" directory with something like:

USEVENV=virtualenv nox -s actionpack-venv-install-3.10

All tests can be run with nox -s test and a single test can be run with something like the following:

TESTNAME=<tests-subdir>.<test-module>.<class-name>.<method-name> nox -s test

Coverage reports are optional and can be disabled using the COVERAGE environment variable set to a falsy value like "no".

Homebrewed Actions

Making new actionpack.actions is straightforward. After defining a class that inherits Action, ensure it has an .instruction method. If any attribute validation is desired, a .validate method can be added.

There is no need to add Action dependencies to setup.py. Dependencies required for developing an Action go in ::: drum roll ::: requirements.txt. When declaring your Action class, a requires parameter can be passed a tuple:

class MakeRequest(Action, requires=('requests',)):
    ...

This will check if the dependencies are installed and, if so, will register each of them as class attributes:

mr = MakeRequest('GET', 'http://localhost')
mr.requests
#=> <module 'requests' from '~/actionpack/actionpack-venv/lib/python3/site-packages/requests/__init__.py'>
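To make the homebrewed-action recipe concrete, here is a hedged sketch of a custom Action. The .instruction/.validate contract is taken from the description above, but the import path, the exact method signatures, and the Result plumbing are assumptions, so treat this as a shape rather than confirmed API.

from actionpack import Action  # import path is an assumption


class ReadLineCount(Action):
    """Declare the intent to count lines in a file."""

    def __init__(self, filename: str):
        self.filename = filename

    def validate(self):
        # optional validation hook per the docs; raise here to fail fast (assumed contract)
        if not self.filename:
            raise ValueError('filename is required')
        return self

    def instruction(self):
        # the work done when the Action is performed (assumed contract)
        with open(self.filename) as f:
            return len(f.readlines())


result = ReadLineCount('path/to/some/file').perform()
print(result.value)  # line count, or the Exception if the file was missing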
actionpi
ActionPi [WIP]
Action/Dash camera powered by Raspberry Pi Zero

Introduction
ActionPi is a DIY project that gives everyone the opportunity to build an action/dash camera with a 20$ budget.

✨ Features
📽 Full HD video recordings
📥 Download video easily using USB cable or WiFi connection
💡 Rolling video appenders: record video in a circular buffer, never overwrite on reboot
🔨 Robust design: temperature control, OS read-only partition & low-latency write-to-disk (never miss a single frame on power loss)
🕹 3D printable case: it's available on Thingiverse

🚧 Working on documentation, please bear with me 🚧

Docs
Check out the wiki to read how to set up your personal ActionPi.

Do you want to build it on your own and/or contribute?
I'm currently looking for someone interested in building a prototype for personal usage and also helping me in the documentation phase of this project.
actionqueues
Action Queuesactionqueuesis a lightweight way to queue up commands for execution and rollback on failures.The idea is that it provides a framework for safely executing sequences of action with side-effects, like database writes, that might need later rolling back if later actions fail. In addition, it provides a standardised way for actions to be retried.For example, a user sign up process may write to several different systems. If one system is down, then the other systems modified so far need cleaning up before the failure is propagated back to the user. Usingactionqueueswith an action for each external system to be modified enables this pattern, along with simple retry semantics for likely-transient failures such as network blips.Installingpip install actionqueuesUsing Action QueuesIt's barebones, the main point is to provide a framework to work within for actions that have side effects. It's basically the Command pattern, with a tiny execution framework.AnActionis the lowest-level building block. It's any object withexecuteandrollbackmethods. TheActionis what handles executing each step of the overall workflow, and rolling back any changes made to external state. It's a single object so it can save state for rollback -- for example, primary keys for any created database rows so they can be deleted during rollback.The main task of a user ofactionqueuesis to create theActionclasses which implement the tasks their workflows require.OnceActionclasses are written, they can be executed. AnActionQueueholdsActionsfor execution and rollback. AddActionobjects to anActionQueuefor execution. CallActionQueue#executeto start running each action'sexecutemethod in the order theActionobjects were added to theActionQueue. Behaviour after this point is controlled by theexecuteandrollbackmethods on theActionobjects being executed by the queue.Normal operationThe default case is that no exception is raised during anexecuteand the next action in the queue is hasexecutecalled. This is shown below for a sequence ofActionobjects within anActionQueue.Exceptions duringexecutecause rollbackIf anAction#executeraises an exception, the ActionQueue notes where it's up to in the Actions queued up and then propagates the exception back up to the caller.It is then the caller's responsibility to catch the exception and then to callActionQueue#rollback. This is so the caller can know that the queue of actions failed and is able to log the exception before callingrollback.CallingActionQueue#rollbackwill execute therollbackmethod on all actions where theexecutemethod was called, including the one raising the exception, in the reverse order to that which theexecutemethod was called.Rollback will not be called on actions whereexecutehas not been called.Again, the default case at this point is thatrollbackmethods succeed and don't throw exceptions, leading to each being executed in succession.Exceptions duringrollbackIn contrast to a raised exception fromexecute, if an exception is raised during therollbackmethod, theActionQueuewill silently swallow the exception and continue executing therollbackmethods of earlierActionobjects in the queue.This is because, in the rollback scenario, it's most likely that all rollback actions should happen so the library assumes this. Thereforerollbackmethods should do their own logging of exceptions before re-raising them.Retrying failed operationsThere is an exception to the above rules. 
If the execute or rollback method raises an actionqueue.ActionRetryException, then the execute or rollback method will be called again. The ActionRetryException init method takes an optional ms_backoff argument to specify a time to sleep before trying the method again, in milliseconds.
The ActionQueue will retry as long as the action keeps raising ActionRetryException, so the action must maintain a retry count to avoid endless retries. See below for some helper classes which cover common cases.

Example

import random
from actionqueues import actionqueue, action

SUCCEED = 0
RETRY = 1
FAIL = 2

class MyAction(action.Action):

    def __init__(self, id):
        self._id = id
        self._value = 0

    def execute(self):
        """Called if actions before it in the queue complete successfully.

        Raise any exception to indicate failure.
        """
        action = random.choice([SUCCEED, RETRY, FAIL])
        if action == RETRY:
            print(self._id, "Throwing retry exception")
            raise actionqueue.ActionRetryException(ms_backoff=0)
        elif action == FAIL:
            print(self._id, "Throwing failure exception")
            raise Exception()
        else:
            print(self._id, "Executing success action")
            self._value = 1

    def rollback(self):
        """Called in reverse order for all actions queued whose execute
        method was called when the ActionQueue's rollback method is called.
        """
        print(self._id, "Rolling back action")
        if self._value == 1:
            self._value = 0

q = actionqueue.ActionQueue()
q.add(MyAction("a"))
q.add(MyAction("b"))

try:
    q.execute()
except:
    q.rollback()

Retry exception helpers
It can be tedious to keep track of the backoff and retry count for an action. Therefore actionqueues provides helpers for this called exception factories. These are created when the Action is initialised, and when an execute method hits a retriable exception, it calls the factory's raise_exception() method. In general, this will throw ActionRetryException exceptions for a given number of retries, then throw a generic exception, or one provided by the Action object.
Using separate ExceptionFactory objects for execute and rollback is usually required.
The available exception factories are: DoublingBackoffExceptionFactory, which will throw a configurable number of ActionRetryException exceptions, each doubling its backoff time.
In this example, the ZeroDivisionError will cause 5 retries, at 100, 200, 400, 800 and 1600ms delays, by using a DoublingBackoffExceptionFactory:

from actionqueues import actionqueue, action
from actionqueues.exceptionfactory import DoublingBackoffExceptionFactory

class MyFailingAction(action.Action):

    def __init__(self):
        self._run = 1
        self._execute_ex_factory = DoublingBackoffExceptionFactory(
            retries=5, ms_backoff_initial=100
        )
        self._rollback_ex_factory = DoublingBackoffExceptionFactory(
            retries=10, ms_backoff_initial=100
        )

    def execute(self):
        """Execute an always failing action, but have it retried 5 times."""
        print("Executing action", self._run)
        self._run += 1
        try:
            1 / 0
        except ZeroDivisionError as ex:
            self._execute_ex_factory.raise_exception(original_exception=ex)

    def rollback(self):
        print("Rollback action", self._run)
        self._run += 1
        try:
            1 / 0
        except ZeroDivisionError as ex:
            self._rollback_ex_factory.raise_exception(original_exception=ex)

q = actionqueue.ActionQueue()
q.add(MyFailingAction())

try:
    q.execute()
except:
    print("boom")
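As a more concrete illustration of the sign-up scenario from the overview, here is a minimal sketch of an action with a real side effect to undo. Only the Action, ActionQueue, execute and rollback usage comes from the API described above; create_user and delete_user are hypothetical stand-ins for your own storage layer:

from actionqueues import actionqueue, action

def create_user(username):
    # Hypothetical stand-in for a database write; returns a primary key.
    print("creating", username)
    return 42

def delete_user(user_id):
    # Hypothetical stand-in for the compensating delete.
    print("deleting", user_id)

class CreateUserAction(action.Action):

    def __init__(self, username):
        self._username = username
        self._user_id = None  # state saved so rollback can undo the write

    def execute(self):
        self._user_id = create_user(self._username)

    def rollback(self):
        # Only undo the write if execute actually created something.
        if self._user_id is not None:
            delete_user(self._user_id)

q = actionqueue.ActionQueue()
q.add(CreateUserAction("alice"))
try:
    q.execute()
except Exception:
    q.rollback()
    raise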
action-react
action-react
action-react is a tool that reacts to Telegram chat messages.
Project Board

Overview
action-react specifically listens to an account in a Telegram chat and makes corresponding HTTP calls given the content of the message. This is inspired by tuixue.online-visa, a tool that periodically checks and publishes US consulate visa appointment availabilities across the world.
A possible application of this tool is to use it to listen to a Telegram bot publishing messages about new visa appointments, and make HTTP calls to the consulate appointment service to grab the desired spot.

Installation
Install the library's dependencies and build the library using:

pip install action-react

Usage
In your code, begin by importing the package (note the underscore: a hyphenated name is not a valid Python import):

from action_react import main

You can connect it to a Telegram chat using:

main(api_id, api_hash, phone, username, target_date, cities)

For example, you can use main("123", "hash123", "+12345678901", "username", datetime.date(2022, 2, 2), ["boston", "houston"]) to start the function.
Alternatively, you can directly run python main.py after setting up the configurations in config.ini.
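Since the README points at config.ini but doesn't document its layout, here is a hedged sketch of wiring a config file to main() with the standard library's configparser; the [telegram] section and key names are assumptions that simply mirror main()'s parameters:

import configparser
import datetime

from action_react import main  # module name assumed, as noted above

config = configparser.ConfigParser()
config.read("config.ini")

tg = config["telegram"]  # hypothetical section name
main(
    tg["api_id"],
    tg["api_hash"],
    tg["phone"],
    tg["username"],
    datetime.date.fromisoformat(tg["target_date"]),      # e.g. 2022-02-02
    [city.strip() for city in tg["cities"].split(",")],  # e.g. boston,houston
)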
actionrules-lukassykora
Action RulesAction Rules (actionrules) is an implementation of Action Rules from Classification Rules algorithm described inDardzinska, A. (2013). Action rules mining. Berlin: Springer.If you use this package, please cite:Sýkora, Lukáš, and Tomáš Kliegr. "Action Rules: Counterfactual Explanations in Python." RuleML Challenge 2020. CEUR-WS.http://ceur-ws.org/Vol-2644/paper36.pdfGIT repositoryhttps://github.com/lukassykora/actionrulesInstallationpip install actionrules-lukassykoraJupyter NotebooksTitanicIt is the best explanation of all possibilities.TelcoA brief demonstration.RasBased on the example in (Ras, Zbigniew W and Wyrzykowska, ARAS: Action rules discovery based on agglomerative strategy, 2007).AttritionHigh-Utility Action Rules Mining example.Example 1Get data from csv. Get action rules from classification rules. Classification rules have confidence 55% and support 3%. Stable part of action rule is "Age". Flexible attributes are "Embarked", "Fare", "Pclass". Target is a Survived value 1.0. No nan values. Use reduction tables for speeding up. Minimal 1 stable antecedent Minimal 1 flexible antecedentfromactionrules.actionRulesDiscoveryimportActionRulesDiscoveryactionRulesDiscovery=ActionRulesDiscovery()actionRulesDiscovery.read_csv("data/titanic.csv",sep="\t")actionRulesDiscovery.fit(stable_attributes=["Age"],flexible_attributes=["Embarked","Fare","Pclass"],consequent="Survived",conf=55,supp=3,desired_classes=["1.0"],is_nan=False,is_reduction=True,min_stable_attributes=1,min_flexible_attributes=1,max_stable_attributes=5,max_flexible_attributes=5)actionRulesDiscovery.get_action_rules()The output is a list where the first part is an action rule and the second part is a tuple of (support before, support after, action rule support) and (confidence before, confidence after, action rule confidence).Example 2Get data from pandas dataframe. Get action rules from classification rules. Classification rules have confidence 50% and support 3%. Stable attributes are "Age" and "Sex". Flexible attributes are "Embarked", "Fare", "Pclass". Target is a Survived that changes from 0.0 to 1.0. No nan values. Use reduction tables for speeding up. Minimal 1 stable antecedent Minimal 1 flexible antecedentfromactionrules.actionRulesDiscoveryimportActionRulesDiscoveryimportpandasaspddataFrame=pd.read_csv("data/titanic.csv",sep="\t")actionRulesDiscovery=ActionRulesDiscovery()actionRulesDiscovery.load_pandas(dataFrame)actionRulesDiscovery.fit(stable_attributes=["Age","Sex"],flexible_attributes=["Embarked","Fare","Pclass"],consequent="Survived",conf=50,supp=3,desired_changes=[["0.0","1.0"]],is_nan=False,is_reduction=True,min_stable_attributes=1,min_flexible_attributes=1,max_stable_attributes=5,max_flexible_attributes=5)actionRulesDiscovery.get_pretty_action_rules()The output is a list of action rules in pretty text form.
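Since get_action_rules() returns, per the description above, a list pairing each action rule with its (support before, support after, action rule support) and (confidence before, confidence after, action rule confidence) tuples, the results can be inspected with a small loop. The exact nesting below is inferred from that description, so treat it as a sketch and inspect one element first if the unpacking fails:

for rule, measures in actionRulesDiscovery.get_action_rules():
    supports, confidences = measures  # inferred structure
    print("rule:", rule)
    print("  support (before, after, action rule):", supports)
    print("  confidence (before, after, action rule):", confidences)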
actions
UNKNOWN
actionsBy
No description available on PyPI.
action-sdk-for-cache-mock
No description available on PyPI.
actions-includes
:warning: This project ismostlyabandoned.You are probably better usingComposite ActionsorReusable workflows.actions-includesAllows including an action inside another action (by preprocessing the YAML file).Instead of usingusesorrunin your action step, use the keywordincludes.Once you are using theincludesargument, the workflows can be expanded using this tool as follows:# python -m actions_include <input-workflow-with-includes> <output-workflow-flattened>python-mactions_includes./.github/workflows-src/workflow-a.yml./.github/workflows/workflow-a.ymlUsage with dockerdockercontainerrun--rm-it-v$(pwd):/github/workspace--entrypoint=""ghcr.io/mithro/actions-includes/image:mainpython-mactions_includes./.github/workflows-src/workflow-a.yml./.github/workflows/workflow-a.ymlincludes:stepsteps:-name:Other steprun:|command-includes:{action-name}with:{inputs}-name:Other steprun:|commandThe{action-name}follows the same syntax as the standard GitHub actionusesand the action referenced should look exactly like aGitHub "composite action"exceptruns.usingshould beincludes.For example;{owner}/{repo}@{ref}- Public action ingithub.com/{owner}/{repo}{owner}/{repo}/{path}@{ref}- Public action under{path}ingithub.com/{owner}/{repo}../{path}- Local action under local{path}, IE./.github/actions/{action-name}.As it only makes sense to reference composite actions, thedocker://form isn't supported.As you frequently want to include local actions,actions-includesextends the{action-name}syntax to also support:/{name}- Local action under./.github/includes/actions/{name}.This is how composite actions should have worked.includes-script:You can include a script (e.g., a Python or shell script) in your workflow.yml file using theincludes-scriptstep.Example script file:script.pyprint('Hello world')To include the script, reference it in anincludes-scriptaction in yourworkflow.yml, like so:steps:-name:Other steprun:|command-name:Helloincludes-script:script.py-name:Other steprun:|commandWhen the workflow.yml is processed by runningpython -m actions_includes.py workflow.in.yml workflow.out.yml, the resultantworkflow.out.ymllooks like this:steps:-name:Other steprun:|command-name:Helloshell:pythonrun:|print('Hello world')-name:Other steprun:|commandTheshellparameter is deduced from the file extension, but it is possible to use a custom shell by setting theshellparameter manually.Using a pre-commit hookWhen you use actions-includes, it may be useful to add a pre-commit hook (seehttps://git-scm.com/docs/githooks) to your project so that your workflow files are always pre-processed before they reach GitHub.With a git hooks packageThere are multiple packages (notablypre-commit; seehttps://pre-commit.com/) that support adding pre-commit hooks.In the case of using thepre-commitpackage, you can add an entry such as the following to yourpre-commit-config.yamlfile:- repo: local hooks: - id: preprocess-workflows name: Preprocess workflow.yml entry: python -m actions_includes.py workflow.in.yml workflow.out.yml language: system always-run: trueWithout a git hooks packageAlternatively, to add a pre-commit hook without installing another package, you can simply create or modify.git/hooks/pre-commit(relative to your project root). A sample file typically lives at.git/hooks/pre-commit.sample.The pre-commit hook should run the commands that are necessary to pre-process your workflows. 
So, your .git/hooks/pre-commit file might look something like this:

#!/bin/bash
python -m actions_includes workflow.in.yml workflow.out.yml || {
    echo "Failed to preprocess workflow file."
    exit 1
}

To track this script in source code management, you'll have to put it in a non-ignored file in your project that is then copied to .git/hooks/pre-commit as part of your project setup. See https://github.com/ModularHistory/modularhistory for an example of a project that does this with a setup script (setup.sh).
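If you prefer to keep the hook logic in Python rather than bash, the same preprocessing step can be driven through the standard library's subprocess module; a small sketch invoking the documented CLI, with the file names taken from the example above:

import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "actions_includes", "workflow.in.yml", "workflow.out.yml"]
)
if result.returncode != 0:
    print("Failed to preprocess workflow file.", file=sys.stderr)
    sys.exit(result.returncode)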
actions-in-fly
Test project
actions-ips
No description available on PyPI.
actionsMCP
This package provides the necessary parameters to perform actions on MCP. The code is compiled and can only run on Ubuntu 22.04. In order to use it, you will need to be included in the list of allowed addresses.
actions-paket-test-vol1234
-auto-release
Sample auto release repository to PyPI using GitHub Actions.
Contains files for automatic deployment to PyPI using GitHub Actions and GitHub tags.
actions-pipeline-example
No description available on PyPI.
actions-python-core
actions-python-core
Core functions for setting results, logging, registering secrets and exporting variables across actions

Usage
Import the package:

from actions import core

Inputs/Outputs
Action inputs can be read with get_input, which returns a str, or get_boolean_input, which parses a boolean based on the YAML 1.2 specification. If required is set to false, the input should have a default value in action.yml.
Outputs can be set with set_output, which makes them available to be mapped into inputs of other actions to ensure they are decoupled.

my_input = core.get_input("input_name", required=True)
my_boolean_input = core.get_boolean_input("boolean_input_name", required=True)
my_multiline_input = core.get_multiline_input("multiline_input_name", required=True)
core.set_output("output_key", "output_value")

TBD
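To illustrate the required=False path described above, where the default value comes from action.yml, here is a short sketch; the input names are illustrative:

from actions import core

color = core.get_input("color", required=False)  # falls back to the default declared in action.yml
debug = core.get_boolean_input("debug", required=False)
core.set_output("chosen_color", color)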
actions-python-github
actions-python-githubA hydrated githubkit client.UsageImport the packagefromactionsimportgithubTBD
actionspytoolkit
actions-pytoolkitGitHub Actions Development Tools in Python
actions-security-analyzer
asa
asa (actions-security-analyzer) is a tool to analyze the security posture of your GitHub Actions.

Installation
Make sure you have $HOME/.local/bin in your PATH.

pip install actions-security-analyzer

Usage

asa --file action.yml
asa -d directory-with-actions/ --verbose
asa --file action.yml --ignore-warnings
asa --list-checks

Use asa in Your GitHub Workflows
Example:

name: 'Run Actions Security Analyzer'
on:
  push:
    branches:
      - main
      - dev
    paths:
      - '.github/workflows/**'
jobs:
  RunAsa:
    runs-on: ubuntu-latest
    steps:
      - name: "Checkout repo"
        uses: actions/checkout@96f53100ba2a5449eb71d2e6604bbcd94b9449b5 # v3.5.3
      - name: "Run asa scanner"
        uses: "bin3xish477/asa@ee733379e314d44f1a960a70339ee5e5d19e404d"
        with:
          dir: "./actions/"
          verbose: true
          no-summary: true
          ignore-checks: 'check_for_inline_script check_for_cache_action_usage'

Checks Performed by asa

Name: check_for_3p_actions_without_hash, Level: FAIL
This check identifies any third-party GitHub Actions in use that have been referenced via a version number such as v1.1 instead of a commit SHA hash. Using a hash can help mitigate supply chain threats in a scenario where a threat actor has compromised the source repository where the 3P action lives.

Name: check_for_allow_unsecure_commands, Level: FAIL
This check looks for the usage of an environment variable called ACTIONS_ALLOW_UNSECURE_COMMANDS, which gives an Action access to dangerous commands (get-env, add-path) that can lead to code injection and credential-theft opportunities.

Name: check_for_cache_action_usage, Level: WARN
This check finds any usage of GitHub's caching Action (actions/cache), which may result in sensitive information disclosure or cache poisoning.

Name: check_for_dangerous_write_permissions, Level: FAIL
This check looks for write permissions granted to potentially dangerous scopes such as the contents scope, which may allow an adversary to write code into the target repository if they're able to compromise the workflow. It also looks for usage of write-all, which gives the action complete write access to all scopes.

Name: check_for_inline_script, Level: WARN
This check simply warns that you're using an inline script instead of a GitHub Action. Inline scripts are susceptible to script injection attacks (another check covered by asa). It is recommended to write an action and pass any required context values as inputs to that action, which removes the script injection vector because action inputs are properly treated as arguments and are not evaluated as part of a script.

Name: check_for_pull_request_target, Level: FAIL
This check looks for the usage of the dangerous event trigger pull_request_target, which allows workflow executions to run in the context of the repository that defines the workflow, not the repository that the pull request originated from, potentially allowing a threat actor to gain access to a repository's sensitive secrets!

Name: check_for_script_injection, Level: FAIL
This check looks for the most commonly known security risk to GitHub Actions - script injection. Script injection occurs when an action directly includes (using the ${{ ... }} syntax) GitHub Context variables in an inline script that can be controlled by an untrusted actor, resulting in command execution in the interpreted shell. These user-controllable parameters should be passed into an inline script as environment variables.

Name: check_for_self_hosted_runners, Level: WARN
This check attempts to identify the usage of self-hosted runners. Self-hosted runners are dangerous because, if the Action is compromised, they may allow a threat actor to gain access to an on-premises environment or establish persistence mechanisms on a server you own/rent.

Name: check_for_aws_configure_credentials_non_oidc, Level: WARN
This check looks for the usage of AWS's aws-actions/configure-aws-credentials action and attempts to identify non-OIDC authentication parameters. Non-OIDC authentication types are less secure than OIDC because they require the creation of long-term credentials which can be compromised; OIDC tokens are short-lived and are usually scoped to only the permissions that are essential to a workflow, and thus help reduce the attack surface.

Name: check_for_pull_request_create_or_approve, Level: WARN
This check looks for Actions that have logic related to creating or approving pull requests. Creating or approving pull requests via automation poses a security risk if sufficient controls aren't in place to protect against malicious code being merged into a repository.

Name: check_for_remote_script, Level: WARN
This check looks for a URL in an inline script of a GitHub Action, which usually signals the inclusion of a remote script, which can be dangerous.

References
Security hardening for GitHub Actions
actions-server
Actions Server
A very simple, multi-threaded HTTP server.
Mainly designed for very simple tasks, f.ex. 3 JSON-based endpoints with simple logic.
It utilizes a concept of "Actions" - the functions that can be executed to provide an HTTP response.
Important note: This server DOES NOT cover all HTTP functionality. This is intentional and probably will not be changed in the future.

Usage

from actions_server import *

ACTIONS = [
    JsonGet("/get", lambda params: {"response": "ok from GET action"}),
    JsonPost("/post", lambda params, body: {"response": "ok from POST action"}),
    Redirect("/", "/get"),
    StaticResources("/static", "./src/web"),
]

server = http_server(port=80, actions=ACTIONS, thread_count=5)
try:
    server.start(block_caller_thread=True)
finally:
    server.stop()

In this example, a server will be started on port 80 and the main thread will be blocked. The server will react to several requests:

curl -X GET "http://localhost:80/get" will produce the {"response": "ok from GET action"} response
curl -X POST "http://localhost:80/post" will produce the {"response": "ok from POST action"} response
curl -X GET "http://localhost:80/" will send an HTTP 301 Redirect to "http://localhost:80/get"
curl -X GET "http://localhost:80/static/aaa.png" will return the image ./src/web/aaa.png

Actions out-of-the-box

JsonGet(endpoint, callable)
Will listen on endpoint, call callable(params) (where params is a dict of arguments; note - the values are always arrays!) and convert the resulting dict into JSON.

JsonPost(endpoint, callable)
Will listen on endpoint, call callable(params, body) (where params is a dict of arguments; note - the values are always arrays! - and body is the request body parsed into a dict) and convert the resulting dict into JSON.

Redirect(from, to)
Will send an HTTP 301 Redirect.

StaticResources(path, dir)
Will serve all files from dir under the path path.

Implementing custom action

from actions_server import *

class MyCustomResponse(Response):
    def __init__(self, payload):
        self._payload = payload

    def headers(self) -> dict:
        return {'Content-type': 'text/plain'}

    def response_body_bytes(self):
        return self._payload.encode('utf-8')

class MyCustomAction(Action):
    def can_handle(self, method, path, params, payload):
        return method == 'GET' and path == '/hello'

    def handle(self, method, path, params, payload) -> Response:
        return MyCustomResponse("hello there!")

Notes:
the method parameter may contain one of two strings - GET or POST
the response body must be bytes!
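Since params values are always arrays (as noted above), a handler expecting a single query parameter has to unpack the first element itself. A minimal sketch reusing only the JsonGet and http_server pieces documented above; the /greet endpoint and its logic are illustrative:

from actions_server import *

def greet(params):
    # A request to /greet?name=Bob arrives as params={'name': ['Bob']}.
    names = params.get('name', ['stranger'])
    return {"greeting": "hello, " + names[0]}

ACTIONS = [JsonGet("/greet", greet)]
server = http_server(port=8080, actions=ACTIONS, thread_count=2)
try:
    server.start(block_caller_thread=True)
finally:
    server.stop()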
actions-test
No description available on PyPI.
actions-toolkit
Actions Toolkit
The GitHub Actions Toolkit provides an SDK to make creating actions easier in Python.

Installation
Action Toolkit is available on PyPI:

$ python -m pip install actions-toolkit

Action Toolkit officially supports Python 3.6+.

Usage

>>> import os
>>> from actions_toolkit import core
>>> os.environ['INPUT_NAME'] = 'Actions Toolkit'
>>> core.get_input('name', required=True)
'Actions Toolkit'
>>> core.error('Something went wrong.')
::error::Something went wrong.
>>> core.info('Run successfully.')
Run successfully.
>>> core.set_failed('SSL certificates installation failed.')
::error::SSL certificates installation failed.

For more examples and API documentation, please see the core & github docs.

Contributing
Contributions are always welcomed! Here is the workflow for contributors:
Fork to your own
Clone fork to local repository
Create a new branch and work on it
Keep your branch in sync
Commit your changes (make sure your commit message is concise)
Push your commits to your forked repository
Create a pull request
Please refer to CONTRIBUTING for detailed guidelines.

License
The scripts and documentation in this project are released under the MIT License.
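As a quick end-to-end illustration of the calls documented above, a complete action entrypoint might look like the following; the input name and the greeting logic are illustrative only:

import os

from actions_toolkit import core

def run():
    name = core.get_input('name', required=True)
    core.info(f'Saying hello to {name}...')
    try:
        core.set_output('greeting', f'Hello, {name}!')
    except Exception as exc:
        core.set_failed(str(exc))

if __name__ == '__main__':
    os.environ.setdefault('INPUT_NAME', 'Actions Toolkit')  # for local testing only
    run()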
actiontest
actionTest
Test build github action
action-trees
Action Trees
Documentation: https://roxautomation.gitlab.io/action-trees
Source Code: https://gitlab.com/roxautomation/action-trees

Summary
Action Trees is an asyncio-based framework designed for orchestrating hierarchical, asynchronous actions. It enables the structuring of complex processes into manageable, interdependent tasks, facilitating the execution of multi-step workflows either sequentially or in parallel. This framework is particularly suitable for systems that require coordinated, concurrent task management.

The ActionItem class, a key component of the action_trees module, is an abstract base class tailored for managing hierarchical action items, especially in VDA5050-compliant systems. Its main features include:

State Management: Manages various states (NONE, WAITING, INITIALIZING, RUNNING, PAUSED, FINISHED, FAILED) for action items, ensuring valid transitions between these states.
Exception Handling: Implements custom exceptions (StateTransitionException and ActionFailedException) to handle invalid state transitions and failures in action completion.
Task Execution and Management: Supports starting, pausing, resuming, and canceling action items, with an emphasis on asynchronous execution using asyncio.
Child Actions Handling: Provides functionality for adding, removing, and managing child action items, enabling a structured, hierarchical approach to action management.
Core Method Implementation: Requires subclasses to implement the _on_run abstract method to define the primary behavior of the action item. An optional _on_init method can be implemented for initial setup steps.

Action Trees vs. Behavior Trees
Behavior Trees (BTs) and Action Trees (ATs) use a tree-like structure but have different goals. BTs make choices about what to do next based on current situations, which is great for tasks needing quick decisions. ATs are about executing complex tasks efficiently, with a focus on robust error handling and asynchronous operation. In short, BTs are for making decisions, while ATs are for carrying out tasks effectively.

Example - Coffee maker
Let's simulate a coffee-making machine using the following action hierarchy:

- cappuccino_order
  - prepare
    - initialize
    - clean
  - make_cappuccino
    - boil_water
    - grind_coffee

An implementation would look like this:

import asyncio

from action_trees import ActionItem

class AtomicAction(ActionItem):
    """Mock action with no children."""

    def __init__(self, name: str, duration: float = 0.1):
        super().__init__(name=name)
        self._duration = duration

    async def _on_run(self):
        await self._wait_if_paused()  # pause/resume helper
        await asyncio.sleep(self._duration)

class PrepareMachineAction(ActionItem):
    """Prepare the machine."""

    def __init__(self):
        super().__init__(name="prepare")
        self.add_child(AtomicAction(name="initialize"))
        self.add_child(AtomicAction(name="clean"))

    async def _on_run(self):
        # sequentially run children
        await self.get_child("initialize").start()
        await self.get_child("clean").start()

class MakeCappuccinoAction(ActionItem):
    """Make cappuccino."""

    def __init__(self):
        super().__init__(name="make_cappuccino")
        self.add_child(AtomicAction(name="boil_water"))
        self.add_child(AtomicAction(name="grind_coffee"))

    async def _on_run(self):
        # simultaneously run children
        await self._start_children_parallel()

class CappuccinoOrder(ActionItem):
    """High-level action to make a cappuccino."""

    def __init__(self):
        super().__init__(name="cappuccino_order")
        self.add_child(PrepareMachineAction())
        self.add_child(MakeCappuccinoAction())

    async def _on_run(self):
        for child in self.children:
            await child.start()

# create root order
order = CappuccinoOrder()
# start tasks in the background
await order.start()

Development
See the docs. This project was forked from a cookiecutter template.
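The bare await order.start() at the end of the example only works in a REPL that supports top-level await (such as python -m asyncio or Jupyter). In a regular script, the order would be driven from an async entry point instead; a minimal sketch reusing the CappuccinoOrder class defined above:

import asyncio

async def main():
    order = CappuccinoOrder()  # the root action from the example above
    await order.start()        # runs prepare, then make_cappuccino

asyncio.run(main())

This entry point is also the natural place to drive the pausing, resuming, or canceling behavior that ActionItem supports.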
action-tutorials-interfaces
Vladyslav Kotko
action-tutorials-py
Vladyslav Kotko
action-updater
Action Updater
The actions updater will make it easy to update actions:
🥑 updated syntax and commands
🥑 versions of actions, either for releases or commits
🥑 preview, write to new file, or write in place!
🥑 run as a GitHub action workflow for annual checks!
You can see the ⭐️ Documentation ⭐️ for complete details!

⭐️ Quick Start ⭐️

Installation
The module is available in pypi as action-updater, and to install we first recommend some kind of virtual environment:

$ python -m venv env
$ source env/bin/activate

And then install from pypi using pip:

$ pip install action-updater

Usage
For all commands below, the actions updater can accept a directory with yaml files, or a single yaml file that matches the GitHub actions schema.

View updaters available (and descriptions):

$ action-updater list-updaters

You should likely detect (to preview) before you write the changes to file.

# Run all updaters
$ action-updater detect .github/workflows/main.yaml

# Only detect for the setoutput updater
$ action-updater detect -u setoutput .github/workflows/main.yaml

And finally, write updates to file!

$ action-updater update .github/workflows/main.yaml

🎨 Screenshots 🎨
If a file has updates, it will print the updated file to the terminal for preview. And after you run update (described above) you will see all green! The same works when running across many files. And that's it! The action comes with several updaters that will look for particular aspects to lint or update. If you have a request for a new updater, please open an issue.

Feature Ideas
This could be fairly easy to extend to allow for more "linting"-style actions to reflect preferences in style, e.g.:

$ action-updater lint .github/workflows/main.yaml

If this sounds interesting to you, please open an issue to discuss further! We currently do some basic linting, as the yaml loading library has preferences for saving with respect to spacing, etc.

😁️ Contributors 😁️
We use the all-contributors tool to generate the contributors graphic below.
Vanessasaurus 💻
Mike Henry 💻

License
This code is licensed under the MPL 2.0 LICENSE.
actionweaver
ActionWeaver
🪡 AI application framework that puts function-calling as a first-class feature 🪡
Supporting both OpenAI API and Azure OpenAI service!
🚀 Explore Our Cookbooks For Tutorials & Examples! 🚀
Discord: https://discord.gg/fnsnBB99C2
Considering using LLMs in your business or application? We'd love to provide help and consultation; let's chat! Feel free to get in touch! Contact us at [email protected].
To quickly become familiar with ActionWeaver, please take a look at the notebooks listed below.
QuickStart
Using Pydantic for Structured Data Parsing and Validation
Function Validation with Pydantic and Exception Handler
Built Traceable Action with LangSmith Tracing
Action Orchestration
Stateful Agent
Star us on Github!

ActionWeaver is an AI application framework designed around the concept of LLM function calling, while popular frameworks like Langchain and Haystack are built around the concept of pipelines. ActionWeaver strives to be the most reliable, user-friendly, high-speed, and cost-effective function-calling framework for AI engineers. Our vision is to enable the seamless merging of traditional computing systems with the powerful reasoning capabilities of Large Language Models.

Features:
Function Calling as First-Class Citizen: Put function-calling at the core of the framework.
Extensibility: Integrate ANY Python code into your agent's toolbox with a single line of code, or combine tools from other ecosystems like LangChain or Llama Index.
Function Orchestration: Build complex orchestrations of function calls, including intricate hierarchies or chains.
Telemetry and Observability: ActionWeaver's design places a strong emphasis on developer experience and efficiency. Take a look at this link to see how ActionWeaver implements LLM telemetry, including tracing.

At a high level, ActionWeaver simplifies the process of creating functions, orchestrating them, and handling the invocation loop. An "action" in this context serves as an abstraction of functions or tools that users want the Language Model (LLM) to handle.

Installation
You can install ActionWeaver using pip:

pip install actionweaver

Quickstart
Use the LATEST OpenAI API that supports parallel function calling!

from actionweaver.llms import wrap
from openai import OpenAI

openai_client = wrap(OpenAI())

or use the Azure OpenAI service to start a chat completion model:

import os
from openai import AzureOpenAI

azure_client = wrap(AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2023-10-01-preview",
))

Add ANY Python function as a tool to the Large Language Model.
Developers can attach ANY Python function as a tool with a simple decorator. In the following example, we introduce the action GetCurrentTime, and then proceed to use the OpenAI API to invoke it.
ActionWeaver utilizes the decorated method's signature and docstring as a description, passing them along to OpenAI's function API.
The Action decorator is also highly adaptable and can be combined with other decorators, provided that the original signature is preserved.fromactionweaverimportaction@action(name="GetCurrentTime")defget_current_time()->str:"""Use this for getting the current time in the specified time zone.:return: A string representing the current time in the specified time zone."""importdatetimecurrent_time=datetime.datetime.now()returnf"The current time is{current_time}"# Ask LLM what time is itresponse=openai_client.create(model="gpt-3.5-turbo",messages=[{"role":"user","content":"what time is it"}],actions=[get_current_time])Easily integrate tools from libraries such asLangchainfromactionweaver.actions.factories.langchainimportaction_from_toolfromlangchain_community.tools.google_search.toolimportGoogleSearchRunfromlangchain_community.utilities.google_searchimportGoogleSearchAPIWrappersearch_tool=GoogleSearchRun(api_wrapper=GoogleSearchAPIWrapper())openai_client.create(model="gpt-3.5-turbo",messages=[{"role":"user","content":"what date is today?"}],actions=[action_from_tool(search_tool)])Force execution of an actionYou can also force the language model to execute the action.get_current_time.invoke(openai_client,messages=[{"role":"user","content":"what time is it"}],model="gpt-3.5-turbo",stream=False,force=True)Structured extractionYou can create a Pydantic model to define the structure of the data you want to extract, and then force the language model to extract structured data from information in the prompt.frompydanticimportBaseModelfromactionweaver.actions.factories.pydantic_model_to_actionimportaction_from_modelclassUser(BaseModel):name:strage:intaction_from_model(User,stop=True).invoke(client,messages=[{"role":"user","content":"Tom is 31 years old"}],model="gpt-3.5-turbo",stream=False,force=False)Actions of Stateful AgentDevelopers also could create a class and enhance its functionality using ActionWeaver's action decorators.fromopenaiimportOpenAIfromactionweaver.llmsimportwrapfromactionweaverimportactionclassAgentV0:def__init__(self):self.llm=wrap(OpenAI())self.messages=[]self.times=[]def__call__(self,text):self.messages+=[{"role":"user","content":text}]returnself.llm.chat.completions.create(model="gpt-3.5-turbo",messages=self.messages,actions=[self.get_current_time])@action(name="GetCurrentTime")defget_current_time(self)->str:"""Use this for getting the current time in the specified time zone.:return: A string representing the current time in the specified time zone."""importdatetimecurrent_time=datetime.datetime.now()self.times+=[str(current_time)]returnf"The current time is{current_time}"agent=AgentV0()agent("what time is it")# Output: 'The current time is 20:34.'# You can invoke actions just like regular instance methodsagent.get_current_time()# Output: 'The current time is 20:34.'Grouping and Extending Actions Through InheritanceIn this example, we wrap theLangChain Google searchas a method, and define a new agent that inherits the previous agent and LangChain search tool. 
This approach leverages object-oriented principles to enable rapid development and easy expansion of the agent's capabilities.In the example below, through inheritance, the new agent can utilize the Google search tool method as well as any other actions defined in the parent classes.classLangChainTools:@action(name="GoogleSearch")defgoogle_search(self,query:str)->str:"""Perform a Google search using the provided query.This action requires `langchain` and `google-api-python-client` installed, and GOOGLE_API_KEY, GOOGLE_CSE_ID environment variables.See https://python.langchain.com/docs/integrations/tools/google_search.:param query: The search query to be used for the Google search.:return: The search results as a string."""fromlangchain.utilitiesimportGoogleSearchAPIWrappersearch=GoogleSearchAPIWrapper()returnsearch.run(query)classAgentV1(AgentV0,LangChainTools):def__call__(self,text):self.messages+=[{"role":"user","content":text}]returnself.llm.chat.completions.create(model="gpt-3.5-turbo",messages=self.messages,actions=[self.google_search])agent=AgentV1()agent("what happened today")"""Output: Here are some events that happened or are scheduled for today (August 23, 2023):\n\n1. Agreement State Event: Event Number 56678 - Maine Radiation Control Program.\n2. Childbirth Class - August 23, 2023, at 6:00 pm.\n3. No events scheduled for August 23, 2023, at Ambassador.\n4. Fine Arts - Late Start.\n5. Millersville University events.\n6. Regular City Council Meeting - August 23, 2023, at 10:00 AM.\n\nPlease note that these are just a few examples, and there may be other events happening as well."""Orchestration of Actions (Experimental)ActionWeaver enables the design of hierarchies and chains of actions by passing inorchargument.orchis a mapping from actions as keys to values includinga list of actions: if the key action is invoked, LLM will proceed to choose an action from the provided list, or respond with a text message.an action: after key action is invoked, LLM will invoke the value action.None: after key action is invoked, LLM will respond with a text message.For example, let's say we have actions a1, a2, a3.client.create([{"role":"user","content":"message"}],actions=[a1,a2,a3],# First, LLM respond with either a1, a2 or a3, or text without action# Define the orchestration logic for actions:orch={a1.name:[a2,a3],# If a1 is invoked, the next response will be either a2, a3 or a text response.a2.name:a3,# If a2 is invoked, the next action will be a3a3.name:[a4]# If a3 is invoked, the next response will be a4 or a text response.a4.name:None# If a4 is invoked, the next response will guarantee to be a text message})Example: Hierarchy of ActionsInstead of overwhelming OpenAI with an extensive list of functions, we can design a hierarchy of actions. In this example, we introduce a new class that defines three specific actions, reflecting a hierarchical approach:fromtypingimportListimportosclassFileAgent(AgentV0):@action(name="FileHandler")defhandle_file(self,instruction:str)->str:"""Handles ALL user instructions related to file operations.Args:instruction (str): The user's instruction about file handling.Returns:str: The response to the user's question."""print(f"Handling{instruction}")returninstruction@action(name="ListFiles")deflist_all_files_in_repo(self,repo_path:str='.')->List:"""Lists all the files in the given repository.:param repo_path: Path to the repository. 
Defaults to the current directory.:return: List of file paths."""print(f"list_all_files_in_repo:{repo_path}")file_list=[]forroot,_,filesinos.walk(repo_path):forfileinfiles:file_list.append(os.path.join(root,file))breakreturnfile_list@action(name="ReadFile")defread_from_file(self,file_path:str)->str:"""Reads the content of a file and returns it as a string.:param file_path: The path to the file that needs to be read.:return: A string containing the content of the file."""print(f"read_from_file:{file_path}")withopen(file_path,'r')asfile:content=file.read()returnf"The file content:\n{content}"def__call__(self,text):self.messages+=[{"role":"user","content":text}]returnself.llm.chat.completions.create(model="gpt-3.5-turbo",messages=self.messages,actions=[self.list_all_files_in_repo],orch={self.handle_file.name:[self.list_all_files_in_repo,self.read_from_file]})ContributingContributions in the form of bug fixes, new features, documentation improvements, and pull requests are VERY welcomed.📔 Citation & AcknowledgementsIf you find ActionWeaver useful, please consider citing the project:@software{Teng_Hu_ActionWeaver_2024,author={TengHu},license={Apache-2.0},month=Aug,title={ActionWeaver:ApplicationFrameworkforLLMs},url={https://github.com/TengHu/ActionWeaver},year={2023}}
actipy
actipy
A Python package to process accelerometer data.
Axivity3 (.cwa), Actigraph (.gt3x), and GENEActiv (.bin) files are supported, as well as custom CSV files.
Axivity3 is the activity tracker watch used in the large-scale UK-Biobank accelerometer study.

Getting started

Prerequisite
Python 3.8 or greater
$ python --version  # or python3 --version
Java 8 (1.8.0) or greater
$ java -version

Install
$ pip install actipy

Usage
Process an Axivity3 (.cwa) file:

import actipy

data, info = actipy.read_device("sample.cwa.gz",
                                lowpass_hz=20,
                                calibrate_gravity=True,
                                detect_nonwear=True,
                                resample_hz=50)

Output:
data [pandas.DataFrame]
                               x         y         z  temperature
time
2014-05-07 13:29:50.430  -0.513936  0.070043  1.671264    20.000000
2014-05-07 13:29:50.440  -0.233910 -0.586894  0.081946    20.000000
2014-05-07 13:29:50.450  -0.080303 -0.951132 -0.810433    20.000000
2014-05-07 13:29:50.460  -0.067221 -0.976200 -0.864934    20.000000
2014-05-07 13:29:50.470  -0.109617 -0.857322 -0.508587    20.000000
...                            ...       ...       ...          ...

info [dict]
Filename              : data/sample.cwa.gz
Filesize(MB)          : 69.4
Device                : Axivity
DeviceID              : 13110
ReadErrors            : 0
SampleRate            : 100.0
ReadOK                : 1
StartTime             : 2014-05-07 13:29:50
EndTime               : 2014-05-13 09:50:33
NumTicks              : 51391800
WearTime(days)        : 5.847725231481482
NumInterrupts         : 1
ResampleRate          : 100.0
NumTicksAfterResample : 25262174
LowpassOK             : 1
LowpassCutoff(Hz)     : 20.0
CalibErrorBefore(mg)  : 82.95806873592024
CalibErrorAfter(mg)   : 4.434966371604519
CalibOK               : 1
NonwearTime(days)     : 0.0
NumNonwearEpisodes    : 0
...

If you have a CSV file that you want to process, you can also use the data processing routines from actipy.processing:

import actipy.processing as P

data, info_lowpass = P.lowpass(data, 100, 20)
data, info_calib = P.calibrate_gravity(data)
data, info_nonwear = P.detect_nonwear(data)
data, info_resample = P.resample(data, sample_rate)

See the documentation for more.

License
See LICENSE.md.
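Since the returned data is an ordinary pandas DataFrame and info a plain dict, persisting a processed recording needs no actipy-specific API; a small sketch following on from the read_device example above:

import actipy
import pandas as pd

data, info = actipy.read_device("sample.cwa.gz",
                                lowpass_hz=20,
                                calibrate_gravity=True,
                                detect_nonwear=True,
                                resample_hz=50)

data.to_csv("sample-processed.csv.gz")     # the time index is written as the first column
pd.Series(info).to_csv("sample-info.csv")  # flatten the summary dict for later inspection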
activ
No description available on PyPI.
activate
Use ACTiVATED crack on Steam game
This script should be used only for fair purposes, like making your own backup copies of games you own on your account. It is not supposed to be used for piracy.

Install
Install the activate package using your favourite method, e.g. pipsi.
Copy the latest ACTiVATED crack .so files to ~/.local/share/activate/x86/libsteam_api.so and ~/.local/share/activate/x86_64/libsteam_api.so.

Use
Go to the game directory and enter activate. The script may ask for an AppID if it can't find it. That's it.

cd ~/Games/SuperSteamGame
activate

It will replace the libsteam_api.so files with the ACTiVATED crack (of the right architecture), detect the version of the Steam interfaces, and fill the activated.ini files with the right interfaces section.
To specify a custom username instead of "Cracked", place it in ~/.config/activate:

{ "username": "ZeDoCaixao" }
activate-aiida
activate-aiidaThis is a small package to build around the internalaiida-coretools (v1.2), to quickly create and launchisolatedAiiDA environments from scratch. Its focussed on development, but can also be used more generally.The basic steps are:Buy a new computerInstall CondaOn linux:wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.shbash miniconda.shconda update condaCreate the development environment:conda env create -n aiida-dev -f aiida-dev-env.ymlconda activate aiida-devThis is a copy ofenvironment.yamlin aiida-core, but with all the extra development packages, and other goodies like jupyter labInstall aiida-core in development modepip install --no-deps -e .Runsource aiida-activate --helpto see the optionsThis needs to be called withsource, so that it can set-up some environmental variablesRunningsource aiida-activate setup.yaml -c -w 4will:Initialise a database atstore_path/pgsql, if it doesn't already existKill any currently running postgres serverStart up a postgres server with the desired settingsEnsure RabbitMQ is runningSet the aiida repository path tostore_path/.aiidaRunverdi quicksetup --config=setup.yaml, if the profile does not already existSet the profile as the default profileStop any current daemon and start one with 4 workersActivate verdi tab completionWhen your done,aiida-deactivatewill stop the daemon and the postgres serverExample Config Filestore_path:/home/csewell/Documents/aiida-core/test_reposu_db_username:chrisjsewell# su_db_password: # not yet supporteddb_engine:postgresql_psycopg2db_backend:djangodb_host:localhostdb_port:5432db_name:basic_dbdb_username:chrisjsewelldb_password:nicedayprofile:basicemail:[email protected]_name:Chrislast_name:Sewellinstitution:EPFLnon_interactive:trueExample CLI$sourceaiida-activatesetup.yaml-c-w4parsed args: -c true -w 4 setup.yaml- Reading variables from setup.yaml- Setting Up SQL DatabasePGDATA='/home/csewell/Documents/aiida-core/test_repo/pgsql'- Activating Postgres server: /home/csewell/Documents/aiida-core/test_repo/pgsql on port 5432waiting for server to start.... doneserver startedLogging Postgres server to: /home/csewell/Documents/aiida-core/test_repo/pgsql/postgres_env_.log- Ensure RabbitMQ Running- Setting Up AiiDa DatabaseAIIDA_PATH='/home/csewell/Documents/aiida-core/test_repo'Info: Database user "chrisjsewell" already exists!Use it? [y/N]: ySuccess: created new profile `basic`.Info: migrating the database.Operations to perform:Apply all migrations: auth, contenttypes, dbRunning migrations:Applying contenttypes.0001_initial... OKApplying contenttypes.0002_remove_content_type_name... OKApplying auth.0001_initial... OKApplying auth.0002_alter_permission_name_max_length... OKApplying auth.0003_alter_user_email_max_length... OKApplying auth.0004_alter_user_username_opts... OKApplying auth.0005_alter_user_last_login_null... OKApplying auth.0006_require_contenttypes_0002... OKApplying auth.0007_alter_validators_add_error_messages... OKApplying auth.0008_alter_user_username_max_length... OKApplying auth.0009_alter_user_last_name_max_length... OKApplying auth.0010_alter_group_name_max_length... OKApplying auth.0011_update_proxy_permissions... OKApplying db.0001_initial... OKApplying db.0002_db_state_change... OKApplying db.0003_add_link_type... OKApplying db.0004_add_daemon_and_uuid_indices... OKApplying db.0005_add_cmtime_indices... OKApplying db.0006_delete_dbpath... OKApplying db.0007_update_linktypes... OKApplying db.0008_code_hidden_to_extra... 
OKApplying db.0009_base_data_plugin_type_string... OKApplying db.0010_process_type... OKApplying db.0011_delete_kombu_tables... OKApplying db.0012_drop_dblock... OKApplying db.0013_django_1_8... OKApplying db.0014_add_node_uuid_unique_constraint... OKApplying db.0015_invalidating_node_hash... OKApplying db.0016_code_sub_class_of_data... OKApplying db.0017_drop_dbcalcstate... OKApplying db.0018_django_1_11... OKApplying db.0019_migrate_builtin_calculations... OKApplying db.0020_provenance_redesign... OKApplying db.0021_dbgroup_name_to_label_type_to_type_string... OKApplying db.0022_dbgroup_type_string_change_content... OKApplying db.0023_calc_job_option_attribute_keys... OKApplying db.0024_dblog_update... OKApplying db.0025_move_data_within_node_module... OKApplying db.0026_trajectory_symbols_to_attribute... OKApplying db.0027_delete_trajectory_symbols_array... OKApplying db.0028_remove_node_prefix... OKApplying db.0029_rename_parameter_data_to_dict... OKApplying db.0030_dbnode_type_to_dbnode_node_type... OKApplying db.0031_remove_dbcomputer_enabled... OKApplying db.0032_remove_legacy_workflows... OKApplying db.0033_replace_text_field_with_json_field... OKApplying db.0034_drop_node_columns_nodeversion_public... OKApplying db.0035_simplify_user_model... OKApplying db.0036_drop_computer_transport_params... OKApplying db.0037_attributes_extras_settings_json... OKApplying db.0038_data_migration_legacy_job_calculations... OKApplying db.0039_reset_hash... OKApplying db.0040_data_migration_legacy_process_attributes... OKApplying db.0041_seal_unsealed_processes... OKApplying db.0042_prepare_schema_reset... OKApplying db.0043_default_link_label... OKApplying db.0044_dbgroup_type_string... OKSuccess: database migration completed.- Starting AiiDARescanning aiida pluginsSetting default profile: basicSuccess: basic set as default profileStopping any current daemonProfile: basicDaemon was not runningActivating daemon for profile: basic with 4 workersActivating verdi tab completion- Finishing Status:✓ config dir: /home/csewell/Documents/aiida-core/test_repo/.aiida✓ profile: On profile basic✓ repository: /home/csewell/Documents/aiida-core/test_repo/.aiida/repository/basic✓ postgres: Connected as chrisjsewell@localhost:5432✓ rabbitmq: Connected to amqp://127.0.0.1?heartbeat=600✓ daemon: Daemon is running as PID 22227 since 2020-04-10 00:55:10$deactivate-aiidaStopping Daemon:Profile: basicWaiting for the daemon to shut down... OKStopping Postgres:waiting for server to shut down.... doneserver stoppedDone!TroubleshootingIf postgres is not stopped correctly you may get this error:psql: could not connect to server: No such file or directoryIn this case you may have to manually delete thepath/to/database/postmaster.pidfile (seehere)If a port has been left open (fromhere):>> sudo lsof -i :PORTNUM >> sudo kill -9 PID
activate-virtualenv
activate-virtualenv
activate-virtualenv is a Python project that offers a context manager that allows users to easily activate virtual environments programmatically. A virtual environment is an isolated Python environment that allows users to manage and install packages separately from their system Python installation. This project provides a simple and convenient way to activate and (automatically) deactivate a virtual environment and start working within it.

Installation
To install the activate-virtualenv package, you can use pip:

pip install activate-virtualenv

Or rye:

rye install activate-virtualenv

Or anything else that downloads packages from PyPI.

Usage
To use the activate_virtualenv context manager, you'll need to import it into your Python script. Here's how to get started:

Import the activate_virtualenv context manager:

from activate_virtualenv import activate_virtualenv

Create an instance of activate_virtualenv by providing the path to your target virtual environment:

# Replace 'path_to_your_venv' with the actual path to your virtual environment.
with activate_virtualenv('path_to_your_venv'):
    # Your code here
    # The virtual environment is active within this block.
# The original environment is automatically restored when exiting the 'with' block.

Example
Here's an example of how you can use activate_virtualenv:

from activate_virtualenv import activate_virtualenv

# Specify the path to your virtual environment
venv_path = '/path/to/your/virtualenv'

# Activate the virtual environment temporarily
with activate_virtualenv(venv_path):
    # Your code that requires the virtual environment
    from subscript import function
    # Your dependencies in the specified virtual environment will be used here.
    function()

# Once you exit the 'with' block, you are back to your original environment.
# You can now use the modules and dependencies from your original environment.

How It Works
The activate_virtualenv context manager temporarily modifies environment variables and Python's sys module to activate the specified virtual environment. When you enter the with block, it runs the virtual environment activation script activate_this.py found in the bin directory of the specified virtual environment. This sets up the environment to use the virtual environment's Python interpreter and dependencies. When you exit the with block, it restores the original environment settings by manually undoing the changes made by the activation script.

Important Notes
This script is intended for use in Python 3 environments.
Ensure that the virtual environment path you provide is valid. It should be the path to the virtual environment's root directory.
Make sure that the virtual environment contains the standard activate_this.py script in the bin directory. This script is required to activate the virtual environment. If you are using poetry, rye or virtualenv to manage your virtual environments, this script should be present by default.

License
This project is licensed under the MIT License. For more information, please refer to the LICENSE file.

Issues and Contributions
If you encounter any issues or have suggestions for improvements, please feel free to open an issue or submit a pull request on the GitHub repository.
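One way to convince yourself that the swap described above actually happens is to compare sys.prefix inside and outside the block. A small sketch, assuming '/path/to/your/virtualenv' points at a real virtual environment containing bin/activate_this.py:

import sys

from activate_virtualenv import activate_virtualenv

print("before:", sys.prefix)  # the original interpreter's prefix

with activate_virtualenv('/path/to/your/virtualenv'):
    print("inside:", sys.prefix)  # now reports the virtual environment's root

print("after:", sys.prefix)  # restored to the original prefix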
activation
activations
Activation functions for machine learning. This package only computes the sigmoid activation function on a scalar or a numpy array.
Free software: MIT license
Documentation: https://activation.readthedocs.io.

Features
TODO

Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History
0.1.0 (2018-03-10)
First release on PyPI.
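For reference, the function this package computes is the standard logistic sigmoid, sigmoid(x) = 1 / (1 + e^-x), applied elementwise for arrays. The package's own function name isn't documented here, so the sketch below shows the equivalent computation in plain numpy rather than the package's API:

import numpy as np

def sigmoid(x):
    """Logistic sigmoid; works on scalars and numpy arrays alike."""
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0.0))                          # 0.5
print(sigmoid(np.array([-1.0, 0.0, 1.0])))   # [0.26894142 0.5 0.73105858]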
activation-functions
# activation_functions
activations
AReLU: Attention-based Rectified Linear Unit
Activation function player with PyTorch on supervised/transfer/meta learning.
Content
Introduction
Install
Run
Explore
Meta Learning
1. Introduction
This repository is the implementation of the paper AReLU: Attention-based Rectified Linear Unit.
This project is friendly to newcomers to PyTorch. You can design and compare different activation functions under different learning tasks.
2. Install
Install activations as a package:
pip install activations
Or create a conda environment and install the requirements:
conda create -n AFP python=3.7 -y
conda activate AFP
pip install -r requirements.txt
NOTE: PAU is CUDA-only. You have to compile it first:
pip install airspeed==0.5.14
cd activations/pau/cuda
python setup.py install
The code of PAU is taken directly from PAU; if you encounter any problems while compiling, please refer to the original repository.
If you just want a quick start and do not want to compile PAU, comment out the following lines in activations/__init__.py:
try:
    from .pau.utils import PAU
    __class_dict__["PAU"] = PAU
except Exception:
    raise NotImplementedError("")
3. Run
python -m visdom.server &  # start visdom
python main.py             # run with default parameters
Click here to check your training process.
NOTE: The program will download and save the dataset under args.data_root automatically.
Run with specified parameters:
python main_mnist.py -h
usage: main_mnist.py [-h] [--batch_size BATCH_SIZE] [--lr LR] [--lr_aux LR_AUX] [--epochs EPOCHS] [--epochs_aux EPOCHS_AUX] [--times TIMES] [--data_root DATA_ROOT] [--dataset {MNIST,SVHN,EMNIST,KMNIST,QMNIST,FashionMNIST}] [--dataset_aux {MNIST,SVHN,EMNIST,KMNIST,QMNIST,FashionMNIST}] [--num_workers NUM_WORKERS] [--net {BaseModel,ConvMNIST,LinearMNIST}] [--resume RESUME] [--af {APL,AReLU,GELU,Maxout,Mixture,SLAF,Swish,ReLU,ReLU6,Sigmoid,LeakyReLU,ELU,PReLU,SELU,Tanh,RReLU,CELU,Softplus,PAU,all}] [--optim {SGD,Adam}] [--cpu] [--exname {AFS,TransferLearning}] [--silent]

Activation Function Player with PyTorch.

optional arguments:
-h, --help  show this help message and exit
--batch_size BATCH_SIZE  batch size for training
--lr LR  learning rate
--lr_aux LR_AUX  learning rate of finetune. only used while transfer learning.
--epochs EPOCHS  training epochs
--epochs_aux EPOCHS_AUX  training epochs. only used while transfer learning.
--times TIMES  repeat running times
--data_root DATA_ROOT  the path to the dataset
--dataset {MNIST,SVHN,EMNIST,KMNIST,QMNIST,FashionMNIST}  the dataset to play with.
--dataset_aux {MNIST,SVHN,EMNIST,KMNIST,QMNIST,FashionMNIST}  the dataset to play with. only used while transfer learning.
--num_workers NUM_WORKERS  number of workers to load data
--net {BaseModel,ConvMNIST,LinearMNIST}  network architecture for experiments. you can add new models in ./models.
--resume RESUME  pretrained path to resume
--af {APL,AReLU,GELU,Maxout,Mixture,SLAF,Swish,ReLU,ReLU6,Sigmoid,LeakyReLU,ELU,PReLU,SELU,Tanh,RReLU,CELU,Softplus,PAU,all}  the activation function used in experiments. you can specify an activation function by name, or try all activation functions with `all`
--optim {SGD,Adam}  optimizer used in training.
--cpu  with cuda training. this would be much faster.
--exname {AFS,TransferLearning}  experiment name of visdom.
--silent  if True, shutdown the visdom visualizer.
Full Experiment
nohup ./main_mnist.sh > main_mnist.log &
4. Explore
New activation functions
Write a Python script file under activations, such as new_activation_functions.py, which contains the implementation of the new activation function (a hedged example sketch is given at the end of this entry).
Import the new activation functions in activations/__init__.py.
New network structure
Write a Python script file under models, such as new_network_structure.py, which contains the definition of the new network structure.
Newly defined network structures should be subclasses of BaseModel, which is defined in models/models.py.
Import the new network structures in models/__init__.py.
NOTE: New activation functions and network structures will be added into argparse automatically, so it is not necessary to modify main.py.
5. Meta Learning
Setup
pip install learn2learn
Run
python meta_mnist.py --help
usage: meta_mnist.py [-h] [--ways N] [--shots N] [-tps N] [-fas N] [--iterations N] [--lr LR] [--maml-lr LR] [--no-cuda] [--seed S] [--download-location S] [--afs {APL,AReLU,GELU,Maxout,Mixture,SLAF,Swish,ReLU,ReLU6,Sigmoid,LeakyReLU,ELU,PReLU,SELU,Tanh,RReLU,CELU,Softplus,PAU}]

Learn2Learn MNIST Example

optional arguments:
-h, --help  show this help message and exit
--ways N  number of ways (default: 5)
--shots N  number of shots (default: 1)
-tps N, --tasks-per-step N  tasks per step (default: 32)
-fas N, --fast-adaption-steps N  steps per fast adaption (default: 5)
--iterations N  number of iterations (default: 1000)
--lr LR  learning rate (default: 0.005)
--maml-lr LR  learning rate for MAML (default: 0.01)
--no-cuda  disables CUDA training
--seed S  random seed (default: 1)
--download-location S  download location for train data (default: data)
--afs {APL,AReLU,GELU,Maxout,Mixture,SLAF,Swish,ReLU,ReLU6,Sigmoid,LeakyReLU,ELU,PReLU,SELU,Tanh,RReLU,CELU,Softplus,PAU}  activation function used in meta learning.
Run all
nohup ./meta_mnist.sh > meta_mnist.log &
Citation
If you use this code, please cite the following paper:
@misc{AReLU,
  Author = {Dengsheng Chen and Kai Xu},
  Title = {AReLU: Attention-based Rectified Linear Unit},
  Year = {2020},
  Eprint = {arXiv:2006.13858},
}
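Example (hedged): as a concrete illustration of the "New activation functions" step above, a new_activation_functions.py could contain something like the following minimal sketch. It assumes only standard PyTorch; the class name LearnableSwish and its learnable beta parameter are hypothetical and not activations shipped with this package, and the registration lines simply mirror the __class_dict__ pattern shown in the PAU snippet above.
import torch
import torch.nn as nn

class LearnableSwish(nn.Module):
    # Swish with a learnable slope: x * sigmoid(beta * x). Illustrative only.
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(1))  # trained along with the network

    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)

# Registration in activations/__init__.py (assumed registry, mirroring the
# PAU snippet above):
#   from .new_activation_functions import LearnableSwish
#   __class_dict__["LearnableSwish"] = LearnableSwish

print(LearnableSwish()(torch.randn(4)))  # quick smoke test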
activator
No description available on PyPI.
active
No description available on PyPI.
active911-python
Active911 Python Bindings
Install via pip:
pip install active911-python
Manual Installation
Download the release and install with:
python setup.py install
Setup / Initialize
Import the package:
import active_911
Initialize the client class:
client = active_911.Active911Client(access_token='Enter Access Token Here')
Environment Variable Support
If access_token is not passed, you can set ACTIVE911_ACCESS_TOKEN as an environment variable.
Available Methods:
get_agency()
Returns the authorized agency and is considered the root of the API.
get_device_info(device_id=None)
Returns detailed device information.
get_device_alerts(device_id=None, alert_days=None, alert_minutes=None)
Returns agency alerts by device. The number of alerts can be set with alert_days (default: 10, max: 30) or alert_minutes; alert_minutes supersedes alert_days if set.
get_alerts(alert_days=None, alert_minutes=None)
Returns agency alerts. The number of alerts can be set with alert_days (default: 10, max: 30) or alert_minutes; alert_minutes supersedes alert_days if set.
get_alert_detail(alert_id=None)
Returns alert detail by alert_id.
get_locations(locations_page=None, locations_per_page=None)
Returns all map data locations.
get_location_detail(location_id)
Returns location point detail.
get_resource_detail(resource_id)
Returns location point resource detail.
Important Notes:
Full OAuth scope is required for proper functionality.
read_agency allows read-only access to this agency's information (name, address, etc.).
read_alert allows read-only access to all alerts in the agency.
read_response allows read-only access to responses to all alerts in the agency.
read_device allows read-only access to all device information in the agency.
read_mapdata allows read-only access to all locations and resources in the agency.
write_mapdata allows creation of locations and resources for the agency.
TODOS:
Add POST request support for mapping locations.
Support locations_coordinate_window.
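Putting the pieces above together, a minimal end-to-end usage sketch might look like this. Only Active911Client and the method names listed above come from this README; the token value is a placeholder, and the return types are not documented above, so they are simply printed.
import os
import active_911

# Pass the token explicitly here; per the README, omitting access_token and
# setting the ACTIVE911_ACCESS_TOKEN environment variable also works.
client = active_911.Active911Client(
    access_token=os.environ.get('ACTIVE911_ACCESS_TOKEN', 'Enter Access Token Here')
)

agency = client.get_agency()              # root of the API
alerts = client.get_alerts(alert_days=7)  # up to a week of alerts (max 30 days)
print(agency)
print(alerts)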