package (string, lengths 1-122)
package-description (string, lengths 0-1.3M)
act-scio
act-scio2: Scio v2 is a reimplementation of Scio in Python 3. Scio uses tika to extract text from documents (PDF, HTML, DOC, etc.). The result is sent to the Scio Analyzer, which extracts information using a combination of NLP (Natural Language Processing) and pattern matching.
Changelog 0.0.42: SCIO now supports setting TLP on data upload, to annotate documents with a tlp tag. Documents downloaded by feeds will have a default TLP of white, but this can be changed in the feed config.
Source code: The source code for the workers is available on github.
Setup: To set up, first install from PyPI:
    sudo pip3 install act-scio
You will also need to install beanstalkd. On Debian/Ubuntu you can run:
    sudo apt install beanstalkd
Configure beanstalk to accept larger payloads with the -z option. For Red Hat derived setups this can be configured in /etc/sysconfig/beanstalkd:
    MAX_JOB_SIZE=-z524288
You then need to install the NLTK data files. A helper utility to do this is included:
    scio-nltk-download
You will also need to create a default configuration:
    scio-config user
API: To run the API, execute:
    scio-api
This will set up the API on 127.0.0.1:3000. Use --port <PORT> and --host <IP> to listen on another port and/or another interface. For documentation of the API endpoint, see API.md.
Configuration: You can create a default configuration using this command (should be run as the user running scio):
    scio-config user
Common configuration can be found under ~/.config/scio/etc/scio.ini.
Running manually:
Scio Tika server: The Scio Tika server reads jobs from the beanstalk tube scio_doc, and the extracted text is sent to the tube scio_analyze. The first time the server runs, it will download tika using maven. It will use a proxy if $https_proxy is set.
    scio-tika-server
scio-tika-server uses tika-python, which depends on tika-server.jar. If your server has internet access, this will be downloaded automatically. If not, or if you need a proxy to connect to the internet, follow the instructions under "Airgap Environment Setup" here: https://github.com/chrismattmann/tika-python. Currently only tested with tika-server version 2.7.0.
Scio Analyze server: Scio Analyze Server reads (by default) jobs from the beanstalk tube scio_analyze.
    scio-analyze
You can also read directly from stdin like this:
    echo "The companies in the Bus; Finanical, Aviation and Automobile industry are large." | scio-analyze --beanstalk= --elasticsearch=
Scio submit: Submit a document (from a file or URI) to the scio API. Example:
    scio-submit \
        --uri https://www2.fireeye.com/rs/848-DID-242/images/rpt-apt29-hammertoss.pdf \
        --scio-baseuri http://localhost:3000/submit \
        --tlp white
Running as a service: Systemd-compatible service scripts can be found under examples/systemd. To install:
    sudo cp examples/systemd/*.service /usr/lib/systemd/system
    sudo systemctl enable scio-tika-server
    sudo systemctl enable scio-analyze
    sudo systemctl start scio-tika-server
    sudo systemctl start scio-analyze
scio-feed cron job: To continuously fetch new content from feeds, you can add scio-feed to cron like this (make sure the directory $HOME/logs exists):
    # Fetch scio feeds every hour
    0 * * * * /usr/local/bin/scio-feeds >> $HOME/logs/scio-feed.log.$(date +\%s) 2>&1
    # Delete logs from scio-feeds older than 7 days
    0 * * * * find $HOME/logs/ -name 'scio-feed.log.*' -mmin +10080 -exec rm {} \;
Local development: Use pip to install in local development mode. act-scio uses namespacing, so it is not compatible with using setup.py install or setup.py develop. In the repository, run:
    pip3 install --user -e .
acts.core
No description available on PyPI.
actsecmodel
ActSecModel Package. This package contains most of the key code that I used in the first half of 2023 for the Reddit Deliberation Project. The functions are explained here, guides for setting up variables to be compatible with these functions are given, and an example is provided at the bottom. Any code that I used that isn't here was already part of another package, like matplotlib or networkx.
SETTING UP VARIABLES/ARRAYS: In order to be compatible with the code as I have written it, I would recommend following this setup. Use the following code to initialise your actor and comments lists:
    actorList = [i for i in range(noActors)]
    commentsList = [i for i in range(noActors, (noActors + timeSteps))]
This ensures that each potential actor and potential comment is initialised in advance, which was one of the properties of the method I used, and it means each actor's and comment's ID matches its row/column in the adjacency array. This code is captured in the function initialise_lists for convenience. The commentOwnersList is a list that contains each actor who posted a comment, in the order that they posted them. The index of each actor corresponds to the index of the comment in the commentsList that they posted.
FUNCTIONS NOT IN THIS PACKAGE: Functions for the actor layer measures are all contained in the NetworkX package, and are listed below:
    cliques = len(list(nx.enumerate_all_cliques(A.to_undirected())))
    transitivity = nx.transitivity(A)
    reciprocity = nx.overall_reciprocity(A)
    clustering = nx.average_clustering(A)
I used matplotlib and Gephi for all the graphing and visuals. A tutorial for the 95% confidence ellipses is under this link, https://matplotlib.org/stable/gallery/statistics/confidence_ellipse.html, since that took me longer to find on account of being in the examples section of the documentation rather than the reference.
FUNCTION DOCUMENTATION
ACTIVATION FUNCTIONS
activation(binsList): A function that chooses an actor for activation. binsList: a list containing the probability bins for each actor to fall into; each item in the list should be the cumulative probability of the respective actor being chosen. RETURNS: the integer index of the actor chosen for activation. (In my program, each actor was assigned a number from 0 upwards, so this index was also the actor's ID, and later code reflects this. The same was not true for comments.)
uniform_bins(n): A function that creates the binsList for uniform activation. n: the number of actors. RETURNS: the binsList for uniform activation.
zipfs_bins(n, s): A function that creates the binsList for Zipf's law activation. n: the number of actors. s: the Zipf's constant. RETURNS: the binsList for Zipf's law activation.
SELECTION FUNCTIONS
uniform_selection(allCurrentComments): A function for choosing a comment with uniform selection. allCurrentComments: a list containing all comments that have currently been made. RETURNS: the integer index of the comment chosen for selection.
barabasi_albert_selection(commentNetwork, allCurrentComments): A function for choosing a comment with Barabasi-Albert selection. commentNetwork: the discussion layer of the current NetworkX network. allCurrentComments: as above. RETURNS: the integer index of the comment chosen for selection.
bianconi_barabasi_layer_selection(commentNetwork, allCurrentComments): A function for choosing a comment with level selection. Parameters as above. RETURNS: the index of the comment chosen for selection.
bianconi_barabasi_recency_selection(commentNetwork, allCurrentComments): A function for choosing a comment with recency selection. Parameters as above. RETURNS: the integer index of the comment chosen for selection.
GENERAL FUNCTIONS
generalised_harmonic_sum(N, s): A function to find the generalised harmonic sum. N: the number of values to be summed over. s: the power of the denominator. RETURNS: the float value of the generalised harmonic sum over the first N terms.
initialise_lists(nActors, tSteps): A function to initialise the actor and comments lists for a predetermined number of actors and comments. nActors: the total number of actors. tSteps: the total number of timesteps (and thus the total number of comments). RETURNS: the actor and comments lists, in that order, separated by a comma.
iterate_reddit_network(currentTimeStep, adjacencyMatrix, activatedActor, selectedCommentValue, commentOwnersList, commentsList): A function that receives the activated actor and selected comment and updates the adjacency matrix accordingly. currentTimeStep: an integer for the current timestep. adjacencyMatrix: the current adjacency matrix for the multilayer network. activatedActor: the actor chosen for activation (in my case, the index and actor ID were identical, so I simply fed in the index). selectedCommentValue: the index in the commentsList for the selected comment. commentOwnersList: the list of comment owners (see top). commentsList: the list representing all comments. RETURNS: the updated adjacency matrix.
width_and_depth(rootNode, commentsGraph): A function to find the mean and maximum width and depth measures from a discussion layer graph. rootNode: the root node/initial post of the discussion layer graph. commentsGraph: the NetworkX graph for the discussion layer. RETURNS: the following list: [maxWidth, meanWidth, maxDepth, meanDepth].
standard_actsecmodel(tSteps, nActors, aBinsList, selectionType): The function I used in my model, which completely iterates through a set number of timesteps, for a set number of actors, for the activation and selection types available in this package. tSteps: the number of timesteps to run for. nActors: the number of actors in the actor layer (note that not all actors may participate). aBinsList: the binsList for the activation type being used. selectionType: a string to determine which selection type to use, choosing from 'uniform', 'barabsi', 'layer', or 'recency'. RETURNS: the following list: [maxWidth, meanWidth, maxDepth, meanDepth, developmentArray, cliques, transitivity, reciprocity, clustering, actorDevelopmentArray]. The maxWidth, meanWidth, maxDepth, meanDepth, cliques, transitivity, reciprocity, and clustering measures are all terminal. developmentArray is a NumPy array that contains the development of the discussion layer measures over time. actorDevelopmentArray is a NumPy array that contains the development of the actor layer measures over time.
EXAMPLE CODE
    # INITIALISING
    timeSteps = 25
    noActors = 20
    actorList = [i for i in range(noActors)]
    commentsList = [i for i in range(noActors, (noActors + timeSteps))]
    adjacencyMatrix = np.zeros((noActors + timeSteps, noActors + timeSteps))  # adjacencyMatrix[pointing to][pointing away from]
    adjacencyMatrix[noActors][0] += 1
    G = nx.from_numpy_array(adjacencyMatrix, create_using=nx.DiGraph)
    commentOwners = [0]
    binsList = zipfs_bins(noActors, 1)  # If activation depends on the state of the graph, move to within iterations
    widthDepthArray = np.zeros((timeSteps, 4))
    # ITERATIONS
    for t in range(1, timeSteps):
        # ACTIVATION
        currentActor = activation(binsList)
        # SELECTION
        tempCommentsList = commentsList[0:t]  # A temporary comment list is created so that it's only as long as the current number of comments
        targetCommentValue = barabasi_albert_selection(G.subgraph(tempCommentsList), tempCommentsList)
        # UPDATE MATRIX AND GRAPH
        adjacencyMatrix = iterate_reddit_network(t, adjacencyMatrix, currentActor, targetCommentValue, commentOwners, commentsList)
        G = nx.from_numpy_array(adjacencyMatrix, create_using=nx.DiGraph)
        C = G.subgraph(commentsList)
        # WIDTH, DEPTH, OR OTHER MEASURES
        tempWidthDepth = width_and_depth(commentsList[0], C)
        for j in range(4):
            widthDepthArray[t][j] = tempWidthDepth[j]
    # RESULTS
    print(widthDepthArray)
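The activation helpers documented above lend themselves to a short sketch. The following is an assumed re-implementation of generalised_harmonic_sum, zipfs_bins and activation based purely on their descriptions (the actual package internals may differ): actor k gets probability proportional to 1/k^s, normalised by the generalised harmonic number.

```python
import random

def generalised_harmonic_sum(N, s):
    # Sum of 1/k^s for k = 1..N (the generalised harmonic number H_{N,s})
    return sum(1.0 / k**s for k in range(1, N + 1))

def zipfs_bins(n, s):
    # Cumulative probability bins: actor k (1-indexed) has probability
    # (1/k^s) / H_{n,s}, so each entry is the running total so far.
    H = generalised_harmonic_sum(n, s)
    cumulative, bins = 0.0, []
    for k in range(1, n + 1):
        cumulative += (1.0 / k**s) / H
        bins.append(cumulative)
    return bins

def activation(binsList):
    # Draw a uniform number and return the index of the first bin it falls into.
    r = random.random()
    for i, upper in enumerate(binsList):
        if r <= upper:
            return i
    return len(binsList) - 1

bins = zipfs_bins(4, 1)  # H_{4,1} = 25/12, so bins[0] = 12/25 = 0.48
```

The last bin always ends at 1.0, so activation always returns a valid actor index.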
actseg
Reference Action Segmentation Evaluation Code. This repository contains the reference code for action segmentation evaluation. If you have a bug fix/improvement, or if you want to add a new feature, please send a pull request or open an issue.
Installation: The actseg library is available on PyPI.
    pip install actseg
Development:
    make init
    make test
Example usage: All the metrics have the same API.
    from actseg.eval import MoFAccuracy, Edit
    pred1 = [0, 0, 0, 1, 0, 1, 1, 1, 0]
    pred2 = [1, 2, 3, 0, 0, 1, 2, 3, 0, 0, 0, 1, 2, 3, 0, 0, 0, 0]
    target1 = [0, 0, 1, 1, 2, 1, 1, 0, 0]
    target2 = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3]
    metrics = [MoFAccuracy(), Edit()]
    for p, t in zip([pred1, pred2], [target1, target2]):
        for m in metrics:
            m(targets=t, predictions=p)
    for m in metrics:
        print(m)
    # MoF: 0.3333333333333333
    # Edit: 52.5
Metrics:
Frame-wise metrics: MoF (Accuracy), F1Score, IoD, IoU.
Segment-wise metrics: Edit (edit distance or matching score).
Specifying an ignore class: For some metrics it is possible to specify the indices of classes to ignore (e.g. background) by passing the ignore_ids parameter to the constructor.
Acknowledgement: Please see src/actseg/external for external sources used in this project.
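As a sanity check on the stateful metric API shown above, a mean-over-frames (MoF) accuracy metric can be sketched that accumulates correct and total frame counts across sequences. This is an assumed re-implementation for illustration, not the actseg code itself:

```python
class MoFAccuracy:
    # Running mean-over-frames accuracy, accumulated across all sequences
    # fed to the metric (mirrors the call style in the example above).
    def __init__(self):
        self.correct = 0
        self.total = 0

    def __call__(self, targets, predictions):
        self.correct += sum(t == p for t, p in zip(targets, predictions))
        self.total += len(targets)

    def result(self):
        return self.correct / self.total

pred1 = [0, 0, 0, 1, 0, 1, 1, 1, 0]
target1 = [0, 0, 1, 1, 2, 1, 1, 0, 0]
pred2 = [1, 2, 3, 0, 0, 1, 2, 3, 0, 0, 0, 1, 2, 3, 0, 0, 0, 0]
target2 = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3]

mof = MoFAccuracy()
for p, t in zip([pred1, pred2], [target1, target2]):
    mof(targets=t, predictions=p)
print(mof.result())  # 0.3333..., matching the MoF value printed above
```

Note that MoF pools frames across sequences (9 of 27 frames correct here), which is why it differs from averaging per-sequence accuracies.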
actsegextract
segid extracting tool for team act
actselectw
Weighted Activity Selection
===========================
Description:
------------
The weighted activity selection problem is a combinatorial optimization problem which calculates the highest weight one can get from performing non-conflicting activities within a given time frame. It also returns a list of the respective activities. The algorithm uses dynamic programming to break up the problem into a series of overlapping subproblems and build up solutions to larger and larger subproblems.
**Optimal substructure** The algorithm calculates the maximum weight for activities 1 ... j, where j is some activity from the set. If the algorithm selects activity j, it adds the activity's weight to the optimal solution of the subproblem consisting of the remaining compatible activities, calculated in one of the previous iterations.
**Overlapping subproblems** Different activities can have the same set of compatible jobs, which means that by memoizing the solutions to such subproblems we can dramatically decrease the time complexity.
**Complexity** A brute-force recursive solution to this problem is correct but spectacularly slow because of the redundant subproblems discussed above. As a result, it has exponential time complexity. The dynamic programming solution has complexity O(n^2); this algorithm uses binary search to find the previous activity, which decreases the complexity to O(n log n). In this implementation, the algorithm first sorts the activities by their finish time, which requires O(n log n). For each j, it finds the previous compatible activity using binary search, O(log n). As we have to repeat this procedure n times, this results in a complexity of O(n log n). Constructing the list of selected activities also requires O(n log n) in the worst case, as the algorithm loops through the reversed list (n) and searches for the previous activity (log n). The resulting complexity is O(n log n) + O(n log n) + O(n log n) --> O(n log n).
Example implementation:
-----------------------
Suppose you are the CEO of a small company and you need to attend a set of meetings to raise funds for a new project. Unfortunately, you can only attend one meeting at a time and some of them overlap. Each meeting has start and end times and a weight associated with it that corresponds to the amount of money you will raise if you attend it. The algorithm finds the highest profit you can get in such a situation and prints out the list of meetings you should attend. Here's an example case:

| Names     | a          | b           | c           | d           |
|-----------|------------|-------------|-------------|-------------|
| Weight    | $300       | $250        | $400        | $450        |
| Start/End | 9:00-13:00 | 11:00-14:00 | 16:00-18:00 | 15:00-21:00 |

In this example, activities a, b and c, d overlap. That's why the algorithm will choose the 2 activities which generate the highest profit, a and d, which will give you a total of $750.
**The input** of this algorithm is a list of activities with their names, start and finish times, and weights:
    meetings = [activity("a", 9, 13, 300), activity("b", 11, 14, 250), activity("c", 16, 18, 400), activity("d", 15, 21, 450)]
To **run** the algorithm, simply call the function "actselectw":
    print actselectw(meetings)
**The output** is the maximum weight possible for this set of activities and the activities which have been selected by the algorithm:
    Max weight is 750. You should pick activities ['a', 'd'].
A link to the GitHub repository: https://github.com/ritakurban/actselectw
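The O(n log n) approach described above (sort by finish time, binary-search the latest compatible activity, fill the DP table, then walk it backwards to recover the selection) can be sketched as follows. This is an independent implementation of the same technique, not the package's source code:

```python
from bisect import bisect_right
from collections import namedtuple

activity = namedtuple("activity", ["name", "start", "finish", "weight"])

def weighted_activity_selection(activities):
    # Sort by finish time so the DP can scan activities left to right.
    acts = sorted(activities, key=lambda a: a.finish)
    finishes = [a.finish for a in acts]
    n = len(acts)
    # p[j] = number of activities that finish no later than acts[j] starts,
    # i.e. the index (in 1-based DP terms) of the last compatible activity.
    p = [bisect_right(finishes, acts[j].start) for j in range(n)]
    # opt[j] = best total weight using only the first j activities.
    opt = [0] * (n + 1)
    for j in range(1, n + 1):
        opt[j] = max(opt[j - 1], acts[j - 1].weight + opt[p[j - 1]])
    # Walk the table backwards to recover which activities were chosen.
    chosen, j = [], n
    while j > 0:
        if acts[j - 1].weight + opt[p[j - 1]] >= opt[j - 1]:
            chosen.append(acts[j - 1].name)
            j = p[j - 1]
        else:
            j -= 1
    return opt[n], sorted(chosen)

meetings = [activity("a", 9, 13, 300), activity("b", 11, 14, 250),
            activity("c", 16, 18, 400), activity("d", 15, 21, 450)]
best, picked = weighted_activity_selection(meetings)
print(best, picked)  # 750 ['a', 'd']
```

On the example table above this returns the $750 solution {a, d}, matching the package's documented output.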
actsnclass
No description available on PyPI.
actspotter
The actspotter is a library / tensorflow model for detecting activities. It allows you to classify body activities in images or videos. The package is limited to videos and images with only one person, by design. The following classes are available: none, pull_up_up, pull_up_down, pull_up_none, push_up_up, push_up_down, push_up_none, sit_up_up, sit_up_down, sit_up_none. The package is currently in early development.
Future plans: Tensorflow model deployment will be integrated soon. Currently this package allows you to classify push-ups, sit-ups and pull-ups. In future versions kicks and other body activities will follow. It is also planned to provide a signal processing layer that allows you to easily detect connected activities and count them. Another application will be to integrate with keyboard drivers so that activities could be used for controlling video games (e.g. by kicks).
Installation: Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions. With virtualenv, it's possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.
Supported Python versions: Python >= 3.6
Mac/Linux:
    pip install virtualenv
    virtualenv <your-env>
    source <your-env>/bin/activate
    <your-env>/bin/pip install actspotter
Windows:
    pip install virtualenv
    virtualenv <your-env>
    <your-env>\Scripts\activate
    <your-env>\Scripts\pip.exe install actspotter
Example usage. Requirement: cv2 (opencv) installed.
Classification of images (helper names unified and the classifier instantiated, which the original snippet omitted):
    import cv2
    import tensorflow as tf
    from actspotter import ImageClassifier, classify_image_input_dimension, class_names

    def to_tf_array(frame, dim=classify_image_input_dimension):
        # Resize to the classifier's input dimension and convert BGR -> RGB tensor
        frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame = tf.convert_to_tensor(frame, dtype=tf.float32)
        return frame

    image_classifier = ImageClassifier()
    images = [to_tf_array(cv2.imread("test.jpg"))]
    print(class_names)
    print(image_classifier.classify_images(images))
Classification of a video:
    import cv2
    import tensorflow as tf
    from actspotter import VideoClassifier, classify_image_input_dimension

    def to_tf_array(frame, dim=classify_image_input_dimension):
        frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame = tf.convert_to_tensor(frame, dtype=tf.float32)
        return frame

    cap = cv2.VideoCapture(0)
    video_classifier = VideoClassifier(buffer_size=4)
    video_classifier.start()
    while cap.isOpened():
        ret, frame = cap.read()
        if ret == True:
            video_classifier.add_image(to_tf_array(frame))
            state = video_classifier.get_last_classification()
            print(state)
            frame = cv2.resize(frame, (600, 600), interpolation=cv2.INTER_AREA)
            cv2.putText(frame, f"{state}", (10, 40), 0, 2, 255)
            cv2.imshow("Frame", frame)
            waitkey = cv2.waitKey(25) & 0xFF
            if waitkey == ord("q"):
                break
    video_classifier.exit()
    cap.release()
    cv2.destroyAllWindows()
act-types
ACT Types
Introduction: These scripts are used to add types to the ACT data model (object types and fact types).
Installation: This project requires that you have a running installation of the act-platform. Install from pip:
    pip install act-types
Breaking changes, 2.0 (updated data model): This version includes breaking changes to the data model. It is advised to do a reimport of all data, importing with act-workers version >= 2.0.0. The following changes are implemented: act-types and act-graph-datamodel are moved to act-admin and act-utils.
Local development: Use pip to install in local development mode. act-types (and act-api) uses namespacing, so it is not compatible with using setup.py install or setup.py develop. In the repository, run:
    pip3 install --user -e .
It is also necessary to install in local development mode to correctly resolve the files that are read by the --default-* options when doing local changes. These are read from etc under act.types; if the package is installed with "pip install act-types", it will always read the files from the installed package, even if you make changes in a locally checked-out repository.
actua
No description available on PyPI.
actuality
multiverse: Implementing universes and things!
actual-module
My first Python package with a slightly longer description
actual-module-test
My first Python package with a slightly longer description
actual-module-test2
My first Python package with a slightly longer description
actual-module-test-again
My first Python package with a slightly longer description
actualSonLib
No description available on PyPI.
actuapy
Copyright (c) 2019 Rei Mizuta. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Description:
# Setup
pyenv 1.2.7, Python 3.6.5 (anaconda3-5.2.0)
# Usage
    python setup.py install  # for install
    python setup.py test     # for test
actuarialCalculations
This package includes financial math calculations for actuarial work, such as annuity calculations, amortization schedules, sinking fund calculations, and constructing tables with those formulas.
Installation:
    pip install actuarialCalculations
Annuity Calculations
Calculating the present value: assuming our user would like to know the down payment amount with the provided arguments below. Logic: given the terms, period and interest rate, Down payment = Price - (presentValue * N * Repay Amount).
Calculating the future value: assuming our user pays at the end of the given period, our program should accumulate the value with the given interest rate. Logic: Accumulated Down payment (1+i)**N = Accumulated Price (1+i)**N - (AccumulatedValue * N * Repay Amount).
Calculating the down payment with a given time value: assuming our user wants to calculate the down payment for a given period of time. Logic: Accumulated Down payment = Accumulated Price ** T - (presentValue * Repay Amount) ** T - (AccumulatedValue * Repay Amount) ** -(N+T) / (1+i)**T.
USAGE

.. code:: python

    # Present Value takes 5 parameters as integers and returns the down payment amount
    presentValue = PresentValue(InterestRate, effectiveInterestTerms, fixedPeriod, repayAmount, price)
    # Accumulated Value takes 5 parameters as integers and returns the down payment amount (future value)
    accumulatedValue = AccumulatedValue(InterestRate, effectiveInterestTerms, fixedPeriod, repayAmount, price)
    # Calculate Given Time takes 5 parameters as integers and returns the down payment amount at any given time
    calculateGivenTime = CalculateGivenTime(InterestRate, effectiveInterestTerms, fixedPeriod, repayAmount, price)

Amortization Schedule Calculation: the method takes 4 parameters: interest rate, years, frequency of the interest that hits in that period of time, and loan amount.
USAGE

.. code:: python

    # Create an instance of the CalctulateAmortization class and pass the parameters as integers
    calculate = CalctulateAmortization(interestRate, years, frequency, loanAmount)
    # To run the calculations we need to call the execute function
    calculate.execute()

Sinking Fund Calculation: the method takes 4 parameters: interest rate, years, frequency of the interest that hits in that period of time, and amount.
USAGE

.. code:: python

    # Create an instance of the CalculateSinkingFund class and pass the parameters as integers
    calculate = CalculateSinkingFund(interestRate, years, frequency, amount)
    # To run the calculations we need to call the execute function
    calculate.execute()

CONTACT: The package was created by Sadik Erisen. Please send an email to [email protected] if you have questions or comments.
LICENCE: MIT
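The down-payment logic described above rests on the standard present value of an annuity-immediate, a_n = (1 - (1+i)^-n) / i. A minimal sketch of that formula follows; the helper names are hypothetical illustrations of the underlying math, not the package's own classes:

```python
def annuity_pv(i, n):
    # Present value of an annuity-immediate paying 1 per period:
    # a_n = (1 - (1+i)^-n) / i
    return (1 - (1 + i) ** -n) / i

def down_payment(price, i, n, repay):
    # One reading of the "Price - (presentValue * N * Repay Amount)" logic:
    # the down payment is the price minus the financed amount, where the
    # financed amount is the PV of n level repayments (an assumption here,
    # not the package's actual code).
    return price - repay * annuity_pv(i, n)

# e.g. a 12-period loan at 1% per period with level repayments of 500
print(round(down_payment(10000, 0.01, 12, 500), 2))
```

With i = 1% and n = 12, a_12 is about 11.26, so roughly 5,628 of the price is financed and the rest is the down payment.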
actuarialCalculus
Actuarial Calculus: Welcome to the actuarial calculus library. You can download this library to perform calculations on insurance products, annuities, reserves and premiums. Documentation coming soon.
actuarialmath
This Python package implements fundamental methods for modeling life contingent risks, and closely follows the coverage of traditional topics in actuarial exams and standard texts, such as the "Fundamentals of Actuarial Math - Long-term" exam syllabus by the Society of Actuaries, and "Actuarial Mathematics for Life Contingent Risks" by Dickson, Hardy and Waters.
Overview: The package comprises three sets of classes, which:
1. Implement general actuarial methods: basic interest theory and probability laws; survival functions, expected future lifetimes and fractional ages; insurance, annuity, premiums, policy values, and reserves calculations.
2. Adjust results for: extra mortality risks; 1/mthly payment frequency using UDD or Woolhouse approaches.
3. Specify and load a particular form of assumptions: recursion inputs; life table, select life table, or standard ultimate life table; mortality laws, such as constant force of mortality, beta and uniform distributions, or Makeham's and Gompertz's laws.
Quick Start:
    pip install actuarialmath
This also requires numpy, scipy, matplotlib and pandas. Start Python (version >= 3.10) or a Jupyter notebook. Select a suitable subclass to initialize with your actuarial assumptions, such as MortalityLaws (or a special law like ConstantForce), LifeTable, SULT, SelectLife or Recursion. Call appropriate methods to compute intermediate or final results, or to solve parameter values implicitly. Adjust the answers with the ExtraRisk or Mthly (or its UDD or Woolhouse) classes.
Examples:
    # SOA FAM-L sample question 5.7
    from actuarialmath import Recursion, Woolhouse
    # initialize Recursion class with actuarial inputs
    life = Recursion().set_interest(i=0.04)\
                      .set_A(0.188, x=35)\
                      .set_A(0.498, x=65)\
                      .set_p(0.883, x=35, t=30)
    # modify the standard results with the Woolhouse mthly approximation
    mthly = Woolhouse(m=2, life=life, three_term=False)
    # compute the desired temporary annuity value
    print(1000 * mthly.temporary_annuity(35, t=30))  # solution = 17376.7

    # SOA FAM-L sample question 7.20
    from actuarialmath import SULT, Contract
    life = SULT()
    # compute the required FPT policy value
    S = life.FPT_policy_value(35, t=1, b=1000)  # is always 0 in year 1!
    # input the given policy contract terms
    contract = Contract(benefit=1000,
                        initial_premium=.3, initial_policy=300,
                        renewal_premium=.04, renewal_policy=30)
    # compute gross premium using the equivalence principle
    G = life.gross_premium(A=life.whole_life_insurance(35), **contract.premium_terms)
    # compute the required policy value
    R = life.gross_policy_value(35, t=1, contract=contract.set_contract(premium=G))
    print(R - S)  # solution = -277.19
Resources: Jupyter notebook (or run in Colab) to solve all sample SOA FAM-L exam questions; User Guide (or download the pdf); API reference; GitHub repo and issues.
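The Woolhouse mthly adjustment used in the first example has a simple standard two-term form for a whole-life annuity-due payable m times per year: a-due(m) is approximately a-due minus (m - 1) / (2m). The sketch below shows the textbook formula only, not the package's Mthly/Woolhouse API:

```python
def woolhouse_mthly_annuity(annuity_due, m):
    # Two-term Woolhouse approximation for an annuity-due payable
    # m times per year: subtract (m - 1) / (2m) from the annual value.
    return annuity_due - (m - 1) / (2 * m)

# e.g. adjust an annual annuity-due value of 15.0 to quarterly payments
print(woolhouse_mthly_annuity(15.0, m=4))  # 15.0 - 3/8 = 14.625
```

The package's Woolhouse class additionally supports a three-term variant (the three_term flag above), which adds a correction involving the force of mortality.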
actuariat-python
actuariat_python: Materials for training sessions used at l'Institut des Actuaires. GitHub/actuariat_python, documentation, blog. This extension is not actively developed. Most of the content has been added to ensae_teaching_cs and papierstat. Most of the content is in French.
History:
current - 2021-01-05 - 0.00Mb
0.0.0 - 2021-01-05 - 0.00Mb
13: move CI to python 3.9 (2021-01-05)
12: move CI to python 3.7 (2019-04-20)
10: replace basemap by cartopy (2018-08-26)
11: replace flake8 by code_style (2018-04-14)
9: write the correction for the 2017 exam (2017-06-15)
8: replace the link to the mortality table (2017-02-25)
7: fix the unit test on gerrymandering (2017-02-23)
5: design an exam paper (2017-02-23)
4: finish the gerrymandering correction (2017-02-23)
3: add a session on handling large datasets, using Wikipedia usage statistics as an example (2016-12-27)
1: unit test using matplotlib is failing (2016-06-18)
actuary
No description available on PyPI.
actuarydesk
No description available on PyPI.
actuate
No description available on PyPI.
actuate-py
No description available on PyPI.
actupac
No description available on PyPI.
act-utils
ACT Utilities
Introduction: This repository contains utility scripts for the ACT Platform.
Installation: This project requires that you have a running installation of the act-platform. Install from pip:
    pip install act-utils
act-graph-datamodel usage: build a graph (graphviz) of the ACT data model.
    act-graph-datamodel --help
    usage: act-graph-datamodel [-h] [--uid UID] [--http_username HTTP_USERNAME]
                               [--http_password HTTP_PASSWORD] [--parent_id PARENT_ID]
                               [--confluence_url CONFLUENCE_URL] [--confluence_user CONFLUENCE_USER]
                               [--confluence_password CONFLUENCE_PASSWORD]
                               url
Local development: Use pip to install in local development mode. act-utils (and act-api) uses namespacing, so it is not compatible with using setup.py install or setup.py develop. In the repository, run:
    pip3 install --user -e .
actuwiser-distributions
No description available on PyPI.
act-workers
ACT WorkersIntroductionThis repository contains workers for theACT platform.The source code the workers are available ongithub.Changelog2.0.0Configuration is moved from~/.config/actworkers/actworkers.inito~/.config/act/act.iniThe old scio worker is removed and the new scio-worker (act-scio2) is renamed to act-scioSetupTo use the workers, install from PyPi:sudopip3installact-workersThis will install scripts for all workers:act-argus-caseact-attackact-country-regionsact-fact-chain-helper (see usage below)act-feedact-ip-filteract-misp-feedsact-mnemonic-pdnsact-scioact-search-graphact-shadowserver-asnact-uploaderact-url-shorter-unpackact-ta-helper (see usage below)act-verisact-vtOriginsAll workers support the optional arguments --origin-name and --origin-id. If they are not specified, facts will be added with the origin of the user performing the upload to the platform.For managing origins, use the toolact-origininact-adminpackage on pypi.Access modeBy default, all facts will have access-mode "RoleBased", which means that the user needs access to the organization specified when creating the facts.The access mode can be explicit set with--access-mode, e.g. 
like this, to set all facts to Public access mode:--access-mode PublicThere are also some workers, e.g.act-scioandact-mnemoninc-pdnsthat will set access-mode based on the input document (TLP green/white -> access-mode=Public), unless you explicit configures it to not do so.OrganizationAll workers support the optional arguments--organizationIf they are not specified, facts will be added with the organization of the origin or the user performing the upload to the platform (if not set by the origin.Worker usageTo print facts to stdout:$act-country-regions{"type":"memberOf","value":"","accessMode":"Public","sourceObject":{"type":"country","value":"Afghanistan"},"destinationObject":{"type":"subRegion","value":"Southern Asia"},"bidirectionalBinding":false}{"type":"memberOf","value":"","accessMode":"Public","sourceObject":{"type":"subRegion","value":"Southern Asia"},"destinationObject":{"type":"region","value":"Asia"},"bidirectionalBinding":false}(...)Or print facts as text representation:$act-country-regions--output-formatstr(country/Afghanistan)-[memberOf]->(subRegion/SouthernAsia)(subRegion/SouthernAsia)-[memberOf]->(region/Asia)(...)To add facts directly to the platform, include the act-baseurl and user-id options:$act-country-regions--act-baseurlhttp://localhost:8888--user-id1ConfigurationAll workers support options specified as command line arguments, environment variables and in a configuration file.A utility to show and start with a default ini file is also included:act-worker-config--help usage:ACTworkerconfig[-h]{show,user,system}positionalarguments:{show,user,system}optionalarguments:-h,--helpshowthishelpmessageandexitshow-Printdefaultconfiguser-Copydefaultconfigto/home/fredrikb/.config/actworkers/actworkers.inisystem-Copydefaultconfigto/etc/actworkers.iniYou can see the default options inact/workers/etc/actworkers.ini.The configuration presedence are (from lowest to highest):Defaults (shown in --help for each worker)INI fileEnvironment variableCommand line 
argumentINI-fileArguments are parsed in two phases. First, it will look for the --config argument, which can be used to specify an alternative location for the ini file. If no --config argument is given, it will look for an ini file in the following locations:/etc/<CONFIG_FILE_NAME> ~/.config/<CONFIG_ID>/<CONFIG_FILE_NAME> (or directory specified by $XDG_CONFIG_HOME)The ini file contains a "[DEFAULT]" section that will be used for all workers. In addition there are separate sections for each worker which you can use to configure worker-specific options and override default options.Environment variablesThe configuration step will also look for environment variables in uppercase and with "-" replaced by "_". For example, for the option "cert-file" it will look for the environment variable "$CERT_FILE".RequirementsAll workers require Python >= 3.5 and the act-api library:act-api (act-api on PyPI)In addition some of the libraries might have additional requirements. See requirements.txt for a full list of all requirements.Proxy configurationWorkers will honor the proxy-string option on the command line when connecting to external APIs. However, if you need to use the proxy to connect to the ACT platform (--act-baseurl), you will need to add the "--proxy-platform" switch:echo -n www.mnemonic.no | act-vt --proxy-string <PROXY> --user-id <USER-ID> --act-baseurl <ACT-HOST> --proxy-platformLocal developmentUse pip to install in local development mode. act-workers (and act-api) use namespacing, so it is not compatible with setup.py install or setup.py develop.In the repository, run:pip3 install --user -e .searchA worker to run graph queries is also included. A sample search config is included in etc/search_jobs.ini:act-search-graph etc/search_jobs.iniact-feedact-feed can be used to download feed bundles from a remote URI.
The feed URI must have a file, manifest.json, that lists all bundle files that can be downloaded by the feed worker:{"bundles":{"000bd93b8ce1fa2acfa448fa916083d76e64b008e336d6f54ad89f12f4232dcf.gz":1636355543,"0598e898979bc851d58b440a3ea248fe64d9df4d2ab1665da3be73c6d13df411.gz":1636681082,"0c03f87d142345a1bbea0867c145608e70ab2c0ea8282b5e06833f5f2b0f5f04.gz":1636366321},"updated":1636699081}A local cache will be stored to keep track of the last bundle download, so bundles will not be downloaded multiple times.You can either let act-feed download files to a local directory:act-feed --feed-uri https://act.mnemonic.no/feed --dump-dir [path]output all facts to stdout:act-feed --feed-uri https://act.mnemonic.no/feedor upload facts directly to the platform:act-feed --feed-uri https://act.mnemonic.no/feed --act-baseurl [platformURL] --user-id [USERID]act-fact-chain-helperThe act-fact-chain-helper is a utility that can be used to add fact chains based on known start and end nodes. This can be useful when adding information from reports where you do not have all the information available.exampleWe know from a report that the threat actor APT 1 uses the tool Mimikatz:$act-fact-chain-helper --output-format str --start threatActor/apt1 --end tool/mimikatz --avoid mentions --include content(incident/[placeholder[1194f1ba4ea8ded28250a0193327654e5ee305bce03741a3e6f183f03b5c6b35]])-[attributedTo]->(threatActor/apt1)(event/[placeholder[1194f1ba4ea8ded28250a0193327654e5ee305bce03741a3e6f183f03b5c6b35]])-[attributedTo]->(incident/[placeholder[1194f1ba4ea8ded28250a0193327654e5ee305bce03741a3e6f183f03b5c6b35]])(content/[placeholder[1194f1ba4ea8ded28250a0193327654e5ee305bce03741a3e6f183f03b5c6b35]])-[observedIn]->(event/[placeholder[1194f1ba4ea8ded28250a0193327654e5ee305bce03741a3e6f183f03b5c6b35]])(content/[placeholder[1194f1ba4ea8ded28250a0193327654e5ee305bce03741a3e6f183f03b5c6b35]])-[classifiedAs]->(tool/mimikatz)In this case we need to hint to the tool that it should include the "content" object
and avoid "mention" facts.The source and destination are interchangeable. In the example above, the fact chain created by swapping the source and destination is identical.By adding the --act-baseurl option, the data will be stored in the platform.The tool works by finding the shortest path through the data model. The options --avoid and --include will modify the cost of traversing certain nodes or edges. This tool is considered experimental.act-ta-helperYou can use act-ta-helper to create facts, including placeholders, for typical scenarios where you have some information about TA activity, but not all.This worker should not be used if any of the objects listed as placeholders below are known.--ta THREAT-ACTOR Threat actor name (required)--ta-located-in COUNTRY Country where the threat actor is located--campaign CAMPAIGN Campaign--techniques TECHNIQUES List of techniques (comma separated). Supports technique IDs, e.g. T1002, and names, e.g. "Valid Accounts"--tools TOOLS List of tools (comma separated)--sectors SECTORS List of sectors (comma separated)--target-countries COUNTRIES Target countries (comma separated)You can run the tool with the options above, and add --output-format str to see a suggested list of facts.
You can then add --act-baseurl and --user-id to add the facts to a platform instance.You can add all options on a single command line, like this example:act-ta-helper \ --output-format str \ --ta HAFNIUM \ --sectors pharmaceuticals,education,defense,non-profit \ --tools PsExec,Procdump,7-Zip,Nishang,PowerCat,WinRar,SIMPLESEESHARP,SPORTSBALL,ChinaChopper,ASPXSPY,Covenant \ --target-countries "Denmark,United States of America" \ --campaign "Operation Exchange Marauder" \ --ta-located-in China \ --techniques T1588,T1003,T1190,T1560,T1583,T1071,T1114,T1567,T1136,T1021The following facts/placeholders will be created based on the options (placeholders are marked with [*]):ta-located-inUse this if you know the threat actor name and the country where the TA is located, but the organization is unknown.threatActor -[attributedTo]-> organization[*] -[locatedIn]-> countrycampaignUse this if you know the threat actor and campaign name, but the incident is unknown.threatActor -[attributedTo]-> incident[*] -[attributedTo]-> campaigntechniquesUse this if you know the threat actor and technique, but the event and incident are unknown.technique <-[classifiedAs]- event[*] -[attributedTo]-> incident[*] -[attributedTo]-> threatActortoolsUse this if the tool and threat actor are known, but the incident, content and event are unknown.threatActor <-[attributedTo]- incident[*] <-[observedIn]- content[*] -[classifiedAs]-> toolsectorsUse this if the threat actor and target sector are known, but the organization and incident are unknown.threatActor -[attributedTo]-> incident[*] -[targets]-> organization[*] -[memberOf]-> sectortarget-countriesUse this if the threat actor and target country are known, but the organization and incident are unknown.threatActor -[attributedTo]-> incident[*] -[targets]-> organization[*] -[locatedIn]-> country
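The two output formats shown under "Worker usage" above carry the same information. As an illustration (plain JSON handling, not an act-workers API), a fact emitted in the JSON format can be reformatted into the text representation:

```python
import json

# One fact as printed by `act-country-regions` (JSON output format).
fact_line = (
    '{"type": "memberOf", "value": "", "accessMode": "Public", '
    '"sourceObject": {"type": "country", "value": "Afghanistan"}, '
    '"destinationObject": {"type": "subRegion", "value": "Southern Asia"}, '
    '"bidirectionalBinding": false}'
)

fact = json.loads(fact_line)
src, dst = fact["sourceObject"], fact["destinationObject"]

# Render the same fact the way `--output-format str` does.
text = "({}/{}) -[{}]-> ({}/{})".format(
    src["type"], src["value"], fact["type"], dst["type"], dst["value"])
print(text)  # (country/Afghanistan) -[memberOf]-> (subRegion/Southern Asia)
```

This makes it easy to pipe worker output into your own tooling one JSON fact per line.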
actymath
actymathActuarial formulae and commutation functions for life insurance products (with a fast Pandas backend)Read this firstThis started out as a package to build up the various actuarial formulae using the Pandas backend for speed.The way it works is to create a 'grid' of actuarial calculation vectors in a pandas DataFrame that you can use for a single policy or a single cohort.When you ask for a particular actuarial formula or calculation to be created, it will spawn the columns needed to generate it.Everything is using Pandas in the backend, so you can use any normal Pandas machinery you like.This is very much 'in development'.UsageInstallationInstall using pippip install actymathGetting startedThisgetting started notebookillustrates how to use the package with a simple example.Actuarial formulaThe formula definitions are calledcolumnsin this package as they spawn columns in a pandas DataFrame.These formulae can be explored in theactymath/columns directory.Mortality tablesCurrently only a few old standard mortality tables are implemented, but there is support for 1D and 2D mortality tableshere.New 1D and 2D mortality tables can be loaded in from CSV or pandas DataFrames.ContributingFeel free to contribute or suggest improvements.Add suggested improvements as a GitHub issue on this projectPull requests also welcomed, particularly for any fixes, new tables or useful actuarial formulaeDeveloper setupClone this repository usinggit clone [email protected]:ttamg/actymath.gitDependencies usepoetryso make sure you havepoetry already installedon you development machine.With poetry, you create a new virtual environment for yourself and activate it usingpoetry shellTo install all the dependencies in your new virtual environment, usepoetry installRunning testsWe usepytestfor all testing. Run the test pack usingpytest
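To make the "grid of actuarial calculation vectors in a pandas DataFrame" idea concrete, here is a minimal generic sketch of commutation columns. The interest rate, toy mortality table, and column names are made up for illustration; this is standard actuarial math, not the actymath API:

```python
import numpy as np
import pandas as pd

i = 0.03                       # flat interest rate (made-up assumption)
v = 1.0 / (1.0 + i)            # discount factor
ages = np.arange(60, 66)
qx = np.array([0.010, 0.012, 0.014, 0.017, 0.020, 1.0])  # toy mortality rates

df = pd.DataFrame({"age": ages, "qx": qx})
# l_x: survivors at each age from a radix of 100,000
df["lx"] = 100_000 * np.concatenate([[1.0], np.cumprod(1.0 - qx[:-1])])
df["Dx"] = v ** df["age"] * df["lx"]        # commutation column D_x = v^x * l_x
df["Nx"] = df["Dx"][::-1].cumsum()[::-1]    # N_x = sum of D_t for t >= x
df["ax_due"] = df["Nx"] / df["Dx"]          # annuity-due factor: N_x / D_x
print(df[["age", "Dx", "Nx", "ax_due"]])
```

Each formula spawns its own column in the grid, so later formulas can be built from earlier ones with ordinary vectorized pandas operations.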
actyon
actyonAction with a Y! Why? Causeasyncis in the box.actyonoffers an approach on a multiplexed flux pattern using coroutines (PEP 492).InstallpipinstallactyonDocumentationSeeDocumentationExamplesGithub API(Actyon)Counter(Flux)Traffic Light(State Machine)IdeaAn actyon is defining an isolated execution run.Producers are called onall combinationsof input dependencies.Consumers are called onall results at once.Dependencies are available in any kind of structure.Dependencies are injected according to function signatures.Missing dependencies are ignored, unless no producer can be executed.ImplicationsSynchronization points areStartConclusion of all producersEndProducers are called asynchronously at onceConsumers are called asynchronously at onceTyping is mandatoryCoroutines for producers and consumers are mandatoryPython 3.8+ is requiredNerd SectionGreat, but who needs this?First of all, this is an open source project for everybody to use, distribute, adjust or simply carve out whatever you need. For me it's a case study on dependency injection and coroutines, so don't expect this to be a masterpiece.Are you serious? Python is not Java, we don't need DI.Aside from answer N° 1, I want to make clear I'm not a java developer getting started with python. I love python and its capabilities. So making python even greater and more accessible to the people is the key to me. Maybe DI is a step towards that, maybe it's not. Still, this code may provide other developers with an idea to accomplish exactly that.Gotcha. Why did you decide on this approach?Once you start developing software, you want it to simplify things. That's the whole definition of a software developer by the way: we are lazy by definition. Anyway, this code shows how you can multiplex tasks and sync them on the interface level. Your tasks are executed asynchronously all together, results are gathered and in the end they are being processed further - again, asynchronously all together. 
The decorator functionality allows for the application of the SOLID principle, which is pretty neat:Single-responsibility principleOpen–closed principleLiskov substitution principleInterface segregation principleDependency inversion principleIn this code the bottom two are quite shallow and not really applicable, but let's not get stuck with this. Another key feature of the functional interface is the simplicity. Define an action, use the decorators on whatever functions you have and just execute it. It even got a nice console output when you addhook=actyon.DisplayHook()to theActyon's constructor. Try it out, but caution: parallel actyon execution will break the rendering.
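The execution model described above (producers run concurrently, one synchronization point where all results are gathered, then consumers run concurrently on all results at once) can be sketched with plain asyncio. The names here are illustrative; this is not the actyon API:

```python
import asyncio

# Minimal sketch of the flux pattern: producers -> sync point -> consumers.
results = []

async def producer_a() -> int:
    await asyncio.sleep(0)      # stand-in for real async work
    return 1

async def producer_b() -> int:
    await asyncio.sleep(0)
    return 2

async def consumer(values: list) -> None:
    # Consumers see all producer results at once.
    results.append(sum(values))

async def run_actyon() -> None:
    values = await asyncio.gather(producer_a(), producer_b())  # sync point
    await asyncio.gather(consumer(list(values)))               # all at once

asyncio.run(run_actyon())
print(results)  # [3]
```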
acudpclient
# ACUDPClient

[![Build Status](https://travis-ci.org/joaoubaldo/acudpclient.svg?branch=master)](https://travis-ci.org/joaoubaldo/acudpclient)

ACUDPClient is a Python module that can be used to communicate with an Assetto Corsa dedicated server. Using its UDP protocol, real time telemetry, lap timings and session information are pushed to the client. A few actions, like sending/broadcasting messages, are also available.

## Installation

```bash
$ python setup.py install
```

or

```bash
$ pip install acudpclient
```

(virtualenv is recommended)

## Testing

```bash
$ nosetests
```

### Capturing real data for testing purposes

1. Start the ACServer with UDP active.
2. Capture the data using `tcpdump`:
```bash
$ tcpdump -w /tmp/ac_out.pcap -s0 -i lo -n udp dst port 10000
```
3. Extract all udp payload from the pcap file:
```bash
$ tshark -r /tmp/ac_out.pcap -T fields -e data | tr -d '\n' | perl -pe 's/([0-9a-f]{2})/chr hex $1/gie' > /tmp/ac_out
```
4. `/tmp/ac_out` contains binary data sent by ACServer.

## Usage

The client should be initialized like this:

```python
from acudpclient.client import ACUDPClient

client = ACUDPClient(port=10000, remote_port=10001, host='127.0.0.1')
client.listen()
```

* `remote_port` and `host` are used to send data to the AC server
* `listen()` will bind the server socket to `port`.

Server events can be handled directly or by event subscribers. In both cases, the `get_next_event()` method must be invoked in the application's main loop.

When handling events directly, a call to `get_next_event()` might return `None`, meaning there's no event available at that point (the internal `ACUDPClient` socket is non-blocking).

When creating a subscriber class, specific events can be handled by creating methods with the following naming scheme `on_<event_type>(self, event)`, where `event_type` is any of the types found in the `acudpclient.protocol.ACUDPConst` class (see Usage). Events passed to `on_<event_type>(self, event)` are dictionaries containing different keys depending on the event's type.
Refer to `acudpclient.client.ACUDPClient` to see which keys are available per event type.

## Examples

Handle events directly:

```python
from acudpclient.client import ACUDPClient

client = ACUDPClient(port=10000, remote_port=10001)
client.listen()
client.get_session_info()

while True:
    event = client.get_next_event(call_subscribers=False)
    print(event)
```

Handle events with a subscriber:

```python
from acudpclient.client import ACUDPClient


class ACEventHandler(object):
    def on_ACSP_LAP_COMPLETED(self, event):
        print(event)

    def on_ACSP_NEW_SESSION(self, event):
        print(event)

    def on_ACSP_NEW_CONNECTION(self, event):
        print(event)


handler = ACEventHandler()
client = ACUDPClient(port=10000, remote_port=10001)
client.listen()
client.subscribe(handler)

while True:
    client.get_next_event()
```
acuity
A brief guide to AcuityliteAcuitylite is an end-to-end neural-network deployment tool for embedded systems.Acuitylite support converting caffe/darknet/onnx/tensorflow/tflite models to TIM-VX/TFLite cases. In addition, Acuitylite support asymmetric uint8 and symmetric int8 quantization.System RequirementOS: Ubuntu Linux 20.04 LTS 64-bit (recommend)Python Version: python3.8 (needed)Installpip install acuityliteDocumentReference: https://verisilicon.github.io/acuityliteFramework SupportImporter:Caffe,Darknet,Onnx,Tensorflow,TFLiteExporter:TFLite,TIM-VXTips: You can export a TFLite app and usingtflite-vx-delegateto run on TIM-VX if the exported TIM-VX app does not meet your requirements.How to run TIM-VX caseThe exported TIM-VX case supports both make and cmake.Please set environment for build and run case:TIM_VX_DIR=/path/to/tim-vx/build/installVIVANTE_SDK_DIR=/path/to/tim-vx/prebuilt-sdk/x86_64_linuxLD_LIBRARY_PATH=$TIM_VX_DIR/lib:$VIVANTE_SDK_DIR/libAttention: The TIM_VX_DIR path should include lib and header files of TIM-VX. You can referTIM-VXto build TIM-VX.SupportCreate issue on github or email [email protected]
acuitylite
A brief guide to AcuityliteAcuitylite is an end-to-end neural-network deployment tool for embedded systems.Acuitylite supports converting caffe/darknet/onnx/tensorflow/tflite models to TIM-VX/TFLite cases. In addition, Acuitylite supports asymmetric uint8 and symmetric int8 quantization.Attention: We have introduced some important changes and updated APIs that are not compatible with versions before Acuitylite 6.20.0 (included). Please read the document and demos carefully.System RequirementOS:Ubuntu Linux 20.04 LTS 64-bit (python3.8)Ubuntu Linux 22.04 LTS 64-bit (python3.10)Installpip install acuityliteDocumentReference: https://verisilicon.github.io/acuityliteFramework SupportImporter:Caffe,Darknet,Onnx,Tensorflow,TFLiteExporter:TFLite,TIM-VXTips: You can export a TFLite app and use tflite-vx-delegate to run on TIM-VX if the exported TIM-VX app does not meet your requirements.How to generate nbg and TIM-VX caseWhen you need to generate a TIM-VX case and nbg, please set the export() function's param pack_nbg_unify=True, such as: TimVxExporter(model).export(pack_nbg_unify=True); it will use our default SDK. If you want to use your own SDK and licence, please set the viv_sdk and licence params of export(), such as: TimVxExporter(model).export(pack_nbg_unify=True, viv_sdk=your_sdk_path, licence=path_of_licence_txt)Attention: your SDK directory structure must strictly follow the directory structure of acuitylib/vsi_sdk! Your SDK must satisfy the structure of "your_sdk_path/build/install" and "your_sdk_path/prebuilt-sdk/x86_64_linux", otherwise the path may have problems.
The licence content is the device target which you want to use.How to run TIM-VX caseThe exported TIM-VX case supports both make and cmake.Please set the environment for building and running the case:TIM_VX_DIR=/path/to/tim-vx/build/installVIVANTE_SDK_DIR=/path/to/tim-vx/prebuilt-sdk/x86_64_linuxLD_LIBRARY_PATH=$TIM_VX_DIR/lib:$VIVANTE_SDK_DIR/libAttention: The TIM_VX_DIR path should include the lib and header files of TIM-VX. You can refer to TIM-VX to build TIM-VX.How to generate nbg by OvxlibWhen you need to generate an nbg, please use the OvxlibExporter class and set the export() function's param pack_nbg_only=True, such as: OvxlibExporter(model).export(pack_nbg_only=True); it will use our default SDK. If you want to use your own SDK and licence, please set the "viv_sdk" and "licence" params of the export() function, such as: OvxlibExporter(model).export(pack_nbg_only=True, viv_sdk=your_sdk_path, licence=path_of_licence_txt)Attention: your SDK directory structure must strictly follow the directory structure of acuitylib/vsi_sdk! Your SDK must satisfy the structure of "your_sdk_path/prebuilt-sdk/x86_64_linux", otherwise the path may have problems. The content of licence is the device target which you want to use.SupportCreate issue on github or email [email protected]
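The environment variables listed above for building and running an exported TIM-VX case are ordinary shell exports; as a sketch (the paths are placeholders for your own TIM-VX checkout):

```shell
# Environment for building and running an exported TIM-VX case.
# Replace /path/to/tim-vx with your local TIM-VX checkout.
export TIM_VX_DIR=/path/to/tim-vx/build/install
export VIVANTE_SDK_DIR=/path/to/tim-vx/prebuilt-sdk/x86_64_linux
export LD_LIBRARY_PATH=$TIM_VX_DIR/lib:$VIVANTE_SDK_DIR/lib
```

Note that LD_LIBRARY_PATH is built from the first two variables, so they must be exported first.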
acuitypro
A brief guide to AcuityliteAcuitylite is an end-to-end neural-network deployment tool for embedded systems.Acuitylite support converting caffe/darknet/onnx/tensorflow/tflite models to TIM-VX/TFLite cases. In addition, Acuitylite support asymmetric uint8 and symmetric int8 quantization.System RequirementOS: Ubuntu Linux 20.04 LTS 64-bit (recommend)Python Version: python3.8 (needed)Installpip install acuityliteDocumentReference: https://verisilicon.github.io/acuityliteFramework SupportImporter:Caffe,Darknet,Onnx,Tensorflow,TFLiteExporter:TFLite,TIM-VXTips: You can export a TFLite app and usingtflite-vx-delegateto run on TIM-VX if the exported TIM-VX app does not meet your requirements.How to run TIM-VX caseThe exported TIM-VX case supports both make and cmake.Please set environment for build and run case:TIM_VX_DIR=/path/to/tim-vx/build/installVIVANTE_SDK_DIR=/path/to/tim-vx/prebuilt-sdk/x86_64_linuxLD_LIBRARY_PATH=$TIM_VX_DIR/lib:$VIVANTE_SDK_DIR/libAttention: The TIM_VX_DIR path should include lib and header files of TIM-VX. You can referTIM-VXto build TIM-VX.SupportCreate issue on github or email [email protected]
acumos
Acumos Python Client User Guideacumos is a client library that allows modelers to push their Python models to the Acumos platform.InstallationYou will need a Python 3.6 or 3.7 environment in order to install acumos. Python 3.8 and later can also be used starting with version 0.9.5, though some AI frameworks like TensorFlow were not supported on Python 3.8 and later. You can use Anaconda (preferred) or pyenv to install and manage Python environments.If you're new to Python and need an IDE to start developing, we recommend using Spyder, which can easily be installed with Anaconda.The acumos package can be installed with pip:pip install acumosProtocol BuffersThe acumos package uses protocol buffers and assumes you have the protobuf compiler protoc installed. Please visit the protobuf repository and install the appropriate protoc for your operating system. Installation is as easy as downloading a binary release and adding it to your system $PATH. This is a temporary requirement that will be removed in a future version of acumos.Anaconda Users: You can easily install protoc from an Anaconda package via:conda install -c anaconda libprotobufAcumos Python Client TutorialThis tutorial provides a brief overview of acumos for creating Acumos models. The tutorial is meant to be followed linearly, and some code snippets depend on earlier imports and objects. Full examples are available in the examples/ directory of the Acumos Python client repository.Importing AcumosCreating A SessionA Simple ModelExporting ModelsDefining TypesUsing DataFrames with scikit-learnDeclaring RequirementsDeclaring OptionsKeras and TensorFlowTesting ModelsMore ExamplesImporting AcumosFirst import the modeling and session packages:from acumos.modeling import Model, List, Dict, create_namedtuple, create_dataframefrom acumos.session import AcumosSessionCreating A SessionAn AcumosSession allows you to export your models to Acumos.
You can either dump a model to disk locally, so that you can upload it via the Acumos website, or push the model to Acumos directly.If you’d like to push directly to Acumos, create a session with thepush_apiargument:session=AcumosSession(push_api="https://my.acumos.instance.com/push")See the onboarding page of your Acumos instance website to find the correctpush_apiURL to use.If you’re only interested in dumping a model to disk, arguments aren’t needed:session=AcumosSession()A Simple ModelAny Python function can be used to define an Acumos model usingPython type hints.Let’s first create a simple model that adds two integers together. Acumos needs to know what the inputs and outputs of your functions are. We can use the Python type annotation syntax to specify the function signature.Below we define a functionadd_numberswithinttype parametersxandy, and anintreturn type. We then build an Acumos model with anaddmethod.Note:Functiondocstringsare included with your model and used for documentation, so be sure to include one!defadd_numbers(x:int,y:int)->int:'''Returns the sum of x and y'''returnx+ymodel=Model(add=add_numbers)Exporting ModelsWe can now export our model using theAcumosSessionobject created earlier. Thepushanddump_zipAPIs are shown below. Thedump_zipmethod will save the model to disk so that it can be onboarded via the Acumos website. Thepushmethod pushes the model directly to Acumos.session.push(model,'my-model')session.dump_zip(model,'my-model','~/my-model.zip')# creates ~/my-model.zipFor more information on how to onboard a dumped model via the Acumos website, see theweb onboarding guide.Note:Pushing a model to Acumos will prompt you for an onboarding token if you have not previously provided one. 
The interactive prompt can be avoided by exporting theACUMOS_TOKENenvironment variable, which corresponds to an authentication token that can be found in your account settings on the Acumos website.Defining TypesIn this example, we make a model that can read binary images and output some metadata about them. This model makes use of a custom typeImageShape.We first create aNamedTupletype calledImageShape, which is like an ordinarytuplebut with field accessors. We can then useImageShapeas the return type ofget_shape. Note howImageShapecan be instantiated as a new object.importioimportPILImageShape=create_namedtuple('ImageShape',[('width',int),('height',int)])defget_format(data:bytes)->str:'''Returns the format of an image'''buffer=io.BytesIO(data)img=PIL.Image.open(buffer)returnimg.formatdefget_shape(data:bytes)->ImageShape:'''Returns the width and height of an image'''buffer=io.BytesIO(data)img=PIL.Image.open(buffer)shape=ImageShape(width=img.width,height=img.height)returnshapemodel=Model(get_format=get_format,get_shape=get_shape)Note:Starting in Python 3.6, you can alternatively use this simpler syntax:fromacumos.modelingimportNamedTupleclassImageShape(NamedTuple):'''Type representing the shape of an image'''width:intheight:intDefining Unstructured TypesThecreate_namedtuplefunction allows us to create types with structure, however sometimes it’s useful to work with unstructured data, such as plain text, dictionaries or byte strings. 
The new_type function allows for just that.For example, here's a model that takes in unstructured text, and returns the number of words in the text:from acumos.modeling import new_typeText = new_type(str, 'Text')def count(text: Text) -> int:'''Counts the number of words in the text'''return len(text.split(' '))def create_text(x: int, y: int) -> Text:'''Returns a string containing ints from x to y'''return " ".join(map(str, range(x, y+1)))def reverse_text(text: Text) -> Text:'''Returns the reversed text'''return text[::-1]By using the new_type function, you inform acumos that Text is unstructured, and therefore acumos will not create any structured types or messages for the count function.You can use the new_type function to create dictionary or byte-string unstructured data types as shown below.from acumos.modeling import new_typeDict = new_type(dict, 'Dict')Image = new_type(bytes, 'Image')Using DataFrames with scikit-learnIn this example, we train a RandomForestClassifier using scikit-learn and use it to create an Acumos model.When making machine learning models, it's common to use a dataframe data structure to represent data. To make things easier, acumos can create NamedTuple types directly from pandas.DataFrame objects.NamedTuple types created from pandas.DataFrame objects store columns as named attributes and preserve column order. Because NamedTuple types are like ordinary tuple types, the resulting object can be iterated over. Thus, iterating over a NamedTuple dataframe object is the same as iterating over the columns of a pandas.DataFrame. As a consequence, note how np.column_stack can be used to create a numpy.ndarray from the input df.Finally, the model returns a numpy.ndarray of int corresponding to predicted iris classes.
Theclassify_irisfunction represents this asList[int]in the signature return.importnumpyasnpimportpandasaspdfromsklearn.datasetsimportload_irisfromsklearn.ensembleimportRandomForestClassifieriris=load_iris()X=iris.datay=iris.targetclf=RandomForestClassifier(random_state=0)clf.fit(X,y)# here, an appropriate NamedTuple type is inferred from a pandas DataFrameX_df=pd.DataFrame(X,columns=['sepal_length','sepal_width','petal_length','petal_width'])IrisDataFrame=create_dataframe('IrisDataFrame',X_df)# ==================================================================================# # or equivalently:## IrisDataFrame = create_namedtuple('IrisDataFrame', [('sepal_length', List[float]),# ('sepal_width', List[float]),# ('petal_length', List[float]),# ('petal_width', List[float])])# ==================================================================================defclassify_iris(df:IrisDataFrame)->List[int]:'''Returns an array of iris classifications'''X=np.column_stack(df)returnclf.predict(X)model=Model(classify=classify_iris)Check out thesklearnexamples in the examples directory for full runnable scripts.Declaring RequirementsIf your model depends on another Python script or package that you wrote, you can declare the dependency via theacumos.metadata.Requirementsclass:fromacumos.metadataimportRequirementsNote that only pure Python is supported at this time.Custom ScriptsCustom scripts can be included by givingRequirementsa sequence of paths to Python scripts, or directories containing Python scripts. 
For example, if the model defined inmodel.pydepended onhelper1.py:model_workspace/ ├── model.py ├── helper1.py └── helper2.pythis dependency could be declared like so:fromhelper1importdo_thingdeftransform(x:int)->int:'''Does the thing'''returndo_thing(x)model=Model(transform=transform)reqs=Requirements(scripts=['./helper1.py'])# using the AcumosSession created earlier:session.push(model,'my-model',reqs)session.dump(model,'my-model','~/',reqs)# creates ~/my-modelAlternatively, all Python scripts withinmodel_workspace/could be included using:reqs=Requirements(scripts=['.'])Custom PackagesCustom packages can be included by givingRequirementsa sequence of paths to Python packages, i.e. directories with an__init__.pyfile. Assuming that the package~/repos/my_pkgcontains:my_pkg/ ├── __init__.py ├── bar.py └── foo.pythen you can bundlemy_pkgwith your model like so:frommy_pkg.barimportdo_thingdeftransform(x:int)->int:'''Does the thing'''returndo_thing(x)model=Model(transform=transform)reqs=Requirements(packages=['~/repos/my_pkg'])# using the AcumosSession created earlier:session.push(model,'my-model',reqs)session.dump(model,'my-model','~/',reqs)# creates ~/my-modelRequirement MappingPython packaging andPyPIaren’t perfect, and sometimes the name of the Python package you import in your code is different than the package name used to install it. One example of this is thePILpackage, which is commonly installed usinga fork called pillow(i.e.pip install pillowwill provide thePILpackage).To address this inconsistency, theRequirementsclass allows you to map Python package names to PyPI package names. 
When your model is analyzed for dependencies by acumos, this mapping is used to ensure the correct PyPI packages will be used.In the example below, the req_map parameter is used to declare a requirements mapping from the PIL Python package to the pillow PyPI package:reqs = Requirements(req_map={'PIL': 'pillow'})Declaring OptionsThe acumos.metadata.Options class is a collection of options that users may wish to specify along with their Acumos model. If an Options instance is not provided to AcumosSession.push, then default options are applied. See the class docstring for more details.Below, we demonstrate how options can be used to include additional model metadata and influence the behavior of the Acumos platform. For example, a license can be included with a model via the license parameter, either by providing a license string or a path to a license file. Likewise, we can specify whether or not the Acumos platform should eagerly build the model microservice via the create_microservice parameter. Then, thanks to the deploy parameter, you can specify if you want to deploy this microservice automatically. (Please refer to the appropriate documentation on the Acumos wiki to use this functionality based on an external Jenkins server.) If create_microservice=True, deploy can be True or False. But if create_microservice=False, deploy must also be set to False; if not, create_microservice will be forced to True to create the microservice and deploy it.from acumos.metadata import Optionsopts = Options(license="Apache 2.0",# "./path/to/license_file" also workscreate_microservice=True,# Build the microservice just after the on-boardingdeploy=True)# Deploy the microservice based on an external Jenkins serversession.push(model, 'my-model', options=opts)Keras and TensorFlowCheck out the Keras and TensorFlow examples in the examples/ directory of the Acumos Python client repository.Testing ModelsThe acumos.modeling.Model class wraps your custom functions and produces corresponding input and output types.
This section shows how to access those types for the purpose of testing. For simplicity, we’ll create a model using theadd_numbersfunction again:defadd_numbers(x:int,y:int)->int:'''Returns the sum of x and y'''returnx+ymodel=Model(add=add_numbers)Themodelobject now has anaddattribute, which acts as a wrapper aroundadd_numbers. Theadd_numbersfunction can be invoked like so:result=model.add.inner(1,2)print(result)# 3Themodel.addobject also has a correspondingwrappedfunction that is generated byacumos.modeling.Model. The wrapped function is the primary way your model will be used within Acumos.We can access theinput_typeandoutput_typeattributes to test that the function works as expected:AddIn=model.add.input_typeAddOut=model.add.output_typeadd_in=AddIn(1,2)print(add_in)# AddIn(x=1, y=2)add_out=AddOut(3)print(add_out)# AddOut(value=3)model.add.wrapped(add_in)==add_out# TrueMore ExamplesBelow are some additional function examples. Note hownumpytypes can even be used in type hints, as shown in thenumpy_sumfunction.fromcollectionsimportCounterimportnumpyasnpdeflist_sum(x:List[int])->int:'''Computes the sum of a sequence of integers'''returnsum(x)defnumpy_sum(x:List[np.int32])->np.int32:'''Uses numpy to compute a vectorized sum over x'''returnnp.sum(x)defcount_strings(x:List[str])->Dict[str,int]:'''Returns a count mapping from a sequence of strings'''returnCounter(x)Acumos Python Client Release Notesv1.0.1, 27 April 2021use acumos-python-client > 0.8.0 with Acumos clioACUMOS-4330v1.0.0, 13 April 2021Fix Type issue with python 3.9ACUMOS-4323v0.9.9, 13 April 2021Take into account “deploy” parameter in acumos python clientACUMOS-4303v0.9.8, 06 November 2020Return docker URI & added an optional flag to replace and existing model when dumpingACUMOS-4298The model bundle can now be dumped directly as a zip fileACUMOS-4273Allow installation on python 3.9ACUMOS-4123v0.9.7, 27 August 2020Add support of python 3.7 & 3.8ACUMOS-4123Display acumos logo on githubACUMOS-4094v0.9.4, 05 
April 2020
- Give image tag URL from the Python client (ACUMOS-3961)

v0.9.3, 30 Mar 2020
- Modify the unstructured type section on PyPI (ACUMOS-3956)
- Raise an error when using an asymmetric type (ACUMOS-3956)

v0.9.2, 31 Jan 2020
- Remove support for Python 3.5 (Gerrit-6275)

v0.9.1
- Add raw format support (ACUMOS-2712)
- Publish content type for the long description (Gerrit-5504)

v0.8.0 (This is the recommended version for the Clio release)
Enhancements
- Users may now specify additional options when pushing their Acumos model. See the options section in the tutorial for more information.
- acumos now supports Keras models built with tensorflow.keras
Support changes
- acumos no longer supports Python 3.4

v0.7.2
Bug fixes
- The deprecated authentication API is now considered optional
- A more portable path solution is now used when saving models, to avoid issues with models developed in Windows

v0.7.1
Authentication
- Username and password authentication has been deprecated
- Users are now interactively prompted for an onboarding token, as opposed to a username and password

v0.7.0
Requirements
- Python script dependencies can now be specified using a Requirements object
- Python script dependencies found during the introspection stage are now included with the model

v0.6.5
Bug fixes
- Don't attempt to use an empty auth token (avoids blank strings being set in the environment)

v0.6.4
Bug fixes
- The normalized path of the system base prefix is now used for identifying stdlib packages

v0.6.3
Bug fixes
- Improved dependency inspection when using a virtualenv
- Removed custom packages from model metadata, as it caused image build failures
- Fixed a Python 3.5.2 ordering bug in wrapped model usage

v0.6.2
TensorFlow
- Fixed a serialization issue that occurred when using a frozen graph

v0.6.1
Model upload
- The JWT is now cleared immediately after a failed upload
- Additional HTTP information is now included in the error message

v0.6.0
Authentication token
- A new environment variable ACUMOS_TOKEN can be used to short-circuit the authentication process
Extra headers
- AcumosSession.push now accepts an
optional extra_headers argument, which will allow users and systems to include additional information when pushing models to the onboarding server

v0.5.0
Modeling
- Python 3.6 NamedTuple syntax support now tested
- User documentation includes an example of the new NamedTuple syntax
Model wrapper
- The model wrapper now has APIs for consuming and producing Python dicts and JSON strings
Protobuf and protoc
- An explicit check for protoc is now made, which raises a more informative error message
- User documentation is more clear about the dependence on protoc, and provides an easier way to install protoc via Anaconda
Keras
- The active Keras backend is now included as a tracked module
- keras_contrib layers are now supported

v0.4.0
- Replaced library-specific onboarding functions with "new-style" models
- Support for arbitrary Python functions using type hints
- Support for custom user-defined types
- Support for TensorFlow models
- Improved dependency introspection
- Improved object serialization mechanisms

Acumos Python Client Developer Guide

Testing

We use a combination of tox, pytest, and flake8 to test acumos. Code which is not PEP8 compliant (aside from E501) will be considered a failing test. You can use tools like autopep8 to "clean" your code as follows:

$ pip install autopep8
$ cd acumos-python-client
$ autopep8 -r --in-place --ignore E501 acumos/ testing/ examples/

Run tox directly:

$ cd acumos-python-client
$ export WORKSPACE=$(pwd)  # env var normally provided by Jenkins
$ tox

You can also specify certain tox environments to test:

$ tox -e py36    # only test against Python 3.6
$ tox -e flake8  # only lint code

A set of integration tests is also available in acumos-package/testing/integration_tests. To run those, use acumos-package/testing/tox-integration.ini as the tox config (-c flag); onboarding tests will be run with Python 3.6 to 3.9.
You will need to set your user credentials and platform configuration in tox-integration.ini.

$ tox -c acumos-package/testing/integration_tests

Packaging

The RST files in the docs/ directory are used to publish HTML pages to ReadTheDocs.io and to build the package long description in setup.py. The symlink from the subdirectory acumos-package to the docs/ directory is required for the Python packaging tools. Those tools build a source distribution from files in the package root, the directory acumos-package. The MANIFEST.in file directs the tools to pull files from directory docs/, and the symlink makes it possible because the tools only look within the package root.
acumos-dcae-model-runner
Acumos DCAE Model Runner

The Acumos DCAE model runner enables Acumos Python models to be run as if they were DCAE components.

Each Acumos model method is mapped to a subscriber and publisher stream, with _subscriber and _publisher suffixes respectively. For example, a model with a transform method would have transform_subscriber and transform_publisher streams.

The model runner implements DCAE APIs such as health checks and configuration updates.

The acumos_dcae_model_runner Python package provides a command line utility that can be used to instantiate the model runner. See the tutorial for more information.

The acumos_dcae_model_runner package should be installed in the docker image that is ultimately on-boarded into DCAE. The model runner CLI utility should be the entry point of that Docker image, as shown in the Dockerfile provided in the example/ directory in the root of the Acumos DCAE Model Runner repository.

Installation

The acumos_dcae_model_runner package can be installed with pip like so:

pip install acumos_dcae_model_runner

Note: installing acumos_dcae_model_runner will also install the latest version of dcaeapplib, which is only compatible with DCAE Dublin or later. To use acumos_dcae_model_runner with earlier versions of DCAE, be sure to pin or bound the version of dcaeapplib appropriately.
Consult the DCAE documentation for more information.

Tutorial

CLI Usage

To execute the model runner, use the provided CLI:

$ acumos_dcae_model_runner --help
usage: acumos_dcae_model_runner [-h] [--timeout TIMEOUT] [--debug] model_dir

positional arguments:
  model_dir          Directory that contains either the dumped model.zip or its unzipped contents.

optional arguments:
  -h, --help         show this help message and exit
  --timeout TIMEOUT  Timeout (ms) used when fetching.
  --debug            Sets the log level to DEBUG

DCAE Onboarding Example

The python-dcae-model-runner repository has an example/ directory that shows how an Acumos model can be onboarded as a DCAE component.

After executing the steps below, the directory should have this structure:

example/
├── Dockerfile
├── dcae-artifacts
│   ├── component.json
│   ├── number-out.json
│   └── numbers-in.json
├── example-model
│   ├── metadata.json
│   ├── model.proto
│   └── model.zip
├── example_model.py
└── requirements.txt

Note: For this example, the requirements.txt file should reflect the packages and versions listed in example-model/metadata.json.

Steps

1) Create the Acumos model

The example_model.py script defines a simple Acumos model that can add two integers together. The following will generate example-model/:

python example_model.py

2) Build the docker image

docker build -t acumos-python-model-test:0.1.0 .

3) Onboard the Acumos model to DCAE

The onboarding procedure involves adding the component and data format artifacts provided in example/dcae-artifacts to the DCAE catalog.

Refer to the official DCAE onboarding documentation for the full procedure.

Acumos DCAE Model Runner Release Notes

v0.1.3
- Updated major release bound for dcaeapplib

v0.1.2
- Removed dependency link for dcaeapplib

v0.1.1
- Updated dependency link for dcaeapplib. It released a patch that fixed an authentication error. The dcaeapplib dependency link will be removed once dcaeapplib is hosted in PyPI.

v0.1.0
- Initial release of the Acumos DCAE Python model runner

Contributing Guidelines

Testing

We use a combination of tox, pytest, and flake8 to test acumos.
Code which is not PEP8 compliant (aside from E501) will be considered a failing test. You can use tools like autopep8 to "clean" your code as follows:

$ pip install autopep8
$ cd python-dcae-model-runner
$ autopep8 -r --in-place --ignore E501 acumos_dcae_model_runner/

Run tox directly:

$ cd python-dcae-model-runner
$ export WORKSPACE=$(pwd)  # env var normally provided by Jenkins
$ tox

You can also specify certain tox environments to test:

$ tox -e py34    # only test against Python 3.4
$ tox -e flake8  # only lint code
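The method-to-stream naming convention described at the top of this section is simple to state in code; this is an illustrative helper, not part of the package API:

```python
def stream_names(method_name: str) -> tuple:
    """Map an Acumos model method name to its DCAE subscriber/publisher stream names."""
    return f"{method_name}_subscriber", f"{method_name}_publisher"

print(stream_names("transform"))  # ('transform_subscriber', 'transform_publisher')
```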
acumos-model-runner
Acumos Python Model Runner User Guide

The acumos_model_runner package installs a command line tool acumos_model_runner for running models created by the Acumos Python client library.

The model runner provides an HTTP API for invoking model methods, as well as a Swagger UI for documentation. See the tutorial for more information on usage.

Installation

You will need a Python 3.4+ environment in order to install acumos_model_runner. You can use Anaconda (preferred) or pyenv to install and manage Python environments.

The acumos_model_runner package can be installed with pip:

$ pip install acumos_model_runner

Command Line Usage

usage: acumos_model_runner [-h] [--host HOST] [--port PORT]
                           [--workers WORKERS] [--timeout TIMEOUT]
                           [--cors CORS]
                           model_dir

positional arguments:
  model_dir          Directory containing a dumped Acumos Python model

optional arguments:
  -h, --help         show this help message and exit
  --host HOST        The interface to bind to
  --port PORT        The port to bind to
  --workers WORKERS  The number of gunicorn workers to spawn
  --timeout TIMEOUT  Time to wait (seconds) before a frozen worker is restarted
  --cors CORS        Enables CORS if provided. Can be a domain, comma-separated list of domains, or *

Acumos Python Model Runner Tutorial

This tutorial demonstrates how to use the Acumos Python model runner with an example model.

Creating A Model

An Acumos model must first be defined using the Acumos Python client library.
For illustrative purposes, a simple model with deterministic methods is defined below.

# example_model.py
from collections import Counter

from acumos.session import AcumosSession
from acumos.modeling import Model, List, Dict

def add(x: int, y: int) -> int:
    '''Adds two numbers'''
    return x + y

def count(strings: List[str]) -> Dict[str, int]:
    '''Counts the occurrences of words in `strings`'''
    return Counter(strings)

model = Model(add=add, count=count)

session = AcumosSession()
session.dump(model, 'example-model', '.')

Executing example_model.py results in the following directory:

.
├── example_model.py
└── example-model

Running A Model

Now the acumos_model_runner command line tool can be used to run the saved model.

$ acumos_model_runner example-model/
[2018-08-08 12:16:57 -0400] [61113] [INFO] Starting gunicorn 19.9.0
[2018-08-08 12:16:57 -0400] [61113] [INFO] Listening at: http://0.0.0.0:3330 (61113)
[2018-08-08 12:16:57 -0400] [61113] [INFO] Using worker: sync
[2018-08-08 12:16:57 -0400] [61151] [INFO] Booting worker with pid: 61151

Using A Model

The model HTTP API can be explored via its generated Swagger UI. The Swagger UI of example-model above can be accessed by navigating to http://localhost:3330 in your web browser.

Below are some screenshots of the Swagger UI for example-model.

Model APIs

The Swagger UI enumerates model method APIs, as well as APIs for accessing model artifacts.
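Because the generated input types are NamedTuples, a JSON request body for a method endpoint maps directly onto the type's fields. A minimal offline sketch of building such a payload; AddIn here is a hypothetical stand-in for the generated input type of the add method, and the exact endpoint path should be taken from the generated Swagger UI:

```python
import json
from typing import NamedTuple

# Hypothetical stand-in for the generated input type of the `add` method
class AddIn(NamedTuple):
    x: int
    y: int

payload = json.dumps(AddIn(1, 2)._asdict())
print(payload)  # {"x": 1, "y": 2}
```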
Below, the APIs corresponding to the add and count methods are listed under the methods tag.

Count Method API

Expanding the documentation for the count method reveals more information on how to invoke the API.

Count Method Request

The Swagger UI provides an input form that can be used to try out the count API with sample data.

Count Method Response

The response from the count API shows that everything is working as expected!

Acumos Python Model Runner Release Notes

v0.2.25, 04 June 2020
- Fix backward compatibility issue with old models (ACUMOS-4164)

v0.2.24, 15 May 2020
- Fix OpenAPI spec generation for empty inputs (ACUMOS-4010)
- Allow the model runner to use raw data types (ACUMOS-3956)
- Receive the licence profile from the running micro-service (ACUMOS-3161)

v0.2.3, 23 January 2020
- Pin lark-parser<0.8.0 to prevent an error
- Fixed an issue with the 0.6.0 model metadata schema; works with model metadata versions <0.6.0 and 0.6.0
- Removed Python 3.4 support

v0.2.2
- Fixed a 404 bug for model artifact resources caused by a relative model directory
- Fixed incorrect media type for the protobuf resource

v0.2.1
- Upgraded Swagger UI from v2 to v3

v0.2.0
- Overhaul of the model runner API
- Added support for application/json via Content-Type and Accept headers
- Added automatic generation of the OpenAPI Specification and Swagger UI
- Added support for CORS

v0.1.0
- Model runner implementation split off from the Acumos Python client project

Acumos Python Model Runner Developer Guide

Testing

We use a combination of tox, pytest, and flake8 to test acumos_model_runner. Code which is not PEP8 compliant (aside from E501) will be considered a failing test.
You can use tools like autopep8 to "clean" your code as follows:

$ pip install autopep8
$ cd python-model-runner
$ autopep8 -r --in-place --ignore E501 acumos_model_runner/ testing/ examples/

Run tox directly:

$ cd python-model-runner
$ tox

You can also specify certain tox environments to test:

$ tox -e py34    # only test against Python 3.4
$ tox -e flake8  # only lint code

And finally, you can run pytest directly in your environment (recommended starting place):

$ pytest
$ pytest -s  # verbose output
acunetix
Acunetix Web Vulnerability Scanner API wrapper
acurl
Acurl

Acurl is an asynchronous wrapper around libcurl which is built to interface with the uvloop Python library.

Using Acurl In Mite

The gateway into Acurl is the CurlWrapper (discussed in Architectural Notes), which requires an event loop to be passed to its constructor. Below is the mite implementation of acurl:

class SessionPool:
    ...
    def __init__(self):
        import acurl
        self._wrapper = acurl.CurlWrapper(asyncio.get_event_loop())
    ...

Architectural Notes

Acurl uses a single loop maintained within Python using uvloop.

Acurl surfaces the CurlWrapper interface, which takes the asyncio event loop as an argument. The wrapper deals directly with the curl_multi interface from libcurl, defining two functions (curl_perform_write and curl_perform_read) for checking both read and write availability of file descriptors.

There are two notable functions within the core Acurl implementation, handle_socket and start_timer:

- handle_socket is passed as a callback function to the curl_multi interface and, upon calls to the curl_multi_socket_action function, receives updates regarding the socket status. We then handle those statuses by either adding or removing the aforementioned readers or writers.
- start_timer is another callback function that is passed to the curl_multi interface and is used as a way to handle timeouts and retries within curl. Upon a timeout, the timeout callback will be called and the transfer can be retried.
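The handle_socket mechanics can be illustrated with a standalone asyncio sketch. This is not acurl's actual code: the POLL_* constants stand in for libcurl's CURL_POLL_* action flags, and a socketpair substitutes for a curl transfer socket:

```python
import asyncio
import socket

# Stand-ins for libcurl's CURL_POLL_* action flags
POLL_IN, POLL_OUT, POLL_REMOVE = 1, 2, 4

def handle_socket(loop, fd, action, on_readable=None, on_writable=None):
    """Keep the asyncio loop's fd watchers in sync with what curl requests."""
    loop.remove_reader(fd)
    loop.remove_writer(fd)
    if action & POLL_IN:
        loop.add_reader(fd, on_readable, fd)
    if action & POLL_OUT:
        loop.add_writer(fd, on_writable, fd)

async def demo():
    loop = asyncio.get_running_loop()
    r, w = socket.socketpair()
    readable = asyncio.Event()
    handle_socket(loop, r.fileno(), POLL_IN, on_readable=lambda fd: readable.set())
    w.send(b"x")  # makes r readable, firing the callback
    await asyncio.wait_for(readable.wait(), timeout=1)
    handle_socket(loop, r.fileno(), POLL_REMOVE)  # unregister the watcher
    r.close()
    w.close()
    return True

print(asyncio.run(demo()))  # True
```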
acurl-ng
Acurl_NG (Next Generation)

Acurl_NG is a Cython refactoring/rewrite of an earlier iteration of the asynchronous curl wrapper concept, which was written in C for the mite project by Tony ([email protected]). It is an asynchronous wrapper around libcurl which is built to interface with the uvloop Python library.

Using Acurl_NG In Mite

The current implementation of Acurl_NG is behind a feature toggle which defaults to using the old implementation of acurl. To switch over to the new version of Acurl_NG within mite, set the flag MITE_CONF_enable_new_acurl_implementation="True".

The gateway into Acurl_NG is the CurlWrapper (discussed in Architectural Notes), which requires an event loop to be passed to its constructor. Below is the mite implementation of acurl, using the aforementioned flag to switch between versions of acurl:

class SessionPool:
    ...
    def __init__(self, use_new_acurl_implementation=False):
        if use_new_acurl_implementation:
            import acurl_ng
            self._wrapper = acurl_ng.CurlWrapper(asyncio.get_event_loop())
        else:
            import acurl
            self._wrapper = acurl.EventLoop()
    ...

Architectural Notes

In the old implementation of acurl there were two loops in play: uvloop in Python and a second loop called AE. This has now been reduced to a single loop maintained within Python using uvloop.

Acurl_NG surfaces the CurlWrapper interface, which takes the asyncio event loop as an argument. The wrapper deals directly with the curl_multi interface from libcurl, defining two functions (curl_perform_write and curl_perform_read) for checking both read and write availability of file descriptors.

There are two notable functions within the core Acurl_NG implementation, handle_socket and start_timer:

handle_socket is passed as a callback function to the curl_multi interface and, upon calls to the curl_multi_socket_action function, receives updates regarding the socket status.
We then handle those statuses by either adding or removing the aforementioned readers or writers.

start_timer is another callback function that is passed to the curl_multi interface and is used as a way to handle timeouts and retries within curl. Upon a timeout, the timeout callback will be called and the transfer can be retried.
acute-dbapi
Welcome to the home page for acute-dbapi, a DB-API compliance test suite. Acute is still in its infancy, but it has reached the level of maturity where it would benefit from community input. It currently contains 71 tests, and many more will be added soon.

Comments, suggestions, and patches are all warmly welcome. There are several TODOs listed in the [TODO] file, and many more generously sprinkled throughout the code; if you'd like to help out but don't know where to begin, feel free to take a crack at one of them!

Please read the project's [README] for an introduction to the suite. You'll also find usage, architecture, and project philosophy information there.

If you just want to see the results, take a look at TestResults and DriverFeatures on the project wiki.
ac-utils
No description available on PyPI.
acutils-python
acutils

Python library providing a robust pipeline for data processing tasks.

The acutils library is designed to facilitate data processing tasks, especially for individuals dealing with custom data preprocessing before building machine learning algorithms. It has been used in various domains, including pathology image processing, custom segmentation, and frame extraction from videos.

HERE ARE THE ONLINE DOCUMENTATION AND THE DOCUMENTATION FILES.

Key features

- You only need to code one function for a custom treatment, nothing else.
- Easy random distribution/split of the data into datasets.
- Remember and reproduce your distribution/split by saving it to JSON files.
- Made for multiprocessing, and facilitates GPU usage for computation.
- Works with any kind of data files.
- If some files are related (for example: two medical images from the same patient), you can define groups to ensure that those are in the same dataset (to avoid biases).

Brief example

import acutils as au

# Define the handler and link it to a source directory
handler = au.handler.DataHandler('./data', allowed_cpus=2)

# Load filenames and labels
handler.load_data_fromdatapath()
handler.load_labels_fromsheet('./labels.xlsx', idcol="id", labelcol="label")

# Even load relations between files through groups (optional for split)
handler.load_groups_fromsheet(os.path.join(DIR, './labels.xlsx'), idcol="id", groupcol="patient")

# Randomly split into datasets (dict<filename,label>) and balance them
tdata, vdata = handler.split(train_percentage=0.70)  # uses groups if defined
bal_tdata, bal_vdata = handler.balance_datasets(tdata, vdata)

# TODO
def your_custom_treatment(self, src, dstdir, arg1):
    au.file.tmnt_copyfile_to_dir(src, dstdir)  # example

# Process the data using your custom function and save the datasets:
handler.make_datasets('./train_bal', './val_bal', bal_tdata, bal_vdata,
                      func=your_custom_treatment,  # custom function
                      arg1="very useful argument")  # its arguments

Installation

It should work using any OS, but for now, we only tested using Ubuntu 22.04.

It works using any Python version >= 3.8 (maybe
lower versions too, but not tested yet).

From pip

pip install acutils-python

From this repository (still pip though :)

pip install --upgrade build
python3 -m build
pip install dist/acutils_python-0.1.1-py3-none-any.whl

Additional requirements

- Pillow, scikit-image, pooch and openslide-python: pathology module.
- opencv-python: image, pathology and video modules.

pip install opencv-python Pillow scikit-image pooch openslide-python

Finally, the pathology module requires you to install OpenSlide; openslide-python is just a mapping of it. You may need to reinstall openslide-python after installing OpenSlide, but it should not be necessary.

GPU computation

For now, this is only used in the pathology module (because the process takes a while).

- cupy: numpy but using the GPU.
- cucim: includes cucim.skimage, which is an older version of skimage using the GPU.

pip install cupy cucim

Make your CUDA install locatable from cupy:

export LD_LIBRARY_PATH=/path/to/cudnn/lib:$LD_LIBRARY_PATH

Choose at least one device (if multiple, the first is taken):

export CUDA_VISIBLE_DEVICES=0

Modules

Use the relevant modules from acutils based on your data processing needs:

- handler: high-level classes to handle data processing.
- file: about directories and files.
- gpu: GPU computation (for now, only used in the pathology module).
- image: computer vision tasks on images.
- multiprocess: multiprocessing and process management.
- pathology: pathology data processing for segmentation and tiling.
- sheet: handling pandas DataFrames.
- video: computer vision tasks on videos.

Refer to the online documentation and code examples for detailed usage instructions.

TBD

- Provide more code examples.
- Define a test pipeline to check if all features work.
- Ensure that acutils works on multiple OS and Python versions.

License

Apache-2.0, see the LICENSE.

Contributing

This library is maintained by Acuzle's development team, led by @ThomasPDM.

We welcome and appreciate contributions from the community to enhance acutils. If you have ideas, bug fixes, or new features that can benefit others, we encourage you to contribute to the project.
Just fork the project, create a new branch, do whatever you want and create a pull request.
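The split-and-remember workflow from the feature list above can be sketched generically. This is plain Python with a hypothetical helper, not the acutils API: a seeded random split of a {filename: label} dict, persisted to JSON so the same distribution can be reproduced later:

```python
import json
import random

def split_dataset(labels, train_percentage=0.7, seed=0):
    """Randomly split a {filename: label} dict into train/val dicts, reproducibly."""
    names = sorted(labels)
    random.Random(seed).shuffle(names)  # seeded, so the split is repeatable
    cut = int(len(names) * train_percentage)
    train = {n: labels[n] for n in names[:cut]}
    val = {n: labels[n] for n in names[cut:]}
    return train, val

labels = {f"img_{i}.png": i % 2 for i in range(10)}
tdata, vdata = split_dataset(labels)

# Persist the split so it can be reproduced later
with open("split.json", "w") as f:
    json.dump({"train": tdata, "val": vdata}, f)
```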
a-cv2-calculate-difference
Calculates the difference between 2 images

from a_cv2_calculate_difference import check_before_after

rect = check_before_after(
    "https://github.com/hansalemaos/screenshots/raw/main/colorfind3.png",
    r"https://github.com/hansalemaos/screenshots/raw/main/colorfind1.png",
    show_results=False,
    return_image=True,
    color=(255, 0, 0),
)
print(rect[0])
# [(93, 150, 39, 18), (100, 137, 26, 13), (150, 117, 15, 15), (100, 100, 50, 32)]
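The core of such a before/after comparison is an absolute pixel difference plus bounding boxes of the changed regions. A minimal numpy-only sketch of that idea (illustrative, not the library's implementation):

```python
import numpy as np

def diff_bbox(before: np.ndarray, after: np.ndarray):
    """Return the bounding box (x, y, w, h) of pixels that changed, or None."""
    changed = np.any(before != after, axis=-1) if before.ndim == 3 else before != after
    ys, xs = np.nonzero(changed)
    if len(xs) == 0:
        return None  # images are identical
    x, y = xs.min(), ys.min()
    return int(x), int(y), int(xs.max() - x + 1), int(ys.max() - y + 1)

a = np.zeros((10, 10, 3), dtype=np.uint8)
b = a.copy()
b[2:5, 3:7] = 255  # paint a changed region
print(diff_bbox(a, b))  # (3, 2, 4, 3)
```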
a-cv2-calculate-simlilarity
Calculate the simlilarity of 2 or more pictures with OpenCVTested against Windows 10 / Python 3.11 / Anacondapip install a-cv2-calculate_simlilarityfroma_cv2_calculate_simlilarityimportadd_similarity_to_cv2add_similarity_to_cv2()#monkeypatch#if you don't want to use a monkey patch:#from a_cv2_calculate_simlilarity import calculate_simlilaritycalculate_simlilarity(im1:Any,im2:Any,width=100,height=100,interpolation=cv2.INTER_LINEAR,with_alpha=False,)->tuple:r"""Calculate structural similarity between two images.This function computes the structural similarity index (SSIM) between two images,which measures the similarity of their structural patterns. The SSIM values rangefrom -1 to 1, where a higher value indicates greater similarity.Parameters:im1: AnyImage 1, which can be provided as a URL, file path, base64 string, numpy array,or PIL image.im2: AnyImage 2, which can be provided as a URL, file path, base64 string, numpy array,or PIL image.width: int, optionalWidth of the resized images for comparison (default is 100).height: int, optionalHeight of the resized images for comparison (default is 100).interpolation: int, optionalInterpolation method for resizing (default is cv2.INTER_LINEAR).with_alpha: bool, optionalWhether to include alpha channel if present (default is False).Returns:tupleA tuple containing four SSIM values in the order (B, G, R, Alpha).Example:resa = calculate_simlilarity(r"https://avatars.githubusercontent.com/u/77182807?v=4",r"https://avatars.githubusercontent.com/u/77182807?v=4",width=100,height=100,interpolation=cv2.INTER_LINEAR,with_alpha=False,)print(resa)resa2 = calculate_simlilarity(r"https://avatars.githubusercontent.com/u/77182807?v=4",r"https://avatars.githubusercontent.com/u/1024025?v=4",width=100,height=100,interpolation=cv2.INTER_LINEAR,with_alpha=False,)print(resa2)resa2 = calculate_simlilarity(r"C:\Users\hansc\Downloads\1633513733_526_Roblox-Royale-High.jpg",r"C:\Users\hansc\Downloads\Roblox-Royale-High-Bobbing-For-Apples 
(1).jpg",width=100,height=100,interpolation=cv2.INTER_LINEAR,with_alpha=False,)print(resa2)resa2 = calculate_simlilarity(r"C:\Users\hansc\Documents\test1.png",r"C:\Users\hansc\Documents\test2.png",width=100,height=100,interpolation=cv2.INTER_LINEAR,with_alpha=False,)print(resa2)compare_all_images_with_all_images(imagelist,width=100,height=100,interpolation=cv2.INTER_LINEAR,with_alpha=False,delete_cache=True,):r"""Comparealistofimageswitheachotherandreturnasimilaritymatrix.Thisfunctioncomparesalistofimageswitheachotherusingthe`calculate_simlilarity`functionandreturnsasimilaritymatrixasapandasDataFrame.Eachelementinthematrixrepresentsthesimilaritybetweentwoimages.Parameters:imagelist:listListofimagestocompare.EachimagecanbeprovidedasaURL,filepath,base64string,numpyarray,orPILimage.width:int,optionalWidthoftheresizedimagesforcomparison(defaultis100).height:int,optionalHeightoftheresizedimagesforcomparison(defaultis100).interpolation:int,optionalInterpolationmethodforresizing(defaultiscv2.INTER_LINEAR).with_alpha:bool,optionalWhethertoincludealphachannelifpresent(defaultisFalse).delete_cache:bool,optionalWhethertoclearthecacheofpreprocessedimages(defaultisTrue).Returns:pandas.DataFrameADataFramerepresentingthesimilaritymatrixbetweentheimages.Example:add_similarity_to_cv2()df=cv2.calculate_simlilarity_of_all_pics([r"C:\Users\hansc\Downloads\testcompare\10462.7191107_0.png",r"C:\Users\hansc\Downloads\testcompare\10462.7191107_1.png",r"C:\Users\hansc\Downloads\testcompare\10462.7213836_0.png",r"C:\Users\hansc\Downloads\testcompare\10462.7213836_1.png",r"C:\Users\hansc\Downloads\testcompare\10462.7253843_0.png",r"C:\Users\hansc\Downloads\testcompare\10462.7253843_1.png",r"C:\Users\hansc\Downloads\testcompare\10462.7274426_0.png",r"C:\Users\hansc\Downloads\testcompare\10462.7274426_1.png",r"C:\Users\hansc\Downloads\testcompare\10462.7286225_0.png",r"C:\Users\hansc\Downloads\testcompare\10462.7286225_1.png",r"C:\Users\hansc\Downloads\testcompare\10462.7301136_0.png",r"C:\User
s\hansc\Downloads\testcompare\10462.7301136_1.png",r"C:\Users\hansc\Downloads\testcompare\10462.7312635_0.png",r"C:\Users\hansc\Downloads\testcompare\10462.7312635_1.png",r"C:\Users\hansc\Downloads\testcompare\10462.7325586_0.png",],width=100,height=100,interpolation=cv2.INTER_LINEAR,with_alpha=False,)
a-cv2-easy-resize
Different ways of resizing pictures in OpenCV

pip install a-cv2-easy-resize

from a_cv2_easy_resize import add_easy_resize_to_cv2
from a_cv_imwrite_imread_plus import add_imwrite_plus_imread_plus_to_cv2
import cv2

add_imwrite_plus_imread_plus_to_cv2()
add_easy_resize_to_cv2()

pic = cv2.imread_plus(r"https://raw.githubusercontent.com/hansalemaos/screenshots/main/splitted1.jpeg")

pic1 = cv2.easy_resize_image(pic.copy(), width=200, height=None, percent=None, interpolation=cv2.INTER_AREA)
pic2 = cv2.easy_resize_image(pic.copy(), width=None, height=200, percent=None, interpolation=cv2.INTER_AREA)
pic3 = cv2.easy_resize_image(pic.copy(), width=100, height=200, percent=None, interpolation=cv2.INTER_AREA)
pic4 = cv2.easy_resize_image(pic.copy(), width=None, height=None, percent=40, interpolation=cv2.INTER_AREA)
pic5 = cv2.easy_resize_image(pic.copy(), width=None, height=None, percent=None, interpolation=cv2.INTER_AREA)  # returns the original

cv2.imwrite('f:\\papagei\\pic1.png', pic1)
cv2.imwrite('f:\\papagei\\pic2.png', pic2)
cv2.imwrite('f:\\papagei\\pic3.png', pic3)
cv2.imwrite('f:\\papagei\\pic4.png', pic4)
cv2.imwrite('f:\\papagei\\pic5.png', pic5)
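The width/height/percent arithmetic such a helper performs can be sketched in a few lines. This is an illustrative stand-alone function (an assumption about the behavior, not the library's code): when only one target dimension is given, the other is derived to keep the aspect ratio.

```python
def scaled_dims(w, h, width=None, height=None, percent=None):
    """Compute target (width, height) the way an aspect-preserving resize helper might."""
    if percent is not None:
        return int(w * percent / 100), int(h * percent / 100)
    if width is not None and height is None:
        return width, int(h * width / w)   # derive height from aspect ratio
    if height is not None and width is None:
        return int(w * height / h), height  # derive width from aspect ratio
    if width is not None and height is not None:
        return width, height                # both given: free resize
    return w, h                             # nothing requested: original size

print(scaled_dims(400, 200, width=200))  # (200, 100)
```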
a-cv2-find-biggest-square
Get the largest blank square area in a picture

$ pip install a-cv2-find-biggest-square

# adding to cv2
from a_cv2_imshow_thread import add_imshow_thread_to_cv2
add_imshow_thread_to_cv2()
from a_cv2_find_biggest_square import add_find_biggest_square_to_cv2
add_find_biggest_square_to_cv2()
import cv2

bil = r"https://github.com/hansalemaos/screenshots/raw/main/cv2_putTrueTypeText_000015.png"
box, resultpic, length = cv2.find_largest_square(image=bil, scale_percent=30, gaussian_blur=6, draw_result=True)
cv2.imshow_thread(resultpic)

# without adding to cv2
from a_cv2_imshow_thread import add_imshow_thread_to_cv2
add_imshow_thread_to_cv2()
from a_cv2_find_biggest_square import find_largest_square
import cv2

bil = r"https://github.com/hansalemaos/screenshots/raw/main/cv2_putTrueTypeText_000015.png"
box, resultpic, length = find_largest_square(image=bil, scale_percent=30, gaussian_blur=6, draw_result=True)
a-cv2-imshow-thread
Solution for the "window not responding" problem with cv2.imshow()

pip install a-cv2-imshow-thread

Usage:

import glob
import os
from a_cv2_imshow_thread import add_imshow_thread_to_cv2
add_imshow_thread_to_cv2()  # monkey patching
import cv2

image_background_folder = r'C:\yolovtest\backgroundimages'
pics = [cv2.imread(x) for x in glob.glob(f'{image_background_folder}{os.sep}*.png')]

cv2.imshow_thread(image=pics[0], window_name='screen1', sleep_time=None, quit_key='q')  # single picture
cv2.imshow_thread(image=pics, window_name='screen1', sleep_time=.2, quit_key='e')  # sequence of pics like a video clip

Parameters:
    image: Union[list, np.ndarray]
        You can pass a list of images or a single image
    window_name: str
        Window title (default = "")
    sleep_time: Union[float, int, None] = None
        Useful if you have an image sequence. If you pass None,
        you will have to press the quit_key to continue (default = None)
    quit_key: str = "q"
        Key to close the window
Returns:
    None
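The underlying idea (run the display loop in a dedicated thread so the caller never blocks) can be sketched without OpenCV. The render callback below is a hypothetical stand-in for the cv2.imshow/waitKey loop:

```python
import threading
import time

def show_in_thread(render, images, sleep_time=0.2):
    """Run a display loop in a daemon thread so the main thread stays responsive."""
    def loop():
        for img in images:
            render(img)            # stand-in for cv2.imshow + cv2.waitKey
            time.sleep(sleep_time)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t  # caller may join() or simply carry on

shown = []
t = show_in_thread(shown.append, ["frame1", "frame2"], sleep_time=0.01)
t.join()
print(shown)  # ['frame1', 'frame2']
```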
a-cv2-putTrueTypeText
cv2.putTrueTypeText works just like cv2.putText, but with TTF fonts!You can add putTrueTypeText to cv2, or ...$pipinstalla_cv2_putTrueTypeTextfroma_cv_imwrite_imread_plusimportadd_imwrite_plus_imread_plus_to_cv2fromrandomimportchoicefroma_cv2_imshow_threadimportadd_imshow_thread_to_cv2froma_cv2_putTrueTypeTextimportadd_truetypetext_to_cv2,get_all_ttf_fontsimportcv2add_imshow_thread_to_cv2()add_truetypetext_to_cv2()add_imwrite_plus_imread_plus_to_cv2()url=r"https://raw.githubusercontent.com/hansalemaos/screenshots/main/templatematching1.png"filepath="c:\\temptemptemppic.png"pic=cv2.imread_plus(url)cv2.imwrite_plus(filepath,pic)ttfonts=get_all_ttf_fonts()font1=choice(ttfonts)test1=cv2.putTrueTypeText(img=filepath,text=f"{font1}".lower(),org=(50,120),fontFace=font1,#needs to be a file path!fontScale=46,color=(255,255,0),)font2=choice(ttfonts)test2=cv2.putTrueTypeText(img=url,text=f"{font2}".lower(),org=(50,120),fontFace=font2,fontScale=46,color=(255,0,255),)font3=choice(ttfonts)test3=cv2.putTrueTypeText(img=cv2.imread(filepath),text=f"{font3}".lower(),org=(50,120),fontFace=font3,fontScale=46,color=(123,50,110),)font4=choice(ttfonts)test4=cv2.putTrueTypeText(img=cv2.imread(filepath,cv2.IMREAD_GRAYSCALE),text=f"{font4}".lower(),org=(50,120),fontFace=font4,fontScale=46,color=(255,0,255),)test5=cv2.putTrueTypeText(img=cv2.imread(filepath),text=f"cv2.FONT_HERSHEY_SIMPLEX",org=(50,120),fontFace=cv2.FONT_HERSHEY_SIMPLEX,fontScale=2,color=(255,0,255),)cv2.imshow_thread([test1,test2,test3,test4,test5])... 
import the functionfroma_cv_imwrite_imread_plusimportadd_imwrite_plus_imread_plus_to_cv2fromrandomimportchoicefroma_cv2_imshow_threadimportadd_imshow_thread_to_cv2froma_cv2_putTrueTypeTextimportputTrueTypeText,get_all_ttf_fontsimportcv2add_imwrite_plus_imread_plus_to_cv2()url=r"https://raw.githubusercontent.com/hansalemaos/screenshots/main/templatematching1.png"filepath="c:\\temptemptemppic.png"pic=cv2.imread_plus(url)cv2.imwrite_plus(filepath,pic)ttfonts=get_all_ttf_fonts()add_imshow_thread_to_cv2()font1=choice(ttfonts)test1=putTrueTypeText(img=filepath,text=f"{font1}".lower(),org=(50,120),fontFace=font1,#needs to be a file path!fontScale=46,color=(255,255,0),)font2=choice(ttfonts)test2=putTrueTypeText(img=url,text=f"{font2}".lower(),org=(50,120),fontFace=font2,fontScale=46,color=(255,0,255),)font3=choice(ttfonts)test3=putTrueTypeText(img=cv2.imread(filepath),text=f"{font3}".lower(),org=(50,120),fontFace=font3,fontScale=46,color=(123,50,110),)font4=choice(ttfonts)test4=putTrueTypeText(img=cv2.imread(filepath,cv2.IMREAD_GRAYSCALE),text=f"{font4}".lower(),org=(50,120),fontFace=font4,fontScale=46,color=(255,0,255),)test5=putTrueTypeText(img=cv2.imread(filepath),text=f"cv2.FONT_HERSHEY_SIMPLEX",org=(50,120),fontFace=cv2.FONT_HERSHEY_SIMPLEX,fontScale=2,color=(255,0,255),)cv2.imshow_thread([test1,test2,test3,test4,test5])
a-cv2-shape-finder
Detecting shapes with OpenCV, and getting all the important information in a DataFrame

$ pip install a-cv2-shape-finder

from a_cv2_shape_finder import (
    get_shapes_using_ADAPTIVE_THRESH_GAUSSIAN_C,
    get_shapes_using_ADAPTIVE_THRESH_MEAN_C,
    get_shapes_using_THRESH_OTSU,
)
import cv2
import pandas as pd
from a_cv2_imshow_thread import add_imshow_thread_to_cv2
from a_cv_imwrite_imread_plus import add_imwrite_plus_imread_plus_to_cv2
import numpy as np

add_imwrite_plus_imread_plus_to_cv2()
add_imshow_thread_to_cv2()

image2 = cv2.imread_plus(r"http://clipart-library.com/img/2000719.png")

# method 1 (best results)
df, bw_pic = get_shapes_using_ADAPTIVE_THRESH_GAUSSIAN_C(
    im=image2.copy(),
    method=cv2.CHAIN_APPROX_SIMPLE,
    approxPolyDPvar=0.02,
    constant_subtracted=2,
    block_size=11,
    return_bw_pic=True,
)

# method 2 (good results)
df, bw_pic = get_shapes_using_ADAPTIVE_THRESH_MEAN_C(
    im=image2.copy(),
    method=cv2.CHAIN_APPROX_SIMPLE,
    approxPolyDPvar=0.04,
    constant_subtracted=2,
    block_size=11,
    return_bw_pic=True,
)

# method 3 (not always good results)
df, bw_pic = get_shapes_using_THRESH_OTSU(
    im=image2.copy(),
    method=cv2.CHAIN_APPROX_SIMPLE,
    approxPolyDPvar=0.01,
    kernel=(1, 1),
    start_thresh=50,
    end_thresh=255,
    return_bw_pic=True,
)

The returned DataFrame has one row per detected contour, with columns including aa_arcLength, aa_isContourConvex, aa_center_x, aa_center_y, aa_area, aa_convexHull, aa_len_convexHull, aa_len_cnts, aa_shape (e.g. rectangle, circle, oval), aa_rotated_rectangle, aa_minEnclosingCircle_center, aa_minEnclosingCircle_radius, aa_fitEllipse, aa_fitLine, the hierarchy fields aa_h0..aa_h3, and the bounding-box fields aa_bound_start_x, aa_bound_start_y, aa_bound_end_x, aa_bound_end_y, aa_bound_width, aa_bound_height.
((573,302),(0,876))-1-1-1748139248539644981.112698False459399212.5[[[465,386]],[[466,405]],[[463,410]],[[458,410]],[[454,403]],[[455,391]],[[458,387]]]711oval[[453,386],[465,386],[466,409],[454,410]](461,398)12((460.94775390625,397.08856201171875),(13.2537202835083,25.697481155395508),2.9813687801361084)((573,2400),(0,-7608))1171004543864674111325#Let's draw the results from the second picture#There is nothing better than Pandas to process data.image=image2.copy()forname,groupindf.groupby("aa_h3"):ifname==0:continuefabb=(np.random.randint(50,250),np.random.randint(50,250),np.random.randint(50,250),)forkey,itemingroup.loc[(group.aa_area>200)&(group.aa_shape.isin(['rectangle','triangle','circle','pentagon','hexagon']))].iterrows():image=cv2.drawContours(image,item.aa_convexHull,-1,color=fabb,thickness=5,lineType=cv2.LINE_AA)image=cv2.rectangle(image,(item.aa_bound_start_x,item.aa_bound_start_y),(item.aa_bound_end_x,item.aa_bound_end_y),(0,0,0),3,)image=cv2.rectangle(image,(item.aa_bound_start_x,item.aa_bound_start_y),(item.aa_bound_end_x,item.aa_bound_end_y),fabb,2,)image=cv2.putText(image,f'{str(item.aa_shape)}-{name}',(item.aa_bound_start_x,item.aa_bound_start_y),cv2.FONT_HERSHEY_SIMPLEX,0.4,(0,0,0),2,cv2.LINE_AA,)image=cv2.putText(image,f'{str(item.aa_shape)}-{name}',(item.aa_bound_start_x,item.aa_bound_start_y),cv2.FONT_HERSHEY_SIMPLEX,0.4,fabb,1,cv2.LINE_AA,)cv2.imshow_thread([image,bw_pic])
a-cv2-split-images-into-equal-parts
Split an image into equal parts

import cv2
from a_cv2_split_images_into_equal_parts import add_split_images_to_cv2

add_split_images_to_cv2()
list_pics, list_files = cv2.split_image_into_equal_parts(
    img=r"https://github.com/hansalemaos/screenshots/raw/main/splitted1.jpeg",
    outputfolder="f:\\picsplittedxxx",
    pixel_width=100,
    pixel_height=200,
    colorborder=(255, 0, 0),
    text_color_border1=(0, 150, 0),
    text_color_border2=(200, 0, 0),
    text_height_1=0.4,
    text_height_2=0.4,
)

In [3]: list_pics
Out[3]:
[array([[[145, 170, 144],
         [145, 170, 144],
         [145, 170, 144],
         ...,
         [113, 149, 119],
         [112, 148, 118],
         [114, 150, 120]],
        [[145, 170, 144],
         [145, 170, 144],
         [145, 170, 144],
         ...,
         [112, 148, 118],
         [112, 148, 118],
         [113, 149, 119]],
 ....

In [4]: list_files
Out[4]:
['f:\\picsplittedxxx\\splitted\\0x0-100x200.png',
 'f:\\picsplittedxxx\\splitted\\0x200-100x400.png',
 'f:\\picsplittedxxx\\splitted\\0x400-100x750.png',
 'f:\\picsplittedxxx\\splitted\\100x0-200x200.png',
 'f:\\picsplittedxxx\\splitted\\100x200-200x400.png',
 'f:\\picsplittedxxx\\splitted\\100x400-200x750.png',
 'f:\\picsplittedxxx\\splitted\\200x0-300x200.png',
 'f:\\picsplittedxxx\\splitted\\200x200-300x400.png',
 'f:\\picsplittedxxx\\splitted\\200x400-300x750.png',
 ....

# Pass outputfolder=None to keep the parts in memory only (no files are written):
list_pics, list_files = cv2.split_image_into_equal_parts(
    img=r"https://github.com/hansalemaos/screenshots/raw/main/splitted1.jpeg",
    outputfolder=None,
    pixel_width=100,
    pixel_height=200,
    colorborder=(255, 0, 0),
    text_color_border1=(0, 150, 0),
    text_color_border2=(200, 0, 0),
    text_height_1=0.4,
    text_height_2=0.4,
)
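Conceptually, splitting into equal parts is a grid of numpy slices where the last row/column keeps the remainder, which is why filenames such as 0x400-100x750.png above span 350 pixels of height. A rough sketch of that slicing, independent of the package (split_into_parts is a hypothetical helper):

```python
import numpy as np

def split_into_parts(img, pixel_width, pixel_height):
    """Yield (x0, y0, x1, y1, tile) over a pixel_width x pixel_height grid."""
    h, w = img.shape[:2]
    for y0 in range(0, h, pixel_height):
        for x0 in range(0, w, pixel_width):
            # edge tiles keep the remainder instead of being padded
            y1, x1 = min(y0 + pixel_height, h), min(x0 + pixel_width, w)
            yield x0, y0, x1, y1, img[y0:y1, x0:x1]

img = np.zeros((300, 200, 3), np.uint8)  # height 300, width 200
tiles = list(split_into_parts(img, pixel_width=100, pixel_height=200))
print([(x0, y0, x1, y1) for x0, y0, x1, y1, _ in tiles])
# [(0, 0, 100, 200), (100, 0, 200, 200), (0, 200, 100, 300), (100, 200, 200, 300)]
```

The coordinate tuples map directly to the x0xY0-x1xY1 naming scheme of the files written by the package.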
a-cv2-text-effects
Text effects for OpenCV

$ pip install a-cv2-text-effects

import os
from a_cv_imwrite_imread_plus import open_image_in_cv, save_cv_image
from a_cv2_text_effects import (
    put_ttf_font_multiline_in_box_at_exact_center_location_with_exact_size,
    put_ttf_font_multiline_at_exact_center_location_with_exact_size,
    put_ttf_font_multiline_at_exact_location_with_exact_size,
    put_ttf_font_multiline_in_box_at_exact_location_with_exact_size,
    put_ttf_font_in_circle_at_exact_location_with_exact_size,
    put_ttf_font_in_box_at_exact_location_with_exact_size,
    put_ttf_font_at_exact_location_with_exact_size,
    putTrueTypeText,
    center_text_at_certain_size_at_a_specific_point,
    center_of_text_at_certain_size_at_a_specific_point_with_boxes,
)

img = open_image_in_cv(
    "https://raw.githubusercontent.com/hansalemaos/screenshots/main/merg6.png"
)
maxwidth = 150
maxheight = 150

(
    imgresult1,
    ptLowerLeftTextOriginX2,
    ptLowerLeftTextOriginY2,
    intFontFace2,
    fltFontScale2,
    intFontThickness2,
    textSize2,
) = center_of_text_at_certain_size_at_a_specific_point_with_boxes(
    img,
    "Number 1",
    maxwidth,
    maxheight,
    wheretoput=(200, 200),
    color=(255, 255, 0),
    add_thickness_each=10,
    rectangle_border_size=5,
    rectangle_border_colors=((244, 255, 0), (244, 0, 255)),
)

(
    imgresult2,
    ptLowerLeftTextOriginX,
    ptLowerLeftTextOriginY,
    intFontFace,
    fltFontScale,
    intFontThickness,
    textSize,
) = center_text_at_certain_size_at_a_specific_point(
    img,
    "Number 2",
    maxwidth,
    maxheight,
    wheretoput=(100, 100),
    color=(255, 255, 0),
    add_thickness_each=10,
)

imgresult3 = putTrueTypeText(
    img=img,
    text="Number 3",
    org=(100, 100),
    fontFace=r"C:\Windows\Fonts\ANTQUAB.TTF",
    fontScale=56,
    color=(255, 255, 0),
)

ia = put_ttf_font_at_exact_location_with_exact_size(
    image=img,
    text="Number 4",
    coords=(59, 300),
    color=(100, 0, 100),
    font=r"C:\Windows\Fonts\ANTQUAB.TTF",
    maxwidth=300,
    maxheight=100,
    fonttransparency=100,
)

ia1 = put_ttf_font_in_box_at_exact_location_with_exact_size(
    image=img,
    text="Number 5",
    coords=(59, 300),
    color=(100, 0, 100),
    font=r"C:\Windows\Fonts\ANTQUAB.TTF",
    maxwidth=300,
    maxheight=100,
    fonttransparency=0,
    boxtransparency=0.7,
    boxcolor=(255, 0, 0),
)

ia2 = put_ttf_font_in_circle_at_exact_location_with_exact_size(
    image=img,
    text="Number 6",
    coords=(59, 300),
    color=(100, 0, 100),
    font=r"C:\Windows\Fonts\ANTQUAB.TTF",
    maxwidth=300,
    maxheight=100,
    fonttransparency=50,
    circletransparency=0.2,
    circlecolor=(255, 0, 0),
)

ia3 = put_ttf_font_multiline_in_box_at_exact_location_with_exact_size(
    image=img,
    textwithnewline="Number 7\nNumber 7\nNumber 7",
    coords=(59, 10),
    color=(100, 0, 100),
    font=r"C:\Windows\Fonts\ANTQUAB.TTF",
    maxwidth=600,
    maxheight=600,
    fonttransparency=50,
    boxtransparency=0.2,
    boxcolor=(255, 0, 0),
    boxborder=20,
)

ia4 = put_ttf_font_multiline_at_exact_location_with_exact_size(
    image=img,
    textwithnewline="Number 8\nNumber 8\nNumber 8",
    coords=(59, 10),
    color=(100, 0, 100),
    font=r"C:\Windows\Fonts\ANTQUAB.TTF",
    maxwidth=600,
    maxheight=600,
    fonttransparency=50,
)

ia5 = put_ttf_font_multiline_at_exact_center_location_with_exact_size(
    image=img,
    textwithnewline="Number 9\nNumber 9\nNumber 9",
    coords=(300, 300),
    color=(255, 255, 210),
    font=r"C:\Windows\Fonts\ANTQUAB.TTF",
    maxwidth=300,
    maxheight=100,
    fonttransparency=-1,
)

ia6 = put_ttf_font_multiline_in_box_at_exact_center_location_with_exact_size(
    image=img,
    textwithnewline="Number 10\nNumber 10\nNumber 10",
    coords=(300, 300),
    color=(255, 255, 210),
    font=r"C:\Windows\Fonts\ANTQUAB.TTF",
    maxwidth=300,
    maxheight=100,
    fonttransparency=50,
    boxtransparency=0.2,
    boxcolor=(255, 0, 0),
    boxborder=20,
)

allimgs = [
    imgresult1,
    imgresult2,
    imgresult3,
    ia["result"],
    ia1["result"],
    ia2["result"],
    ia3["result"],
    ia4["result"],
    ia5["result"],
    ia6["result"],
]
for i, b in enumerate(allimgs):
    save_cv_image(os.path.join("f:\\alltextimgs", str(i).zfill(6) + ".png"), b)
acv-dev
Active Coalition of Variables (ACV):

ACV is a python library that aims to explain any machine learning model or data. It gives local rule-based explanations for any model or data. It provides a better estimation of Shapley Values for tree-based models (more accurate than path-dependent TreeSHAP), it proposes new Shapley Values that have better local fidelity, and it gives the correct way of computing Shapley Values of categorical variables after encoding (e.g. One Hot or Dummy). The different explanations fall into two groups: Agnostic Explanations and Tree-based Explanations. See the papers here.

Installation

Requirements: Python 3.7-3.9

OSX: ACV uses Cython extensions that need to be compiled with multi-threading support enabled. The default Apple Clang compiler does not support OpenMP. To solve this issue, obtain the latest gcc version with Homebrew that has multi-threading enabled: see for example pysteps installation for OSX.

Windows: Install MinGW (a Windows distribution of gcc) or Microsoft Visual C++.

Install the acv package:

$ pip install acv-exp

A. Agnostic explanations

The agnostic approaches explain any data (X, Y) or model (X, f(X)) using the following explanation methods: Same Decision Probability (SDP) and Sufficient Explanations; Sufficient Rules. See the paper "Consistent Sufficient Explanations and Minimal Local Rules for explaining regression and classification models" for more details.

I. First, we need to fit our explainer (ACXplainer) to the input-output pairs of the data (X, Y) or of the model (X, f(X)), depending on whether we want to explain the data or the model.

from acv_explainers import ACXplainer

# It has the same params as a Random Forest, and it should be tuned to maximize the performance.
acv_xplainer = ACXplainer(classifier=True, n_estimators=50, max_depth=5)
acv_xplainer.fit(X_train, y_train)
roc = roc_auc_score(acv_xplainer.predict(X_test), y_test)

II. Then, we can load all the explanations in a webApp as follows:

import acv_app
import os

# compile the ACXplainer
acv_app.compile_ACXplainers(acv_xplainer, X_train, y_train, X_test, y_test, path=os.getcwd())
# Launch the webApp
acv_app.run_webapp(pickle_path=os.getcwd())

III. Or we can compute each explanation separately, as follows.

Same Decision Probability (SDP)

The main tool of our explanations is the Same Decision Probability (SDP). Given an instance, the same decision probability of a subset of variables S is the probability that the prediction remains the same when the variables in S are fixed at their observed values, i.e. when the remaining variables are missing.

How to compute the SDP?

# data_bground is the background dataset that is used for the estimation. It should be the training samples.
sdp = acv_xplainer.compute_sdp_rf(X, S, data_bground)

Minimal Sufficient Explanations

The Sufficient Explanation is the minimal subset S such that fixing its values maintains the prediction with high probability. See the paper here for more details.

How to compute the Minimal Sufficient Explanation?

The following code returns the Sufficient Explanation with minimal cardinality.

sdp_importance, min_sufficient_expl, size, sdp = acv_xplainer.importance_sdp_rf(X, y, X_train, y_train, pi_level=0.9)

How to compute all the Sufficient Explanations?

Since the Minimal Sufficient Explanation may not be unique for a given instance, we can compute all of them.

sufficient_expl, sdp_expl, sdp_global = acv_xplainer.sufficient_expl_rf(X, y, X_train, y_train, pi_level=0.9)

Local Explanatory Importance

For a given instance, the local explanatory importance of each variable corresponds to the frequency of appearance of that variable in the Sufficient Explanations. See the paper here for more details.

How to compute the Local Explanatory Importance?

lximp = acv_xplainer.compute_local_sdp(X_train.shape[1], sufficient_expl)

Local rule-based explanations

For a given instance (x, y) and its Sufficient Explanation S, we compute a local minimal rule which contains x such that every observation z that satisfies this rule keeps the same prediction with high probability. See the paper here for more details.

How to compute the local rule explanations?

# data_bground is the background dataset that is used for the estimation. It should be the training samples.
sdp, rules, _, _, _ = acv_xplainer.compute_sdp_maxrules(X, y, data_bground, y_bground, S)

B. Tree-based explanations

ACV gives Shapley Values explanations for XGBoost, LightGBM, CatBoostClassifier, scikit-learn and pyspark tree models. It provides the following Shapley Values:

Classic local Shapley Values (the value function is the conditional expectation)
Active Shapley Values (local fidelity and sparse by design)
Swing Shapley Values (the Shapley Values are interpretable by design) (coming soon)

In addition, we use the coalitional version of SV to properly handle categorical variables in the computation of SV. See the papers here.

To explain the tree-based models above, we need to transform our model into an ACVTree.

from acv_explainers import ACVTree

forest = XGBClassifier()  # or any tree-based model
# ... train the model
# data_bground is the background dataset that is used for the estimation. It should be the training samples.
acvtree = ACVTree(forest, data_bground)

Accurate Shapley Values

sv = acvtree.shap_values(X)

Note that it provides a better estimation than the path-dependent TreeSHAP when the variables are dependent.

Accurate Shapley Values with encoded categorical variables

Let's assume we have a categorical variable Y with k modalities that we encoded by introducing dummy variables. As shown in the paper, we must take the coalition of the dummy variables to correctly compute the Shapley Values.

# cat_index := list[list[int]] that contains the column indices of the dummies or one-hot variables grouped
# together for each variable. For example, if we have only 2 categorical variables Y, Z
# transformed into [Y_0, Y_1, Y_2] and [Z_0, Z_1, Z_2]
cat_index = [[0, 1, 2], [3, 4, 5]]
forest_sv = acvtree.shap_values(X, C=cat_index)

In addition, we can compute the SV given any coalition. For example, let's assume we have 10 variables and we want the following coalition:

coalition = [[0, 1, 2], [3, 4], [5, 6]]
forest_sv = acvtree.shap_values(X, C=coalition)

How to compute the SDP for a tree-based classifier?

Recall that the SDP is the probability that the prediction remains the same when the variables in the subset S are fixed at their observed values.

# data_bground is the background dataset that is used for the estimation. It should be the training samples.
sdp = acvtree.compute_sdp_clf(X, S, data_bground)

How to compute the Sufficient Coalition and the Global SDP importance for a tree-based classifier?

Recall that the Minimal Sufficient Explanation is the minimal subset S such that fixing its values maintains the prediction with high probability.

# data_bground is the background dataset that is used for the estimation. It should be the training samples.
sdp_importance, sdp_index, size, sdp = acvtree.importance_sdp_clf(X, data_bground)

Active Shapley Values

The Active Shapley Values are SV based on a new game defined in the paper "Accurate and robust Shapley Values for explaining predictions and focusing on local important variables", such that null (non-important) variables have zero SV and the "payout" is fairly distributed among active variables.

How to compute Active Shapley Values?

import acv_explainers

# First, we need to compute the Active and Null coalitions
sdp_importance, sdp_index, size, sdp = acvtree.importance_sdp_clf(X, data_bground)
S_star, N_star = acv_explainers.utils.get_active_null_coalition_list(sdp_index, size)
# Then, we use the active coalitions found to compute the Active Shapley Values.
forest_asv_adap = acvtree.shap_values_acv_adap(X, C, S_star, N_star, size)

Remarks for tree-based explanations: If you don't want multi-threading (due to scaling or memory problems), add "_nopa" to each function name (e.g. compute_sdp_clf ==> compute_sdp_clf_nopa). You can also precompute the different values needed in cache by setting cache=True in the ACVTree initialization, e.g. ACVTree(model, data_bground, cache=True).

Examples and tutorials (a lot more to come...)

A tutorial on the usage of ACV can be found in demo_acv, and the notebooks below demonstrate different use cases for ACV. Look inside the notebook directory of the repository if you want to try playing with the original notebooks yourself.
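To make the SDP definition above concrete outside the library: fix the coalition S at the observed values, resample the remaining variables from background data, and count how often the prediction stays the same. The following toy Monte-Carlo estimator (the model, data, and sdp function here are illustrative assumptions, not acv's actual estimator) shows why a decisive feature gets an SDP near 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(X):
    # toy classifier: the decision depends only on the first feature
    return (X[:, 0] > 0).astype(int)

def sdp(x, S, X_background, n_samples=2000):
    """P(prediction unchanged | variables in S fixed at x[S]), rest resampled."""
    idx = rng.integers(0, len(X_background), n_samples)
    Z = X_background[idx].copy()
    Z[:, S] = x[S]  # fix the coalition S at the observed values
    return float(np.mean(predict(Z) == predict(x[None, :])[0]))

X_bg = rng.normal(size=(500, 3))
x = np.array([2.0, 0.0, 0.0])
print(sdp(x, [0], X_bg))  # 1.0: fixing the decisive feature preserves the prediction
print(sdp(x, [2], X_bg))  # ~0.5: an irrelevant feature only gives chance level
```

A Sufficient Explanation in this toy setting would therefore be {0}: it is the smallest coalition whose SDP clears a threshold like pi_level=0.9.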
acv-exp
Active Coalition of Variables (ACV): same library and description as acv-dev above, with one difference in the stated requirements: this package requires Python 3.6+.
a-cv-imwrite-imread-plus
Less trouble reading/writing images with OpenCV$pipinstalla-cv-imwrite-imread-plusfroma_cv_imwrite_imread_plusimportadd_imwrite_plus_imread_plus_to_cv2add_imwrite_plus_imread_plus_to_cv2()cv2.imwrite_plus("f:\\ö\\ö\\ö\\öädssdzß.jpg",base64img2cv)#or:froma_cv_imwrite_imread_plusimportsave_cv_imagesave_cv_image("f:\\ö\\ö\\ö\\öädssdzß.jpg",base64img2cv)Parameters:filepath:strfolderswillbecreatediftheydon't existimage:np.ndarrayimageasnpReturns:filepath:strfroma_cv_imwrite_imread_plusimportadd_imwrite_plus_imread_plus_to_cv2add_imwrite_plus_imread_plus_to_cv2()importbase64fromPILimportImageimportcv2#Base64base64img=r"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAApgAAAKYB3X3/OAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAANCSURBVEiJtZZPbBtFFMZ/M7ubXdtdb1xSFyeilBapySVU8h8OoFaooFSqiihIVIpQBKci6KEg9Q6H9kovIHoCIVQJJCKE1ENFjnAgcaSGC6rEnxBwA04Tx43t2FnvDAfjkNibxgHxnWb2e/u992bee7tCa00YFsffekFY+nUzFtjW0LrvjRXrCDIAaPLlW0nHL0SsZtVoaF98mLrx3pdhOqLtYPHChahZcYYO7KvPFxvRl5XPp1sN3adWiD1ZAqD6XYK1b/dvE5IWryTt2udLFedwc1+9kLp+vbbpoDh+6TklxBeAi9TL0taeWpdmZzQDry0AcO+jQ12RyohqqoYoo8RDwJrU+qXkjWtfi8Xxt58BdQuwQs9qC/afLwCw8tnQbqYAPsgxE1S6F3EAIXux2oQFKm0ihMsOF71dHYx+f3NND68ghCu1YIoePPQN1pGRABkJ6Bus96CutRZMydTl+TvuiRW1m3n0eDl0vRPcEysqdXn+jsQPsrHMquGeXEaY4Yk4wxWcY5V/9scqOMOVUFthatyTy8QyqwZ+kDURKoMWxNKr2EeqVKcTNOajqKoBgOE28U4tdQl5p5bwCw7BWquaZSzAPlwjlithJtp3pTImSqQRrb2Z8PHGigD4RZuNX6JYj6wj7O4TFLbCO/Mn/m8R+h6rYSUb3ekokRY6f/YukArN979jcW+V/S8g0eT/N3VN3kTqWbQ428m9/8k0P/1aIhF36PccEl6EhOcAUCrXKZXXWS3XKd2vc/TRBG9O5ELC17MmWubD2nKhUKZa26Ba2+D3P+4/MNCFwg59oWVeYhkzgN/JDR8deKBoD7Y+ljEjGZ0sosXVTvbc6RHirr2reNy1OXd6pJsQ+gqjk8VWFYmHrwBzW/n+uMPFiRwHB2I7ih8ciHFxIkd/3Omk5tCDV1t+2nNu5sxxpDFNx+huNhVT3/zMDz8usXC3ddaHBj1GHj/As08fwTS7Kt1HBTmyN29vdwAw+/wbwLVOJ3uAD1wi/dUH7Qei66PfyuRj4Ik9is+hglfbkbfR3cnZm7chlUWLdwmprtCohX4HUtlOcQjLYCu+fzGJH2QRKvP3UNz8bWk1qMxjGTOMThZ3kvgLI5AzFfo379UAAAAASUVORK5CYII="base64img2=r"iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAABHNCSVQ
ICAgIfAhkiAAAAAlwSFlzAAAApgAAAKYB3X3/OAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAANCSURBVEiJtZZPbBtFFMZ/M7ubXdtdb1xSFyeilBapySVU8h8OoFaooFSqiihIVIpQBKci6KEg9Q6H9kovIHoCIVQJJCKE1ENFjnAgcaSGC6rEnxBwA04Tx43t2FnvDAfjkNibxgHxnWb2e/u992bee7tCa00YFsffekFY+nUzFtjW0LrvjRXrCDIAaPLlW0nHL0SsZtVoaF98mLrx3pdhOqLtYPHChahZcYYO7KvPFxvRl5XPp1sN3adWiD1ZAqD6XYK1b/dvE5IWryTt2udLFedwc1+9kLp+vbbpoDh+6TklxBeAi9TL0taeWpdmZzQDry0AcO+jQ12RyohqqoYoo8RDwJrU+qXkjWtfi8Xxt58BdQuwQs9qC/afLwCw8tnQbqYAPsgxE1S6F3EAIXux2oQFKm0ihMsOF71dHYx+f3NND68ghCu1YIoePPQN1pGRABkJ6Bus96CutRZMydTl+TvuiRW1m3n0eDl0vRPcEysqdXn+jsQPsrHMquGeXEaY4Yk4wxWcY5V/9scqOMOVUFthatyTy8QyqwZ+kDURKoMWxNKr2EeqVKcTNOajqKoBgOE28U4tdQl5p5bwCw7BWquaZSzAPlwjlithJtp3pTImSqQRrb2Z8PHGigD4RZuNX6JYj6wj7O4TFLbCO/Mn/m8R+h6rYSUb3ekokRY6f/YukArN979jcW+V/S8g0eT/N3VN3kTqWbQ428m9/8k0P/1aIhF36PccEl6EhOcAUCrXKZXXWS3XKd2vc/TRBG9O5ELC17MmWubD2nKhUKZa26Ba2+D3P+4/MNCFwg59oWVeYhkzgN/JDR8deKBoD7Y+ljEjGZ0sosXVTvbc6RHirr2reNy1OXd6pJsQ+gqjk8VWFYmHrwBzW/n+uMPFiRwHB2I7ih8ciHFxIkd/3Omk5tCDV1t+2nNu5sxxpDFNx+huNhVT3/zMDz8usXC3ddaHBj1GHj/As08fwTS7Kt1HBTmyN29vdwAw+/wbwLVOJ3uAD1wi/dUH7Qei66PfyuRj4Ik9is+hglfbkbfR3cnZm7chlUWLdwmprtCohX4HUtlOcQjLYCu+fzGJH2QRKvP3UNz8bWk1qMxjGTOMThZ3kvgLI5AzFfo379UAAAAASUVORK5CYII="base64imgcv=cv2.imread_plus(base64img)base64img2cv=cv2.imread_plus(base64img2)base64imgcv=cv2.imread_plus(base64img,channels_in_output=4)base64img2cv=cv2.imread_plus(base64img2,channels_in_output=4)base64imgcv=cv2.imread_plus(base64img,channels_in_output=2)base64img2cv=cv2.imread_plus(base64img2,channels_in_output=2)#urlspininterestlogo="https://camo.githubusercontent.com/7f81f312b05694ccc8cd29e3c3466945ff8e73a13320d3fd0f90c6915bbb4ffb/68747470733a2f2f63646e2e6a7364656c6976722e6e65742f67682f646d68656e647269636b732f7369676e61747572652d736f6369616c2d69636f6e732f69636f6e732f726f756e642d666c61742d66696c6c65642f353070782f70696e7465726573742e706e67"linkcv1=cv2.imread_plus(pininterestlogo)linkcv2=cv2.imread_plus(pininterestlogo,channels_in_output=4)linkcv3=cv2.im
read_plus(pininterestlogo,channels_in_output=2)linkcv4=cv2.imread_plus(pininterestlogo,channels_in_output=3)#bytes/raw databyteimage=base64.b64decode(base64img2)byteimage1=cv2.imread_plus(byteimage)byteimage2=cv2.imread_plus(byteimage,channels_in_output=4)byteimage3=cv2.imread_plus(byteimage,channels_in_output=2)byteimage4=cv2.imread_plus(byteimage,channels_in_output=3)#PILpilimage=Image.fromarray(byteimage2)pilimage1=cv2.imread_plus(pilimage)pilimage2=cv2.imread_plus(pilimage,channels_in_output=4)pilimage3=cv2.imread_plus(pilimage,channels_in_output=2)pilimage4=cv2.imread_plus(pilimage,channels_in_output=3)#float images to np.uint8floatimage=pilimage4.astype(float)floatimage1=cv2.imread_plus(floatimage)floatimage2=cv2.imread_plus(floatimage,channels_in_output=4)floatimage3=cv2.imread_plus(floatimage,channels_in_output=2)floatimage4=cv2.imread_plus(floatimage,channels_in_output=3)#filepathfilepath="c:\\testestestes.png"pilimage.save(filepath)filepath1=cv2.imread_plus(filepath,bgr_to_rgb=True)filepath2=cv2.imread_plus(filepath,channels_in_output=4,bgr_to_rgb=True)filepath3=cv2.imread_plus(filepath,channels_in_output=2,bgr_to_rgb=True)filepath4=cv2.imread_plus(filepath,channels_in_output=3,bgr_to_rgb=True)#or:froma_cv_imwrite_imread_plusimportopen_image_in_cv#Base64base64img=r"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAApgAAAKYB3X3/OAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAANCSURBVEiJtZZPbBtFFMZ/M7ubXdtdb1xSFyeilBapySVU8h8OoFaooFSqiihIVIpQBKci6KEg9Q6H9kovIHoCIVQJJCKE1ENFjnAgcaSGC6rEnxBwA04Tx43t2FnvDAfjkNibxgHxnWb2e/u992bee7tCa00YFsffekFY+nUzFtjW0LrvjRXrCDIAaPLlW0nHL0SsZtVoaF98mLrx3pdhOqLtYPHChahZcYYO7KvPFxvRl5XPp1sN3adWiD1ZAqD6XYK1b/dvE5IWryTt2udLFedwc1+9kLp+vbbpoDh+6TklxBeAi9TL0taeWpdmZzQDry0AcO+jQ12RyohqqoYoo8RDwJrU+qXkjWtfi8Xxt58BdQuwQs9qC/afLwCw8tnQbqYAPsgxE1S6F3EAIXux2oQFKm0ihMsOF71dHYx+f3NND68ghCu1YIoePPQN1pGRABkJ6Bus96CutRZMydTl+TvuiRW1m3n0eDl0vRPcEysqdXn+jsQPsrHMquGeXEaY4Yk4wxWcY5V/9scqOMOVUF
thatyTy8QyqwZ+kDURKoMWxNKr2EeqVKcTNOajqKoBgOE28U4tdQl5p5bwCw7BWquaZSzAPlwjlithJtp3pTImSqQRrb2Z8PHGigD4RZuNX6JYj6wj7O4TFLbCO/Mn/m8R+h6rYSUb3ekokRY6f/YukArN979jcW+V/S8g0eT/N3VN3kTqWbQ428m9/8k0P/1aIhF36PccEl6EhOcAUCrXKZXXWS3XKd2vc/TRBG9O5ELC17MmWubD2nKhUKZa26Ba2+D3P+4/MNCFwg59oWVeYhkzgN/JDR8deKBoD7Y+ljEjGZ0sosXVTvbc6RHirr2reNy1OXd6pJsQ+gqjk8VWFYmHrwBzW/n+uMPFiRwHB2I7ih8ciHFxIkd/3Omk5tCDV1t+2nNu5sxxpDFNx+huNhVT3/zMDz8usXC3ddaHBj1GHj/As08fwTS7Kt1HBTmyN29vdwAw+/wbwLVOJ3uAD1wi/dUH7Qei66PfyuRj4Ik9is+hglfbkbfR3cnZm7chlUWLdwmprtCohX4HUtlOcQjLYCu+fzGJH2QRKvP3UNz8bWk1qMxjGTOMThZ3kvgLI5AzFfo379UAAAAASUVORK5CYII="base64img2=r"iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAApgAAAKYB3X3/OAAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAANCSURBVEiJtZZPbBtFFMZ/M7ubXdtdb1xSFyeilBapySVU8h8OoFaooFSqiihIVIpQBKci6KEg9Q6H9kovIHoCIVQJJCKE1ENFjnAgcaSGC6rEnxBwA04Tx43t2FnvDAfjkNibxgHxnWb2e/u992bee7tCa00YFsffekFY+nUzFtjW0LrvjRXrCDIAaPLlW0nHL0SsZtVoaF98mLrx3pdhOqLtYPHChahZcYYO7KvPFxvRl5XPp1sN3adWiD1ZAqD6XYK1b/dvE5IWryTt2udLFedwc1+9kLp+vbbpoDh+6TklxBeAi9TL0taeWpdmZzQDry0AcO+jQ12RyohqqoYoo8RDwJrU+qXkjWtfi8Xxt58BdQuwQs9qC/afLwCw8tnQbqYAPsgxE1S6F3EAIXux2oQFKm0ihMsOF71dHYx+f3NND68ghCu1YIoePPQN1pGRABkJ6Bus96CutRZMydTl+TvuiRW1m3n0eDl0vRPcEysqdXn+jsQPsrHMquGeXEaY4Yk4wxWcY5V/9scqOMOVUFthatyTy8QyqwZ+kDURKoMWxNKr2EeqVKcTNOajqKoBgOE28U4tdQl5p5bwCw7BWquaZSzAPlwjlithJtp3pTImSqQRrb2Z8PHGigD4RZuNX6JYj6wj7O4TFLbCO/Mn/m8R+h6rYSUb3ekokRY6f/YukArN979jcW+V/S8g0eT/N3VN3kTqWbQ428m9/8k0P/1aIhF36PccEl6EhOcAUCrXKZXXWS3XKd2vc/TRBG9O5ELC17MmWubD2nKhUKZa26Ba2+D3P+4/MNCFwg59oWVeYhkzgN/JDR8deKBoD7Y+ljEjGZ0sosXVTvbc6RHirr2reNy1OXd6pJsQ+gqjk8VWFYmHrwBzW/n+uMPFiRwHB2I7ih8ciHFxIkd/3Omk5tCDV1t+2nNu5sxxpDFNx+huNhVT3/zMDz8usXC3ddaHBj1GHj/As08fwTS7Kt1HBTmyN29vdwAw+/wbwLVOJ3uAD1wi/dUH7Qei66PfyuRj4Ik9is+hglfbkbfR3cnZm7chlUWLdwmprtCohX4HUtlOcQjLYCu+fzGJH2QRKvP3UNz8bWk1qMxjGTOMThZ3kvgLI5AzFfo379UAAAAASUVORK5CYII="base64imgcv=open_image_in_cv(base64img)base64img2cv=open_image_in_cv(base64img2)base64imgcv
= open_image_in_cv(base64img, channels_in_output=4)
base64img2cv = open_image_in_cv(base64img2, channels_in_output=4)
base64imgcv = open_image_in_cv(base64img, channels_in_output=2)
base64img2cv = open_image_in_cv(base64img2, channels_in_output=2)

# urls
pininterestlogo = "https://camo.githubusercontent.com/7f81f312b05694ccc8cd29e3c3466945ff8e73a13320d3fd0f90c6915bbb4ffb/68747470733a2f2f63646e2e6a7364656c6976722e6e65742f67682f646d68656e647269636b732f7369676e61747572652d736f6369616c2d69636f6e732f69636f6e732f726f756e642d666c61742d66696c6c65642f353070782f70696e7465726573742e706e67"
linkcv1 = open_image_in_cv(pininterestlogo)
linkcv2 = open_image_in_cv(pininterestlogo, channels_in_output=4)
linkcv3 = open_image_in_cv(pininterestlogo, channels_in_output=2)
linkcv4 = open_image_in_cv(pininterestlogo, channels_in_output=3)

# bytes / raw data
byteimage = base64.b64decode(base64img2)
byteimage1 = open_image_in_cv(byteimage)
byteimage2 = open_image_in_cv(byteimage, channels_in_output=4)
byteimage3 = open_image_in_cv(byteimage, channels_in_output=2)
byteimage4 = open_image_in_cv(byteimage, channels_in_output=3)

# PIL
pilimage = Image.fromarray(byteimage2)
pilimage1 = open_image_in_cv(pilimage)
pilimage2 = open_image_in_cv(pilimage, channels_in_output=4)
pilimage3 = open_image_in_cv(pilimage, channels_in_output=2)
pilimage4 = open_image_in_cv(pilimage, channels_in_output=3)

# float images to np.uint8
floatimage = pilimage4.astype(float)
floatimage1 = open_image_in_cv(floatimage)
floatimage2 = open_image_in_cv(floatimage, channels_in_output=4)
floatimage3 = open_image_in_cv(floatimage, channels_in_output=2)
floatimage4 = open_image_in_cv(floatimage, channels_in_output=3)

# filepath
filepath = "c:\\testestestes.png"
pilimage.save(filepath)
filepath1 = open_image_in_cv(filepath, bgr_to_rgb=True)
filepath2 = open_image_in_cv(filepath, channels_in_output=4, bgr_to_rgb=True)
filepath3 = open_image_in_cv(filepath, channels_in_output=2, bgr_to_rgb=True)
filepath4 = open_image_in_cv(filepath, channels_in_output=3, bgr_to_rgb=True)

from a_cv2_imshow_thread import add_imshow_thread_to_cv2
add_imshow_thread_to_cv2()
cv2.imshow_thread([base64imgcv, base64img2cv, linkcv1, linkcv2, linkcv3, linkcv4, byteimage1, byteimage2, byteimage3, byteimage4, pilimage1, pilimage2, pilimage3, pilimage4, floatimage1, floatimage2, floatimage3, floatimage4, filepath1, filepath2, filepath3, filepath4])

Parameters:
    image: Any
        Can be a URL, bytes, base64, filepath, np.ndarray, or PIL image
    channels_in_output: Union[int, None]
        None (original image won't be changed), 2 (GRAY), 3 (BGR), 4 (BGRA) (default=None)
    bgr_to_rgb: bool = False
        Converts BGRA to RGBA / BGR to RGB

Returns:
    image: np.ndarray (always as np.uint8)
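The channels_in_output semantics can be sketched in plain NumPy (our own illustration of what the parameter means, with a hypothetical to_channels helper; the package's real conversions go through OpenCV):

```python
import numpy as np

def to_channels(img: np.ndarray, channels_in_output=None) -> np.ndarray:
    """Coerce an image array to GRAY (2), BGR (3) or BGRA (4) channels."""
    if channels_in_output is None:
        return img.astype(np.uint8)  # leave the layout unchanged
    if img.ndim == 2:
        img = np.stack([img] * 3, axis=-1)  # promote GRAY to 3 channels first
    if channels_in_output == 2:
        # simple channel average; cv2's GRAY conversion uses perceptual weights
        return img[..., :3].mean(axis=-1).astype(np.uint8)
    if channels_in_output == 3:
        return img[..., :3].astype(np.uint8)  # drop any alpha channel
    if channels_in_output == 4:
        if img.shape[-1] == 4:
            return img.astype(np.uint8)
        alpha = np.full(img.shape[:2] + (1,), 255, dtype=np.uint8)  # opaque alpha
        return np.concatenate([img[..., :3], alpha], axis=-1)
    raise ValueError("channels_in_output must be None, 2, 3 or 4")

print(to_channels(np.zeros((4, 4), dtype=np.uint8), 4).shape)  # (4, 4, 4)
```

Whatever the input layout, the output is always np.uint8, matching the Returns note above.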
acvl-utils
No description available on PyPI.
a-cv-sift-detection
Detecting objects using OpenCV and SIFT

from time import sleep

from a_cv_sift_detection import SiftMatchingOnScreen

needle_images = [
    r"C:\detectiontest\media_manager_icon--744x194--1250x738.png",
    r"C:\detectiontest\chrome_icon--643x199--1140x734.png",
    r"C:\detectiontest\einstellungen_icon--537x200--1038x735.png",
    r"C:\detectiontest\kamera_icon--426x203--931x738.png",
    r"C:\detectiontest\spiele_und_gewinne_icon--1347x0--1920x452.png",
    r"C:\detectiontest\bluestacks_x_icon--1101x0--1643x449.png",
    r"C:\detectiontest\roblox_icon--833x0--1342x448.png",
    r"C:\detectiontest\systemapps_icon--528x0--1067x448.png",
    r"C:\detectiontest\gamecenter_icon--244x0--781x451.png",
    r"C:\detectiontest\playstore_icon--0x0--478x451.png",
]
siftdetect = SiftMatchingOnScreen()
siftdetect.configure_monitor(monitor=1)
siftdetect.get_needle_images(needle_images, scale_percent=100)

# show the detection results on screen
while True:
    siftdetect.get_screenshot_and_start_detection(
        checks=50,
        trees=5,
        debug=False,
        max_distance=100,
        minimum_matches_per_group=5,
        show_results=True,
        sleep_time_for_results=0.1,
        quit_key_for_results="q",
        scale_percent=100,
    )
    sleep(5)

# or work with the results DataFrame instead
while True:
    siftdetect.get_screenshot_and_start_detection(
        checks=50,
        trees=5,
        debug=False,
        max_distance=100,
        minimum_matches_per_group=5,
        show_results=False,
        sleep_time_for_results=0.1,
        quit_key_for_results="q",
        scale_percent=100,
    )
    print(siftdetect.df)

Each detection pass prints a DataFrame like the following (an occasional "No objects to concatenate" message appears when a pass finds no matches):

              x  ...                                             needle
0    956.575073  ...  C:\detectiontest\media_manager_icon--744x194--...
1    960.470520  ...  C:\detectiontest\media_manager_icon--744x194--...
2    961.128601  ...  C:\detectiontest\media_manager_icon--744x194--...
3    964.380310  ...  C:\detectiontest\media_manager_icon--744x194--...
4    965.097656  ...  C:\detectiontest\media_manager_icon--744x194--...
..          ...  ...                                                ...
420  257.061249  ...  C:\detectiontest\playstore_icon--0x0--478x451.png
421  263.247589  ...  C:\detectiontest\playstore_icon--0x0--478x451.png
422  263.265106  ...  C:\detectiontest\playstore_icon--0x0--478x451.png
423  263.682190  ...  C:\detectiontest\playstore_icon--0x0--478x451.png
424  267.074219  ...  C:\detectiontest\playstore_icon--0x0--478x451.png

[425 rows x 7 columns]
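The max_distance and minimum_matches_per_group parameters can be illustrated with a small pure-Python sketch (our own hypothetical filter_matches helper, showing one plausible reading of these parameters, not the package's actual code): matches whose descriptor distance exceeds max_distance are discarded, and a needle is only reported when enough matches survive.

```python
def filter_matches(matches, max_distance=100, minimum_matches_per_group=5):
    """matches: list of (needle_name, distance) pairs, e.g. from a FLANN matcher."""
    groups = {}
    for needle, distance in matches:
        if distance <= max_distance:  # discard weak (distant) matches
            groups.setdefault(needle, []).append(distance)
    # keep only needles with enough surviving matches
    return {n: ds for n, ds in groups.items() if len(ds) >= minimum_matches_per_group}

matches = [("playstore", 40.0)] * 6 + [("roblox", 30.0)] * 2 + [("chrome", 250.0)] * 9
print(sorted(filter_matches(matches)))  # ['playstore']
```

Here "roblox" is dropped for having too few matches and "chrome" for matching only at large distances.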
acvsn-checker
No description available on PyPI.
acvt45
Active Coalition of Variables (ACV):

ACV is a python library that aims to explain any machine learning model or data. It gives local rule-based explanations for any model or data. It provides a better estimation of Shapley Values for tree-based models (more accurate than path-dependent TreeSHAP). It also proposes new Shapley Values that have better local fidelity. We can regroup the different explanations in two groups: Agnostic Explanations and Tree-based Explanations.

Installation

Requirements: Python 3.6+

OSX: ACV uses Cython extensions that need to be compiled with multi-threading support enabled. The default Apple Clang compiler does not support OpenMP. To solve this issue, obtain the latest gcc version with Homebrew that has multi-threading enabled: see for example the pysteps installation for OSX.

Windows: Install MinGW (a Windows distribution of gcc) or Microsoft's Visual C

Install the acv package:

$ pip install acv-exp

A. Agnostic explanations

The Agnostic approaches explain any data (X, Y) or model (X, f(X)) using the following explanation methods:

Same Decision Probability (SDP) and Sufficient Explanations
Sufficient Rules

See the paper [Consistent Sufficient Explanations and Minimal Local Rules for explaining regression and classification models] for more details.

I. First, we need to fit our explainer (ACXplainer) to the input-output of the data (X, Y) or model (X, f(X)), depending on whether we want to explain the data or the model.

from acv_explainers import ACXplainer

# It has the same params as a Random Forest, and it should be tuned to maximize the performance.
acv_xplainer = ACXplainer(classifier=True, n_estimators=50, max_depth=5)
acv_xplainer.fit(X_train, y_train)

roc = roc_auc_score(acv_xplainer.predict(X_test), y_test)

II. Then, we can load all the explanations in a webApp as follows:

import acv_app
import os

# compile the ACXplainer
acv_app.compile_ACXplainers(acv_xplainer, X_train, y_train, X_test, y_test, path=os.getcwd())
# Launch the webApp
acv_app.run_webapp(pickle_path=os.getcwd())

III. 
Or we can compute each explanation separately as follows:

Same Decision Probability (SDP)

The main tool of our explanations is the Same Decision Probability (SDP). Given an instance, the same decision probability of a subset of variables S is the probability that the prediction remains the same when we fix the variables in S, or when the remaining variables are missing.

How to compute it?

sdp = acv_xplainer.compute_sdp_rf(X, S, data_bground)  # data_bground is the background dataset used for the estimation. It should be the training samples.

Minimal Sufficient Explanations

The Sufficient Explanation is the minimal subset S such that fixing the values of the variables in S permits maintaining the prediction with high probability. See the paper [ref] for more details.

How to compute all the Sufficient Explanations?

Since the Sufficient Explanation is not unique for a given instance, we can compute all of them.

sufficient_expl, sdp_expl, sdp_global = acv_xplainer.sufficient_expl_rf(X, y, X_train, y_train, pi_level=0.9)

How to compute the Minimal Sufficient Explanation?

The following code returns the Sufficient Explanation with minimal cardinality.

sdp_importance, sufficient_expl, size, sdp = acv_xplainer.importance_sdp_rf(X, y, X_train, y_train, pi_level=0.9)

Local Explanatory Importance

For a given instance, the local explanatory importance of each variable corresponds to the frequency of appearance of the given variable in the Sufficient Explanations. See the paper [ref] for more details.

How to compute the Local Explanatory Importance?

lximp = acv_xplainer.compute_local_sdp(X_train.shape[1], sufficient_expl)

Local rule-based explanations

For a given instance (x, y) and its Sufficient Explanation S, we compute a local minimal rule which contains x such that every observation z that satisfies this rule has the same prediction with high probability.

How to compute the local rule explanations?

sdp, rules, _, _, _ = acv_xplainer.compute_sdp_maxrules(X, y, data_bground, y_bground, S)  # data_bground is the background dataset used for the estimation. It should be the training samples.

B. 
Tree-based explanations

ACV gives Shapley Values explanations for XGBoost, LightGBM, CatBoostClassifier, scikit-learn and pyspark tree models. It provides the following Shapley Values:

Classic local Shapley Values (the value function is the conditional expectation)
Active Shapley Values (local fidelity and sparse by design)
Swing Shapley Values (the Shapley Values are interpretable by design) (coming soon)

In addition, we use the coalitional version of SV to properly handle categorical variables in the computation of SV. See the papers here.

To explain the tree-based models above, we need to transform our model into an ACVTree.

from acv_explainers import ACVTree

forest = XGBClassifier()  # or any tree-based model
# ...train the model
acvtree = ACVTree(forest, data_bground)  # data_bground is the background dataset used for the estimation. It should be the training samples.

Accurate Shapley Values

sv = acvtree.shap_values(X)

Note that it provides a better estimation than the tree-path-dependent TreeSHAP when the variables are dependent.

Accurate Shapley Values with encoded categorical variables

Let us assume we have a categorical variable Y with k modalities that we encoded by introducing dummy variables. As shown in the paper, we must take the coalition of the dummy variables to correctly compute the Shapley Values.

# cat_index := list[list[int]] that contains the column indices of the dummies or one-hot variables grouped
# together for each variable. For example, if we have only 2 categorical variables Y, Z
# transformed into [Y_0, Y_1, Y_2] and [Z_0, Z_1, Z_2]
cat_index = [[0, 1, 2], [3, 4, 5]]
forest_sv = acvtree.shap_values(X, C=cat_index)

In addition, we can compute the SV given any coalition. 
For example, let us assume we have 10 variables and we want the following coalition:

coalition = [[0, 1, 2], [3, 4], [5, 6]]
forest_sv = acvtree.shap_values(X, C=coalition)

How to compute the SDP for a tree-based classifier?

Recall that the SDP is the probability that the prediction remains the same when we fix the variables in the subset S.

sdp = acvtree.compute_sdp_clf(X, S, data_bground)  # data_bground is the background dataset used for the estimation. It should be the training samples.

How to compute the Sufficient Coalition and the global SDP importance for a tree-based classifier?

Recall that the Minimal Sufficient Explanation is the minimal subset S such that fixing the values of the variables in S permits maintaining the prediction with high probability.

sdp_importance, sdp_index, size, sdp = acvtree.importance_sdp_clf(X, data_bground)  # data_bground is the background dataset used for the estimation. It should be the training samples.

Active Shapley Values

The Active Shapley Values are SV based on a new game defined in the paper (Accurate and robust Shapley Values for explaining predictions and focusing on local important variables) such that null (non-important) variables have zero SV and the "payout" is fairly distributed among active variables.

How to compute Active Shapley Values?

import acv_explainers

# First, we need to compute the Active and Null coalitions
sdp_importance, sdp_index, size, sdp = acvtree.importance_sdp_clf(X, data_bground)
S_star, N_star = acv_explainers.utils.get_active_null_coalition_list(sdp_index, size)
# Then, we use the active coalitions found to compute the Active Shapley Values.
forest_asv_adap = acvtree.shap_values_acv_adap(X, C, S_star, N_star, size)

Remarks for tree-based explanations:

If you don't want to use multi-threading (due to scaling or memory problems), you have to add "_nopa" to each function (e.g. compute_sdp_clf ==> compute_sdp_clf_nopa). You can also compute the different values needed in cache by setting cache=True in the ACVTree initialization, e.g. 
ACVTree(model, data_bground, cache=True).

Examples and tutorials (a lot more to come...)

A tutorial on the usage of ACV can be found in demo_acv, and the notebooks below demonstrate different use cases for ACV. Look inside the notebook directory of the repository if you want to try playing with the original notebooks yourself.
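The Local Explanatory Importance described above (the frequency with which a variable appears in an instance's Sufficient Explanations) is easy to compute by hand; the following is our own illustrative helper, not ACV's implementation:

```python
from collections import Counter

def local_explanatory_importance(n_features, sufficient_expls):
    """Frequency of each feature across all Sufficient Explanations of one instance.

    sufficient_expls: list of Sufficient Explanations, each a list of feature indices.
    """
    counts = Counter(f for expl in sufficient_expls for f in set(expl))
    return [counts[f] / len(sufficient_expls) for f in range(n_features)]

# Feature 0 appears in every Sufficient Explanation, feature 2 in one of three.
print(local_explanatory_importance(4, [[0, 1], [0, 2], [0, 1]]))
# [1.0, 0.6666666666666666, 0.3333333333333333, 0.0]
```

A feature with importance 1.0 is needed in every Sufficient Explanation of the instance; a feature with importance 0.0 never matters locally.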
acvutils
Failed to fetch description. HTTP Status Code: 404
acwater
Atmospheric correction of EnMAP hyperspectral data for water surfaces

ACwater implements a class to load an EnMAP object and execute Polymer atmospheric correction for water surfaces. ACwater requires EnPT for the EnMAP data processing and Polymer for the atmospheric correction algorithm.

The operating system for installation is Linux, tested on Debian GNU/Linux 9.9 (stretch) - Linux 4.9.0-9-amd64 x86_64.

Requirements are EnPT and Polymer. Installation of EnPT follows the EnPT instructions.

Installation

The instructions involve cloning the package and installing its dependencies using a Python package manager. Note that both ACwater and Polymer must be installed with EnPT in the same environment.

How to use the command line

First, download an EnMAP Level 1B image from https://eoweb.dlr.de/egp/.

Please refer to the list of arguments that can be used in EnPT, which is available at https://enmap.git-pages.gfz-potsdam.de/GFZ_Tools_EnMAP_BOX/EnPT/doc/usage.html.

Here is an example of what the command might look like:

enpt --CPUs 4 --auto_download_ecmwf True --average_elevation 0 --blocksize 100 --deadpix_P_algorithm spectral --deadpix_P_interp_spatial linear --deadpix_P_interp_spectral linear --disable_progress_bars True --drop_bad_bands True --enable_ac True --mode_ac water --polymer_additional_results True --ortho_resampAlg gauss --output_dir /your output dir/ --output_format GTiff --path_l1b_enmap_image /path to your EnMAP L1B zip file/ENMAP01-____L1B-DT0000002037_20220801T105350Z_026_V010111_20230223T123717Z.ZIP --polymer_root /Polymer path/polymer-v4.14 --run_deadpix_P True --scale_factor_boa_ref 10000 --scale_factor_toa_ref 10000 --target_epsg 4326 --threads -3 --vswir_overlap_algorithm vnir_only

Features

Level 1 class for connecting EnPT and Polymer.

License

This software is under GNU General Public License v3

Credits

Credits are with the Phytooptics Group at AWI. 
This software was developed within the context of the EnMAP project supported by the DLR Space Administration with funds of the German Federal Ministry of Economic Affairs and Energy (on the basis of a decision by the German Bundestag: 50 EE 1923 and 50 EE 1915) and contributions from GFZ and Hygeos.

This package was created with Cookiecutter and the audreyr project template.

History

0.1.0 (2020-04-23)
0.2.6 (2021-05-04)
0.3.0 (2023-02-10)

Development at AWI, Bremerhaven.
ac-websocket-server
AC-WEBSOCKETS-SERVER

The ac-websockets-server is a Python-based server to control a local Assetto Corsa dedicated server via a websockets connection from a remote host.

Installation

You can install ac-websocket-server from PyPi:

pip install ac-websocket-server

The module is only supported in python3.

How to use

Websocket Commands

The client protocol consists of single-line commands which receive a Google-style JSON object response.

shutdown

The following ACWS related commands are supported:

shutdown now - shutdown the ACWS server

server

The following server related commands are supported:

server drivers - shows a summary of the active drivers on the server
server entries - shows a summary of the entry_list.ini contents
server info - shows a summary of the server
server sessions - shows a summary of configured sessions
server start - starts the AC server
server stop - stops the AC server
server restart - stops and starts the AC server

Excerpts from the responses to these commands are shown below.

server drivers

# server drivers
{ "data": { "drivers": { "Mark Hannon": { "name": "Mark Hannon", "host": "192.168.1.1", "port": 50834, "car": "bmw_m3_e30", "guid": "9993334455599", "ballast": 0, "msg": "joining" }, "Boof Head": { "name": "Boof Head", "host": "192.168.2.1", "port": 50834, "car": "bmw_m3_e30", "guid": "123456768", "ballast": 0, "msg": "joining" }, "Crazy Guy": { "name": "Crazy Guy", "host": "192.168.3.1", "port": 50834, "car": "bmw_m3_e30", "guid": "7777777777777", "ballast": 0, "msg": "joining" } } } }

server entries

# server entries
{ "data": { "entries": { "CAR_0": { "car_id": "CAR_0", "model": "dj_skipbarber_f2000", "skin": "The9GAG", "spectator_mode": "0", "drivername": "", "team": "", "guid": "76561198102064903", "ballast": "0", "restrictor": "0" }

server sessions

# server sessions
{ "Practice": { "type": "Practice", "laps": 0, "time": 120, "msg": "" }, "Qualify": { "type": "Qualify", "laps": 0, "time": "10", "msg": "" }, "Race": { "type": "Race", "laps": 20, "time": 0, "msg": "" } }

server start

# 
server start
{ "data": { "msg": "Assetto Corsa server started" } }
#
{ "data": { "serverInfo": { "version": "v1.15", "timestamp": "2022-07-22 10:42:32.8776464 +1000 AEST m=+0.007426800", "track": "rt_autodrom_most", "cars": "[\"ks_mazda_mx5_cup\"]", "msg": "" } } }

grid

The following grid related commands are supported:

grid finish - sets grid order based on latest race finishing order
grid reverse - sets grid order based on latest race REVERSED order
grid order - shows a summary of the current/updated grid order
grid entries - shows a summary of all the slots for/from entry_list.ini
grid save - write the changes to the grid to the entry_list.ini file

Setting a reverse grid and then writing the result are shown below:

# grid reverse
{ "data": { "msg": "test/results/2020_12_20_20_58_RACE.json parse SUCCESS" } }
# grid finish
{ "data": { "grid": { "1": "Keith", "2": ".SNRL.shille", "3": "Wayne", "4": "Russ S", "5": "Mark Hannon", "6": "RussG", "7": "ab156" } } }
# grid write
{ "data": { "msg": "entry_list.ini file update SUCCESS" } }

lobby

The following lobby related commands are supported:

lobby info - shows the lobby info
lobby restart - re-registers to the lobby

tracker

The following tracker related commands are supported:

tracker start - starts the stracker process
tracker stop - stops the stracker process
tracker restart - stops and starts the stracker process

All tracker commands require stracker.ini to be stored in the cfg directory and stracker.exe in the server root.
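Since every response is a single JSON object, a controlling client only needs to send the one-line command text and decode the reply. A minimal sketch of the decoding step, using a `server drivers` excerpt like the one shown above (the `driver_names` helper is our own, not part of the package):

```python
import json

# A trimmed "server drivers" response, as excerpted above.
response = '''{ "data": { "drivers": {
  "Mark Hannon": { "name": "Mark Hannon", "host": "192.168.1.1", "port": 50834,
                   "car": "bmw_m3_e30", "guid": "9993334455599", "ballast": 0, "msg": "joining" },
  "Boof Head":   { "name": "Boof Head", "host": "192.168.2.1", "port": 50834,
                   "car": "bmw_m3_e30", "guid": "123456768", "ballast": 0, "msg": "joining" }
} } }'''

def driver_names(raw: str) -> list:
    """Extract the driver names from a 'server drivers' JSON response."""
    return sorted(json.loads(raw)["data"]["drivers"])

print(driver_names(response))  # ['Boof Head', 'Mark Hannon']
```

The same pattern (json.loads, then index into "data") applies to the grid, lobby and tracker responses.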
acwrite
acwriting

Introduction

Have you ever felt frustrated with your English writing skills? If yes, you are in the right place! Acwriting is actually what you want!

What is acwriting?

The short answer is: it is a Python package for English writing, designed especially for academic writing. It can:

Translate Chinese from/to English (using the py-googletrans package, original project link: https://github.com/ssut/py-googletrans)
Find synonyms/antonyms of a given English word (using the PyDictionary package, original project link: https://github.com/geekpradd/PyDictionary)
Transfer Python math formulas into LaTeX-format expressions (using the latexify package, original project link: https://github.com/odashi/latexify_py)
Find the most suitable example sentences for a given English word/phrase
Given a writing intention, e.g. "introduce something", "state the shortcoming of something", "write conclusion", etc., output the most suitable phrases and sentence templates
Given an English sentence you intend to express, automatically correct the sentence's errors and transfer it into a formal, academic-style expression

So, let's get started!

TODO

How to realize autocorrect
Autocomplete function
Embed Autocomplete function into the system

Simple example

Find the most suitable example sentences for a given English word/phrase:

>>> from acwriting.phafind import Phafind
>>> p = Phafind()
>>> phase = "best knowledge"
>>> result = p.find(phase)
>>> print(result[0:5])
['1. To our best knowledge, it is still an open challenge.', '2. In the old days, one sought a fatwa from the sheikh who had the best knowledge.', '3. I feel now that we have the best knowledge to help people.', "4. So what's our best knowledge?", '5. This, to our best knowledge, was never been investigated before.']

Given the writing intention, e.g. 
"introduce something","state the shortcoming of something","write conclusion",etc, the system outputs the most suitable phrases and sentence templates.>>>fromacwriting.senfindimportSenfind>>>s=Senfind()>>>intention="compare two things">>>result=s.find(intention)>>>print(result[0:5])['1. X is different from Y in a number of respects.','2. X differs from Y in a number of important ways.','3. Both X and Y share a number of key features.','4. These results are similar to those reported by xxx','5. In contrast to earlier findings, however, no evidence of X was detected.']Given the English sentence you intend to express, the system automatically corrects the sentence's errors and transfer it into formal & academic style expression. (The precision of the results will come later.)>>>fromacwriting.autotransimportAutotrans>>>my_auto=Autotrans()>>>style="acdemic style">>>text="There has many problems">>>result=my_auto.transfer(text,style)>>>print("original sentence":,text)"There has many problems">>>print("after corrected:",result)['It has a lot of defects','It has proven to be problematic','It has a lot of issues,','There are many problems,']You can also use command line to use it.InstallationIf you use pip, install the latest version of acwriting by:$ pip3 install acwriteIf you use conda, install the latest version of acwriting by:$ conda install acwriteOr, you can install it manually:$ git clone https://github.com/yiyualt/acwriting.git $ cd acwriting $ python3 setup.py installLicenseThe MIT License (MIT)Copyright (c) <2020> Yi Yu & Chunyang MoPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The 
above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
acwriting
Failed to fetch description. HTTP Status Code: 404
acw-sc-v2-py
Python requests.HTTPAdapter for acw_sc__v2

acw_sc__v2 is a cookie used by some websites to prevent crawlers. When the website detects that the request is sent by a crawler, it returns a javascript challenge. The crawler needs to solve the challenge and resend the request with the cookie set to the challenge value. This project provides a Python requests.HTTPAdapter to resolve the challenge automatically.

Usage

pip install acw-sc-v2-py

import requests

session = requests.Session()

# add the following code to your original requests code
from acw_sc_v2_py import acw_sc__v2

adapter = acw_sc__v2.AcwScV2Adapter()
session.mount("http://", adapter)
session.mount("https://", adapter)

response = session.get("https://www.example.com/")
print(response.text)

Use Case

Before using acw-sc-v2-py

import requests

session = requests.Session()
for i in range(8):
    response = session.get("https://www.example.com/")
    print(response.text)

Usually, you will get blocked after sending 2 consecutive requests to the same website. The response will be like the following HTML code, which requires you to solve a javascript challenge. If the web page is opened in a browser, the browser will automatically solve the challenge.

<html><script>
var arg1='70D9569CD5E5895C84F284A09503B1598C5762A1';
var _0x4818=['\x63\x73\x4b\x48\x77\x71\x4d\x49,...
function setCookie(name,value){var expiredate=new Date();...
function reload(x){setCookie("acw_sc__v2",x);...
</script></html>

After using acw-sc-v2-py

import requests

# step 1: import
from acw_sc_v2_py import acw_sc__v2

session = requests.Session()
# step 2: create adapter
adapter = acw_sc__v2.AcwScV2Adapter()
# step 3: mount adapter
session.mount("http://", adapter)
session.mount("https://", adapter)

for i in range(8):
    response = session.get("https://www.example.com/")
    print(response.text)

By using acw-sc-v2-py, you will get the normal response. 
The acw_sc__v2 adapter will handle the javascript challenge and automatically update the cookie.

Enable logging

Prepend the following code to enable a detailed log for acw-sc-v2-py.

import logging
logger = logging.getLogger("acw_sc_v2_py.acw_sc__v2")
logger.setLevel(logging.DEBUG)
logger.propagate = True

import requests
from acw_sc_v2_py import acw_sc__v2

session = requests.Session()
adapter = acw_sc__v2.AcwScV2Adapter()
session.mount("http://", adapter)
session.mount("https://", adapter)

for i in range(8):
    response = session.get("https://www.example.com/")
    print(response.text)

The log will be like the following.

[2024-01-26 22:01:26] INFO:root:detected anti spam is triggered
[2024-01-26 22:01:28] INFO:root:cookie generated acw_sc__v2=65b3bb3601fe9ab002c5c1ff58fc71a1115e8322
[2024-01-26 22:01:28] INFO:root:resending the origin request
<!DOCTYPE html></html>
[2024-01-26 22:01:29] INFO:root:cookie set acw_sc__v2=65b3bb3601fe9ab002c5c1ff58fc71a1115e8322
[2024-01-26 22:01:30] INFO:root:anti spam is not triggered
<!DOCTYPE html></html>
...
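Detecting the challenge page can be sketched as a simple response check (our own illustration of the idea, not the adapter's actual code): a blocked response is ordinary HTML that defines `arg1` and sets the `acw_sc__v2` cookie via a `reload` function.

```python
def is_acw_challenge(body: str) -> bool:
    """Heuristic check for the acw_sc__v2 javascript challenge page."""
    return "var arg1=" in body and "acw_sc__v2" in body

# Shapes of a blocked response and a normal one, after the excerpt above.
blocked = "<html><script>var arg1='70D9...';function reload(x){setCookie(\"acw_sc__v2\",x);}</script></html>"
normal = "<!DOCTYPE html><html><body>hello</body></html>"
print(is_acw_challenge(blocked), is_acw_challenge(normal))  # True False
```

An adapter built on such a check would solve the challenge, set the cookie, and transparently resend the original request, which is what the log output above shows happening.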
acycling-digraph-problem
acycling digraph problem

install

pip install acycling-digraph-problem

usage

usage: main.py [-h] [--show SHOW] file_path

positional arguments:
  file_path    path to input file

options:
  -h, --help   show this help message and exit
  --show SHOW  show graph (default: False)
ad
Overview

The ad package allows you to easily and transparently perform first and second-order automatic differentiation. Advanced math involving trigonometric, logarithmic, hyperbolic, etc. functions can also be evaluated directly using the admath sub-module.

All base numeric types are supported (int, float, complex, etc.). This package is designed so that the underlying numeric types will interact with each other as they normally do when performing any calculations. Thus, this package acts more like a "wrapper" that simply helps keep track of derivatives while maintaining the original functionality of the numeric calculations.

From the Wikipedia entry on Automatic differentiation (AD):

"AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, and accurate to working precision."

See the package documentation for details and examples.

Main Features

Transparent calculations with derivatives: no or little modification of existing code is needed, including when using the NumPy module.

Almost all mathematical operations are supported, including functions from the standard math module (sin, cos, exp, erf, etc.) and cmath module (phase, polar, etc.) with additional convenience trigonometric, hyperbolic, and logarithmic functions (csc, acoth, ln, etc.). Comparison operators follow the same rules as the underlying numeric types.

Real and complex arithmetic handled seamlessly. 
Treat objects as you normally would using the math and cmath functions, but with their new admath counterparts.

Automatic gradient and hessian function generator for optimization studies using scipy.optimize routines with gh(your_func_here).

Compatible linear algebra routines in the ad.linalg submodule, similar to those found in NumPy's linalg submodule, that are not dependent on LAPACK. There are currently:

Decompositions
  chol: Cholesky Decomposition
  lu: LU Decomposition
  qr: QR Decomposition

Solving equations and inverting matrices
  solve: General solver for linear systems of equations
  lstsq: Least-squares solver for linear systems of equations
  inv: Solve for the (multiplicative) inverse of a matrix

Installation

You have several easy, convenient options to install the ad package (administrative privileges may be required):

Download the package files below, unzip to any directory, and run python setup.py install from the command-line.
Simply copy the unzipped ad-XYZ directory to any other location that python can find it and rename it ad.
If setuptools is installed, run easy_install --upgrade ad from the command-line.
If pip is installed, run pip install --upgrade ad from the command-line.
Download the bleeding-edge version on GitHub.

Contact

Please send feature requests, bug reports, or feedback to Abraham Lee.

Acknowledgements

The author expresses his thanks to:

Eric O. LEBIGOT (EOL), author of the uncertainties package, for providing code insight and inspiration
Stephen Marks, professor at Pomona College, for useful feedback concerning the interface with optimization routines in scipy.optimize
Wendell Smith, for updating testing functionality and numerous other useful function updates
Jonathan Terhorst, for catching a bug that made derivatives of logarithmic functions (base != e) give the wrong answers
GitHub user fhgd for catching a mis-calculation in admath.atan2
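The chain-rule bookkeeping the overview describes can be illustrated with a tiny forward-mode sketch using dual numbers (our own minimal illustration of the principle; the ad package's actual implementation is richer and also tracks second derivatives):

```python
class Dual:
    """A value together with its first derivative; operators apply the chain rule."""

    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def derivative(f, x):
    out = f(Dual(x, 1.0))  # seed with dx/dx = 1
    return out.deriv

# d/dx (x**2 + 3x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

Because each elementary operation carries its own derivative rule, arbitrary compositions differentiate automatically and to working precision, which is exactly the property the Wikipedia quote highlights.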
ad1459
AD1459 IRC Client

AD1459 is an IRC client written in Python and GTK3. It aims to be a modern IRC client with features that make sense for IRC today. Its interface has been loosely inspired by Hexchat and mIRC.

AD1459 is currently in ALPHA, and should not be considered ready for everyday use. That being said, it is a relatively capable client for basic functionality even in its currently incomplete state.

Current abilities

- Multiple-network support
- Chatting over IRC
- Joining/parting channels
- Changing nick
- Tab-completion
- Save and recall servers
- Secure password storage within the system keyring
- Notifications
- User list
- Topic
- Some commands for doing IRC things
- Last message recall
- Compacted server messages

TODOs

Currently planned features include:

- CTCP
- Logging

Known issues

These are problems that have currently been identified:

- Large buffers make the application unresponsive
- CTCP ACTION messages sent from the client also highlight the client

Connecting to IRC

To connect to a server/network, click on the server button (in the top left) and enter the server details in the text entries. You can alternatively enter a server as a single line of text, for which the format is:

none|sasl|pass name host port username (tls) (password)

- none|sasl|pass: specifies the connection type. If you need to authenticate to the server with a server password, use pass. If the network supports SASL, use sasl.
- name: the name for the network in the list (e.g. freenode, Esper)
- host: the hostname of the server to connect to, e.g. chat.freenode.net
- port: the port to connect with, e.g. 7070. Default is 6697.
- username: the username/ident for your connection to the server. This will also be your initial nickname (separate nickname support is planned for a future release).
- tls: if present, AD1459 will use TLS to connect to the server. Otherwise, a plaintext connection will be used.
- password: the password used to authenticate with the server. This option is required if the authentication method specified was sasl or pass.
It should be omitted otherwise.

Example connection lines:

sasl Esper irc.esper.net 6697 jeans tls hunter2
none freenode chat.freenode.net 6666 g4vr0che
pass My-Private-Network my.private-network.com 12345 secret_username tls hunter3
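The single-line format above maps mechanically onto the fields it describes. As a sketch, a parser for it could look like this (the parse_server_line helper is hypothetical and not part of AD1459):

```python
def parse_server_line(line):
    """Parse an AD1459-style server line:
    none|sasl|pass name host port username (tls) (password)
    """
    parts = line.split()
    server = {
        "auth": parts[0],       # none, sasl or pass
        "name": parts[1],       # display name, e.g. Esper
        "host": parts[2],       # e.g. chat.freenode.net
        "port": int(parts[3]),  # e.g. 6697
        "username": parts[4],   # also the initial nickname
    }
    rest = parts[5:]
    server["tls"] = "tls" in rest                   # optional flag
    rest = [p for p in rest if p != "tls"]
    server["password"] = rest[0] if rest else None  # required for sasl/pass
    return server

print(parse_server_line("sasl Esper irc.esper.net 6697 jeans tls hunter2"))
```

Running it on the example lines above yields one dictionary per server, with tls defaulting to False and password to None when omitted.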
ad2openldap
Why ad2openldap?

ad2openldap is a lightweight replicator of user, group and netgroup information from Microsoft Active Directory into an OpenLDAP server, serving as a Unix IDMAP solution. The original version was developed at Fred Hutch in 2010 to overcome frustrations with slow and unreliable Linux LDAP connectivity to Active Directory, and to isolate badly behaving HPC scripts ("fork bombs") from critical AD infrastructure.

ad2openldap in 2018 and beyond

In 2017 we observed that newer solutions have grown in complexity (SSSD, Centrify) but have not been able to match ad2openldap in performance and reliability (SSSD). As we migrate more services to the cloud, we continue to benefit from LDAP caches/replicas that provide low-latency LDAP services, and ad2openldap continues to be a critical piece of infrastructure on more than 2000 servers/compute nodes on premise and in the AWS and Google clouds. We decided to port the tool to Python3, add an easy installer via pip3 and test it on newer OSes. We hope it will be as useful to others as it is to us.

Installation

Ubuntu

On Ubuntu you will be prompted for a new LDAP administrator password. Please remember this password.

sudo apt install -y python3-pip slapd
sudo pip3 install ad2openldap

RHEL/CentOS 6 (untested in 2018)

sudo yum -y install epel-release
sudo yum -y install python34 python34-setuptools python34-devel gcc slapd
sudo easy_install-3.4 pip
sudo pip3 install ad2openldap

RHEL/CentOS 7 (untested in 2018)

sudo yum -y install python??
sudo pip3 install ad2openldap

Configuration

/etc/ad2openldap/ad2openldap.conf requires these minimum settings:

# openldap administrator password (you set this during installation)
bind_dn_password: ChangeThisLocalAdminPassword12345
# AD service account (userPrincipalName aka UPN)
ad_account: [email protected]
# password for AD service account
ad_account_password: ChangeThisPassword
# AD LDAP URL of one of your domain controllers
ad_url: ldap://dc.example.com
# The base DN to use from Active Directory, under which objects are retrieved.
ad_base_dn: dc=example,dc=com

Execute the setup script and enter the items when prompted:

ad2openldap3 setup

Then create a cronjob in the file /etc/cron.d/ad2openldap that runs approximately every 15 minutes:

SHELL=/bin/bash
[email protected]
*/15 * * * * root /usr/local/bin/ad2openldap3 deltasync --dont-blame-ad2openldap -v >>/var/log/ad2openldap/ad2openldap.log 2>&1 ; /usr/local/bin/ad2openldap3 healthcheck -N username

It is strongly recommended to raise the default open-files limit for slapd to at least 8192:

echo "ulimit -n 8192" >> /etc/default/slapd (or /etc/defaults/slapd depending on the distribution)

Troubleshooting

Use the --verbose flag to log to STDOUT/STDERR.

The AD dumps and diffs are in /tmp by default:

ad_export.ldif - current dump
ad_export.ldif.0 - last dump
ad_export_delta.diff - computed differences between these files

Possible failure modes are:

- LDAP server failure: needs a restart, possibly followed by a forced full update if corrupt or incomplete
- Firewall block still improperly active: look at the update script for the removal syntax (this failure is very unlikely given the current process)
- Bad or conflicting AD entities: a forced full update should remedy this

In the event that an incremental update is not possible, or is bypassed using the command line parameter '--fullsync', a full update will occur instead.

A full update:

- Dumps groups, users and NIS group entities from AD
- Locks out remote access to the LDAP server via the firewall
- Shuts down the LDAP server
- Writes a new blank database using the LDIF template
- Directly imports the AD dump into the database
- Restarts the LDAP server
- Removes the firewall block on the LDAP server
ad3
No description available on PyPI.
ad9546
ADI-AD9546

Set of tools to interact with & program the AD9546/45 integrated circuits by Analog Devices. Use these tools to interact with the older AD9548/47 chipsets.

These scripts are not Windows compatible. These scripts expect a /dev/i2c-X entry; they do not manage the device through SPI at the moment.

Install

python setup.py install

Dependencies

python-smbus

Install requirements with

pip3 install -r requirements.txt

API

Each application comes with an -h help menu. Refer to the help menu for specific information.

- The i2c bus must always be specified.
- The i2c slave address must always be specified.
- --flag is optional: the action will not be performed if not requested.

For complex flag values (basically involving whitespace), for example ref-input --coupling, don't forget to encapsulate them with inverted commas:

ref-input.py \
    0 0x48 \                        # bus #0, slave address is 0x48
    --refa \                        # simple, one word
    --coupling "AC 1.2V"            # 'complex' but meaningful value

ref-input.py \
    1 0x4A \                        # bus #1, slave address is 0x4A
    --refaa \                       # simple, one word
    --coupling "internal pull-up"   # 'complex' but meaningful value

Flag values are case sensitive and must be matched exactly. It is not possible to pass an unsupported/unknown flag value; the scripts will reject those with a runtime error.

AD9545 / 46

These scripts are developed and tested with an AD9546 chip. The AD9545 is pin & register compatible, so it should work. It is up to the user to restrict themselves to the supported operations in that scenario.

Utilities

- calib.py: calibrates core portions of the clock. Typically required when booting or when a new setup has just been loaded.
- distrib.py: controls clock distribution and output signals. Includes signal paths and output pin muting operations.
- irq.py: IRQ clearing & masking operations
- misc.py: miscellaneous operations
- mx-pin.py: Mx programmable I/O management
- pll.py: APLLx and DPLLx cores management. Includes manual forcing of free-run + holdover operation.
- power-down.py: power saving and management utility
- ref-input.py: reference & input signals management
- regmap.py: load or dump a register map preset
- regmap-diff.py: loaded/dumped regmap differentiator (debug tool)
- reset.py: device reset operations
- status.py: status monitoring; includes IRQ flag reports and onboard temperature readings
- sysclk.py: sys clock control & management tool

See the bottom of this page for typical configuration flows.

Register map

regmap.py allows the user to quickly load an exported register map from the official A&D graphical tool. Input/output is json. Use --quiet to disable the stdout progress bar.

# load a register map (on bus #0 @0x48)
regmap.py 0 0x48 --load test.json

Export the current register map to open it in the A&D graphical tools:

regmap.py --dump /tmp/output.json 0 0x48

Register map diff

It is possible to use the regmap-diff.py tool to differentiate (bitwise) an official A&D register map (created with their GUI) and a dumped one (--dump with regmap.py).

# order is always:
# 1) official (from the A&D GUI)
# 2) then the dumped file
regmap-diff.py official_ad.json /tmp/output.json

This script is mainly used for debugging purposes. It is equivalent to a

diff -q -Z official_ad.json /tmp/output.json

focused on the "RegisterMap" field. That command cannot be used directly, because --dump does not replicate 100% of the official A&D file content (too complex) and is not focused on the "RegisterMap" field.

Status script

status.py is a read-only tool to monitor the chipset's current status. That includes IRQ status reports, calibration reports, integrated sensors and measurement readings. Run status.py -h to list all known keys. Output format is json and is streamed to stdout.
Each --flag can be cumulated, which increases the status report size/verbosity:

# Grab general / high level info (bus=0, 0x4A):
status.py 0 0x4A \
    --info --serial \   # general info
    --pll               # pll core (timing general info)

status.py 1 0x48 \
    --info \
    --pll --sysclk \    # timing cores info
    --ref-input         # input / ref. signals info

status.py 0 0x4A \
    --irq               # IRQ status register

Dump a status report from stdout into a file:

status.py --info --serial --pll 0 0x4A > /tmp/status.json

Output is a json structure. That means it can be interpreted directly by another python script. Here's an example of how to do that:

import subprocess
args = ['status.py', '--distrib', '0', '0x4A']
# interpret the captured stdout content directly
ret = subprocess.run(args, capture_output=True)
if ret.returncode == 0:  # syscall OK
    # direct interpretation
    struct = eval(ret.stdout.decode('utf-8'))
    print(struct["distrib"]["ch0"]["a"]["q-div"])

The status report contains a lot of information depending on the targeted internal cores. status.py supports filtering operations; we'll describe later how an efficient filter makes things easier when grabbing data from another script.

Status report filtering

Filters are described by comma-separated values. It is possible to cumulate filters of the same kind and of different kinds. Filters are applied in their order of appearance/description. The identifier filter is applied prior to the value filter.

- --filter-by-key: filters the result by identification keyword. This is useful to retain fields of interest:

# grab the vendor field
status.py 0 0x48 \
    --info --filter-by-key vendor         # single field filter

# zoom in on temperature info
status.py 0 0x48 \
    --misc --filter-by-key temperature    # single field filter

# only care about CH0
status.py 0 0x48 \
    --distrib --filter-by-key ch0         # single field filter

# only care about AA path(s)
# [CH0:AA ; CH1:AA] in this case
status.py 0 0x48 \
    --distrib --filter-by-key aa          # single field filter

Example of cumulated filters:

# grab the (vendor + chip-type) fields
status.py 0 0x48 \
    --info --filter-by-key chip-type,vendor   # comma separated

# zoom in on the temperature reading
status.py 0 0x48 \
    --misc --filter-by-key temperature,value  # zoom in

# Retain the `aa` path from CH0.
# Filter by order of appearance,
# specifying CH0 then AA ;)
status.py 0 0x48 \
    --distrib --filter-by-key ch0,aa

By default, if the requested keyword is not found (non-effective filter), the full data set is preserved.

# non-effective filter example:
status.py --info --filter-by-key something 0 0x48

- --filter-by-value: it is possible to filter status reports by matching values

# Return `0x456` <=> vendor field
status.py 1 0x48 \
    --info \
    --filter-by-value 0x456

# Return only deasserted values
status.py 1 0x48 \
    --distrib \
    --filter-by-value disabled

# Even better `deasserted` value filter
status.py 1 0x48 \
    --distrib \
    --filter-by-value disabled,false,inactive

It is possible to combine key and value filters:

# from CH0 return only deasserted values
status.py 1 0x48 \
    --distrib \
    --filter-by-key ch0 \
    --filter-by-value disabled,false,inactive

Extract raw data from a status report

The --unpack option allows convenient data reduction. If the requested filter has reduced the dataset to a single value, the raw data is exposed:

status.py 0 0x4A \
    --info --filter-by-key vendor \             # extract vendor info
    --unpack                                    # raw value

status.py 0 0x4A \
    --misc --filter-by-key temperature,value \  # extract t° reading
    --unpack                                    # raw value

# extract the temperature alarm bit
status.py 0 0x4A \
    --misc --filter-by-key temperature,alarm \  # extract t° alarm bit
    --unpack                                    # raw value

This is very convenient when importing data into an external script. Here's an example in python once again:

import subprocess
args = [
    'status.py', '0', '0x4A',
    '--misc', '--filter-by-key', 'temperature,alarm',  # extract the raw bit
]
ret = subprocess.run(args, capture_output=True)
if ret.returncode == 0:  # syscall OK
    # direct bool() cast
    has_alarm = bool(ret.stdout.decode('utf-8'))

If the status report comprises several values, then --unpack simply reduces the structure to 1D. That means we lose data, because we can only have a unique value per identifier:

status.py 0 0x4A \
    --misc --filter-by-key temperature \  # extract temperature fields
    --unpack

Sys clock

Sysclock compensation is a new feature introduced in the AD9546. sysclock.py allows quick and easy access to these features. To determine the current sysclock-related settings, use status.py with the --sysclock option.

- --freq: program the input frequency [Hz]
- --sel: select the input path (internal crystal or external XOA/B pins)
- --div: set the integer division ratio on the input frequency
- --doubler: enable the input frequency doubler

Calibration script

calib.py allows chipset (re)calibration.

- It is required to perform a calibration at boot time.
- It is required to perform an analog PLL (re)calibration any time we recover from a sys clock power down.

Perform a complete (re)calibration:

calib.py --all 0 0x4A

Perform only a sys clock (re)calibration (1st step in the application note):

calib.py --sysclk 0 0x4A

Monitor the internal calibration process with:

status.py 1 0x4A \
    --pll --sysclk --filter-by-key calibrating
status.py 1 0x4A \
    --sysclk --irq --filter-by-key calibration

Clock distribution

distrib.py is an important utility. It helps configure the clock path, and control the output signals and their behavior. To determine the chipset's current clock-distribution configuration, one should use the status script with the --distrib option.

Control flags:

- --channel (optional) describes the targeted channel. Defaults to all, meaning that if --channel is not specified, both channels (CH0/CH1) are assigned the same value. This script only supports a single --channel assignment.
- --path (optional) describes the desired signal path.
Defaults to all, meaning all paths are assigned the same value (if feasible). This script only supports a single --path assignment at a time. Refer to the help menu for the list of accepted values.
- --pin (optional) describes the desired pin, when controlling an output pin. Defaults to all, meaning all pins (+ and -) are assigned the same value when feasible. Refer to the help menu for the list of accepted values.

Action flags: the script supports as many action flags as desired; see the list down below.

- --mode: sets the OUTxy output pin as single ended or differential
- --format: sets the OUTxy current sink/source format
- --current: sets the OUTxy pin output current [mA], where x = channel

# set channel 0 to HCSL, the default format
distrib.py --format hcsl --channel 0

# set channel 1 to CML format
distrib.py --format cml --channel 1

# set channels 0+1 to HCSL, the default format
distrib.py --format hcsl

# set Q0A, Q0B as differential outputs
distrib.py --mode diff --channel 0

# set Q1A as a single ended pin
distrib.py --mode se --channel 1 --pin a

# set Q0A, Q0B to output 12.5 mA, the default output current
distrib.py --current 12.5 --channel 0

# set Q1A to output 7.5 mA, the minimal current
distrib.py --current 7.5 --channel 1 --pin a

- --sync-all: sends a SYNC order to all distribution dividers. It is required to run a sync-all in case the current output behavior is not set to immediate.

# send a SYNC all
# SYNC all is required depending on previous actions and the current configuration
distrib.py --sync-all 0 0x48

- --autosync: controls the given channel's so-called "autosync" behavior.

# set both Pll CH0 & CH1 to "immediate" behavior
distrib.py --autosync immediate 0 0x48
# set Pll CH0 to "immediate" behavior
distrib.py --autosync immediate --channel 0 0 0x48
# and Pll CH1 to "manual" behavior
distrib.py --autosync manual --channel 1 0 0x48

In the previous example, CH1 is set to manual behavior. One must either perform a sync-all operation, a q-sync operation on channel 1, or an Mx-pin operation with the dedicated script, to enable this output signal.

- --q-sync: initializes a Qxy divider synchronization sequence manually, where x is the channel and y the desired path.

# trigger a Q0A Q0B Q1A Q1B SYNC
distrib.py --q-sync 0 0x48

# trigger a Q0A Q0B SYNC
distrib.py --q-sync --channel 0 0 0x48

# trigger a Q0B Q1B SYNC, because --channel `all` is implied
distrib.py --q-sync --path b 0 0x48

- --unmute: controls the Qxy unmuting opmode, where x is the channel and y the desired path.

# Q0A Q0B + Q1A Q1B `immediate` unmuting
distrib.py --unmute immediate 0 0x48

# Q0A Q1A `phase locked` unmuting
distrib.py --unmute phase --path a 0 0x48

# Q0B Q1B `freq locked` unmuting
distrib.py --unmute freq --path b 0 0x48

# Q0A + Q1B `immediate` unmuting
distrib.py --unmute immediate --path a 0 0x48
distrib.py --unmute immediate --path b 0 0x48

- --pwm-enable and --pwm-disable: control the PWM modulator for OUTxy, where x is the channel and y the desired path.
- --divider: controls the integer division ratio at the Qxy stage

# Set an R=48 division ratio
# for Q0A,AA,B,BB,C,CC and Q1A,AA,B,BB,
# because --channel=`all` and --path=`all` are implied
distrib.py --divider 48 0 0x48

# Set a Q1A,AA,B,BB R=64 division ratio,
# because --path=`all` is implied
distrib.py --divider 64 --channel 1 0 0x48

# Q0A & Q0B R=23 division ratio
# requires dual assignment, because --pin {a,b} is not feasible at once
distrib.py --divider 23 --channel 0 --pin a 0 0x48
distrib.py --divider 23 --channel 0 --pin b 0 0x48

- --half-divider: enables the "half divider" feature @ the Qxy path
- --phase-offset: applies an instantaneous phase offset to the desired output path. The maximal value is 2*D-1, where D is the previous --divider ratio for the given channel + pin.

# Apply to Q0A,AA,B,BB,C,CC + Q1A,AA,B,BB
# TODO

- --unmuting: controls the "unmuting" behavior, meaning the output signal can be exposed automatically depending on the clock state.
- --mute and --unmute: manually enable/disable an output pin

Reset script

To quickly reset the device:

- --soft: performs a soft reset
- --sans: same thing, but maintains the current register values
- --watchdog: resets the internal watchdog timer
- -h: for more info

# Reset (factory default)
reset.py --soft 1 0x48
regmap.py --load settings.json 1 0x48
reset.py --sans 1 0x48   # settings are maintained

Ref input script

ref-input.py controls the reference input signal, signal quality constraints, switching mechanisms and the general clock state.

- --freq: set the REFxy input frequency [Hz]
- --coupling: control the REFx input coupling. lock must be previously acquired.
- freq-lock-thresh: frequency locking mechanism constraint
- phase-lock-thresh: phase locking mechanism constraint
- phase-step-thresh: instantaneous phase step threshold
- phase-skew: phase skew

PLL script

pll.py controls both the analog and digital internal PLL cores. pll.py also allows setting the clock to the free-run or holdover state.

- --type: specifies whether we are targeting an analog PLL (APLLx) or a digital PLL (DPLLx).
This field is only required for operations where it is ambiguous (can be performed on both cores). all is the default value; --type all targets both the APLLx and DPLLx core(s).
- --channel: sets x in the targeted DPLLx or APLLx cores. --channel all is the default behavior: target both channels 0 and 1 of the desired type.
- --free-run: forces the clock to the free-run state; --type is disregarded because digital is implied.
- --holdover: forces the clock to the holdover state; --type is disregarded because digital is implied.

Power down script

power-down.py performs and recovers power down operations. Useful to power down unneeded channels and internal cores. The --all flag addresses all internal cores. Otherwise, select internal units with the related flags.

Power down the device entirely:

power-down.py 0 0x4A --all

Recover from a complete power down operation:

power-down.py 0 0x4A --all --clear

Wake the A reference up and put the AA, B, BB references to sleep:

power-down.py 0 0x4A --refb --refbb --refaa
power-down.py 0 0x4A --clear --refa

CCDPLL: Digitized Clocking Common Clock Synchronizer

CCDPLL status report:

status.py --ccdpll 1 0x48

The CCDPLL must be configured and monitored for UTS & IUTS related operations.

User Time Stamping cores

UTS cores allow the user to timestamp input data against a reference signal. UTS requires the CCDPLL, which is part of the digitized clocking core, to be configured.

uts.py controls both the UTS core and the inverse UTS core. This is controlled by the --type inverse option. The default --type is "normal", for UTS management; therefore it is mandatory to specify inverse for IUTS management.

UTS and IUTS status reports are reported by the status.py script:

status.py 1 0x4A \
    --uts \
    --iuts
status.py 0 0x48 \
    --uts \
    --filter-by-key fifo,0

It is useful to combine this status report with the digitized clocking status report, as they are closely related:

status.py 1 0x4A \
    --ccdpll \
    --uts

Some UTS/IUTS raw data are signed 24 or 48 bit values; this portion of the status script should interpret those values correctly, but that has yet to be confirmed/verified. It is not clear at the moment which UTSx core (of the 8 cores) is fed to the UTS FIFO (a unique fifo); therefore it is not clear to me which scaling should apply when interpreting the data contained in the UTS FIFO. At the moment, Core #0 (the 1st one) is hardcoded as the frequency source ➜ to clarify and improve.

Inverse UTS management

TODO

IRQ events

status.py --irq allows reading the currently asserted IRQ flags. Clear them with irq.py:

- --all: clear all flags
- --pll: clear all PLL (PLL0 + PLL1 + digital + analog) related events
- --pll0: clear PLL0 (digital + analog) related events
- --pll1: clear PLL1 (digital + analog) related events
- --other: clear events that are not related to the pll subgroup
- --sysclk: clear all sysclock related events
- -h: for other known flags

Misc

status.py --misc returns (amongst other info) the internal temperature sensor reading.

Get the current reading:

status.py --misc 1 0x48

# Filter on the field of interest like this
status.py --misc 1 0x48 --filter-by-key temperature,value --unpack

# Is the temperature range currently exceeded?
status.py --misc 1 0x48 --filter-by-key temperature,alarm --unpack

Program a temperature range:

misc.py --temp-thres-low -10   # [°C]
misc.py --temp-thres-high 80   # [°C]
misc.py --temp-thres-low -30 --temp-thres-high 90
status.py --temp 0 0x48        # current reading [°C]

Related warning events are then retrieved with the irq.py utility; refer to the related section.

Typical configuration flows

- load a profile preset, calibrate and get started:

regmap.py --load profile.json --quiet 0 0x48
status.py --pll --distrib --filter-by-key ch0 0 0x48
calib.py --all 0 0x48
status.py --pll --distrib --filter-by-key ch0 0 0x48

- distrib operation: mute/unmute + power down (TODO)
- using integrated signal quality monitoring (TODO)
ad9xdds
No description available on PyPI.
ada-assistant
To be added when author is free…
adab
Adab | أدب

A Python library built on adab.com, a website for poems and literary writings.

Download • Features • Usage • License • Notes

Download

PyPi is used to install the library:

pip3 install adab

Features

- Search the adab website
- Extract a post's content, and the topics related to it, via its id
- Extract the data of the writing genres, or of a single genre via its id
- Extract the writing styles that the website can be searched by
- Extract the data of the eras (the Islamic era, etc.) that the website can be searched by
- Extract the data of the countries that the website can be searched by
- Extract the user types that you can search the website by

Notes

The examples use a default object; you can create your own object via the Adab class.

Usage

Searching the adab website:

from adab import adab

# general search
result = adab.search()
print("General Search", result, sep="\n\n", end="\n\n")

# customized search
result = adab.search(page=23, genres=[1, 2], era=[2, 3, 1], user_type=[3, 2], gender=['f'], writing_types=[15])
print("Custom Search", result, sep="\n\n", end="\n\n")

Output:

General Search
{'page': 0, 'text': '', 'post_count': '75634', 'result': [{'username': 'أبو فراس الحمداني', 'user_url': 'https://adab.com/Abu_Firas_Alhamdani', 'user_img': 'https://adab.com/assets/uploads/images/daba776289f67907b34241ae437bc76c.png', 'post_url': 'https://adab.com/post/view_post/16557', 'post_id': '16557', 'post_title': 'أرَاكَ عَصِيَّ الدّمعِ شِيمَتُكَ الصّبرُ', 'post_views': '1701995', 'post_short_text': 'أرَاكَ عَصِيَّ الدّمعِ شِيمَتُكَ الصّبرُ،\nأما للهوى نهيٌّ عليكَ ولا أمرُ ؟\nبلى أنا مشتاقٌ وعنديَ لوع...'}, ...

Custom Search
{'page': 23, 'text': '', 'post_count': '246', 'result': [{'username': 'علية بنت المهدي', 'user_url': 'https://adab.com/Ulayya_Bint_Almahdi', 'user_img': None, 'post_url': 'https://adab.com/post/view_post/17697', 'post_id': '17697', 'post_title': 'بني الحبُّ على الجورِ فلو', 'post_views': '7464', 'post_short_text': 'بني الحبُّ على الجورِ فلو\nأنصَفَ المعشوقُ فيهِ لَسَمَجْ\nليسَ يستحسنُ في وصفِ الهوى\nعاشقٌ يَعْرِفُ تَ...'}, {'username': 'ليلى الأخيلية', 'user_url': 'https://adab.com/Layla_AlAkheeliyya', 'user_img': None, 'post_url': 'https://adab.com/post/view_post/15107', 'post_id': '15107', 'post_title': 'جَزَى اللُّه شَرّا قابِضاً بصنيعه', 'post_views': '7036', 'post_short_text': 'جَزَى اللُّه شَرّا قابِضاً بصنيعه\nوكل امرىء يجزى بما كان ساعيا\nدعا قابضاً والمرهفات يردنه\nفقُبحْتَ م...'}, ...

Extracting a post's content, and the topics related to it, via its id:

from adab import adab

result = adab.post(post_id=15107)
print(result)

Output:

{"username": "ليلى الأخيلية", "user_url": "https://adab.com/Layla_AlAkheeliyya", "user_img": null, "post_id": 15107, "title": "جَزَى اللُّه شَرّا قابِضاً بصنيعه", "post_content": "جَزَى اللُّه شَرّا قابِضاً بصنيعه\nوكل امرىء يجزى بما كان ساعيا\nدعا قابضاً والمرهفات يردنه\nفقُبحْتَ مدعّوا، ولبّيك داعيَا\nفَليْتَ عُبيدَ اللِّه كانَ مكانَه\nصَرِيعا؛ولم أسمَعْ لتوبة َ ناعِيَا\n", "releted_posts": [{"id": "76128", "title": "لن أرثيَ للشجر"}, {"id": "76127", "title": "العشب.."}, {"id": "76126", "title": "محاولة للبوح"}, {"id": "76125", "title": "لوجة الصرخة"}, {"id": "76124", "title": "بلا عنوان..."}]}

Extracting the writing genres:

from adab import adab

# all of them
result = adab.genres()
print("All", result, sep="\n\n", end="\n\n")

# by id
result = adab.genres(genre_id=1)
print("By id", result, sep="\n\n", end="\n\n")

Output:

All
[{'id': 1, 'arabic_title': 'شعر', 'post_count': '74635'}, {'id': 2, 'arabic_title': 'مقال', 'post_count': '507'}, {'id': 3, 'arabic_title': 'سرد', 'post_count': '488'}]

By id
[{'id': 1, 'arabic_title': 'شعر', 'post_count': '74635'}]

Extracting the writing styles:

from adab import adab

# all of them
result = adab.writing_types()
print("All", result, sep="\n\n", end="\n\n")

# by id
result = adab.writing_types(type_id=15)
print("By id", result, sep="\n\n", end="\n\n")

Output:

All
[{'id': 15, 'arabic_title': 'فصحى', 'post_count': '61509'}, {'id': 16, 'arabic_title': 'عامّي', 'post_count': '10730'}, {'id': 17, 'arabic_title': 'مترجم للعربية', 'post_count': '2829'}, {'id': 20, 'arabic_title': 'مترجم للإنجليزية', 'post_count': '566'}]

By id
[{'id': 15, 'arabic_title': 'فصحى', 'post_count': '61509'}]

Extracting the eras:

from adab import adab

# all of them
result = adab.era()
print("All", result, sep="\n\n", end="\n\n")

# by id
result = adab.era(era_id=3)
print("By id", result, sep="\n\n", end="\n\n")

Output:

All
[{'id': 2, 'arabic_title': 'العصر الجاهلي', 'post_count': '1473'}, {'id': 3, 'arabic_title': 'العصر الإسلامي', 'post_count': '3977'}, {'id': 1, 'arabic_title': 'العصر العباسي', 'post_count': '18023'}, {'id': 4, 'arabic_title': 'العصر الأندلسي', 'post_count': '6350'}, {'id': 55, 'arabic_title': 'عصرالدول المتتابعة', 'post_count': '1572'}, {'id': 29, 'arabic_title': 'العصر الحديث', 'post_count': '44551'}]

By id
[{'id': 3, 'arabic_title': 'العصر الإسلامي', 'post_count': '3977'}]

Extracting the countries that the website can be searched by:

from adab import adab

# all of them
result = adab.country()
print("All", result, sep="\n\n", end="\n\n")

# by id
result = adab.country(country_id=191)
print("By id", result, sep="\n\n", end="\n\n")

Output:

All
[{'id': 1, 'name': 'Afghanistan', 'arabic_name': 'أفغانستان', 'sortname': 'AF'}, {'id': 3, 'name': 'Algeria', 'arabic_name': 'الجزائر', 'sortname': 'DZ'}, {'id': 6, 'name': 'Angola', 'arabic_name': 'أنغولا', 'sortname': 'AO'}, {'id': 10, 'name': 'Argentina', 'arabic_name': 'الأرجنتين', 'sortname': 'AR'}, {'id': 11, 'name': 'Armenia', 'arabic_name': 'أرمينيا', 'sortname': 'AM'}, ...

By id
[{'id': 191, 'name': 'Saudi Arabia', 'arabic_name': 'المملكة العربية السعودية', 'sortname': 'SA'}]

Extracting the user types that you can search by:

from adab import adab

# all of them
result = adab.user_type()
print("All", result, sep="\n\n", end="\n\n")

# by id
result = adab.user_type(type_id=3)
print("By id", result, sep="\n\n", end="\n\n")

Output:

All
[{'id': 3, 'name': 'موثق'}, {'id': 2, 'name': 'معتمد'}, {'id': 1, 'name': 'مشارك'}]

By id
[{'id': 3, 'name': 'موثق'}]

License

GNU General Public License, version 3
adabelief-pytorch
PyTorch implementation of AdaBelief Optimizer
adabelief-slim
AdaBelief Slim

This repository contains the code for the adabelief-slim Python package, from which you can use a PyTorch implementation of the AdaBelief optimizer.

Installation

Using Python 3.6 or higher:

pip install adabelief-slim

Usage

from adabelief import AdaBelief

model = ...
kwargs = ...
optimizer = AdaBelief(model.parameters(), **kwargs)

The following hyperparameters can be passed as keyword arguments:

- lr: learning rate (default: 1e-3)
- betas: 2-tuple of coefficients used for computing the running averages of the gradient and its "variance" (see paper) (default: (0.9, 0.999))
- eps: term added to the denominator to improve numerical stability (default: 1e-8)
- weight_decay: weight decay coefficient (default: 1e-2)
- amsgrad: whether to use the AMSGrad variant of the algorithm (default: False)
- rectify: whether to use the RAdam variant of the algorithm (default: False)
- weight_decouple: whether to use the AdamW variant of this algorithm (default: True)

Be aware that the AMSGrad and RAdam variants can't be used simultaneously.

Motivation

As you're probably aware, one of the paper's main authors (Juntang Zhuang) released his code in this repository, which is used to maintain the adabelief_pytorch package. Thus, you may be wondering why this repository exists, and how it differs from his. The reason is actually pretty simple: the author made some decisions regarding his code which made it an unsuitable option for me. While it wasn't the only thing that bugged me, my main issue was with adding unnecessary packages as dependencies.

Regarding differences, the main ones are:

- I removed the fixed_decay option, as the author's experiments showed it wasn't great
- I removed the degenerate_to_sgd option; the author copied the RAdam codebase, but it seems recommended to always use it
- I removed all logging related features, along with the print_change_log option
- I removed all code specific to older versions of Pytorch (I think all versions above 1.4 should work), as I don't care for them
- I changed the flow of the code to be closer to the official implementation of AdamW
- I removed all usage of the .data property, as it isn't recommended and can be avoided with the torch.no_grad decorator
- I moved the code specific to AMSGrad so that it isn't executed if the RAdam variant is selected
- I added an exception if both RAdam and AMSGrad are selected, as they can't both be used (in the official repository RAdam is used if both RAdam and AMSGrad are selected)
- I removed half-precision support, as I don't care for it

References

Codebases

- Official AdaBelief implementation
- Official RAdam implementation
- Official AdamW implementation
- Pytorch Optimizers

Papers

- Adam: A Method for Stochastic Optimization: proposed Adam
- Decoupled Weight Decay Regularization: proposed AdamW
- On the Convergence of Adam and Beyond: proposed AMSGrad
- On the Variance of the Adaptive Learning Rate and Beyond: proposed RAdam
- AdaBelief Optimizer, adapting stepsizes by the belief in observed gradients: proposed AdaBelief

License

MIT
adabelief-tf
Tensorflow implementation of AdaBelief Optimizer
ada-boost
No description available on PyPI.
adaboost-model
Academic Performance Prediction using Adaboost Model
adabound
AdaBoundAn optimizer that trains as fast as Adam and as good as SGD, for developing state-of-the-art deep learning models on a wide variety of pupolar tasks in the field of CV, NLP, and etc.Based on Luo et al. (2019).Adaptive Gradient Methods with Dynamic Bound of Learning Rate. InProc. of ICLR 2019.Quick LinksWebsiteDemosInstallationAdaBound requires Python 3.6.0 or later. We currently provide PyTorch version and AdaBound for TensorFlow is coming soon.Installing via pipThe preferred way to install AdaBound is viapipwith a virtual environment. Just runpipinstalladaboundin your Python environment and you are ready to go!Using source codeAs AdaBound is a Python class with only 100+ lines, an alternative way is directly downloadingadabound.pyand copying it to your project.UsageYou can use AdaBound just like any other PyTorch optimizers.optimizer=adabound.AdaBound(model.parameters(),lr=1e-3,final_lr=0.1)As described in the paper, AdaBound is an optimizer that behaves like Adam at the beginning of training, and gradually transforms to SGD at the end. Thefinal_lrparameter indicates AdaBound would transforms to an SGD with this learning rate. In common cases, a default final learning rate of0.1can achieve relatively good and stable results on unseen data. It is not very sensitive to its hyperparameters. See Appendix G of the paper for more details.Despite of its robust performance, we still have to state that,there is no silver bullet. It does not mean that you will be free from tuning hyperparameters once using AdaBound. The performance of a model depends on so many things including the task, the model structure, the distribution of data, and etc.You still need to decide what hyperparameters to use based on your specific situation, but you may probably use much less time than before!DemosThanks to the awesome work by the GitHub team and the Jupyter team, the Jupyter notebook (.ipynb) files can render directly on GitHub. 
We provide several notebooks (like this one) for better visualization. We hope to illustrate the robust performance of AdaBound through these examples. For the full list of demos, please refer to this page.

Citing

If you use AdaBound in your research, please cite Adaptive Gradient Methods with Dynamic Bound of Learning Rate.

@inproceedings{Luo2019AdaBound,
  author    = {Luo, Liangchen and Xiong, Yuanhao and Liu, Yan and Sun, Xu},
  title     = {Adaptive Gradient Methods with Dynamic Bound of Learning Rate},
  booktitle = {Proceedings of the 7th International Conference on Learning Representations},
  month     = {May},
  year      = {2019},
  address   = {New Orleans, Louisiana}
}

License: Apache 2.0
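The Adam-to-SGD transition described above works by clipping Adam's per-parameter step size into an interval that shrinks toward final_lr over time. The pure-Python sketch below shows the bound schedule as I understand it from the reference implementation; the helper names and the gamma default (a convergence-speed parameter) are assumptions for illustration, not AdaBound's actual API.

```python
def adabound_bounds(t, final_lr=0.1, gamma=1e-3):
    """Clip bounds on the per-parameter step size at step t (t >= 1).

    Early in training the interval is very wide (Adam-like freedom);
    as t grows, both bounds converge to final_lr (SGD-like behavior).
    """
    lower = final_lr * (1.0 - 1.0 / (gamma * t + 1.0))
    upper = final_lr * (1.0 + 1.0 / (gamma * t))
    return lower, upper

def clipped_lr(adam_lr, t, final_lr=0.1, gamma=1e-3):
    """Effective step size: the Adam step clipped into the current bounds."""
    lower, upper = adabound_bounds(t, final_lr, gamma)
    return min(max(adam_lr, lower), upper)
```

With these defaults, at t=1 the bounds are roughly (1e-4, 100), so almost any Adam step passes through unchanged; by t = 10^7 both bounds are within about 1e-5 of final_lr, so every parameter effectively takes SGD steps of size final_lr.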
adacat
No description available on PyPI.
adachi-resource-assistant
adachi-resource-assistant

Deprecation Warning

This module has been deprecated since Genshin Impact version 3.1; its functionality has been integrated into Adachi-BOT. For details, please refer to 《资源文件制作指引》 (the resource file creation guide).

Description

A resource assistant for Adachi-BOT.

Installation

pip3 install -U adachi_resource_assistant

Usage

Generate character gacha images: generates a character gacha image from character information or a configuration file.

adachi_resource_get_gacha_image 刻晴 keqing_042
adachi_resource_get_gacha_image

The second argument comes from the character page URL on Honey Impact, e.g. keqing_042 in https://genshin.honeyhunterworld.com/keqing_042/?lang=CHS. If no arguments are given, all available characters are fetched.

Fill material images: pads material images based on the images' own information.

adachi_resource_fill_image /path/to/image.png
adachi_resource_fill_image /path/*.png
adachi_resource_fill_image

The arguments are a list of files; when no arguments are given, all .png files in the current directory are scanned recursively.

License

MIT License.
ada-cli
Ada CLI

Features: TODO

Requirements: TODO

Installation

You can install Ada CLI via pip from PyPI:

$ pip install ada-cli

Usage

Please see the Command-line Reference for details.

Contributing

Contributions are very welcome. To learn more, see the Contributor Guide.

License

Distributed under the terms of the MIT license, Ada CLI is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Credits

This project was generated from @cjolowicz's Hypermodern Python Cookiecutter template.
ada-client
ada-clientAstro Data Archive Client
adacord
Adacord CLI

Installation

pip install adacord

Usage

Create a new user:
adacord user create

Login:
adacord login --email me@my-email.com

Create endpoint:
adacord bucket create --description "A fancy bucket"

List endpoints:
adacord bucket list

Query endpoint:
adacord bucket query 'select * from `my-bucket`'

Contributing

poetry install
adacord-cli
No description available on PyPI.
ada-core
No description available on PyPI.
adacs-django-playwright
ADACS Playwright Class

Usage

Use this class instead of the Django StaticLiveServerTestCase. It adds 2 useful class properties:

self.browser = a Browser object from Playwright, used for accessing the page.
self.playwright = the return value of sync_playwright().start().

This class only supports Chromium and synchronous tests.

Example

from adacs_django_playwright.adacs_django_playwright import PlaywrightTestCase

class MyTestCase(PlaywrightTestCase):
    def awesome_test(self):
        page = self.browser.new_page()
        page.goto(f"{self.live_server_url}")
adactivexlsx2html
adactivexlsx2html

A simple export from xlsx format to HTML tables that keeps cell formatting. Ported from xlsx2html with some changes, such as using Bulma for the CSS and adding a carousel on the rows.

Install

pip install adactivexlsx2html

Usage

Simple usage:

from adactivexlsx2html import xlsx2html
out_stream = xlsx2html('path/to/example.xlsx')
out_stream.seek(0)
print(out_stream.read())

or pass a file path:

from adactivexlsx2html import xlsx2html
xlsx2html('path/to/example.xlsx', 'path/to/output.html')

or use file-like objects:

import io
from adactivexlsx2html import xlsx2html

# must be binary mode
xlsx_file = open('path/to/example.xlsx', 'rb')
out_file = io.StringIO()
xlsx2html(xlsx_file, out_file, locale='en')
out_file.seek(0)
result_html = out_file.read()

or from the shell:

python -m adactivexlsx2html path/to/example.xlsx path/to/output.html
adacut
adacut

A tool to "cut" versions out of an Ada source file. The "cuts" are defined using lines that start with a --$ comment marker. Those lines are of the form

--$ {begin, end, line} {question, answer, cut} [{comment, code}]

Example

The file titi_toto.ads

--$ begin question
-- Titi?
--$ end question
--$ begin answer
-- Toto
--$ end answer

can be turned into two different sources using adacut:

adacut -m question titi_toto.ads > titi.ads
-- Titi?

adacut -m answer titi_toto.ads > toto.ads
-- Toto

The -c <CUT> switch allows cutting out the given block. cuttable.ads:

--$ begin cut
Answer : Integer := 1;
--$ end cut
--$ begin cut
Answer : Integer := 2;
--$ end cut

adacut -c 1 > cut.ads
Answer : Integer := 1;

Test Adacut

You need pytest to be installed, then simply run

$ pytest

If you want to perform more exhaustive testing, you can use tox:

$ tox
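The marker scheme above can be sketched as a toy line filter in Python. This is a simplified illustration, not adacut's actual implementation: it only handles begin/end markers for a single mode and always strips the marker lines themselves.

```python
def cut(lines, mode):
    """Keep only the blocks belonging to `mode`; drop other blocks and all markers."""
    out, skip = [], False
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("--$"):
            parts = stripped.split()
            # parts looks like ["--$", "begin" or "end", "question"/"answer"/...]
            if parts[1] == "begin":
                skip = parts[2] != mode
            elif parts[1] == "end":
                skip = False
            continue  # marker lines never appear in the output
        if not skip:
            out.append(line)
    return out

src = [
    "--$ begin question",
    "-- Titi?",
    "--$ end question",
    "--$ begin answer",
    "-- Toto",
    "--$ end answer",
]
print(cut(src, "question"))  # → ['-- Titi?']
print(cut(src, "answer"))    # → ['-- Toto']
```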
adadamp
adadamp

See the documentation for more detail: https://stsievert.com/adadamp

Install: https://stsievert.com/adadamp/basics.html
adadjust
Status · Compatibilities · Contact

AdAdjust

Package allowing to fit any mathematical function to (for now, 1-D only) data.

Installation

pip install adadjust

Usage

from adadjust import Function
import numpy as np
import matplotlib.pyplot as plt

plt.rcParams.update({"text.usetex": True})  # Needs TeX Live installed

nsamples = 1000
a = 0.3
b = -10
xstart = 0
xend = 1
noise = 0.01
x = np.linspace(xstart, xend, nsamples)
y = a * x ** 2 + b + np.random.normal(0, noise, nsamples)

def linfunc(xx, p):
    return xx * p[0] + p[1]

def square(xx, p):
    return xx ** 2 * p[0] + p[1]

func = Function(linfunc, "$a\\times p[0] + p[1]$")
func2 = Function(square, "$a^2\\times p[0] + p[1]$")

params = func.fit(x, y, np.array([0, 0]))[0]
rr = func.compute_rsquared(x, y, params)
params2 = func2.fit(x, y, np.array([0, 0]))[0]
rr2 = func2.compute_rsquared(x, y, params2)

table = Function.make_table([func, func2], [params, params2], [rr, rr2], caption="Linear and Square fit", path_output="table.pdf")
table.compile()
Function.plot(x, [func, func2], [params, params2], y=y, rsquared=[rr, rr2])
plt.gcf().savefig("plot.pdf")

NOTE: to have pretty graphs, put the line plt.rcParams.update({"text.usetex": True}) just after you import adadjust. This requires that you have TeX Live (full) installed on your computer.

The result will be written to table.pdf and plot.pdf.
adadmire
adadmire

Functions for detecting anomalies in molecular data sets using Mixed Graphical Models.

Installation

Enter the following commands in a shell like bash, zsh, or powershell:

pip install -U adadmire

Usage

The usage example in this section requires that you first download the data files from the data folder. For a description of the contents of this folder, see the Data section of the adadmire documentation site.

from adadmire import admire, penalty
import numpy as np

# Load example data
X = np.load('data/Feist_et_al/scaled_data_raw.npy')   # continuous data
D = np.load('data/Feist_et_al/pheno.npy')             # discrete data
levels = np.load('data/Feist_et_al/levels.npy')       # levels of discrete variables

# Define lambda sequence of penalty values
lam = penalty(X, D, min=-2.25, max=-1.5, step=0.25)

# Get anomalies in continuous and discrete data
X_cor, n_cont, position_cont, D_cor, n_disc, position_disc = admire(X, D, levels, lam)

print(X_cor)          # corrected X
print(n_cont)         # number of continuous anomalies
print(position_cont)  # position in X
print(D_cor)          # corrected D
print(n_disc)         # number of discrete anomalies
print(position_disc)  # position in D

You can find more usage examples in the Usage section of adadmire's documentation site.

Documentation

You can find the full documentation for adadmire at spang-lab.github.io/adadmire. Amongst others, it includes chapters about: Installation, Usage, Modules, Contributing, Testing, Documentation.
adaendra-immutable-dict
Python Immutable Dict

This library contains the ImmutableDict, a Python dictionary which can't be updated.

How to use it

Install it from pip:

pip install adaendra-immutable-dict

Import the AdaendraImmutableDict:

from adaendra_immutable_dict.AdaendraImmutableDict import AdaendraImmutableDict

Use it like in the following examples.

Examples

from adaendra_immutable_dict.AdaendraImmutableDict import AdaendraImmutableDict

# Empty immutable dict
immutable_dict = AdaendraImmutableDict({})

# Non-empty immutable dict
immutable_dict = AdaendraImmutableDict({"hello": "world"})

# Get a value
immutable_dict["hello"]  # > world

# Copy dict
immutable_dict_copy = immutable_dict.copy()

# Create an immutable dict from the "fromkeys" method
immutable_dict = AdaendraImmutableDict.fromkeys(["a", "b"], "1")

Documentation: PyPi

Credits: This project is based on Marco Sulla's project: frozen-dict.
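The behavior described above can be approximated in a few lines by blocking dict's mutating methods. This is a sketch of the general technique, not the library's actual implementation; the class name FrozenDict is mine.

```python
class FrozenDict(dict):
    """Read-only dict: lookups work normally, any mutation raises TypeError."""

    def _readonly(self, *args, **kwargs):
        raise TypeError("FrozenDict is immutable")

    # Route every mutating method of dict through the same guard.
    __setitem__ = _readonly
    __delitem__ = _readonly
    pop = _readonly
    popitem = _readonly
    clear = _readonly
    update = _readonly
    setdefault = _readonly

d = FrozenDict({"hello": "world"})
print(d["hello"])  # → world
```

Note that copy() inherited from dict returns a plain (mutable) dict, which is one reason real implementations are more involved than this sketch.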
adaendra-python-config-loader
Python Config Loader

The objective of this library is to easily load external configs for a Python project and use them anywhere in your project.

How does it work?

By default, it will load a config file called "application.yaml" stored in "/app/resources". But you can override:

- the name of the config file
- the extension (to JSON)
- the path to the directory with the config files

You can also define an environment; then 2 files will be loaded:

- the "common" config file: application.yaml
- the "environment" config file: application-[environment].yaml

How to use it

Install with pip:

pip install adaendra-python-config-loader

Import the configs and use them:

from AdaendraConfigs import AdaendraConfigs
print(AdaendraConfigs.configs.abc)

Configuration environment variables

| Name | Description | Default value |
| --- | --- | --- |
| CONFIG_ENVIRONMENT | Environment to load | None |
| CONFIG_FOLDER | Path to the config files | '/app/resources' |
| CONFIG_FILE_EXTENSION | File extension of your config files. Allowed: '.yml'/'.yaml'/'.json' | '.yaml' |
| CONFIG_PROJECT_NAME | Name of your project (which is generally the name of the config files) | 'application' |

Documentation: PyPi
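The table above implies a simple path-resolution rule. The sketch below shows which file paths would be loaded for a given environment; only the path resolution is illustrated, and the library's actual parsing and merging logic is an assumption not reproduced here.

```python
import os

def config_paths(environ=None):
    """Return the config file path(s) implied by the environment variables.

    Accepts any mapping (defaults to os.environ) so the rule is easy to test.
    """
    env_map = os.environ if environ is None else environ
    folder = env_map.get("CONFIG_FOLDER", "/app/resources")
    ext = env_map.get("CONFIG_FILE_EXTENSION", ".yaml")
    name = env_map.get("CONFIG_PROJECT_NAME", "application")
    env = env_map.get("CONFIG_ENVIRONMENT")
    paths = [f"{folder}/{name}{ext}"]                  # the "common" config file
    if env:
        paths.append(f"{folder}/{name}-{env}{ext}")    # the "environment" config file
    return paths

print(config_paths({"CONFIG_ENVIRONMENT": "prod"}))
# → ['/app/resources/application.yaml', '/app/resources/application-prod.yaml']
```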
adafair
AdaFair: Cumulative Fairness BoostingOverviewThis GitHub repository presents an extended version of the AdaFair algorithm, initially introduced in the paper titled "AdaFair: Cumulative Fairness Adaptive Boosting" (CIKM 2019). This extension incorporates various parity-based fairness notions, enabling a more comprehensive and adaptive approach to fairness in machine learning models.Key FeaturesEnsemble Approach:Introduces an ensemble method for fairness, modifying the data distribution throughout boosting rounds to emphasize misclassified instances of minority groups.Fairness Cost Calculation:Implements fairness cost, evaluating performance disparities between protected and non-protected groups, ensuring fairness adaptations during model training.Cumulative Notion of Fairness:Evaluates fairness based on the partial ensemble, providing a cumulative perspective rather than individual boosting rounds.Beneficial for Different Notions:Demonstrates the effectiveness of cumulative fairness for various parity-based fairness notions, enhancing model fairness across different dimensions.UpdatesThe repository has recently been updated to enhance its functionality and usability:Improved Weak Learner Selection:The selection of weak learners (\theta parameter) is now performed on a dedicated validation set, optimizing the algorithm's performance.Additional Fairness Notions:Introduces two new fairness notions, namely "Statistical Parity" (AdaFairSP.py) and "Equal Opportunity" (AdaFairEQOP.py), expanding the algorithm's applicability to different fairness criteria.How to UseTo utilize this extended AdaFair algorithm in your machine learning projects, follow these steps:pip install adafairPreprintThis is an extension of our AdaFair algorithm (KAIS 2022,"Parity-based cumulative fairness-aware boosting") to other parity-based fairness notions. 
We propose an ensemble approach to fairness that alters the data distribution over the boosting rounds, "forcing" the model to pay more attention to misclassified instances of the minority. This is done using the so-called fairness cost, which assesses performance differences between the protected and non-protected groups. The performance is evaluated based on the partial ensemble rather than on the weak model of each boosting round. We show that this cumulative notion of fairness is beneficial for different parity-based notions of fairness. Interestingly, the fairness costs also help with the performance on the minority class (if there is imbalance). Imbalance is also directly tackled at post-processing by selecting the partial ensemble with the lowest balanced error.

Contributions and Issues

Contributions and feedback are welcome. If you encounter any issues or have suggestions for improvement, please feel free to create an issue in the repository or submit a pull request.

Note: This repository is actively maintained and updated to ensure the highest standards of fairness and performance in machine learning models. Thank you for considering AdaFair for your fairness-aware machine learning tasks.

See the Jupyter notebook (run_example.ipynb) for how to train and use the model.
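For reference, the statistical parity notion targeted by AdaFairSP.py can be measured directly from a model's predictions. The snippet below is a plain illustration of the metric itself (positive-prediction rate of the non-protected group minus that of the protected group), not of AdaFair's internals; the function name is mine.

```python
def statistical_parity_difference(y_pred, protected):
    """P(y_hat = 1 | non-protected) - P(y_hat = 1 | protected).

    y_pred: 0/1 predictions; protected: 1 if the instance belongs to the
    protected group, 0 otherwise. Both groups must be non-empty.
    A value near 0 indicates statistical parity.
    """
    prot = [y for y, s in zip(y_pred, protected) if s == 1]
    non_prot = [y for y, s in zip(y_pred, protected) if s == 0]
    return sum(non_prot) / len(non_prot) - sum(prot) / len(prot)

y_pred    = [1, 1, 0, 1, 0, 0, 1, 0]
protected = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, protected))  # → 0.5
```

Here the non-protected group gets a positive prediction 75% of the time versus 25% for the protected group, a gap a fairness-aware booster would try to shrink over the rounds.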
adafdr
AdaFDR

A fast and covariate-adaptive method for multiple hypothesis testing. Software accompanying the paper "AdaFDR: a Fast, Powerful and Covariate-Adaptive Approach to Multiple Hypothesis Testing", 2018.

Requirement

AdaFDR runs on Python 3.6.

Installation

pip install adafdr

Usage

Import package: adafdr.method contains all methods while adafdr.data_loader contains the data. They can be imported as

import adafdr.method as md
import adafdr.data_loader as dl

Other ways of importing are usually compatible. For example, one can import the package with import adafdr and call method xxx in the method module via adafdr.method.xxx().

Input format: For a set of N hypotheses, the input data includes the p-values p and the d-dimensional covariate x, with the following format:

- p: (N,) numpy.ndarray.
- x: (N,d) numpy.ndarray.

When d=1, x is allowed to be either a (N,) numpy.ndarray or a (N,1) numpy.ndarray.

Covariate visualization: The covariate visualization method adafdr_explore can be used as

adafdr.method.adafdr_explore(p, x, output_folder=None, covariate_type=None)

If output_folder is a file path (str) instead of None, the covariate visualization figures will be saved in output_folder. Otherwise, they will show up in the console.

covariate_type: a length-d Python list with values 0/1. It specifies the type of each covariate: 0 means numerical/ordinal while 1 means categorical. For example, covariate_type=[0,1] means there are 2 covariates, the first is numerical/ordinal and the second is categorical.
If not specified, a covariate with more than 75 distinct values is regarded as numerical/ordinal and otherwise categorical.See alsodocfor more details.Multiple testingThe multiple hypothesis testing methodadafdr_testcan be used asfast version (default):res=adafdr.method.adafdr_test(p,x,alpha=0.1,covariate_type=None)regular version:res=adafdr.method.adafdr_test(p,x,alpha=0.1,fast_mode=False,covariate_type=None)regular version with multi-core:res=adafdr.method.adafdr_test(p,x,alpha=0.1,fast_mode=False,single_core=False,covariate_type=None)resis a dictionary containing the results, including:res['decision']: a (N,) boolean vector, decision for each hypothesis with value 1 meaning rejection.res['threshold']: a (N,) float vector, threshold for each hypothesis.If theoutput_folderis a filepath (str) instead ofNone, the logfiles and some intermediate results will be saved inoutput_folder. Otherwise, they will show up in the console.covariate_type: a length-d python list with values 0/1. It specifies the type of each covariate: 0 means numerical/ordinal while 1 means categorical. For example,covariate_type=[0,1]means there are 2 covariates, the first is numerical/ordinal and the second is categorical. If not specified, a covariate with more than 75 distinct values is regarded as numerical/ordinal and otherwise categorical.See alsodocfor more details.Example on airway RNA-seq dataThe following is an example on the airway RNA-seq data used in the paper.Import package and load dataHere we load theairwaydata used in the paper. Seevignettesfor other data accompanied with the package.importadafdr.methodasmdimportadafdr.data_loaderasdlp,x=dl.data_airway()Covariate visualization usingadafdr_exploremd.adafdr_explore(p,x,output_folder=None)Here, the left is a scatter plot of each hypothesis with p-values (y-axis) plotted against the covariate (x-axis). 
The right panel shows the estimated null hypothesis distribution (blue) and the estimated alternative hypothesis distribution (orange) with respect to the covariate. Here we can conclude that a hypothesis is more likely to be significant if the covariate (gene expression) value is higher.Multiple hypothesis testing usingadafdr_testres=md.adafdr_test(p,x,fast_mode=True,output_folder=None)Here, the learned thresholdres['threshold']looks as follows.Each orange dot corresponds to the threhsold to one hypothesis. The discrepancy at the right is due to the difference between the thresholds learned by the two folds.Quick TestHere is a quick test. First check if the package can be successfully imported:importadafdr.methodasmdimportadafdr.data_loaderasdlNext, run a small example which should take a few seconds:importnumpyasnpp,x,h,_,_=dl.load_1d_bump_slope()res=md.adafdr_test(p,x,alpha=0.1)t=res['threshold']D=np.sum(p<=t)FD=np.sum((p<=t)&(~h))print('# AdaFDR successfully finished!')print('# D=%d, FD=%d, FDP=%0.3f'%(D,FD,FD/D))It runsAdaFDR-faston a 1d simulated data. If the package is successfully imported, the result should look like:# AdaFDR successfully finished! # D=837, FD=79, FDP=0.094R API of python packageR API of this package can be foundhere.Citation informationZhang, Martin J., Fei Xia, and James Zou. "AdaFDR: a Fast, Powerful and Covariate-Adaptive Approach to Multiple Hypothesis Testing." bioRxiv (2018): 496372.Xia, Fei, et al. "Neuralfdr: Learning discovery thresholds from hypothesis features." Advances in Neural Information Processing Systems. 2017.
adaflow
No description available on PyPI.
adaflow-python
adaflow-pythonPython API for Adaflow. Please refer toAdaFlowfor more information.