package (stringlengths 1–122)
package-description (stringlengths 0–1.3M)
abvdget
No description available on PyPI.
ab-versions-py
This library retrieves the version number and locked/protected status from files generated by Rockwell Software FactoryTalk View. Right now it can get the version number from MER and APA files generated by FactoryTalk View ME, tell you whether they are locked, and strip the protection. It might work on APA files generated by FactoryTalk View SE, but I have not tested this or looked into it. At some point I plan to test that, as well as to add support for other Allen Bradley / Rockwell Software file formats, such as the newer APB files from FactoryTalk View SE. This library is simply Python bindings to the main library, which is written in Rust and can be found here.
abv-py
Bilibili av/bv converter user manual

This converter is open source; feel free to take the code and study it. First, download avbv.py into the site-packages folder on your Python path (the author has not published it to PyPI). Usage: import the avbv module, then use av2bv and bv2av to convert in both directions.

Example code:

    from abv_py import av2bv, bv2av

    print(av2bv(170001))          # BV17x411w7KC
    print(bv2av('BV17x411w7KC'))  # 170001

Passing an invalid bv number (such as bv1) to bv2av raises an error (TypeError), but the following code does not raise:

    from abv_py import bv2av

    print(bv2av('BV17x411w7KC'))  # 170001
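The library's internals are not shown above, but the widely circulated bilibili av↔bv conversion algorithm can be sketched in plain Python. This is an independent reimplementation for illustration, not abv_py's actual source; the character table, position list, and XOR/offset constants come from the publicly known algorithm:

```python
# Independent sketch of the classic bilibili av <-> bv conversion
# (not abv_py's actual code; constants are from the publicly known algorithm).
TABLE = 'fZodR9XQDSUm21yCkr6zBqiveYah8bt4xsWpHnJE7jL5VG3guMTKNPAwcF'
INDEX = {c: i for i, c in enumerate(TABLE)}
POS = [11, 10, 3, 8, 4, 6]   # which bv characters hold which base-58 digit
XOR = 177451812
ADD = 8728348608

def av2bv(av):
    """Encode a numeric av id into a 12-character BV string."""
    n = (av ^ XOR) + ADD
    chars = list('BV1  4 1 7  ')  # fixed template characters
    for i in range(6):
        chars[POS[i]] = TABLE[n // 58 ** i % 58]
    return ''.join(chars)

def bv2av(bv):
    """Decode a BV string back to its numeric av id."""
    n = sum(INDEX[bv[POS[i]]] * 58 ** i for i in range(6))
    return (n - ADD) ^ XOR

print(av2bv(170001))          # BV17x411w7KC
print(bv2av('BV17x411w7KC'))  # 170001
```

The two functions are exact inverses, so round-tripping any av id returns the original number.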
abx24
Async Bitrix24 client for Python 3.6+. The client is based on the synchronous Bitrix24-rest client: https://pypi.org/project/bitrix24-rest/

Usage

    from aiohttp import ClientSession
    from abx24 import Bitrix24

    bx24 = Bitrix24('')
    await bx24.call_method(
        'crm.customer.add',
        ClientSession(),
        fields={"NAME": 'Niel', "SECOND_NAME": 'Ketov'})
abxnester
No description available on PyPI.
abxscd
Slowly Changing Dimensions implementation with Databricks Delta Lake

What is a Slowly Changing Dimension

Slowly Changing Dimensions (SCD) are dimensions that change over time; in a data warehouse we need to track changes to their attributes to keep reports accurate. There are typically three types of SCD:

- Type 1: SCD1, no history preservation
- Type 2: SCD2, unlimited history preservation via new rows
- Type 3: SCD3, limited history preservation

For example, suppose we have this dataset:

| ShortName | Fruit | Color | Price |
|---|---|---|---|
| FA | Fiji Apple | Red | 3.6 |
| BN | Banana | Yellow | 1 |
| GG | Green Grape | Green | 2 |
| RG | Red Grape | Red | 2 |

If we change the price of "Fiji Apple" to 3.5, the dataset becomes:

with SCD1:

| ShortName | Fruit | Color | Price |
|---|---|---|---|
| FA | Fiji Apple | Red | 3.5 |
| BN | Banana | Yellow | 1 |
| GG | Green Grape | Green | 2 |
| RG | Red Grape | Red | 2 |

with SCD2:

| ShortName | Fruit | Color | Price | is_last |
|---|---|---|---|---|
| FA | Fiji Apple | Red | 3.5 | Y |
| FA | Fiji Apple | Red | 3.6 | N |
| BN | Banana | Yellow | 1 | Y |
| GG | Green Grape | Green | 2 | Y |
| RG | Red Grape | Red | 2 | Y |

with SCD3:

| ShortName | Fruit | Color | Price | Color_old | Price_old |
|---|---|---|---|---|---|
| FA | Fiji Apple | Red | 3.5 | Red | 3.6 |
| BN | Banana | Yellow | 1 | NULL | NULL |
| GG | Green Grape | Green | 2 | NULL | NULL |
| RG | Red Grape | Red | 2 | NULL | NULL |

SCD implementation in Databricks

This repository contains implementations of SCD1, SCD2, and SCD3 in Python on Databricks Delta Lake.

SCD1

SCD1(df, target_table_name, target_partition_keys, key_cols, current_time)

Parameters:
- df: source dataframe
- target_table_name: target table name
- target_partition_keys: partition keys of the target table
- key_cols: key columns for each row
- current_time: current timestamp

Here is an example of SCD1:

    from pyspark.sql import functions as F
    from pyspark.sql import DataFrame
    import datetime

    # create sample dataset
    df1 = spark.createDataFrame(
        [('FA', 'Fiji Apple', 'Red', 3.5),
         ('BN', 'Banana', 'Yellow', 1.0),
         ('GG', 'Green Grape', 'Green', 2.0),
         ('RG', 'Red Grape', 'Red', 2.0)],
        ['ShortName', 'Fruit', 'Color', 'Price'])

    # prepare parameters
    current_time = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S")
    target_partition_keys = ['ShortName']
    key_cols = "ShortName,Fruit"
    target_table_name_scd1 = 'default.table_scd1'

    # call the SCD1 function
    SCD1(df1, target_table_name_scd1, target_partition_keys, key_cols, current_time)

    # display the result
    display(spark.sql(f"select * from {target_table_name_scd1}"))

Change the price of "Fiji Apple" to 3.6 and run SCD1 again:

    df2 = spark.createDataFrame(
        [('FA', 'Fiji Apple', 'Red', 3.6),
         ('BN', 'Banana', 'Yellow', 1.0),
         ('GG', 'Green Grape', 'Green', 2.0),
         ('RG', 'Red Grape', 'Red', 2.0)],
        ['ShortName', 'Fruit', 'Color', 'Price'])

    # call the SCD1 function again
    SCD1(df2, target_table_name_scd1, target_partition_keys, key_cols, current_time)
    display(spark.sql(f"select * from {target_table_name_scd1}"))

SCD2

SCD2(df, target_table_name, target_partition_keys, key_cols, current_time)

Parameters:
- df: source dataframe
- target_table_name: target table name
- target_partition_keys: partition keys of the target table
- key_cols: key columns for each row
- current_time: current timestamp

Here is an example of SCD2:

    from pyspark.sql import functions as F
    from pyspark.sql import DataFrame
    import datetime

    # create sample dataset
    df1 = spark.createDataFrame(
        [('FA', 'Fiji Apple', 'Red', 3.5),
         ('BN', 'Banana', 'Yellow', 1.0),
         ('GG', 'Green Grape', 'Green', 2.0),
         ('RG', 'Red Grape', 'Red', 2.0)],
        ['ShortName', 'Fruit', 'Color', 'Price'])

    # prepare parameters
    current_time = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S")
    target_partition_keys = ['ShortName']
    key_cols = "ShortName,Fruit"
    target_table_name_scd2 = 'default.table_scd2'

    # call the SCD2 function
    SCD2(df1, target_table_name_scd2, target_partition_keys, key_cols, current_time)

    # display the result
    display(spark.sql(f"select * from {target_table_name_scd2}"))

Change the price of "Fiji Apple" to 3.6 and run SCD2 again:

    df2 = spark.createDataFrame(
        [('FA', 'Fiji Apple', 'Red', 3.6),
         ('BN', 'Banana', 'Yellow', 1.0),
         ('GG', 'Green Grape', 'Green', 2.0),
         ('RG', 'Red Grape', 'Red', 2.0)],
        ['ShortName', 'Fruit', 'Color', 'Price'])

    # call the SCD2 function again
    SCD2(df2, target_table_name_scd2, target_partition_keys, key_cols, current_time)
    display(spark.sql(f"select * from {target_table_name_scd2}"))

SCD3

SCD3(df, target_table_name, target_partition_keys, key_cols, current_time)

Parameters:
- df: source dataframe
- target_table_name: target table name
- target_partition_keys: partition keys of the target table
- key_cols: key columns for each row
- current_time: current timestamp

Here is an example of SCD3:

    from pyspark.sql import functions as F
    from pyspark.sql import DataFrame
    import datetime

    # create sample dataset
    df1 = spark.createDataFrame(
        [('FA', 'Fiji Apple', 'Red', 3.5),
         ('BN', 'Banana', 'Yellow', 1.0),
         ('GG', 'Green Grape', 'Green', 2.0),
         ('RG', 'Red Grape', 'Red', 2.0)],
        ['ShortName', 'Fruit', 'Color', 'Price'])

    # prepare parameters
    current_time = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S")
    target_partition_keys = ['ShortName']
    key_cols = "ShortName,Fruit"
    target_table_name_scd3 = 'default.table_scd3'

    # call the SCD3 function
    SCD3(df1, target_table_name_scd3, target_partition_keys, key_cols, current_time)

    # display the result
    display(spark.sql(f"select * from {target_table_name_scd3}"))

Change the price of "Fiji Apple" to 3.6 and run SCD3 again:

    df2 = spark.createDataFrame(
        [('FA', 'Fiji Apple', 'Red', 3.6),
         ('BN', 'Banana', 'Yellow', 1.0),
         ('GG', 'Green Grape', 'Green', 2.0),
         ('RG', 'Red Grape', 'Red', 2.0)],
        ['ShortName', 'Fruit', 'Color', 'Price'])

    # call the SCD3 function again
    SCD3(df2, target_table_name_scd3, target_partition_keys, key_cols, current_time)
    display(spark.sql(f"select * from {target_table_name_scd3}"))
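The SCD2 behavior shown in the tables above (expire the old row, insert the new one, track currency with an is_last flag) can be illustrated without Spark. The following is a minimal pure-Python sketch of the merge logic, not the abxscd implementation, keyed on a single column for simplicity:

```python
# Minimal SCD2-style merge sketch (illustration only, not the abxscd code).
def scd2_merge(target, source, key):
    """target: list of dict rows carrying an 'is_last' flag.
    source: new snapshot rows (without the flag)."""
    current = {row[key]: row for row in target if row['is_last'] == 'Y'}
    for new_row in source:
        old = current.get(new_row[key])
        if old is not None and all(old[c] == new_row[c] for c in new_row):
            continue  # unchanged: keep the current row as-is
        if old is not None:
            old['is_last'] = 'N'  # expire the previous version
        target.append(dict(new_row, is_last='Y'))  # insert the new current version
    return target

table = [{'ShortName': 'FA', 'Price': 3.6, 'is_last': 'Y'}]
scd2_merge(table, [{'ShortName': 'FA', 'Price': 3.5}], 'ShortName')
for row in table:
    print(row)
```

After the merge, the 3.6 row remains with is_last='N' and a new 3.5 row is flagged is_last='Y', matching the SCD2 table above. In the real library this merge runs as a Delta Lake operation rather than in-memory Python.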
abx-scd
Slowly Changing Dimensions implementation with Databricks Delta Lake (identical description to the abxscd package above).
abxy
Welcome to Abxy! Abxy is a third-party library for the package named inkster. Abxy is a chatbot package with various interpreters, such as ChatPreter, the chatbot interpreter.

Installation

To install abxy, just run this in a command prompt:

    pip install abxy

Getting Started

Abxy's interpreters start up from a "from (some import) import (some import)" statement. Here is an example of running an interpreter:

    from abxy import chatpreter

As soon as you run your file, it will run the interpreter right away!

If you would like to see all available interpreters, just put this in your file:

    from abxy import help

Once you run the file, it should print out all available interpreters in your console!

If you would like to run the chatbot, just put this in your file:

    from abxy import chatbot

Once you run the file, it should start up the chatbot!

Links & Contacts

Discord Server: https://discord.gg/HUFtMsz

License

Copyright 2020 AbxyPlayz

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
aby
aby
abydos
Abydos NLP/IR library

Copyright 2014-2020 by Christopher C. Little

Abydos is a library of phonetic algorithms, string distance measures & metrics, stemmers, and string fingerprinters, including:

Phonetic algorithms: Robert C. Russell's Index; American Soundex; Refined Soundex; Daitch-Mokotoff Soundex; Kölner Phonetik; NYSIIS; Match Rating Algorithm; Metaphone; Double Metaphone; Caverphone; Alpha Search Inquiry System; Fuzzy Soundex; Phonex; Phonem; Phonix; SfinxBis; phonet; Standardized Phonetic Frequency Code; Statistics Canada; Lein; Roger Root; Oxford Name Compression Algorithm (ONCA); Eudex phonetic hash; Haase Phonetik; Reth-Schek Phonetik; FONEM; Parmar-Kumbharana; Davidson's Consonant Code; SoundD; PSHP Soundex/Viewex Coding; an early version of Henry Code; Norphone; Dolby Code; Phonetic Spanish; Spanish Metaphone; MetaSoundex; SoundexBR; NRL English-to-phoneme; Beider-Morse Phonetic Matching

String distance metrics: Levenshtein distance; Optimal String Alignment distance; Levenshtein-Damerau distance; Hamming distance; Tversky index; Sørensen–Dice coefficient & distance; Jaccard similarity coefficient & distance; overlap similarity & distance; Tanimoto coefficient & distance; Minkowski distance & similarity; Manhattan distance & similarity; Euclidean distance & similarity; Chebyshev distance; cosine similarity & distance; Jaro distance; Jaro-Winkler distance (incl. the strcmp95 algorithm variant); Longest common substring; Ratcliff-Obershelp similarity & distance; Match Rating Algorithm similarity; Normalized Compression Distance (NCD) & similarity; Monge-Elkan similarity & distance; Matrix similarity; Needleman-Wunsch score; Smith-Waterman score; Gotoh score; Length similarity; Prefix, Suffix, and Identity similarity & distance; Modified Language-Independent Product Name Search (MLIPNS) similarity & distance; Bag distance; Editex distance; Eudex distances; Sift4 distance; Baystat distance & similarity; Typo distance; Indel distance; Synoname

Stemmers: the Lovins stemmer; the Porter and Porter2 (Snowball English) stemmers; Snowball stemmers for German, Dutch, Norwegian, Swedish, and Danish; CLEF German, German plus, and Swedish stemmers; Caumann's German stemmer; UEA-Lite Stemmer; Paice-Husk Stemmer; Schinke Latin stemmer; S stemmer

String fingerprints: string fingerprint; q-gram fingerprint; phonetic fingerprint; Pollock & Zomora's skeleton key; Pollock & Zomora's omission key; Cisłak & Grabowski's occurrence fingerprint; Cisłak & Grabowski's occurrence halved fingerprint; Cisłak & Grabowski's count fingerprint; Cisłak & Grabowski's position fingerprint; Synoname Toolcode

Installation

Required libraries: NumPy, deprecation

Optional libraries (all available on PyPI, some available on conda or conda-forge): SyllabiPy, NLTK, PyLZSS, paq

To install Abydos (master) from GitHub source:

    git clone https://github.com/chrislit/abydos.git --recursive
    cd abydos
    python setup.py install

If your default python command calls Python 2.7 but you want to install for Python 3, you may instead need to call:

    python3 setup.py install

To install Abydos (latest release) from PyPI using pip:

    pip install abydos

To install from conda-forge:

    conda install abydos

It should run on Python 3.5-3.8.

Testing & Contributing

To run the whole test suite, just call tox:

    tox

The tox setup has the following environments: black, py37, doctest, regression, fuzz, pylint, pydocstyle, flake8, doc8, docs, sloccount, badges, & build.
So if you only want to generate documentation (in HTML, EPUB, & PDF formats), just call:

    tox -e docs

In order to only run & generate Flake8 reports, call:

    tox -e flake8

Contributions such as bug reports, PRs, suggestions, desired new features, etc. are welcome through GitHub Issues & Pull requests.

Release History

0.5.0 (2020-01-10) ecgtheow
doi:10.5281/zenodo.3603514
Changes:
- Support for Python 2.7 was removed.

0.4.1 (2020-01-07) distant dietrich
doi:10.5281/zenodo.3600548
Changes:
- Support for Python 3.4 was removed. (3.4 reached end-of-life on March 18, 2019)
- Fuzzy intersections were corrected to avoid over-counting partial intersection instances.
- Levenshtein can now return an optimal alignment
- Added the following distance measures: Indice de Similitude-Guth (ISG), INClusion Programme, Guth, Victorian Panel Study (VPS) score, LIG3 similarity, Discounted Levenshtein, Relaxed Hamming, String subsequence kernel (SSK) similarity, Phonetic edit distance, Henderson-Heron dissimilarity, Raup-Crick similarity, Millar's binomial deviance dissimilarity, Morisita similarity, Horn-Morisita similarity, Clark's coefficient of divergence, Chao's Jaccard similarity, Chao's Dice similarity, Cao's CY similarity (CYs) and dissimilarity (CYd)
- Added the following fingerprint classes: Taft's Consonant coding, Taft's Extract - letter list, Taft's Extract - position & frequency, L.A. County Sheriff's System, Library of Congress Cutter table encoding
- Added the following phonetic algorithms: Ainsworth's grapheme-to-phoneme, PHONIC

0.4.0 (2019-05-30) dietrich
doi:10.5281/zenodo.3235034
Version 0.4.0 focuses on distance measures, adding 211 new measures. Attempts were made to provide normalized versions for measures that did not inherently range from 0 to 1.
The other major focus was the addition of 12 tokenizers, in service of expanding distance measure options.
Changes:
- Support for Python 3.3 was dropped.
- Deprecated functions that merely wrap class methods to maintain API compatibility, for removal in 0.6.0
- Added methods to ConfusionTable to return: its internal representation; false negative rate; false omission rate; positive & negative likelihood ratios; diagnostic odds ratio; error rate; prevalence; Jaccard index; D-measure; Phi coefficient; joint, actual, & predicted entropies; mutual information; proficiency (uncertainty coefficient); information gain ratio; dependency; lift
- Deprecated f-measure & g-measure from ConfusionTable for removal in 0.6.0
- Added notes to indicate when functions, classes, & methods were added
- Added the following 12 tokenizers: QSkipgrams; CharacterTokenizer; RegexpTokenizer, WhitespaceTokenizer, & WordpunctTokenizer; COrVClusterTokenizer, CVClusterTokenizer, & VCClusterTokenizer; SonoriPyTokenizer & LegaliPyTokenizer; NLTKTokenizer; SAPSTokenizer
- Added the UnigramCorpus class & a facility for downloading data, such as pre-processed/trained data, from storage on GitHub
- Added the Wåhlin phonetic encoding
- Added the following 211 similarity/distance/correlation measures: ALINE; AMPLE; Anderberg; Andres & Marzo's Delta; Average Linkage; AZZOO; Baroni-Urbani & Buser I & II; Batagelj & Bren; Baulieu I-XV; Benini I & II; Bennet; Bhattacharyya; BI-SIM; BLEU; Block Levenshtein; Brainerd-Robinson; Braun-Blanquet; Canberra; Chord; Clement; Cohen's Kappa; Cole; Complete Linkage; Consonni & Todeschini I-V; Cormode's LZ; Covington; Dennis; Dice Asymmetric I & II; Digby; Dispersion; Doolittle; Dunning; Eyraud; Fager & McGowan; Faith; Fellegi-Sunter; Fidelity; Fleiss; Fleiss-Levin-Paik; FlexMetric; Forbes I & II; Fossum; FuzzyWuzzy Partial String; FuzzyWuzzy Token Set; FuzzyWuzzy Token Sort; Generalized Fleiss; Gilbert; Gilbert & Wells; Gini I & II; Goodall; Goodman & Kruskal's Lambda; Goodman & Kruskal's Lambda-r; Goodman & Kruskal's Tau A & B; Gower & Legendre; Guttman's Lambda A & B; Gwet's AC; Hamann; Harris & Lahey; Hassanat; Hawkins & Dotson; Hellinger; Higuera & Mico; Hurlbert; Iterative SubString; Jaccard-NM; Jensen-Shannon; Johnson; Kendall's Tau; Kent & Foster I & II; Koppen I & II; Kuder & Richardson; Kuhns I-XII; Kulczynski I & II; Longest Common Prefix; Longest Common Suffix; Lorentzian; Maarel; Marking; Marking Metric; MASI; Matusita; Maxwell & Pilliner; McConnaughey; McEwen & Michael; MetaLevenshtein; Michelet; MinHash; Mountford; Mean Squared Contingency; Mutual Information; NCD with LZSS; NCD with PAQ9a; Ozbay; Pattern; Pearson's Chi-Squared; Pearson & Heron II; Pearson II & III; Pearson's Phi; Peirce; Positional Q-Gram Dice, Jaccard, & Overlap; Q-Gram; Quantitative Cosine, Dice, & Jaccard; Rees-Levenshtein; Roberts; Rogers & Tanimoto; Rogot & Goldberg; Rouge-L, -S, -SU, & -W; Russell & Rao; SAPS; Scott's Pi; Shape; Shapira & Storer I; Sift4 Extended; Single Linkage; Size; Soft Cosine; SoftTF-IDF; Sokal & Michener; Sokal & Sneath I-V; Sorgenfrei; Steffensen; Stiles; Stuart's Tau; Tarantula; Tarwid; Tetrachoric; TF-IDF; Tichy; Tulloss's R, S, T, & U; Unigram Subtuple; Unknown A-M; Upholt; Warrens I-V; Weighted Jaccard; Whittaker; Yates' Chi-Squared; YJHHR; Yujian & Bo; Yule's Q, Q II, & Y
- Four intersection types are now supported for all distance measures that are based on _TokenDistance. In addition to basic crisp intersections, soft, fuzzy, and group linkage intersections have been provided.

0.3.6 (2018-11-17) classy carl
doi:10.5281/zenodo.1490537
Changes:
- Most functions were encapsulated into classes.
- Each class is broken out into its own file, with test files paralleling library files.
- Documentation was converted from Sphinx markup to Numpy style.
- A tutorial was written for each subpackage.
- Documentation was cleaned up, with math markup corrections and many additional links.

0.3.5 (2018-10-31) cantankerous carl
doi:10.5281/zenodo.1463204
Version 0.3.5 focuses on refactoring the whole project. The API itself remains largely the same as in previous versions, but underlying modules have been split up.
Essentially no new features are added (bugfixes aside) in this version.
Changes:
- Refactored library and tests into smaller modules
- Broke compression distances (NCD) out into separate functions
- Adopted Black code style
- Added pyproject.toml to use Poetry for packaging (but will continue using setuptools and setup.py for the present)
- Minor bug fixes

0.3.0 (2018-10-15) carl
doi:10.5281/zenodo.1462443
Version 0.3.0 focuses on additional phonetic algorithms, but does add numerous distance measures, fingerprints, and even a few stemmers. Another focus was getting everything to build again (including docs) and moving to more standard modern tools (flake8, tox, etc.).
Changes:
- Fixed implementation of Bag distance
- Updated BMPM to version 3.10
- Fixed Sphinx documentation on readthedocs.org
- Split string fingerprints out of clustering into their own module
- Added support for q-grams to skip-n characters
- New phonetic algorithms: Statistics Canada; Lein; Roger Root; Oxford Name Compression Algorithm (ONCA); Eudex phonetic hash; Haase Phonetik; Reth-Schek Phonetik; FONEM; Parmar-Kumbharana; Davidson's Consonant Code; SoundD; PSHP Soundex/Viewex Coding; an early version of Henry Code; Norphone; Dolby Code; Phonetic Spanish; Spanish Metaphone; MetaSoundex; SoundexBR; NRL English-to-phoneme
- New string fingerprints: Cisłak & Grabowski's occurrence fingerprint; Cisłak & Grabowski's occurrence halved fingerprint; Cisłak & Grabowski's count fingerprint; Cisłak & Grabowski's position fingerprint; Synoname Toolcode
- New distance measures: Minkowski distance & similarity; Manhattan distance & similarity; Euclidean distance & similarity; Chebyshev distance & similarity; Eudex distances; Sift4 distance; Baystat distance & similarity; Typo distance; Indel distance; Synoname
- New stemmers: UEA-Lite Stemmer; Paice-Husk Stemmer; Schinke Latin stemmer; S stemmer
- Eliminated ._compat submodule in favor of six
- Transitioned from PEP8 to flake8, etc.
- Phonetic algorithms now consistently use max_length=-1 to indicate that there should be no length limit
- Added example notebooks in binder directory

0.2.0
(2015-05-27) berthold
- Added Caumanns' German stemmer
- Added Lovins' English stemmer
- Updated Beider-Morse Phonetic Matching to 3.04
- Added Sphinx documentation

0.1.1 (2015-05-12) albrecht
- First Beta release to PyPI

Authors

Christopher C. Little (@chrislit) <[email protected]>
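As a small taste of the phonetic algorithms Abydos provides, here is a standalone sketch of classic American Soundex. This is an independent reimplementation for illustration only, not Abydos's code; the library's version supports additional options (such as length limits and special-casing) beyond this minimal form:

```python
# Standalone American Soundex sketch (illustrative; not Abydos's implementation).
def soundex(name):
    # letter -> digit classes of American Soundex
    codes = {**dict.fromkeys('BFPV', '1'), **dict.fromkeys('CGJKQSXZ', '2'),
             **dict.fromkeys('DT', '3'), 'L': '4',
             **dict.fromkeys('MN', '5'), 'R': '6'}
    name = name.upper()
    result = name[0]
    prev = codes.get(name[0], '')
    for ch in name[1:]:
        if ch in 'HW':
            continue  # H and W do not break a run of identical codes
        code = codes.get(ch, '')
        if code and code != prev:
            result += code  # vowels reset prev, so repeats across vowels count
        prev = code
    return (result + '000')[:4]  # pad/truncate to letter + 3 digits

print(soundex('Robert'))   # R163
print(soundex('Jackson'))  # J250
```

Names that sound alike map to the same code (e.g. Robert and Rupert both yield R163), which is the point of the algorithm for IR and record linkage.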
abylai-zhumart-20051
No description available on PyPI.
abysmal
Abysmal stands for "appallingly basic yet somehow mostly adequate language".

Abysmal is a programming language designed to allow non-programmers to implement simple business logic for computing prices, rankings, or other kinds of numeric values without incurring the security and stability risks that would normally result when non-professional coders contribute to production code. In other words, it's a sandbox in which businesspeople can tinker with their business logic to their hearts' content without involving your developers or breaking anything.

Features

- Supports Python 3.3 and above

Dependencies

- python3-dev: native library including the Python C header files
- libmpdec-dev: native library for decimal arithmetic

Language Reference

Abysmal programs are designed to be written by businesspeople, so the language foregoes almost all the features programmers want in a programming language in favor of mimicking something businesspeople understand: flowcharts.

Just about the only way your businesspeople can "crash" an Abysmal program is by dividing by zero, because:

- it's not Turing-complete
- it can't allocate memory
- it can't access the host process or environment
- it operates on one and only one type: arbitrary-precision decimal numbers
- its only control flow construct is GOTO
- it doesn't even allow loops!

Example program

    # input variables:
    #
    #   flavor:    VANILLA, CHOCOLATE, or STRAWBERRY
    #   scoops:    1, 2, etc.
    #   cone:      SUGAR or WAFFLE
    #   sprinkles: 0 or 1
    #   weekday:   MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, or SUNDAY
    #
    # output variables:
    #
    #   price: total price, including tax

    let TAX_RATE = 5.3%
    let WEEKDAY_DISCOUNT = 25%
    let GIVEAWAY_RATE = 1%

    @start:
        random! <= GIVEAWAY_RATE => @giveaway_winner
        price = scoops * (flavor == STRAWBERRY ? 1.25 : 1.00)
        price = price + (cone == WAFFLE ? 1.00 : 0.00)
        price = price + (sprinkles * 0.25)
        weekday not in {SATURDAY, SUNDAY} => @apply_weekday_discount
        => @compute_total

    @apply_weekday_discount:
        price = price * (1 - WEEKDAY_DISCOUNT)
        => @compute_total

    @giveaway_winner:
        price = 0.00

    @compute_total:
        price = price * (1 + TAX_RATE)

Control flow

An Abysmal program models a flowchart containing one or more steps, or states. Program execution begins at the beginning of the first state and continues until it reaches a dead end. Along the way, variables can be assigned new values, and execution can jump to other states. That's it.

Every state has a name that starts with @. A state is declared like this:

    @start:

A state declaration is followed by a sequence of actions. Each action appears on its own line, and is one of the following:

- an assignment of a value to a variable, like this:

      price = scoops * (flavor == STRAWBERRY ? 1.25 : 1.00)

- a conditional jump to another state, like this:

      weekday not in {SATURDAY, SUNDAY} => @apply_weekday_discount

- an unconditional jump to another state, like this:

      => @compute_total

When execution reaches a state, that state's actions are executed in order. If execution reaches the end of a state without jumping to a new state, the program exits.

Programs are not allowed to contain loops or any other execution cycles. Any program containing a cycle will fail to compile.

Actions are typically indented to make the state labels easier to see, but this is just a stylistic convention and is not enforced by the language.

Comments

Anything following a # on a line is treated as a comment and is ignored.

Line continuations

A \ at the end of a line indicates that the next line is a continuation of the current line. This makes it easy to format long lines readably by splitting them into multiple, shorter lines. Note that comments can appear after a \.

Numbers

Abysmal supports integer and fixed-point decimal literals like 123, 3.14159, etc.
In addition, numbers can have the following suffixes:

| suffix | meaning |
|---|---|
| % | percent (12.5% is equivalent to 0.125) |
| k or K | thousand (50k is equivalent to 50000) |
| m or M | million (1.2m is equivalent to 1200000) |
| b or B | billion (0.5b is equivalent to 500000000) |

Scientific notation is not supported.

Booleans

Abysmal uses 1 and 0 to represent the result of any operation that yields a logical true/false value. When evaluating conditions in a conditional jump or a ? expression, zero is considered false and any non-zero value is considered true.

Expressions

Programs can evaluate expressions containing the following operators:

| operator | precedence | meaning | example |
|---|---|---|---|
| ( exp ) | 0 (highest) | grouping | (x + 1) * y |
| ! | 1 | logical NOT | !x |
| + | 1 | unary plus (has no effect) | +x |
| - | 1 | unary minus | -x |
| ^ | 2 | exponentiation (right associative) | x ^ 3 |
| * | 3 | multiplication | x * 100 |
| / | 3 | division | x / 2 |
| + | 4 | addition | x + 5 |
| - | 4 | subtraction | x - 3 |
| in { exp, … } | 5 | is a member of the set | x in {0, y, -z} |
| not in { exp, … } | 5 | is not a member of the set | x not in {0, y, -z} |
| in [ low, high ] | 5 | falls within the interval (see Intervals) | x in [-3, 7] |
| not in [ low, high ] | 5 | does not fall within the interval | x not in [-3, 7] |
| < | 6 | is less than | x < y |
| <= | 6 | is less than or equal to | x <= y |
| > | 6 | is greater than | x > y |
| >= | 6 | is greater than or equal to | x >= y |
| == | 7 | is equal to | x == y |
| != | 7 | is not equal to | x != y |
| && | 8 | logical AND (short-circuiting) | x && (y / x > 0.8) |
| \|\| | 9 | logical OR (short-circuiting) | x > 3 \|\| y > 7 |
| exp ? exp : exp | 10 (lowest) | if-then-else | x < 0 ? -x : x |

Intervals

Intervals support inclusive endpoints (specified with square brackets) and exclusive endpoints (specified with parentheses), and the two can be freely mixed. For example, the following are all valid checks:

    x in (0, 1)
    x in (0, 1]
    x in [0, 1)
    x in [0, 1]

Note that "backwards" intervals (where the first endpoint is greater than the second) are considered pathological and treated as empty.
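The interval semantics described above (mixed inclusive/exclusive endpoints, with backwards intervals treated as empty) can be sketched in a few lines of Python. This is just an illustration of the rules, not abysmal's implementation:

```python
# Sketch of Abysmal's interval-membership rules (illustration only,
# not abysmal's actual implementation).
def in_interval(x, low, high, low_closed, high_closed):
    """Return 1 if x falls within the interval, else 0 (Abysmal-style booleans)."""
    lower_ok = x >= low if low_closed else x > low
    upper_ok = x <= high if high_closed else x < high
    # a "backwards" interval (low > high) can never satisfy both checks,
    # so it is automatically empty
    return 1 if (lower_ok and upper_ok) else 0

print(in_interval(2, 1, 3, False, False))  # 2 in (1, 3) -> 1
print(in_interval(2, 3, 1, False, False))  # 2 in (3, 1) -> 0
print(in_interval(1, 0, 1, True, False))   # 1 in [0, 1) -> 0
print(in_interval(1, 0, 1, True, True))    # 1 in [0, 1] -> 1
```

Note that no special case is needed for backwards intervals: requiring both endpoint checks to pass makes them empty by construction.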
Therefore 2 in (1, 3) evaluates to 1 (aka true), but 2 in (3, 1) evaluates to 0 (aka false).

Functions

Expressions can take advantage of the following built-in functions:

| function | returns |
|---|---|
| ABS(exp) | the absolute value of the specified value |
| CEILING(exp) | the nearest integer value greater than or equal to the specified value |
| FLOOR(exp) | the nearest integer value less than or equal to the specified value |
| MAX(exp1, exp2, …) | the maximum of the specified values |
| MIN(exp1, exp2, …) | the minimum of the specified values |
| ROUND(exp) | the specified value, rounded to the nearest integer |

Variables

Abysmal programs can read from and write to variables that you define when you compile the program. Some of these variables will be inputs, whose values you will set before you run the program. Others will be outputs, whose values the program will compute so that those values can be examined after the program has terminated. Abysmal does not distinguish between input and output variables.

All variables and constant values are decimal numbers. Abysmal does not have any concept of strings, booleans, null, or any other types. If not explicitly set, variables default to 0.

random! is a special, read-only variable that yields a new, random value every time it is referenced.

You can also provide named constants to your programs when you compile them. Constants cannot be modified.

A program can also declare custom variables that it can use to store intermediate results while the model is being run, or simply to define friendlier names for values that are used within the model. Custom variables must be declared before the first state is declared. Each custom variable is declared on its own line, like this:

    let PI = 3.14159
    let area = PI * r * r

Usage

An Abysmal program must be compiled before it can be run.
The compiler needs to know the names of the variables that the program should have access to and names and values of any constants you want to define:ICE_CREAM_VARIABLES={# inputs'flavor','scoops','cone','sprinkles','weekday',# outputs'price',}ICE_CREAM_CONSTANTS={# flavors'VANILLA':1,'CHOCOLATE':2,'STRAWBERRY':3,# cones'SUGAR':1,'WAFFLE':2,# weekdays'MONDAY':1,'TUESDAY':2,'WEDNESDAY':3,'THURSDAY':4,'FRIDAY':5,'SATURDAY':6,'SUNDAY':7,}compiled_program,source_map=abysmal.compile(source_code,ICE_CREAM_VARIABLES,ICE_CREAM_CONSTANTS)Ignore the second value returned byabysmal.compile()for now (refer to the Measuring Coverage section to see what it’s useful for).Next, we need to make a virtual machine for the compiled program to run on:machine=compiled_program.machine()Next, we can set any variables as we see fit:# Variables can be set in bulk during reset()...machine.reset(flavor=ICE_CREAM_CONSTANTS['CHOCOLATE'],scoops=2,cone=ICE_CREAM_CONSTANTS['WAFFLE'])# ... or one at a time (though this is less efficient)machine['sprinkles']=True# automatically converted to '1'Finally, we can run the machine and examine final variable values:price=Decimal('0.00')try:machine.run()price=round(Decimal(machine['price']),2)exceptabysmal.ExecutionErrorasex:print('The ice cream pricing algorithm is broken: '+str(ex))else:print('Two scoops of chocolate ice cream in a waffle cone with sprinkles costs: ${0}'.format(price))Note that the virtual machine exposes variable values as strings, which may be formatted in scientific or fixed-point notation.Variables can be set from int, float, bool, Decimal, and string values but are converted to strings when assigned. 
When examining variables after running a machine, you need to convert the values back to Decimal, float, or whatever numeric type you are interested in.

Random Numbers

By default, random! generates numbers between 0 and 1 with 9 decimal places of precision, and uses the default Python PRNG (random.randrange). If you require a more secure PRNG, or different precision, or if you want to force certain values to be produced for testing purposes, you can supply your own random number iterator before running a machine:

# force random! to yield 0, 1, 0, 1, ...
machine.random_number_iterator = itertools.cycle([0, 1])

The values you return are not required to fall within any particular range, but [0, 1] is recommended, for consistency with the default behavior.

Limits

Decimal values are constrained in accordance with the IEEE 754 decimal128 format. This provides 34 digits of precision and an exponent range of -6143 to +6144. Infinity, negative infinity, and NaN (not-a-number) are not allowed. Calculations that would give rise to one of these will instead trigger an error. In addition, a calculation can result in overflow or underflow if its result is too large or too small to fit into the decimal128 range.

Errors

abysmal.CompilationError: raised by abysmal.compile() if the source code cannot be compiled

abysmal.ExecutionError: raised by machine.run() and machine.run_with_coverage() if a program encounters an error while running; this includes conditions such as: division by zero, invalid exponentiation, stack overflow, floating-point overflow, floating-point underflow, out-of-space, and failure to generate a random number

abysmal.InstructionLimitExceededError: raised by machine.run() and machine.run_with_coverage() if a program exceeds its allowed instruction count and is aborted; this error is a subclass of abysmal.ExecutionError

Performance Tips

Abysmal programs run very quickly once compiled, and the virtual machine is optimized to make repeated runs with different inputs as cheap as possible. As always,
decide on your performance goals and measure before optimizing. To get the best performance, follow these tips:

Avoid recompilation

Compiling a program is orders of magnitude slower than actually running it. Save the compiled program and reuse it rather than recompiling every time. Compiled programs are pickleable, so they are easy to cache.

Use baseline images

When you create a machine, you can pass keyword arguments to set the machine's variables to initial values. The state of the variables at this moment is called a baseline image. When you reset a machine, it restores all variables to the baseline image very efficiently. Therefore, if you are going to run a particular program repeatedly with some inputs having the same values for all the runs, you should specify those input values in the baseline. For example:

def compute_shipping_costs(product, weight, zip_codes, compiled_program):
    shipping_costs = {}
    machine = compiled_program.machine(product=product, weight=weight)
    for zip_code in zip_codes:
        machine.reset(zip=zip_code).run()
        shipping_costs[zip_code] = round(Decimal(machine['shippingCost']), 2)
    return shipping_costs

Set multiple variables at once

Override baseline variable values by passing keywords to machine.reset() rather than assigning variables one-by-one. The overhead of making multiple Python function calls is non-trivial if your scenario needs performance!

Only read and write variables you need

Initializing variables before a program runs and reading variables afterwards can easily add up to more time than it takes to actually run a typical program. If performance is critical for your scenario, you can save time by only examining variables whose values you really need.

Limit instruction execution

Since Abysmal does not support loops, it is very difficult to create a program that runs for very long.
However, you can impose an additional limit on the number of instructions that a program can execute by setting the instruction_limit attribute of a machine:

machine.instruction_limit = 5000

If a program exceeds its instruction limit, it will raise an abysmal.InstructionLimitExceededError. The default instruction limit is 10000.

The run() method returns the number of instructions that were run before the program exited.

Measuring Coverage

In addition to run(), virtual machines expose a run_with_coverage() method which can be used in conjunction with the source map returned by abysmal.compile() to generate coverage reports for Abysmal programs.

coverage_tuples = [
    machine.reset(**test_case_inputs).run_with_coverage()
    for test_case_inputs in test_cases
]
coverage_report = abysmal.get_uncovered_lines(source_map, coverage_tuples)
print('Partially covered lines: ' + ', '.join(map(str, coverage_report.partially_covered_line_numbers)))
print('Totally uncovered lines: ' + ', '.join(map(str, coverage_report.uncovered_line_numbers)))

How coverage works:

run_with_coverage() returns a coverage tuple whose length is equal to the number of instructions in the compiled program. The value at index i in the coverage tuple will be True or False depending on whether instruction i was executed during the program's run.

The source map is another tuple, with the same length as the coverage tuple. The value at index i in the source map indicates which line or lines in the source code generated instruction i of the compiled program.
There are three possibilities:

None - the instruction was not directly generated by any source line
int - the instruction was generated by a single source line
(int, int, …) - the instruction was generated by multiple source lines (due to line continuations being used)

Installation

Note that native library dependencies must be installed BEFORE you install the abysmal library.

pip install abysmal

Development

# Install system-level dependencies on Debian/Ubuntu
make setup
# Run unit tests
make test
# Check code cleanliness
make pylint
# Check code coverage
make cover
# Creates dist package
make package
abyss
MMO/RPG/RTS/RESTAPI game in your terminal!
abyss-airflow-reprocessor
No description available on PyPI.
abyssal-pytorch
No description available on PyPI.
abyssinica
Locale library for the countries of Ethiopia and Eritrea.See alsoHornMT: a machine-learning corpus for the Horn of Africa region.FunctionalityNumeralsConvert between Arabic and Ge’ez numerals:>>> from abyssinica.numerals import arabic_to_geez >>> arabic_to_geez(42) '፵፪' >>> from abyssinica.numerals import geez_to_arabic >>> geez_to_arabic('፵፪') 42CalendarConvert between Gregorian and Ethiopic dates:>>> from abyssinica.calendar import Date as EthiopicDate >>> from datetime import date as GregorianDate >>> EthiopicDate.from_gregorian(GregorianDate(year=1996, month=3, day=2)) abyssinica.calendar.Date(1988, 6, 23) >>> EthiopicDate(year=1988, month=6, day=23).to_gregorian() datetime.date(1996, 3, 2)RomanizationTransliterate Ge’ez characters:>>> from abyssinica.romanization import romanize >>> print(f"{romanize('ሰላም እንደምን አለህ?').capitalize()}") Salām ʼendamn ʼalah?
abyss-sdk
No description available on PyPI.
abz
No description available on PyPI.
abzar
No description available on PyPI.
abzer
abzer═════Install abzer as shown [here] , run it by either calling`/usr/bin/abzer' or `python -m abzer'.Until Read the Docs supports Python 3.5, I'll just dump `--help' here┌────│ usage: abzer [-h] [-c CONFIG] [-p PROCESSES] [-v] FILENAME [FILENAME ...]││ positional arguments:│ FILENAME││ optional arguments:│ -h, --help show this help message and exit│ -c CONFIG, --config CONFIG│ The path to the config file. (default:│ /home/<username>/.abzsubmit/abzsubmit.conf)│ -p PROCESSES, --processes PROCESSES│ The number of processes to use for analyzing files.│ (default: <number of cpus>)│ -v, --verbose Be more verbose. (default: False)└────abzer uses the same config file as the [acousticbrainz-client]application, but only uses the `path' option in the `essentia'section. However, it doesn't require a config file to work, but justuses default values. If those don't work on your system, you'll benotified of that.[here] https://abzer.readthedocs.org/en/latest/setup.html[acousticbrainz-client] https://github.com/MTG/acousticbrainz-client
abzu
Abzu

Abzu is a system for simulating language evolution, which uses the ngesh and alteruphono libraries. It is named after the underground aquifers that were the domains of Enki, the Sumerian god of language and confusion.

Please remember that abzu is a work-in-progress.

Installation

In any standard Python environment, abzu can be installed with:

pip install abzu

The pip installation will also fetch dependencies, such as ngesh and alteruphono, if necessary. Installation in a virtual environment is recommended.

How to use

The library is under development, and the best way to understand its usage is to follow the tests. A quick generation of a vocabulary following a random phonological system can be performed from the command-line:

$ abzu
Language: Aburo
    1:  oː e
    2:  i ŋ ẽ
    3:  f ɔ j ŋ
    4:  e h ɪ̃ s eː ʃ
    5:  i
    6:  k ɔː m ĩ ŋ uː
    7:  h a eː
    8:  u f
    9:  iː p
    10: a o a ŋ

The utility accepts size (indicating the number of words in the vocabulary) and seed (for reproducibility) parameters:

$ abzu --size 15 --seed jena
Language: Rafvo
    1:  a m ã
    2:  e m e ɔ n ɨ n
    3:  p ɪ ʒ ɔ
    4:  ĩ b a ɔ
    5:  i n
    6:  ɪ a ŋ u j ʃ
    7:  t ɪ l u
    8:  n ɔ e
    9:  d u ɔ e ʃ
    10: i s ɪ x
    11: a b ẽ j
    12: ɪ̃ ɪ b l a t u ʂ
    13: e m ɔ n a ɪ ɲ e j s
    14: n ɪ̃ tʃ ĩ ã
    15: a ʃ a

TODO

See internal notes

How to cite

If you use abzu, please cite it as:

Tresoldi, Tiago (2019). Abzu, a system for simulating language evolution. Version 0.0.1dev. Jena. Available at https://github.com/tresoldi/abzu

In BibTex:

@misc{Tresoldi2019abzu,
  author = {Tresoldi, Tiago},
  title = {Abzu, a system for simulating language evolution},
  howpublished = {\url{https://github.com/tresoldi/abzu}},
  address = {Jena},
  year = {2019},
}

Author

Tiago Tresoldi ([email protected])

The author was supported during development by the ERC Grant #715618 for the project CALC (Computer-Assisted Language Comparison: Reconciling Computational and Classical Approaches in Historical Linguistics), led by Johann-Mattis List.
ac
# AC.py - Python Autoconf #

### Introduction #

AC.py is a Python implementation of the popular autoconf tool used in ascertaining a sane, stable environment before attempting to build large projects. The purpose of AC.py is to provide a simpler way of performing these tests, along with added functionality to resolve environmental issues at the same time.

### License #

AC.py is licensed with [GPLv3](http://www.gnu.org). This is free software that may be used by anyone for any purposes and distributed freely, and comes with no warranty.

### Author Info. #

Originally authored by [Tom A. Thorogood](mailto:[email protected]). AC.py's central repository is located at [github.com/tomthorogood/ac.py](http://www.github.com/tomthorogood/ac.py).

## Installation #

AC.py can be installed using

pip install ac

or

easy_install ac

Additionally, you can clone and install yourself using:

git clone git://github.com/tomthorogood/AC.py
cd AC.py
python setup.py install

You do not need to install ac.py in order to use it. It can be cloned and used as any standard Python module.

## Usage #

AC.py aims to be simpler than traditional autoconf, and is highly customizable. The following tutorial will allow you to:

* Test for libraries and executables
* Set up distribution-specific alternatives for failed tests
* Use test results to populate fields in a manifest Makefile.

## The Shell Environment #

AC.py will always attempt to test the shell environment first. The default shell can be changed using the --shell flag.
When running any shell scripts generated by AC.py or written by you, the hashbang interpreter directive will always be at the head of each script (#!/bin/sh), using the results from the shell environment test. If you do not want your users to have to use the shell flag, but do want to require a specific shell environment, you can set the default using

# ac.set_shell
ac.set_shell("sh")
ac.set_shell("bash")
ac.set_shell("tcsh")

However, it is highly recommended that you use bash commands and scripts that will work across all platforms and shells.

## Required Successes #

Tests marked as required (or called with a 'require' function) will halt the configuration script if the test is not a success and there is no fail alternative provided.

## A Generic Test #

You can use any Python scripting to come up with a true/false result and pass the result into the test framework using

ac.test("test_name", result, [required=True|False])
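A result like this can come from any ordinary Python check. For example, a hypothetical test for a compiler on the PATH; the shutil.which call is standard Python, and the commented ac.test call mirrors the signature documented above:

```python
import shutil

# Any Python expression that yields True/False can feed the test framework.
has_gcc = shutil.which("gcc") is not None

# Hypothetical usage, following the signature shown above:
# ac.test("gcc_present", has_gcc, required=True)
```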
ac207-autodiff
cs107-FinalProject - Group #32

Members: Xuliang Guo, Kamran Ahmed, Van Anh Le, Hanwen Cui

Broader Impacts and Inclusivity

Broader Impacts

Virtually all machine learning and AI algorithms can be attributed to solving optimization problems during the training process. While automatic differentiation has no direct broader impacts of its own, its extensive use as an intermediate step in these algorithms forces us to consider the broader impact of our package. First of all, our package will be contributing to biases against African-American and other underrepresented minorities that current ML models used in the criminal justice system or hiring processes are already imposing. Second, any errors in our calculations could lead to misspecified models and erroneous predictions with significant impacts to downstream users. These impacts are especially grave in safety-critical settings such as healthcare, where a model that utilizes a faulty AD library could misdiagnose a patient or suggest sub-optimal treatments.

Inclusivity

While our codebase is technically available and open for anyone to contribute through our GitHub repository, there are technical barriers that might prevent certain groups from participating in this process. Any contributors would need to have working knowledge of git version control and principles of software development. This precludes people from rural communities, communities of color, or poor urban communities, who are less likely to receive formal and rigorous training in computer science. Even at the college level, CS curricula are not homogeneous and concepts such as git version control might not be taught at every school. Furthermore, users from other disciplines who rely on optimization and AD might be discouraged by the initial fixed cost of learning a complicated system such as git.

Any developer who wants to contribute to our codebase can make a new branch and create a pull request.
Pull requests will then be reviewed by one or many members of our team, depending on the extent of contribution. In order to make this process more inclusive, we could include a step-by-step guide on our repository that provides explicit direction on how to work with git and the expected best-practices that we hope they would follow.

How to install

We recommend creating a virtual environment rather than installing in the base environment:

python3 -m venv autodiff-env
source autodiff-env/bin/activate

Our package can be installed from Github or PyPI. We also include source distribution files and wheels under Releases.

You can install ac207-autodiff via pip with:

pip install ac207-autodiff

Basic usage

Detailed descriptions about classes, methods, and operations can be found in our API reference.

Our automatic differentiation package's default behavior uses forward mode. You can import this as follows:

import autodiff as ad

If you would like to use reverse mode, please explicitly import it as:

import autodiff.reverse as ad

Forward mode

The properties of a dual number lend themselves nicely to a straightforward implementation of forward mode automatic differentiation. Briefly, we use dual numbers as our core data structure (ad.Dual).
The value and derivative can be stored as the real and “dual” part of the dual number, respectively.We provide support for:Most arithmetic and comparison operationsElementary operations such as trigonometric functions, square root, logarithmic, logistic, and exponential functions, among others.Univariate functions>>>importautodiffasad>>>x=ad.Dual(2)>>>f=7*(x**3)+3*x>>>print(f"Function value:{f.val}, derivative:{f.der}")Functionvalue:62,derivative:[87.]Multivariate functions>>>importautodiffasad>>>x,y=ad.Dual.from_array([2,4])# helper static method>>>f=7*(x**3)+3*y>>>print(f"Function value:{f.val}, derivative:{f.der}")Functionvalue:68,derivative:[84.3.]Vector functions>>>importautodiffasad>>>deff(x,y,z):# Vector function mapping 3 inputs to 2 outputs....f1=7*(x**3)+3*y...f2=y/x+z**2...return(f1,f2)...>>>x,y,z=ad.Dual.from_array([2,4,6])>>>f1,f2=f(x,y,z)>>>print(f"f1 value:{f1.val}, derivative:{f1.der}")f1value:68,derivative:[84.3.0.]>>>print(f"f2 value:{f2.val}, derivative:{f2.der}")f2value:38.0,derivative:[-1.0.512.]Elementary operations>>>importautodiffasad>>>x,y=ad.Dual.from_array([2,4])>>>f=ad.exp(x)+y>>>print(f"Function value:{f.val:.4f}, "\..."derivative: [{f.der[0]:.4f}{f.der[1]:.4f}]")Functionvalue:11.3891,derivative:[7.38911.0]Reverse modeNote that these are contained within theautodiff.reversemodule.Explicitly import it as:>>>importautodiff.reverseasadad.Nodeis the primary data structure for reverse mode automatic differentiation. The process of evaluating derivatives in reverse mode consists of two passes, forward pass and reverse pass. During the forward pass, we calculate the primal values and the local gradient of child nodes with respect of each parent node in the computational graph. In the reverse pass, we recursively calculate the gradients.Reverse mode only evalates the function at the specified values. To calculate the gradient with respect to each input, you have to explicitly callNode.grad(). 
Examples can be found below.Univariate functionThe derivative of the function is not stored within the function object, but rather is computed on the fly whenx.grad()is called.>>>importautodiff.reverseasad>>>x=ad.Node(2)>>>f=7*(x**3)+3*x>>>grad=x.grad()# compute gradient>>>print(f"Function value:{f.val}, derivative w.r.t x ={grad}")Functionvalue:62,derivativew.r.tx=87.0Note that to reuse thexvariable again, without accumulating gradients you must callad.Node.zero_grad(x). A more detailed example can be found below when using vector functions.Multivariate functions>>>importautodiff.reverseasad>>>x=ad.Node(2)>>>y=ad.Node(4)>>>f=7*(x**3)+3*y>>>grad=[x.grad(),y.grad()]# explicitly compute all gradients w.r.t. x and y>>>print(f"Function value:{f.val}, derivative:{grad}")Functionvalue:68,derivative:[84.0,3.0]Vector functions>>>importautodiff.reverseasad>>>x,y,z=ad.Node.from_array([2,4,6])>>>deff(x,y,z):# Vector function mapping 3 inputs to 2 outputs...f1=7*(x**3)+3*y...f1_grad=[x.grad(),y.grad(),z.grad()]# compute gradient w.r.t. all inputs, before computing f2...ad.Node.zero_grad(x,y,z)# must be called before computing f2, otherwise gradients will accumulate...f2=y/x+z**2...f2_grad=[x.grad(),y.grad(),z.grad()]...returnf1,f1_grad,f2,f2_grad>>>f1,f1_grad,f2,f2_grad=f(x,y,z)>>>print(f"First function value:{f1.val}, derivative:{f1_grad}")Firstfunctionvalue:68,derivative:[84.0,3.0,1.0]>>>print(f"Second function value:{f2.val}, derivative:{f2_grad}")Secondfunctionvalue:38.0,derivative:[-1.0,0.5,12.0]Elementary operationsWe allow users to import overloaded elementary functions (sine, cosine, tangent, exponential, log, sqrt) to perform operations on Nodes.>>>importautodiff.reverseasad>>>x,y=ad.Node.from_array([2,4])>>>f=ad.exp(x)+y>>>grad=[x.grad(),y.grad()]>>>print(f"Function value:{f.val:.4f}, derivative: [{grad[0]:.4f}{grad[1]:.4}]")Functionvalue:11.3891,derivative:[7.38911.0]
ac4y-object
Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content.
ac4y-service
No description available on PyPI.
ac8593-key-manager
Key Manager

This library creates a simple key manager so that the actual key is hidden from the code. This was done as a part of the homework for the Data Engineering class at NYU.

Installation

pip install ac8593_key_manager==0.2.0

Get started

Suppose Max wants to hide the key from his code. All he needs to do is create a .txt file and put his key in that file. Then, he can add the name of the key file to the .gitignore. This ensures that his key does not get pushed to git and can be passed around independently of the code he writes. Moreover, people can use their own keys too without having to change the code!

Let's say Max saves his key in max_api_key.txt

from ac8593_key_manager import KeyManager

# Instantiate a KeyManager object
key_manager = KeyManager('max_api_key.txt')

# Call the get_key method to get the key. This key can be used to build the Polygon API client
key = key_manager.get_key()
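The library's surface is small: a constructor taking the key file's path and a get_key method. A minimal sketch of how such a file-backed key manager might be implemented, based only on the usage shown above (this is an illustration, not the package's actual source):

```python
class KeyManager:
    """Reads an API key from a text file kept out of version control."""

    def __init__(self, key_file_path):
        self.key_file_path = key_file_path

    def get_key(self):
        # Strip the trailing newline that editors usually add.
        with open(self.key_file_path) as f:
            return f.read().strip()
```

Because the key lives in a file listed in .gitignore, the code itself never contains the secret, and each collaborator can drop in their own key file.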
aca
# Aho–Corasick automaton + keyword tree implementation for PythonBy Timo Petmanson @ Funderbeam ( https://funderbeam.com )This package is a C++ implementation of the Aho-Corasick automaton and wrapped in Python with the following features:* dictionary matching with linear O(n) complexity* efficient String -> String dictionary* serialization* functionality for removing overlaps while maximizing the number of matched tokensPlease refer to examples below for more details.## The data structureIn computer science, the Aho–Corasick algorithm is a string searching algorithm invented by Alfred V. Aho and Margaret J. Corasick.It is a kind of dictionary-matching algorithm that locates elements of a finite set of strings (the "dictionary") within an input text.It matches all strings simultaneously.The complexity of the algorithm is linear in the length of the strings plus the length of the searched text plus the number of output matches.Note that because all matches are found, there can be a quadratic number of matches if every substring matches (e.g. dictionary = a, aa, aaa, aaaa and input string is aaaa).Informally, the algorithm constructs a finite state machine that resembles a trie with additional links between the various internal nodes.These extra internal links allow fast transitions between failed string matches (e.g. a search for cat in a trie that does not contain cat, but contains cart, and thus would fail at the node prefixed by ca), to other branches of the trie that share a common prefix (e.g., in the previous case, a branch for attribute might be the best lateral transition).This allows the automaton to transition between string matches without the need for backtracking.When the string dictionary is known in advance (e.g. 
a computer virus database), the construction of the automaton can be performed once off-line and the compiled automaton stored for later use.In this case, its run time is linear in the length of the input plus the number of matched entries.See https://en.wikipedia.org/wiki/Aho%E2%80%93Corasick_algorithm for more information.See https://github.com/tpetmanson/aca/blob/master/docs/slides04.pdf to learn more about how AC automatons work.## Example usage### Example 1: basic use caseCreate a dictionary of medicines and find where they match in a text.```python# create a new AC automatonfrom aca import Automatonautomaton = Automaton()# add a dictionary of words to the automatonpainkillers = ['paracetamol', 'ibuprofen', 'hydrocloride']automaton.add_all(painkillers)# match the dictionary on a texttext = 'paracetamol and hydrocloride are a medications to relieve pain and fever. paracetamol is less efficient than ibuprofen'for match in automaton.get_matches(text):print (match.start, match.end, match.elems)```Output:```0 11 paracetamol16 28 hydrocloride74 85 paracetamol109 118 ibuprofen```### Example 2: use case with tokenized keys and labels```python# create a new AC automatonfrom aca import Automatonautomaton = Automaton()# instead of plain strings, you can also use lists of tokensnames = [(['Yuri', 'Artyukhin'], 'developer'),(['Tom', 'Anderson', 'Jr'], 'designer'),]automaton.add_all(names)# you can add an item like this as wellautomaton[['Tom', 'Anderson']] = 'manager'# if you are not using plain strings, make sure you tokenize the text as welltext = 'Tom Anderson Jr and Yuri Artyukhin work on my project'.split()print ('matches that maximize the number of matched words')for match in automaton.get_matches(text):print (match.start, match.end, match.elems, match.label)```Output:```matches that maximize the number of matched words0 3 ['Tom', 'Anderson', 'Jr'] designer4 6 ['Yuri', 'Artyukhin'] developer```Note that your dictionary contains both Tom Anderson and Tom Anderson 
Jr.By default, the matcher removes the matches that overlap, but this featurecan be disabled.```pythonprint ('all matches')for match in automaton.get_matches(text, exclude_overlaps=False):print (match.start, match.end, match.elems, match.label)```Output:```0 2 ['Tom', 'Anderson'] manager0 3 ['Tom', 'Anderson', 'Jr'] designer4 6 ['Yuri', 'Artyukhin'] developer```### Example 3: dictionary use caseYou can use the automaton as a space-efficient dictionary.However, there are some implementation specific constraints:* keys can be only strings or string lists* values must be non-empty strings (with length greater than 0)* deleting keys won't free up memory, to do that you need to rebuild the Automaton* items() will always yield a list of strings```python# create a new AC automatonfrom aca import Automatonmap = Automaton()# use the automaton as a mapmap['electrify'] = 'verb'map['elegant'] = 'adjective'map['acid'] = 'noun'map['acidic'] = 'adjective'# access it like a Python dictionaryprint (map['acid'])```Output:```noun```---```python# Trying to access an non-existent key will raise KeyErrorprint (map['invalid key'])```Output:```KeyError: 'invalid key'```---```python# you can use get to provide a default value when key is missingprint (map.get('invalid key', 'default value'))```Output:```default value```---```python# NB! Implementation specific special case: empty strings# denote "missing" values, so you can't use thesemap['special'] = ''print (map['special'])```Output:```KeyError: 'special'```---```python# you can delete itemsdel map['electrify']# trying to delete a non-existent item raises KeyErrordel map['invalid key']```Output:```KeyError: 'invalid key'```---```python# NB! Implementation specific special case: empty strings# denote "missing" values, so you can't use thesemap['special'] = ''print (map['special'])```Output:```KeyError: 'special'```---```python# iterate items like a dict# NB! 
Due to implementation specifics, this will always yield list of strings.print ('items:')for key, value in map.items():print ('{}: {}'.format(key, value))```Output:```items:['a', 'c', 'i', 'd']: noun['a', 'c', 'i', 'd', 'i', 'c']: adjective['e', 'l', 'e', 'g', 'a', 'n', 't']: adjective```---```python# you can also iterate prefixesprint ('prefixes:')for prefix, value in map.prefixes():print ('{}: {}'.format(prefix, value))```Output:```[]:['a']:['a', 'c']:['a', 'c', 'i']:['a', 'c', 'i', 'd']: noun['a', 'c', 'i', 'd', 'i']:['a', 'c', 'i', 'd', 'i', 'c']: adjective['e']:['e', 'l']:['e', 'l', 'e']:['e', 'l', 'e', 'c']:['e', 'l', 'e', 'c', 't']:['e', 'l', 'e', 'c', 't', 'r']:['e', 'l', 'e', 'c', 't', 'r', 'i']:['e', 'l', 'e', 'c', 't', 'r', 'i', 'f']:['e', 'l', 'e', 'c', 't', 'r', 'i', 'f', 'y']:['e', 'l', 'e', 'g']:['e', 'l', 'e', 'g', 'a']:['e', 'l', 'e', 'g', 'a', 'n']:['e', 'l', 'e', 'g', 'a', 'n', 't']: adjective['s']:['s', 'p']:['s', 'p', 'e']:['s', 'p', 'e', 'c']:['s', 'p', 'e', 'c', 'i']:['s', 'p', 'e', 'c', 'i', 'a']:['s', 'p', 'e', 'c', 'i', 'a', 'l']:```### Example 4: saving and loading```pythonfrom aca import Automatonautomaton = Automaton()automaton['Estonia'] = 'Tallinn'automaton['Germany'] = 'Berlin'automaton['Finland'] = 'Helsinki'# serialize to diskautomaton.save_to_file('myautomaton.bin')# load from diskautomaton2 = Automaton()automaton2.load_from_file('myautomaton.bin')# save / load to binary stringautomaton3 = Automaton()automaton3.load_from_string(automaton.save_to_string())print (automaton2['Estonia'])print (automaton3['Germany'])```Output:```TallinnBerlin```## Install```pip install wheelpip install cythonpip install aca```### DevelopmentFor write / test cycles, use the following command to build the code in the project folder.```python setup.py build_ext --inplace```### Distributing the library```python setup.py buildpython setup.py sdist bdist_wheel upload```### DebuggingDefine ```ACA_DEBUG``` macro in ```aca.h``` header and recompile to see more 
debugging output.### LicenseGPLv3
acaba
Failed to fetch description. HTTP Status Code: 404
acabim-common-services-cas
No description available on PyPI.
acac
acac

A handy CLI tool for competitive programming. Supports AtCoder and Algo-method (アルゴ式).

*Since this is currently a pre-release, behavior and commands may change.

Overview

This is a CLI that automates the (personally) typical workflow for solving past competitive programming problems. It can be used for running contests as well, not just past problems, but since no login feature is implemented, you need to fetch the HTML file manually.

Installation

Usable anywhere Python 3.7 or later is installed.

pip install acac

Setup

Move to your working directory and run acac init.

# example
mkdir kyopro
cd kyopro
acac init

acac.toml is created. This is the configuration file.

Example usage

First, open a problem page in your browser (for example, ABC 280 A - Pawn on a Grid) and copy the URL. Where available, the following shortcut keys are convenient:

Windows: Ctrl+L, Ctrl+C
Mac: command+L, command+C

Running a command like the following in your terminal automatically creates an environment in a folder for the problem (the "problem folder"):

acac https://atcoder.jp/contests/abc280/tasks/abc280_a

Details of what happens:

1. The problem folder is created.
2. If a source code template file is available, it is copied; otherwise an empty source file is created.
3. (If cache.html does not exist) the problem page is fetched and the HTML is saved as cache.html.
4. metadata.toml is created, storing the problem page's title and URL.
5. The sample test cases are extracted from the problem page and saved as text files.
6. The commands configured in acac.toml are executed.
7. The message configured in acac.toml is copied to the clipboard. I set a Git commit message here.

Write your code and solve the problem. Then run a command like:

acac https://atcoder.jp/contests/abc280/tasks/abc280_a -j

which proceeds as follows:

1. The commands configured in acac.toml are executed (version checks, compilation, etc.).
2. The prepared test cases are judged.
3. The commands configured in acac.toml are executed (cleanup, etc.).
4. If everything is AC, the source code is copied to the clipboard, so paste it into the browser and submit.
5. You are asked "Check other people's submissions?"; answer y to open the list of AC submissions in the same language in your browser.
6. The message configured in acac.toml is copied to the clipboard.

Configuration file

The configuration file I actually use is here.

# example configuration file
[create]
# commands to run after environment creation (below: git add, then open the source file in VS Code)
post_create_commands = [
    "git add ${dir_path}/in ${dir_path}/out ${dir_path}/metadata.toml",
    "code . ${dir_path}/${source_file_name}",
]
# message copied to the clipboard after environment creation
clipboard_message = "Create: ${url}"

[judge]
# whether to copy the source code to the clipboard after an AC judge
copy_source_code_when_ac = true
# message copied to the clipboard after judging
clipboard_message = "AC: ${url} ${source_file_name}"

[language]
# default language
default = "cpp"

[language.settings.cpp]
# source code file name
source_file_name = "main.cpp"
# template file path
template_file_path = "templates/main.cpp"

[language.settings.cpp.commands]
# commands to run before judging (below: print the version, then compile)
pre_execute = [
    "g++ --version",
    "g++ ${dir_path}/${source_file_name} -o ${dir_path}/a.out",
]
# execution command
execute = "${dir_path}/a.out"
# commands to run after judging (below: remove a.out)
post_execute = ["rm ${dir_path}/a.out"]

[language.settings.python3]
# ...

${var} substitutions:

| before | after |
| --- | --- |
| ${dir_path} | path of the problem folder |
| ${lang} | language name |
| ${source_file_name} | source code file name (not a path) |
| ${url} | problem page URL |

Command options

Mode options:

| option | mode |
| --- | --- |
| -c, --create | build the working environment (default) |
| -j, --judge | judge |
| -m, --manual | create test cases from a manually placed HTML file instead of fetching the URL |

If login is required, run acac <url> -m and then place the problem page's HTML file in the problem folder.

Other options temporarily override the defaults set in acac.toml. The equals sign is required.

| option | overrides |
| --- | --- |
| -l, --lang, lang=LANG_NAME | language |
| -s, --source, source=SOURCE_FILE_NAME | source code file name |

# examples
acac https://atcoder.jp/contests/abc280/tasks/abc280_a -l=python3 --source=main2.py
acac https://atcoder.jp/contests/abc280/tasks/abc280_a -s=main2.py lang=python3 --judge

Concept

Why doesn't it follow the common CLI convention of acac create <url> or acac judge <url>? To make this loop as fast as possible:

1. Build the environment with acac <url>.
2. Write the code.
3. Press Ctrl+P in the terminal.
4. Append -j and judge.

Since you usually run several commands against a single problem, the tool takes the approach of specifying commands and options after the URL.

Why is the problem folder layout the raw URL, which looks redundant? Early in development the layout was something like AtCoder/ABC/280/A/, but for consistency with the URL rules of past contests, for future extensibility, and to preserve ghq-like strictness, it was changed to the current form.
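The ${var} placeholders in acac.toml follow shell-style substitution, which can be modeled with Python's string.Template. This is only an illustration of the expansion rules above, not acac's actual implementation, and the folder path used here is hypothetical:

```python
from string import Template

# Values acac would supply for one problem (the dir_path is hypothetical).
context = {
    "dir_path": "atcoder.jp/contests/abc280/tasks/abc280_a",
    "source_file_name": "main.cpp",
    "url": "https://atcoder.jp/contests/abc280/tasks/abc280_a",
    "lang": "cpp",
}

command = Template("g++ ${dir_path}/${source_file_name} -o ${dir_path}/a.out")
print(command.substitute(context))
```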
acache
No description available on PyPI.
acachecontrol
Async CacheControl for aiohttp
Requires Python 3.6+
Note: Library is still under development, there might be a lot of bugs.
For contributing, see development_notes.md as a starting guide.
What and why
There is a good and simple library CacheControl written for the Python requests library, but there is nothing similar for aiohttp. The "Async CacheControl" project strives to fill this gap.
Usage
import asyncio

from acachecontrol import AsyncCache, AsyncCacheControl


async def main():
    cache = AsyncCache(config={"sleep_time": 0.2})
    # `AsyncCache()` with default configuration is used
    # if `cache` is not provided
    async with AsyncCacheControl(cache=cache) as cached_sess:
        async with cached_sess.get('http://example.com') as resp:
            resp_text = await resp.text()
            print(resp_text)


asyncio.run(main())
Extending or creating new classes
It is possible to use any cache backend, which should implement the OrderedDict interfaces: __contains__, __len__, __getitem__, __setitem__, get, pop, popitem, move_to_end:
class CustomCacheBackend():
    def __init__(self):
        self.item_order = []
        self.storage = {}

    def __contains__(self, key):
        return key in self.storage

    def __len__(self):
        return len(self.storage)

    def __getitem__(self, key):
        return self.storage[key]

    def __setitem__(self, key, value):
        self.storage[key] = value
        self.item_order.append(key)

    def get(self, key):
        return self.storage.get(key)

    def pop(self, key):
        self.item_order.remove(key)
        return self.storage.pop(key)

    def move_to_end(self, key):
        last_index = len(self.item_order) - 1
        key_index = self.item_order.index(key)
        while key_index < last_index:
            self.item_order[key_index] = self.item_order[key_index + 1]
            key_index += 1
        self.item_order[last_index] = key

    def popitem(self, last=True):
        key = self.item_order.pop() if last else self.item_order.pop(0)
        value = self.storage.pop(key)
        return value
Then you can use it in AsyncCache:
import asyncio

from acachecontrol import AsyncCache, AsyncCacheControl


async def main():
    cache = AsyncCache(cache_backend=CustomCacheBackend())
    async with AsyncCacheControl(cache=cache) as cached_sess:
        async with cached_sess.get('http://example.com') as resp:
            resp_text = await resp.text()
            print(resp_text)


asyncio.run(main())
Similarly, you can replace RequestContextManager (assume its implementation is in the module custom_implementations):
import asyncio

from acachecontrol import AsyncCache, AsyncCacheControl
from custom_implementations import CustomRequestContextManager


async def main():
    async with AsyncCacheControl(request_context_manager_cls=CustomRequestContextManager) as cached_sess:
        async with cached_sess.get('http://example.com') as resp:
            resp_text = await resp.text()
            print(resp_text)


asyncio.run(main())
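Note that the interface listed above is exactly what the standard library's collections.OrderedDict already provides, so a plain OrderedDict makes a handy reference when testing a custom backend (a sketch of that observation, not an officially documented usage of the library):

```python
from collections import OrderedDict

# OrderedDict supports __contains__, __len__, __getitem__, __setitem__,
# get, pop, popitem, and move_to_end -- the whole required interface.
backend = OrderedDict()
backend["a"] = 1
backend["b"] = 2
backend.move_to_end("a")              # "a" becomes the most recently used entry
assert "b" in backend and len(backend) == 2
oldest = backend.popitem(last=False)  # evict the least recently used entry
print(oldest)
```

A custom backend like CustomCacheBackend above can be checked against OrderedDict by running the same sequence of operations on both and comparing the results.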
acacia
SpruceAcacia is a program that lets you generate simple and concise presentations, as all presentations should be.
Installation
Use the package manager pip to install Acacia.
pip install Acacia
Usage
Acacia yourfile.txt
Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
License
GPL
academia-rl
academia
This package’s purpose is to provide easy-to-use tools for Curriculum Learning. It is a part of an engineering thesis at Warsaw University of Technology that touches on the topic of curriculum learning.
Documentation
https://academia.readthedocs.io/
Sources
An unordered list of interesting books and papers
Books
Reinforcement Learning: An Introduction (Barto, Sutton)
Papers
Paper Link | Short Description | Related Papers
Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey | Survey of curriculum learning papers. Formalises curriculum learning based on a variety of attributes and gives a good introduction to the topic. |
Curriculum Design for Machine Learners in Sequential Decision Tasks | Talks about curricula designed by non-experts (i.e. people who do not know much/anything about a given domain). Uses a "dog training" game as the basis of their experiments. Users can design a curriculum by sequencing any number of tasks in more or less complex environments. A target (more difficult) task is also provided to the user, but they are not allowed to include it in the curriculum. A trainer model is used to go through the curriculum and provide feedback to the agent. They measure how good a curriculum is by the number of feedbacks that the trainer has to give to the agent, i.e. if a curriculum is well designed, the agent will require a relatively smaller number of feedbacks from the trainer to move on to the next task. They use three different trainer behaviours and show that the type of the trainer does not influence the impact of curriculum design, i.e. if a curriculum is well designed under one trainer, it is also well designed under another trainer. Another condition for a curriculum to be considered good is that the number of feedbacks over the entire curriculum (with the target task) should be smaller than the number of feedbacks when training on the target task alone.
Results show that non-experts can design a better-than-random curriculum when it comes to reducing the number of feedbacks on the target task alone, but are not better-than-random in designing a curriculum that decreases the overall number of feedbacks. | Language and Policy Learning from Human-delivered Feedback, Learning behaviors via human-delivered discrete feedback
Proximal Policy Optimization Algorithms | Not directly related to Curriculum Learning but related to Reinforcement Learning. |
A Deep Hierarchical Approach to Lifelong Learning in Minecraft | Haven't read it yet; not directly connected to CL but should still be helpful |
academic
中文Academic File Converter📚 Easily import publications and Jupyter notebooks to your Markdown-formatted website or bookFeaturesImport Jupyter notebooksas blog posts or book chaptersImport publications(such asbooks, conference proceedings, and journals) from your reference manager to your Markdown-formatted website or bookSimply export a BibTeX file from your reference manager, such asZotero, and provide this as the input to the converter toolCompatible with all static website generatorssuch as Next, Astro, Gatsby, Hugo, etc.Easy to use- 100% Python, no dependency on complex software such as PandocAutomatefile conversions using aGitHub ActionCommunity📚View thedocumentationbelow💬Chat live with thecommunityon Discord🐦 Twitter:@GetResearchDev@GeorgeCushen#MadeWithAcademic❤️ Support Open Research & Open SourceWe are on a mission to fosteropen researchby developingopen sourcetools like this.To help us develop this open source software sustainably under the MIT license, we ask all individuals and businesses that use it to help support its ongoing maintenance and development via sponsorship and contributing.Support the open research movement:⭐️Starthis project on GitHub❤️Become aGitHub Sponsorandunlock perks☕️Donate a coffee👩‍💻ContributeInstallationOpen yourTerminalorCommand Promptapp and enter one of the installation commands below.With PipxFor theeasiestinstallation, install withPipx:pipx install academicPipx willautomatically install the required Python version for youin a dedicated environment.With PipTo install using the Python's Pip tool, ensure you havePython 3.11+installed and then run:pip3 install -U academicUsageOpen your Command Line or Terminal app and use thecdcommand to navigate to the folder containing the files you wish to convert, for example:cd ~/Documents/my_websiteImport publicationsDownload references from your reference manager, such as Zotero, in the Bibtex format.Say we downloaded our publications to a file namedmy_publications.bibwithin the website 
folder, let's import them into thecontent/publication/folder:academic import my_publications.bib content/publication/ --compactOptional arguments:--compactGenerate minimal markdown without comments or empty keys--overwriteOverwrite any existing publications in the output folder--normalizeNormalize tags by converting them to lowercase and capitalizing the first letter (e.g. "sciEnCE" -> "Science")--featuredFlag these publications asfeatured(to appear in your website'sFeatured Publicationssection)--verboseor-vShow verbose messages--helpHelpImport full text and cover imageAfter importing publications, we suggest you:Edit the Markdown body of each publication to add the full text directly to the page (if the publication is open access), or otherwise, to add supplementary notes for each publicationAdd an image namedfeaturedto each publication's folder to visually represent your publication on the page and for sharing on social mediaAdd the publication PDF to each publication folder (for open access publications), to enable your website visitors to download your publicationLearn more in the Hugo Blox Docs.Import blog posts from Jupyter NotebooksSay we have our notebooks in anotebooksfolder within the website folder, let's import them into thecontent/post/folder:academic import 'notebooks/*.ipynb' content/post/ --verboseOptional arguments:--overwriteOverwrite any existing blog posts in the output folder--verboseor-vShow verbose messages--helpHelpContributeInterested in contributing toopen sourceandopen research?Learnhow to contribute code on Github.Check out theopen issuesand contribute aPull Request.For local development, clone this repository and use Poetry to install and run the converter using the following commands:git clone https://github.com/GetRD/academic-file-converter.git cd academic-file-converter poetry install poetry run academic import tests/data/article.bib output/publication/ --overwrite --compact poetry run academic import 'tests/data/**/*.ipynb' 
output/post/ --overwrite --verboseWhen preparing a contribution, run the following checks and ensure that they all pass:Lint:make lintFormat:make formatTest:make testType check:make typeHelp beta test the dev versionYou can help test the latest development version by installing the latestmainbranch from GitHub:pip3 install -U git+https://github.com/GetRD/academic-file-converter.gitLicenseCopyright 2018-presentGeorge Cushen.Licensed under theMIT License.
academic-ads-bibtex
academic-ads-bibtexAPI documentation is available at:ReadTheDocsOverviewInstallationFrom PyPiFrom sourceExamplesVersioningAuthorsLicenseOverviewTheHugo Academic admin toolallows for the ingestion of BibTeX records to add to the publication list. One easy solution is to use the NASA ADS to retrieve such records from a NASA ADS Library. However, such records often contain LaTeX\newcommand. For example:@ARTICLE{2016ApJS..226....5L, author = {{Ly}, C. and {Malhotra}, S. and {Malkan}, M.~A. and {Rigby}, J.~R. and {Kashikawa}, N. and {de los Reyes}, M.~A. and {Rhoads}, J.~E. }, title = "{The Metal Abundances across Cosmic Time (MACT) Survey. I. Optical Spectroscopy in the Subaru Deep Field}", journal = {\apjs}, archivePrefix = "arXiv", eprint = {1602.01089}, keywords = {galaxies: abundances, galaxies: distances and redshifts, galaxies: evolution, galaxies: ISM, galaxies: photometry, galaxies: star formation}, year = 2016, month = sep, volume = 226, eid = {5}, pages = {5}, doi = {10.3847/0067-0049/226/1/5}, adsurl = {https://ui.adsabs.harvard.edu/abs/2016ApJS..226....5L}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} }Here, the journal name is simplified to "\apjs". This ends up propagating into Hugo Academic sites. To fix this, this simple pure Python script will convert such aliases into the full journal names. It uses a journal database to conduct the replacement.InstallationThere are two ways to get the code:FromPyPiFromsourceBut first, we recommend creating a separate (virtual) environment to avoid any possible conflicts with existing software that you used. 
Instructions are provided for conda and virtualenv.
From PyPi:
Using conda:
(base) $ (sudo) conda create -n bibtex python=3.7
(base) $ conda activate bibtex
(bibtex) $ (sudo) pip install academic-ads-bibtex
Using virtualenv:
(base) $ (sudo) conda install virtualenv  # if not installed
(base) $ mkdir academic-ads-bibtex
(base) $ cd academic-ads-bibtex
(base) $ virtualenv venv
(base) $ source venv/bin/activate
(venv) $ pip install academic-ads-bibtex
From source:
Using conda:
(base) $ (sudo) conda create -n bibtex python=3.7
(base) $ conda activate bibtex
(bibtex) $ git clone https://github.com/astrochun/academic-ads-bibtex.git
(bibtex) $ cd academic-ads-bibtex
(bibtex) $ (sudo) python setup.py install
Using virtualenv:
(base) $ (sudo) conda install virtualenv  # if not installed
(base) $ git clone https://github.com/astrochun/academic-ads-bibtex.git
(base) $ cd academic-ads-bibtex
(base) $ virtualenv venv
(base) $ source venv/bin/activate
(venv) $ python setup.py install
Examples
The primary script to execute is academic_ads_bibtex. The above installation will include this executable in your python environment paths.
Execution requires only one argument, which is the full path to the BibTeX file. It can be provided with the -f or --filename command-line flags.
$ academic_ads_bibtex -f /full/path/to/my_pubs.bbl
By default:
The code uses the repository-based journal database, bibtex_journals.db. This can be changed by specifying the -d or --db_filename command-line flag.
The revised BibTeX file will be based on the input filename with the prefix changed to include _revised. For example, for the above case, the output file will be /full/path/to/my_pubs_revised.bbl. This can be changed by specifying the -o or --out_filename command-line flag.
A log file is constructed: /full/path/to/academic_ads_bibtex.YYYY-MM-DD.log
Versioning
We use SemVer for versioning. For the versions available, see the releases on this repository.
Authors
Chun Ly, Ph.D. (@astrochun)
License
This project is licensed under the GNU GPLv3 License. See the LICENSE file for details.
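The core replacement the tool performs, expanding LaTeX journal macros like \apjs into full journal names, can be sketched as a lookup over the BibTeX text (a minimal illustration; the function name and the small macro table below are assumptions, not the package's actual database or API):

```python
import re

# Illustrative subset of a journal-macro table; the real tool reads
# these mappings from its bibtex_journals.db database.
JOURNAL_MACROS = {
    "apj": "The Astrophysical Journal",
    "apjs": "The Astrophysical Journal Supplement Series",
    "mnras": "Monthly Notices of the Royal Astronomical Society",
}

def expand_journal_macros(record: str, macros=JOURNAL_MACROS) -> str:
    """Replace occurrences like \\apjs with the full journal name,
    leaving unknown LaTeX commands untouched."""
    def repl(match):
        return macros.get(match.group(1), match.group(0))
    return re.sub(r"\\([A-Za-z]+)", repl, record)

print(expand_journal_macros('journal = {\\apjs},'))
```

Run against the example record above, the `journal = {\apjs},` line would come out as `journal = {The Astrophysical Journal Supplement Series},`.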
academical-api-client
UNKNOWN
academic-chatgpt
ChatGPT Academic OptimizerA simple web interface for academic research and experimentation using GPT-3.5.This is the forked version fromthe projectIf you like this project, please give it a star. If you have come up with more useful academic shortcuts, feel free to open an issue or pull request.FeaturesAutomatic paper abstract generation based on a provided LaTeX fileAutomatic code summarization and documentation generationC++ project header file analysisPython project analysisSelf-code interpretation and dissectionExperimental function templateFunctionDescriptionOne-click polishingSupports one-click polishing and finding grammar errors in papersOne-click Chinese-English translationOne-click Chinese-English translationOne-click code interpretationCan display code correctly and interpret codeCustom shortcut keysSupports custom shortcut keysConfigure proxy serverSupports configuring proxy serverModular designSupports customizable high-order experimental functionsSelf-program analysis[Experimental feature] One-click to understand the source code of this projectProgram analysis[Experimental feature] One-click to analyze other Python/C++ projectsReading papers[Experimental feature] One-click to read the full text of a latex paper and generate an abstractBatch comment generation[Experimental feature] One-click to generate function comments in batcheschat analysis report generation[Experimental feature] Automatically generates summary reports after runningFormula displayCan display the tex form and rendering form of the formula at the same timeImage displayCan display images in markdownSupports markdown tables generated by GPTSupports markdown tables generated by GPTNew interfaceAll buttons are dynamically generated by reading functional.py, and custom functions can be freely added to free the clipboardCode display is also naturalhttps://www.bilibili.com/video/BV1F24y147PD/Supports markdown tables generated by GPTIf the output contains formulas, it will be displayed in 
both tex and rendering forms at the same time for easy copying and reading
Too lazy to read the project code? Just let ChatGPT explain it.
Usage
Prerequisites
OpenAI API key (can be obtained from here)
Python 3.9 or higher
Setup
$ pip install academic-chatgpt
Run
Set your OpenAI API key and other configurations in chataca.toml or ~/.config/chataca/chataca.toml
The configuration file is located in the current working directory or ~/.config/chataca/. An example chataca.toml:
API_KEY = "sk-zH**********************************************"
API_URL = "https://api.openai.com/v1/chat/completions"
USE_PROXY = false
TIMEOUT_SECONDS = 30
WEB_PORT = 8080
MAX_RETRY = 3
LLM_MODEL = "gpt-3.5-turbo"
If you are in China, you need to set up an overseas proxy to use the OpenAI API.
Start the server:
chataca
Experimental features
C++ project header file analysis
In the project path area, enter the project path and click on "[Experimental] Analyze entire C++ project (input the root directory of the project)"
LaTeX project abstract generation
In the project path area, enter the project path and click on "[Experimental] Read LaTeX paper and write abstract (input the root directory of the project)"
Python project analysis
In the project path area, enter the project path and click on "[Experimental] Analyze entire Python project (input the root directory of the project)"
academicdb
academicdb: An academic database builderWhy build your CV by hand when you can create it programmatically? This package uses a set of APIs (including Scopus, ORCID, CrossRef, and Pubmed) to generate a database of academic acheivements, and provides a tool to render those into a professional-looking CV. Perhaps more importantly, it provides a database of collaborators, which can be used to generate the notorious NSF collaborators spreadsheet.Installing academicdbTo install the current version:pip install academicdbIn addition to the Python packages required by academicdb (which should be automatically installed), you will also need a MongoDB server to host the database. There are two relatively easy alternatives:Install MongoDBon your own system.Create a free cloud MongoDB instancehere.The former is easier, but I prefer the latter because it allows the database to accessed from any system.Rendering the CV after building the database requires that LaTeX be installed on your system and available from the command line. There are various LaTeX distributions depending on your operating system.Note: If you get an error that the font Tex Gyre Termes is not installed when executingrender_cv, you can install it using Homebrew like so:$ brew tap homebrew/cask-fonts $ brew install --cask font-tex-gyre-termesConfiguring academicdbTo use academicdb, you must first set up some configuration files, which will reside in~/.academicdb. The most important isconfig.toml, which contains all of the details about you that are used to retrieve your information. 
Here are the contents of mine as an example:
[researcher]
lastname = "poldrack"
middlename = "alan"
firstname = "russell"
email = "[email protected]"
orcid = "0000-0001-6755-0259"
query = "poldrack-r"
url = "http://poldrack.github.io"
twitter = "@russpoldrack"
github = "http://github.com/poldrack"
phone = "650-497-8488"
scholar_id = "RbmLvDIAAAAJ"
scopus_id = "7004739390"
address = [
    "Stanford University",
    "Department of Psychology",
    "Building 420",
    "450 Jane Stanford Way",
    "Stanford, CA, 94305-2130",
]
Most of this should be self-explanatory. There are several identifiers that you need to specify:
ORCID: This is a unique identifier for researchers. If you don't already have an ORCID you can get one here. You will need to enter information about your education, employment, invited positions and distinctions, and memberships and service into your ORCID account since that is where academicdb looks for that information.
Google Scholar: You will also need to retrieve your Google Scholar ID. Once you have set up your profile, go to the "My Profile" page. The URL from that page contains your id: for example, my URL is https://scholar.google.com/citations?user=RbmLvDIAAAAJ&hl=en and the ID is RbmLvDIAAAAJ.
Scopus: Scopus is a service run by Elsevier. It provides a service that is not available from anywhere else: for each reference it provides a set of unique identifiers for the coauthors, which can be used to retrieve their affiliation information. This is essential for generating the NSF collaborators spreadsheet.
cloud MongoDB setup
If you are going to use a cloud MongoDB server, you will need to add the following lines to your config.toml:
[mongo]
CONNECT_STRING = 'mongodb+srv://<username>:<password>@<server>'
The cloud provider should provide you with the connection string that you can paste into this variable.
Obtaining an API key for Scopus
You will need to obtain an API key to access the Scopus database, which you can obtain from http://dev.elsevier.com/myapikey.html.
This is used by thepybliometricspackage to access the APIs. Note that this key will only work if you are on your institution's network and the institution has the appropriate license with Elsevier. You can also request an institutional token from Elsevier if you wish to use the APIs from outside of your institution's network.The first time you use the package, you will be asked by pybliometrics to enter your API key (and InstToken if you have one), which will be stored in~/.pybliometrics/config.inifor reuse.specifying additional informationThere are a number of pieces of information that are difficult to reliably obtain from ORCID or other APIs, so they must be specified in a set of text files, which should be saved in the base directory that is specified when the command linedbbuildertool is used. See theexamplesdirectory for examples of each of these.editorial.csv: information about editorial rolestalks.csv: information about non-conference talks at other institutionsconference.csv: Information about conference talksteaching.csv: Information about teachingfunding.csv: Information about grant fundingIn addition, there may be references (including books, book chapters, and published conference papers) that are not detected by the automated search and need to be added by hand, using a file calledadditional_pubs.csvin the base directory.Finally there is a file calledlinks.csvthat allows one to specify links related to individual publications, such asOSFrepositories, shared code, and shared data. These links will be rendered in the CV alongside the publications.Building the databaseTo build the database, you use thedbbuildercommand line tool. 
The simplest usage is:dbbuilder -b <base directory for files and output>The full usage for the script is:usage: dbbuilder [-h] [-c CONFIGDIR] -b BASEDIR [-d] [-o] [--no_add_pubs] [--no_add_info] [--nodb] [-t] [--bad_dois_file BAD_DOIS_FILE] optional arguments: -h, --help show this help message and exit -c CONFIGDIR, --configdir CONFIGDIR directory for config files -b BASEDIR, --basedir BASEDIR base directory -d, --debug log debug messages -o, --overwrite overwrite existing database --no_add_pubs do not get publications --no_add_info do not add additional information from csv files --nodb do not write to database -t, --test test mode (limit number of publications) --bad_dois_file BAD_DOIS_FILE file with bad dois to removeRendering the CVThe render the CV after building the database, use therender_cvcommand line tool. The simplest usage is:render_cvThis will create a LaTeX version of the CV and then render it usingxelatex.The full usage is:usage: render_cv [-h] [-c CONFIGDIR] [-f FORMAT] [-d OUTDIR] [-o OUTFILE] [--no_render] optional arguments: -h, --help show this help message and exit -c CONFIGDIR, --configdir CONFIGDIR directory for config files -d OUTDIR, --outdir OUTDIR output dir -o OUTFILE, --outfile OUTFILE output file stem --no_render do not render the output file (only create .tex)Creating the NSF collaborators spreadsheetThe database builder script will create a database collection calledcoauthorsthat contains the relevant information. The script to convert these to a spreadsheet is currently TBD; PR's welcome!
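Once the coauthors collection is populated, converting it to a spreadsheet is mostly a matter of sorting and writing CSV rows. A hedged sketch of that idea, assuming a hypothetical record schema with name, affiliation, and last-publication-date fields (this is not the package's code, which is still TBD):

```python
import csv
import io

def coauthors_to_rows(coauthors):
    # Sort hypothetical coauthor records by last name, then first name,
    # as the NSF collaborators table expects.
    return sorted(coauthors, key=lambda c: (c["lastname"].lower(), c["firstname"].lower()))

def write_nsf_csv(coauthors, fileobj):
    writer = csv.writer(fileobj, lineterminator="\n")
    writer.writerow(["Name", "Affiliation", "Last Active"])
    for c in coauthors_to_rows(coauthors):
        writer.writerow([f'{c["lastname"]}, {c["firstname"]}',
                         c["affiliation"], c["last_pub_date"]])

# Example records standing in for documents from the coauthors collection.
coauthors = [
    {"lastname": "Smith", "firstname": "Jan", "affiliation": "MIT",
     "last_pub_date": "2022-01-15"},
    {"lastname": "Adams", "firstname": "Ada", "affiliation": "Stanford University",
     "last_pub_date": "2023-06-01"},
]
buf = io.StringIO()
write_nsf_csv(coauthors, buf)
print(buf.getvalue())
```

In a real script, the `coauthors` list would come from a query against the MongoDB collection rather than being hard-coded.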
academic-markdown
Academic Markdown CLIThis is the command line tool used in theacademic_markdownproject. While there should be a discrete README for this project, for now I would like to ask anyone to move there for more information on the project.
academic-observatory
No description available on PyPI.
academic-observatory-workflows
Academic Observatory Workflows provides Apache Airflow workflows for fetching, processing and analysing data about academic institutions.Telescope WorkflowsA telescope a type of workflow used to ingest data from different data sources, and to run workflows that process and output data to other places. Workflows are built on top of Apache Airflow's DAGs.The workflows include: Crossref Events, Crossref Fundref, Crossref Metadata, Geonames, GRID, Microsoft Academic Graph, Open Citations, ORCID, Scopus, Unpaywall and Web of Science.Telescope WorkflowDescriptionCrossref Event Data captures discussion on scholarly content and acts as a hub for the storage and distribution of this data. An event may be a citation in a dataset or patent, a mention in a news article, Wikipedia page or on a blog, or discussion and comment on social media.The Crossref Funder Registry is an open registry of grant-giving organization names and identifiers, which can be used to find funder IDs and include them as part of metadata deposits. It is a freely-downloadable RDF file. It is CC0-licensed and available to integrate with your own systems. Funder names from acknowledgements should be matched with the corresponding unique funder ID from the Funder RegistryCrossref is a non-for-profit membership organisation working on making scholarly communications better. It is an official Digital Object Identifier (DOI) Registration Agency of the International DOI Foundation. They provide metadata for every DOI that is registered with Crossref.The GeoNames geographical database covers all countries. It contains over 25 million geographical names and consists of over 11 million unique features whereof 4.8 million populated places and 13 million alternate namesGRID is a free, openly accessible database of research institution identifiers which enables users to make sense of their data. 
It does so by minimising the work required to link datasets together using a unique and persistent identifier.Microsoft Academic Graph contains scientific publication records, citation relationship between those publications, as well as authors, institutions, journals, conferences, and field of study. It is updated on a weekly basis. It currently indexes over 220 million publications, 88 million of which are journal articlesOpenCitations is an independent not-for-profit infrastructure organization for open scholarship dedicated to the publication of open bibliographic and citation dataORCID is a non-profit organization that provides researchers with a unique digital identifier which eliminates the risk of confusing an identity with another researcher having the same name. ORCID provides a record that supports automatic links among all the researcher's professional activities.SCOPUS is an Elsevier bibliometrics database containing abstracts, citations, of journals, books, and conference proceedingsUnpaywall is an open database of free scholarly articles. It includes data from open indexes like Crossref and DOAJ where it exists. Data comes from “monitoring over 50,000 unique online content hosting locations, including Gold OA journals, Hybrid journals, institutional repositories, and disciplinary repositories.Web of science, previously Web of knowledge, provides bibliometric information, including funding acknowledgements, international publication identifiers, and abstractsDocumentationFor detailed documentation about the Academic Observatory see the Read the Docs websitehttps://academic-observatory-workflows.readthedocs.io
academic-search-engine
Academic Search EngineDescriptionOur project was developed on the basis of determining the sites frequently used by the academicians, and outputting the links of the article studies according to the input that they search automatically through the sites.Our project works with search engine logic. You enter the topic you are looking for in the search engine and you will get results. This study also works only on academic research. Simply enter the topic you are looking for and define how many articles you want.Project workspace platforms are as follows;Google AkademikDergi ParkScience Direct! We are waiting for your development support to work on more platforms.How to Setup?We have made it work in the PYPI environment, which allows you to automatically install our project, so you can install it automatically using the following command.pipinstallacademic-search-engineHow to RUN?To run our project, it is sufficient to examine the content of the example file.Project Testfrom academic_search_engine import academic_search_engine result = academic_search_engine() """ Returns Dergipark and Google Scholar results. 
""" result.sayfa("big data", 1)Result:{"Dergi Park Link":"https://dergipark.org.tr/tr/pub/connectist/issue/56249/775841","Google Akademik Link":"https://www.nature.com/articles/498255a"}[{'DergiParkLink':'https://dergipark.org.tr/tr/pub/connectist/issue/56249/775841', 'Google Akademik Link': 'https://www.nature.com/articles/498255a'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/iktisad/issue/72676/1144242', 'Google Akademik Link': 'http://dbjournal.ro/archive/10/10.pdf'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/ajit-e/issue/54422/740748', 'Google Akademik Link': 'https://iacis.org/iis/2015/2_iis_2015_81-90.pdf'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/jes/issue/64868/878318', 'Google Akademik Link': 'https://citeseerx.ist.psu.edu/document?repid=rep1&amp;type=pdf&amp;doi=2d562bb0dbb0c5b757be86995f5497e760c769b8'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/iuchkd/issue/57846/813328', 'Google Akademik Link': 'https://rss.onlinelibrary.wiley.com/doi/pdf/10.1111/j.1740-9713.2014.00778.x'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/ibad/issue/58423/827546', 'Google Akademik Link': 'https://ieeexplore.ieee.org/iel5/4236/6188571/06188576.pdf'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/ejosat/issue/59648/822219', 'Google Akademik Link': 'https://books.google.com/books?hl=tr&amp;lr=&amp;id=p7d1BwAAQBAJ&amp;oi=fnd&amp;pg=PR8&amp;dq=big+data&amp;ots=72g6pMcWLr&amp;sig=M819r5jeynPW-zFaj-Mv-JbdGRg'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/talid/issue/70969/1115782', 'Google Akademik Link': 'http://www.scielo.org.co/scielo.php?script=sci_arttext&amp;pid=S0121-11292015000100006'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/trta/issue/60117/819385', 'Google Akademik Link': 'https://www.ias.ac.in/public/Volumes/reso/021/08/0695-0716.pdf'}, {'Dergi Park Link': 'https://dergipark.org.tr/tr/pub/dusuncevetoplum/issue/63163/915151', 'Google Akademik Link': 
'https://ieeexplore.ieee.org/iel7/38/6562702/06562707.pdf'}]

WARNINGS!!

!Important Note!: In order to use the Science Direct platform, you must be registered on the platform and enter your username and password into the program. Unless you deploy the program publicly or share it, no one can access your username and password; all responsibility belongs to the user.

!Important Note!: The project was not developed for web scraping or for maliciously collecting people's data. It was developed to fetch all article "Platform Links" with a single input, in order to minimize the time spent by people doing academic research across separate platforms. Responsibility belongs to the user. Do not use it for anything other than its intended purpose!

Description

Our project was developed by identifying the sites frequently used by people doing academic work; it automatically searches those sites and outputs the links of article studies according to the input entered. The project works with search-engine logic: you enter the topic you are searching for and you get results. This tool works only on academic research; it is enough to enter the topic you are looking for and specify how many articles you want.

The platforms the project works on are as follows:

Google Akademik
Dergi Park
Science Direct

! We welcome your development contributions so that it works on more platforms.

How to Install?

We made our project available in the PyPI environment, which allows you to install it automatically, so you can install it with the following command:

pip install academic-search-engine

How Does It Work?

To run the project, it is enough to examine the contents of the example file.

Let's test the project.

from academic_search_engine import academic_search_engine
result = academic_search_engine()
"""
Returns the Dergi Park and Google Akademik results.
"""
result.sayfa("big data", 1)

Result:

{"Dergi Park Link": "https://dergipark.org.tr/tr/pub/connectist/issue/56249/775841", "Google Akademik Link": "https://www.nature.com/articles/498255a"}

WARNINGS!!

!Important Note!: In order to use the Science Direct platform, you must be registered on the platform and enter your username and password into the program. Unless you deploy the program publicly or share it, no one can access your username and password; all responsibility belongs to the user.

!Important Note!: The project was not developed for web scraping or for maliciously collecting people's data. It was developed to fetch all article "Platform Links" with a single input, in order to minimize the time spent by people doing academic research across separate platforms. Responsibility belongs to the user. Do not use it for anything other than its intended purpose!

Please send an e-mail for your requests and complaints.
academics-reddit-scraper
No description available on PyPI.
academics-scholar-scraper
No description available on PyPI.
academictorrents
Copyright (c) 2018 academictorrents

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
academic-tracker
Academic Tracker was created to automate the process of making sure that federally funded publications get listed on PubMed and that the grant funding source for them is cited.

Academic Tracker is a command line tool to search PubMed, ORCID, Google Scholar, and Crossref for publications. The program can be given either a list of authors to look for publications for, or references/citations to publications themselves. The program will then look for publications on the aforementioned sources and return what relevant information is available from those sources.

The primary use case is to give the program a list of authors to find publications for. From this list of publications it can then be determined which ones need further action to be in compliance.

A secondary use case for finding authors' publications is to create a report of the collaborators they have worked with, which can be done by specifying the creation of that report in the configuration file. Details on reports are in the documentation.

The other primary use case is to give the program a list of publication references to find information for.

Links

Academic Tracker @ GitHub
Academic Tracker @ PyPI
Documentation @ Pages

Installation

The Academic Tracker package runs under Python 3.8+. Use pip to install. Starting with Python 3.4, pip is included by default. Be sure to use the latest version of pip, as older versions are known to have issues grabbing all dependencies. Academic Tracker relies on sendmail to send emails, so if you need to use that feature be sure sendmail is installed and in PATH.

Install on Linux, Mac OS X

python3 -m pip install academic-tracker

Install on Windows

py -3 -m pip install academic-tracker

Upgrade on Linux, Mac OS X

python3 -m pip install academic-tracker --upgrade

Upgrade on Windows

py -3 -m pip install academic-tracker --upgrade

Quickstart

Academic Tracker has several commands and options.
The simplest, most common use case is simply:

academic_tracker author_search config_file.json

Example config files can be downloaded from the example_configs directory of the GitHub. Academic Tracker's behavior can be quite complex though, so it is highly encouraged to read the guide and tutorial.

Creating The Configuration JSON

A configuration JSON file is required to run Academic Tracker, but creating it the first time can be burdensome. Unfortunately, there is no easy solution for this. It is encouraged to read the configuration JSON section in jsonschema and use the example there to create it initially. The add_authors command can help with building the Authors section if you already have a csv file with author information. A good tool to help track down pesky JSON syntax errors is here. There are also examples in the example_configs directory of the GitHub repo. There are also more examples in the supplemental material of the paper https://doi.org/10.6084/m9.figshare.19412165.

Registering With ORCID

In order to have this program search ORCID you must register with ORCID and obtain a key and secret. Details on how to do that are here. If you do not want to do that, then the --no_ORCID option can be used to skip searching ORCID, or don't include the ORCID_search section in the config file.

Mac OS Note

When you try to run the program on Mac OS you may get an SSL error:

certificate verify failed: unable to get local issuer certificate

This is due to a change in Mac OS and Python. To fix it, go to your Python folder in Applications and run the Install Certificates.command shell command in the /Applications/Python 3.x folder. This should fix the issue.

Email Sending Note

Academic Tracker uses sendmail to send emails, so any system it is going to be used on needs to have sendmail installed and the path in PATH. If you try to send emails without this, the program will display a warning. This can be avoided by using the --test option though. The --test option blocks email sending.
Email sending can also be avoided by leaving the from_email attribute out of the report sections of the configuration JSON file.

How Authors Are Identified

When searching by authors it is necessary to confirm that the author given to Academic Tracker matches the author returned in the query. In general this matching is done by matching the first and last names and at least one affiliation given for the author in the configuration JSON file. Note that affiliations can change over time as authors move, so authors may need many affiliations to accurately match them to their publications, depending on how far back you want to search in time.

How Publications Are Matched

When searching by publications it is necessary to confirm that the publication in the given reference matches the publication returned in the query. This is done by matching either the DOIs, the PMIDs, or the title and at least one author. Titles are fuzzy matched using fuzzywuzzy, which is why at least one author must also be matched. Authors are matched using last name and at least one affiliation.

Troubleshooting Errors

If you experience errors when running Academic Tracker, the first thing to do is simply try again. Since Academic Tracker is communicating with multiple web sources, it is not uncommon for a problem to occur with one of these sources. It might also be a good idea to wait several hours or until the next day to try again if there is a communication issue with a particular source. You can also use the various "--no_Source" options for whatever source is causing an error. For example, if Crossref keeps having 504 HTTP errors you can run with the --no_Crossref option. If the issue persists across multiple runs, then try upgrading Academic Tracker's dependencies with "pip install --upgrade dependency_name". The list of dependencies is in the guide.

License

This package is distributed under the BSD license.
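The matching rules above can be sketched as a minimal stand-in. This is not Academic Tracker's actual code: the record field names (doi, pmid, title, authors) are hypothetical, and stdlib difflib substitutes for fuzzywuzzy in the fuzzy title comparison.

```python
from difflib import SequenceMatcher

def titles_match(title_a, title_b, threshold=0.9):
    # Fuzzy title comparison; ratio() returns a similarity in [0, 1].
    # The real tool uses fuzzywuzzy; difflib is a stdlib stand-in here.
    ratio = SequenceMatcher(None, title_a.lower(), title_b.lower()).ratio()
    return ratio >= threshold

def publications_match(ref, candidate):
    # A DOI or PMID match is decisive; otherwise require a fuzzy title
    # match plus at least one shared author, as the text above describes.
    if ref.get("doi") and ref.get("doi") == candidate.get("doi"):
        return True
    if ref.get("pmid") and ref.get("pmid") == candidate.get("pmid"):
        return True
    shared = set(ref.get("authors", [])) & set(candidate.get("authors", []))
    return titles_match(ref.get("title", ""), candidate.get("title", "")) and bool(shared)
```

For example, a reference and a query result with the same DOI match immediately, while records without identifiers fall back to the title-plus-author rule.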
academic-tweet
No description available on PyPI.
academlo
Academlo: Python Web Framework built for learning purposes

Academlo is a Python web framework built for learning purposes. It's a WSGI framework and can be used with any WSGI application server such as Gunicorn.

Installation

pip install academlo

How to use it

Basic usage:

from academlo.api import API

app = API()

@app.route("/home")
def home(request, response):
    response.text = "Hello from the HOME page"

@app.route("/hello/{name}")
def greeting(request, response, name):
    response.text = f"Hello, {name}"

@app.route("/book")
class BooksResource:
    def get(self, req, resp):
        resp.text = "Books Page"

    def post(self, req, resp):
        resp.text = "Endpoint to create a book"

@app.route("/template")
def template_handler(req, resp):
    resp.body = app.template(
        "index.html", context={"name": "academlo", "title": "Best Framework"}
    ).encode()

Unit Tests

The recommended way of writing unit tests is with pytest. There are two built in fixtures that you may want to use when writing unit tests with academlo. The first one is app, which is an instance of the main API class:

def test_route_overlap_throws_exception(app):
    @app.route("/")
    def home(req, resp):
        resp.text = "Welcome Home."

    with pytest.raises(AssertionError):
        @app.route("/")
        def home2(req, resp):
            resp.text = "Welcome Home2."

The other one is client, which you can use to send HTTP requests to your handlers. It is based on the famous requests and it should feel very familiar:

def test_parameterized_route(app, client):
    @app.route("/{name}")
    def hello(req, resp, name):
        resp.text = f"hey {name}"

    assert client.get("http://testserver/matthew").text == "hey matthew"

Templates

The default folder for templates is templates. You can change it when initializing the main API() class:

app = API(templates_dir="templates_dir_name")

Then you can use HTML files in that folder like so in a handler:

@app.route("/show/template")
def handler_with_template(req, resp):
    resp.html = app.template(
        "example.html",
        context={"title": "Awesome Framework", "body": "welcome to the future!"},
    )

Static Files

Just like templates, the default folder for static files is static and you can override it:

app = API(static_dir="static_dir_name")

Then you can use the files inside this folder in HTML files:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>{{title}}</title>
    <link href="/static/main.css" rel="stylesheet" type="text/css">
</head>
<body>
    <h1>{{body}}</h1>
    <p>This is a paragraph</p>
</body>
</html>

Middleware

You can create custom middleware classes by inheriting from the academlo.middleware.Middleware class and overriding its two methods that are called before and after each request:

from academlo.api import API
from academlo.middleware import Middleware

app = API()

class SimpleCustomMiddleware(Middleware):
    def process_request(self, req):
        print("Before dispatch", req.url)

    def process_response(self, req, res):
        print("After dispatch", req.url)

app.add_middleware(SimpleCustomMiddleware)
academlogen9
Academlo: Python Web Framework built for learning purposes

Academlo is a Python web framework built for learning purposes. It's a WSGI framework and can be used with any WSGI application server such as Gunicorn.

Installation

pip install academlo_gen_9

How to use it

Basic usage:

from academlo_gen_9.api import API

app = API()

@app.route("/home")
def home(request, response):
    response.text = "Hello from the HOME page"

@app.route("/hello/{name}")
def greeting(request, response, name):
    response.text = f"Hello, {name}"

@app.route("/book")
class BooksResource:
    def get(self, req, resp):
        resp.text = "Books Page"

    def post(self, req, resp):
        resp.text = "Endpoint to create a book"

@app.route("/template")
def template_handler(req, resp):
    resp.body = app.template(
        "index.html", context={"name": "academlo_gen_9", "title": "Best Framework"}
    ).encode()

Unit Tests

The recommended way of writing unit tests is with pytest. There are two built in fixtures that you may want to use when writing unit tests with academlo. The first one is app, which is an instance of the main API class:

def test_route_overlap_throws_exception(app):
    @app.route("/")
    def home(req, resp):
        resp.text = "Welcome Home."

    with pytest.raises(AssertionError):
        @app.route("/")
        def home2(req, resp):
            resp.text = "Welcome Home2."

The other one is client, which you can use to send HTTP requests to your handlers. It is based on the famous requests and it should feel very familiar:

def test_parameterized_route(app, client):
    @app.route("/{name}")
    def hello(req, resp, name):
        resp.text = f"hey {name}"

    assert client.get("http://testserver/matthew").text == "hey matthew"

Templates

The default folder for templates is templates. You can change it when initializing the main API() class:

app = API(templates_dir="templates_dir_name")

Then you can use HTML files in that folder like so in a handler:

@app.route("/show/template")
def handler_with_template(req, resp):
    resp.html = app.template(
        "example.html",
        context={"title": "Awesome Framework", "body": "welcome to the future!"},
    )

Static Files

Just like templates, the default folder for static files is static and you can override it:

app = API(static_dir="static_dir_name")

Then you can use the files inside this folder in HTML files:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>{{title}}</title>
    <link href="/static/main.css" rel="stylesheet" type="text/css">
</head>
<body>
    <h1>{{body}}</h1>
    <p>This is a paragraph</p>
</body>
</html>

Middleware

You can create custom middleware classes by inheriting from the academlo_gen_9.middleware.Middleware class and overriding its two methods that are called before and after each request:

from academlo_gen_9.api import API
from academlo_gen_9.middleware import Middleware

app = API()

class SimpleCustomMiddleware(Middleware):
    def process_request(self, req):
        print("Before dispatch", req.url)

    def process_response(self, req, res):
        print("After dispatch", req.url)

app.add_middleware(SimpleCustomMiddleware)
academy
Failed to fetch description. HTTP Status Code: 404
academy2AI
No description available on PyPI.
academy3AI
No description available on PyPI.
academyAI
No description available on PyPI.
academylib
No description available on PyPI.
academylibr
No description available on PyPI.
academylibrary
For The Academy4summer Course
academyruins
Failed to fetch description. HTTP Status Code: 404
acadia
acadia

An audio alternative to conventional diagrams for blind and visually impaired people.

Acadia stands for "accessible audio diagrams", a tool designed to assist blind and visually impaired people in getting access to representations of numeric data.

Introduction

When dealing with really big amounts of numbers, it is common practice to use some kind of plot or diagram. Indeed, with those graphic objects one can study numeric data in a very natural way, clearly and efficiently making vivid impressions of different entities through their shapes almost instantaneously. It is more than convenient in most cases, so an approach to data representation is considered to be strictly visual, without any alternative. Given that, there is a problem for those who happen to be blind or visually impaired.

Of course, with assistive technologies like a screen-reading program or a Braille display those users can get access to the data itself, but in the case of hundreds and thousands of numbers they are of little help. A solution that would provide blind people with an alternative to visual representation of numeric data is needed.

Obviously, if it cannot be visual, it has to be either tactile or aural. An aural approach looks more preferable, because it probably can be implemented without any additional hardware.

Concept formulation

Let us take a classic example of a two-dimensional linear diagram. All we need is to provide an audio alternative for its contour, defined by vertical and horizontal coordinates. For that purpose we can try to use the frequency of a signal and its position within the stereo basis. But the nature of aural perception is immanently temporal, whereas the nature of visual perception is spatial. And the problem is that the third dimension, time itself, cannot be fully excluded like the third dimension in the case of conventional charting.
We have to try to reduce it as much as possible if our goal is to emulate graphic representation, whose perception is almost instantaneous.

Note: We can theoretically have even a fourth dimension, expressed through sound pressure, so the audio representation of numeric data can potentially give us quite unusual and promising opportunities, even beyond the context of accessibility.

For the present, our goal is to create something like an "audio snapshot", assuming that several seconds would be quite enough to create a mental image of the data representation.

Subject area exploration

There is a project called the Accessible Graphs Project that exploits more or less the same idea. You can install a client with pip and then upload your data to a web application. Once it is done, a representation opens in a default browser. It is accessible not only with an audio interface, but also with a screenreader and Braille display. Though the main concept seems to be realized, there are some features that in our humble opinion decrease its practical value:

the number of values that can be represented is limited to 29 items
presence of an Internet connection is vital
valid data types are only list and dict
inability to integrate into third-party applications
the only supported OS is Windows

Also, although the "read entire graph" button is mentioned, it is absent, and you have a single option: to move through the representation manually, entry by entry. The latter could be a good extra feature, but only when combined with an ability to provide an instant impression. Once again, that ability is the very thing that makes visual plots so handy.
Now we should take those drawbacks into account and try to create a tool which

does not have any inherent limitations in terms of the amount of data being processed
does not need an Internet connection to work properly
supports numpy arrays
can be embedded into a Jupyter notebook
is OS-independent

And finally, we need to make our audio diagram as quick as a glance.

Data processing

Ok, so far so good; to get a result we just need to carry out a number of steps. The pipeline could look like this:

get the values for both dimensions, x and y
scale them to fit the range of audible frequencies and the width of the stereo basis (let us take the range between 220 and 7040 Hz and -45 and 45 degrees)
synthesize a mono signal (for example a sine wave with 44100 samples per second and duration of 10 or 50 milliseconds) for each value y
multiply the mono signal by the amplitude determined by value x in order to produce a stereo signal
build a collection of those signals
send it to the sound device

Solution structure

It was decided to implement the solution in an object-oriented manner and establish several classes encapsulating logic for

production of a single stereo signal (class Tone)
scaling coordinates (class Space)
transformation of the coordinates into a collection of stereo signals (class Shape), the key entity of the solution
highly specialized "bar chart" construction (class Histogram, a derivative of Shape)

There are also several public methods defined in the Shape class:

to_device() - to send values to the sound device using the sounddevice module
to_wav() - to create a wav-file from values using the soundfile module
switch(mode) - to switch the mode of the shape between fast and slow
add(x, y) - to add an embedded object of the same type

The SAMPLERATE constant contains the standard sampling rate value 44100 Hz.

Shape logic

Once we invoke Shape's constructor, it creates and configures an instance ready to produce the values. The process itself starts only when the 'values' property is called.
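A minimal sketch of the synthesis and panning steps listed in the pipeline above, assuming a pure-Python stand-in (plain lists instead of the library's numpy arrays; the function names here are illustrative, not Acadia's API):

```python
import math

SAMPLERATE = 44100

def tone(frequency, duration):
    # Sine wave with the given frequency (Hz) and duration (s),
    # mirroring the synthesis step of the pipeline.
    n = int(SAMPLERATE * duration)
    return [math.sin(2 * math.pi * frequency * i / SAMPLERATE) for i in range(n)]

def pan(values, deviation):
    # Constant-power panning; deviation is in radians within -pi/4 .. +pi/4,
    # i.e. the -45 .. +45 degree stereo basis mentioned above.
    k = math.sqrt(2) / 2.0
    left = [k * (math.cos(deviation) - math.sin(deviation)) * v for v in values]
    right = [k * (math.cos(deviation) + math.sin(deviation)) * v for v in values]
    return left, right

# a 10-millisecond 440 Hz segment panned hard left
left, right = pan(tone(440.0, 0.01), math.radians(-45))
```

At -45 degrees the right channel collapses to silence, which is what makes the horizontal coordinate audible.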
If there are embedded Shapes, the values are accessed recursively, and it makes no difference whether they are present or not and what the depth is. Inside the Tone class the values are also manufactured only when they are needed, so what we have is a very lightweight object which contains only its configuration.

The most important logic is encapsulated inside the private methods __tone() and __pan():

# synthesis of sine wave with given parameters
t = np.linspace(0, self.duration, int(SAMPLERATE * self.duration), False)
return np.sin(2 * np.pi * self.frequency * t)

# its panning
left = (
    np.sqrt(2) / 2.0
    * (np.cos(self.deviation) - np.sin(self.deviation))
    * values
)
right = (
    np.sqrt(2) / 2.0
    * (np.cos(self.deviation) + np.sin(self.deviation))
    * values
)
return np.vstack((left, right))

Usage

Ok, assume we already have values x and y for plotting. The first thing we do is call Shape's constructor, taking x and y as arguments. It also has an optional parameter "mode", which determines the length of each segment of our plot and consequently its final "appearance". There are two valid options: 'fast' (10 milliseconds) and 'slow' (50 milliseconds). Default is 'fast'. Then we can add as many sets of coordinates as needed, so we have several shapes through a single object. Finally, we can send our shape to_device() or to_wav(), switch() the mode or change its parameters and then repeat once again, or, in the case of a Jupyter notebook, just call the Audio() function with the values.

Examples

Jupyter or not

When in the context of a Jupyter notebook, it is strongly recommended to use the IPython.display.Audio() function instead of calling the to_device() method. That function places a play/pause control in the notebook right below the cell, so you can launch the sound any time you like without the necessity of repeated cell execution.
It takes two arguments - the values of a Shape object and the sampling rate (44100).

from IPython.display import Audio

sr = 1000
f = 7
x = np.arange(1000)
y = np.sin(2 * np.pi * f * x / sr)

sinusoid = Shape(x, y)
Audio(sinusoid.values, rate=44100)

# or replace just the last line like this:
sinusoid.to_device()

Histogram

To create a histogram, you can use the Shape object with proper x and y, yet there is an option to use an instance of the dedicated class Histogram. It inherits all the methods and attributes from Shape, but its constructor takes a distribution as an argument (and a variety of optional keyword arguments as well).

from scipy.stats import norm

normal_distribution = norm.rvs(size=10000)

histogram = Histogram(normal_distribution,
                      bins=100,
                      density=False,
                      smooth=False,
                      window_size=3,
                      mode='fast')

# or, with the same result
histogram = Histogram(normal_distribution)

# and either
Audio(histogram.values, rate=44100)
# or just
histogram.to_device()

Conclusion

Finally, we have a library that is designed to provide an audio representation of numeric data of various kinds without a graphic component. Its main features:

a simple, intuitive interface
the Shape object generates values that can be played via the sound device, sent to the Audio() function when in a Jupyter notebook, or saved into a wav-file
the amount of data processed is limited only by the characteristics of the hardware and the resulting time capacity
OS-independence
scalable structure

It can be applied as a kind of assistive technology in education, for teaching topics regarding trigonometry, mathematical statistics, probability theory and so on to blind students, within the framework of academic and applied studies in data science, data analysis, economics, sociology, and anywhere else where it is necessary to represent big amounts of numbers, as a substitute for conventional diagrams.

We sincerely hope that the tool will be helpful for blind and visually impaired people around the world and that the project has a bright future.
We are going to continue to do our best to make it better, and there are already some further steps in the pipeline (of course, the four-dimensional diagram is the most intriguing). We are always open to any form of collaboration and kindly appreciate any suggestions for possible improvement, help and feedback.
acai-aws
Acai AWS

DRY, configurable, declarative Python library for working with Amazon Web Service Lambdas.

Features

Highly configurable apigateway internal router
OpenAPI schema adherence for all event types
Extensible and customizable middleware for validation and other tasks
DRY coding interfaces without the need of boilerplate
Ease-of-use with the serverless framework
Local development support
Happy Path Programming (see Philosophy below)

Philosophy

The Acai philosophy is to provide a DRY, configurable, declarative library for use with Amazon Lambdas, which encourages Happy Path Programming (HPP).

Happy Path Programming is an idea in which inputs are all validated before being operated on. This ensures code follows the happy path without the need for mid-level, nested exceptions and all the nasty exception handling that comes with that. The library uses layers of customizable middleware options to allow a developer to easily dictate what constitutes a valid input, without nested conditionals, try/catch blocks or other coding blocks which distract from the happy path that covers the majority of that code's intended operation.

Installation

This is a python module available through the pypi registry.

Before installing, download and install python. Python 3 or higher is required.

Installation is done using the pip install command:

$ pip install acai_aws
# pipenv install acai_aws
# poetry add acai_aws

Documentation & Examples

Full Docs
Tutorial
Examples

Contributing

If you would like to contribute, please make sure to follow the established patterns and unit test your code:

Unit Testing

To run unit tests, enter the commands:

$ pipenv install
$ pipenv run test
acal
No description available on PyPI.
acalang
Aca

Aca, a functional programming language, and shitty toy.

Aca is a toy functional programming language initially inspired by ISWIM. The interpreter is currently written in Python.

Install

$ pip install acalang

Example

Command line usage

$ cat foo.aca
let main = dechurch 3
$ aca foo.aca
3
$ aca foo.aca -S   # `noeval' mode
(lambda x: dechurch(x))((lambda x: x)((lambda f: lambda x: (f(f(f(x)))))))
$ aca      # REPL
$ aca -S   # REPL with `noeval'

Lambda calculus

let main = (\x y f. f x y)

Standard library for basic datatypes

let a = zero
let b = succ zero
let main = dechurch b    -- 1

Sugar for Church numerals

let main = 0
-- This is identical to
{-
let main = (\x . x)
-}

Special builtin functions for debugging (decoded into Python value)

let main = dechurch 42            -- Output: 42
let main = debool true            -- Output: True
let main = dereal (neg (u2i 42))  -- Output: -42

Simple module import with use

$ foo.aca
let foo = 42
$ bar.aca
use foo
let main = dechurch foo
$ aca bar.aca
42

Goals

Before v1.0.0: Untyped lambda calculus
v1.0.0: Simply typed lambda calculus
v2.0.0: System F

License

MIT
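The Church encoding behind this sugar and the dechurch builtin can be illustrated in Python. This is a conceptual sketch of the encoding, not the interpreter's actual implementation:

```python
def church(n):
    # Church numeral n: a function applying f to x exactly n times,
    # e.g. church(3) corresponds to \f x. f (f (f x)) in Aca.
    def numeral(f):
        def apply(x):
            for _ in range(n):
                x = f(x)
            return x
        return apply
    return numeral

def dechurch(c):
    # Decode by counting applications of a successor function,
    # like Aca's `dechurch` builtin decodes into a Python value.
    return c(lambda k: k + 1)(0)

print(dechurch(church(42)))  # prints 42
```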
acalc
No description available on PyPI.
acalib
UNKNOWN
acall
acall

Developer Guide

Setup

# create conda environment
$ mamba env create -f env.yml

# update conda environment
$ mamba env update -n acall --file env.yml

Install

pip install -e .

# install from pypi
pip install acall

nbdev

# activate conda environment
$ conda activate acall

# make sure the acall package is installed in development mode
$ pip install -e .

# make changes under nbs/ directory
# ...

# compile to have changes apply to the acall package
$ nbdev_prepare

Publishing

# publish to pypi
$ nbdev_pypi

# publish to conda
$ nbdev_conda --build_args '-c conda-forge'
$ nbdev_conda --mambabuild --build_args '-c conda-forge -c dsm-72'

Usage

Installation

Install latest from the GitHub repository:

$ pip install git+https://github.com/dsm-72/acall.git

or from conda

$ conda install -c dsm-72 acall

or from pypi

$ pip install acall

Documentation

Documentation can be found hosted on GitHub repository pages. Additionally you can find package manager specific guidelines on conda and pypi respectively.
a-calver-test
No description available on PyPI.
acamodels
datamodels

Datamodels based on pydantic used in Python tools developed at Aarhus City Archives.

Structure

Each model is placed in a separate .py file in order to achieve maintainability and better version control. In addition, each model must be served in __init__.py such that it is possible to call from acamodels import model.

Versioning

Updating one or more models is considered a patch version, e.g. 0.1.0 -> 0.1.1
Adding new models is considered a minor version, e.g. 0.1.0 -> 0.2.0
Major versions will be pushed when models have reached a yet to be determined mature stage.
acampos-cli
Failed to fetch description. HTTP Status Code: 404
acampreq
This is a post project provided for A营 users (it can post users, studios, tasks, and works). Use at least version 2.0! Ancient versions may contain serious bugs!
acanban
Acanban

Acanban is an academic Kanban board. It aims to provide a collaboration platform for students and mentors, with first-class support for academic evaluation.

Prerequisites

Acanban runs on Python 3.7+ and requires RethinkDB and IPFS 0.7 or above.

Setup

The development version of Acanban can be installed from this git repository:

python -m pip install git+https://github.com/Huy-Ngo/acanban

Acanban can then be invoked via python -m acanban. In production, it is typical to run it as a systemd service, configured as follows.

[Unit]
Description=The Acanban Server
After=network.target

[Service]
ExecStart=/path/to/venv/bin/python -m acanban
Restart=on-failure

[Install]
WantedBy=default.target

Configuration

Acanban's configuration files are looked for in the user and site config dirs (in that order), under the acanban namespace.

With third-party configuration separated into dedicated files, acanban.toml may define the following keys:

# Used for HTTP to HTTPS redirection if defined
domain = 'example.com'

Hypercorn

Hypercorn configuration is loaded from hypercorn.toml. Acanban does not override any of the server's defaults.

RethinkDB

Values defined in rethinkdb.toml will be passed to the RethinkDB client. Below are some of the fields that commonly need to be configured and their default values:

host = 'localhost'
port = 28015
user = 'admin'
timeout = 20
db = 'test'

IPFS

Custom ipfs.toml must define the following keys, whose default values are listed as follows:

[api]
base = 'http://127.0.0.1:5001/api/v0'

[gateway]
base = 'http://127.0.0.1:8080/ipfs'
fallback = 'https://ipfs.io'

Hacking

First clone the repository and install Acanban as editable:

git clone https://github.com/Huy-Ngo/acanban
cd acanban
python -m pip install flit
flit install --symlink

After playing around with the source code, one can use tox to test the modified version:

python -m pip install tox
tox
acanthophis
AcanthophisA reusable, comprehensive, opinionated Snakemake pipeline for plant-microbe genomics and plant variant calling.DocumentationFor documentation, see./documentation.md. In summary:# create conda env, activate itmambacreate-nsomeprojectpythonsnakemakepipnatsort mambaactivatesomeproject# Install acanthophis itselfpipinstallacanthophis# Generate a workspace. This copies all files the workflow will need to your# workspace directory.acanthophis-init/path/to/someproject/# Edit config.yml to suit your project. Hopefully this config file documents# all options available in an understandable fashion. If not, please raise an# issue on github.vimconfig.yml# Run snakemakesnakemake-j16-p--use-conda--conda-frontendmamba--ri# Or on a cluster, see acanthophis-init --list-available-profilessnakemake--profile./ebio-cluster/CompatibilityWhile Snakemake and Acanthophis are cross-platform, most of the underlying tools are only packaged for and/or only operate on Linux x86_64. Therefore, only users on Linux systems are supported. In theory, everything should run on OSX or WSL, but the vast majority of users will want to utilise a high-performance Linux workstation (or, more likely, a cluster).Contribution & AssistanceIf you have anything (advice, docs, code) you'd like to contribute, pull requests are more than welcome. Please discuss any major contribution in a new issue before implementing it, to avoid wasted effort.If you need any assistance, or have other questions or comments, please make an issue on github, or open a discussion. Unfortunately both need an account on github, so alternatively you can email me (foss <usual email symbol> kdmurray.id.au).About & AuthorsThis is an amalgamation of several pipelines developed between theWeigel Group, MPI DB, Tübingen, DE, theWarthmann Group, IAEA/FAO PBGL, Seibersdorf, ATand theBorevitz Group, ANU, Canberra, AU. This amalgamation was authored by Dr. K. D. Murray; the original code is primarily by K. D. Murray and Norman Warthmann, with contributions from others at the aforementioned institutes.
acapdf
This is the HomePage of our Project.
acapela-box
acapela_boxUnofficial API for Acapela Box online voices.Installingpip install https://github.com/alekssamos/acapela_box/archive/refs/heads/master.zipUsingSee the file ./example1.py, or use it from the command line:$acapela_box"fr-FR""AntoineFromAfar""Ce module est développé par un français."https://vaasbox.acapela-box.com/MESSAGES/013099097112101108097066111120095086050/Listen.php?q=NiivxaaFRPDRk2FD7Lfg!CV7fN MzZD!aYmkMKE_u1OIUZmfHJWx1IW09pewrMyoZ.mp3This library is compatible withGe0/acapela-group
acapela-downloader-py
Acapela Downloader but in PythonTranslated to Python from C#:https://github.com/weespin/WillFromAfarDownloaderHow to useFirst, install the pip package:pip install acapela-downloader-pyThen use the 'generate_audio' function from the 'acapela' library:fromacapelaimportgenerate_audiogenerate_audio("Great Test","enu_willhappy_22k_ns.bvcu","output.mp3")Other StuffIf you find any code from StupidTown in my package, feel free to make a pull request.
acapelladb
DescriptionAn asynchronous Python client for the AcapellaDB key-value database.Usage examplesTo get started, create a session:>>>session=Session(host='localhost',port=12000)Basic GET/SET operations on keys are performed with the Entry class:>>># create an Entry object; keys are arrays of strings>>>entry=session.entry(["foo","bar"])>>># set a value>>>await entry.set("value 1")>>># set a value with a compare-and-set condition on the version>>>await entry.cas("value 2")>>># get the value by key and store it in an Entry>>>entry=await session.get_entry(["foo","bar"])>>>print(f'value = "{entry.value}", version ={entry.version}')value="value 2",version=2To store complex data structures, keys have two parts: partition and clustering. The former is used to distribute data across the cluster. All clustering keys within a single partition key are stored together on each replica, which makes range selections and batch requests possible.Example of working with a list of users inside one partition:>>># create the list>>>await session.entry(partition=["users"],clustering=["first"]).set({>>>'age':25>>>})>>>await session.entry(partition=["users"],clustering=["second"]).set({>>>'age':32>>>})>>>await session.entry(partition=["users"],clustering=["third"]).set({>>>'age':21>>>})>>># select all users>>>data=await session.range(partition=["users"])>>>for e in data:>>>print(f'{e.key[0]}:{e.value.age}')first:25second:32third:21>>># select the first 2 users>>>data=await session.range(partition=["users"],limit=2)>>>for e in data:>>>print(f'{e.key[0]}:{e.value.age}')first:25second:32Example of working with a queue:>>># write events to the queue, 10 at a time>>>for i in range(10):>>># writes are collected into a batch, then one request is executed>>>batch=BatchManual()>>>for i in range(10):>>>key=str(uuid1())>>>e=session.entry(partition=["queue-1"],clustering=[key])>>>await e.set(value=i,batch=batch)>>># send the batch>>>await batch.send()>>># read events from the queue, 10 at a time>>>first=[]>>>for i in range(10):>>>data=await session.range(partition=["queue-1"],first=first,limit=10)>>>for e in data:>>>print(f'{e.key}:{e.value}')>>>first=data[len(data)-1].key>>>['be2a5d92-8cc0-11e7-8bb2-40e230b5623b']:0['be2a6058-8cc0-11e7-8bb2-40e230b5623b']:1['be2a61f2-8cc0-11e7-8bb2-40e230b5623b']:2...['be2ae000-8cc0-11e7-8bb2-40e230b5623b']:99>>># select all events within a given interval of time>>>data=await session.range(>>>partition=["queue-1"],>>>first=['be2a61f2-8cc0-11e7-8bb2-40e230b5623b'],>>>last=['be2a7fac-8cc0-11e7-8bb2-40e230b5623b']>>>)The Tree and Cursor classes are used to work with trees (DT, Distributed Tree):>>># create a tree>>>tree=session.tree(["test","tree"])>>># populate the tree>>>await tree.cursor(["a"]).set("1")>>>await tree.cursor(["b"]).set("2")>>>await tree.cursor(["c"]).set("3")>>>await tree.cursor(["d"]).set("4")>>>await tree.cursor(["e"]).set("5")>>># get the next key in the tree>>>next=await tree.cursor(["b"]).next()# next.key = ["c"]>>># get the previous key in the tree>>>prev=await tree.cursor(["b"]).prev()# prev.key = ["a"]>>># select data within the given bounds>>>data=await tree.range(first=["a"],last=["d"],limit=2)>>>print([e.key for e in data])[['b'],['c']]A transactional mode is also available for all operations. Transactions can be used in two ways:as a context manager>>>async with session.transaction() as tx:>>># use the transaction>>>entry=await tx.get_entry(["foo","bar"])>>>await entry.cas("value 3")in "manual" mode, where commit/rollback must be called explicitly when finishing work with the transaction>>># create a transaction>>>tx=await session.transaction_manual()>>>try:>>># use the transaction>>>entry=await tx.get_entry(["foo","bar"])>>>await entry.cas("value 3")>>># commit if no exception occurred>>>await tx.commit()>>>except Exception:>>># rollback if any error occurred>>>await tx.rollback()More usage examples can be found in the tests.
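The queue-draining loop above relies on keyset pagination: each range call starts strictly after the last key seen (`first`) and returns at most `limit` entries. A self-contained sketch of that access pattern over an in-memory sorted store — the real calls go to an AcapellaDB server; the `range_after` helper below is hypothetical:

```python
from bisect import bisect_right

def range_after(store, first=None, limit=10):
    """Return up to `limit` (key, value) pairs with key strictly greater than
    `first`, mimicking session.range(partition=..., first=..., limit=...)."""
    keys = sorted(store)
    start = 0 if first is None else bisect_right(keys, first)
    return [(k, store[k]) for k in keys[start:start + limit]]

# Stand-in "queue": lexicographically ordered keys, like the uuid1 keys above.
store = {f"evt-{i:03d}": i for i in range(25)}

# Drain the queue page by page, as in the README's read loop.
seen, first = [], None
while True:
    page = range_after(store, first=first, limit=10)
    if not page:
        break
    seen.extend(v for _, v in page)
    first = page[-1][0]  # next page continues strictly after the last key
```

Because each page resumes after the last key rather than by offset, the loop stays correct even if new entries are appended between requests.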
acapi
Python Library for Acquia's Cloud APIThis is a Python client for using theAcquia Cloud API.InstallationInstalling with pip (recommended)pip install acapiManual Installation$ git clone [email protected]:skwashd/python-acquia-cloud.git acapi $ cd acapi $ ./setup.py build && ./setup.py installExamplesimportacapifrompprintimportpprint# Acquia subscription name.subname='example'# Website domain.domain='example.com'# Instantiate client using environment variables.# Set ACQUIA_CLOUD_API_USER and ACQUIA_CLOUD_API_TOKEN accordingly.c=acapi.Client()# Get the site object.site=c.site(subname)# Get the environments object.envs=site.environments()# Print all environments on a subscription.pprint(envs)# List the SSH host for each environment.forenvinenvs:print("Env:{env}SSH Host:{host}".format(env=env,host=envs[env]['ssh_host']))# Move a domain from stage to production.envs['prod'].domain(domain).move('test')# Backup the development environment database and download the dump file.site.environment('dev').db(subname).backups().create().download('/tmp/backup.sql.gz')This library was created and maintained byDave Hall.SeeLICENSE.
acapi2
Python Acquia Cloud API v2Python client library to communicate with theAcquia Cloud API V2.Pablo Fabregat-LicenseDeprecation notice:The following items will be removed in 2.0.3:Support for environment variablesACQUIA_CLOUD_API_KEYandACQUIA_CLOUD_API_SECRET; how the credentials are provided to the library is the responsibility of the user.I've decided not to remove this, for now.Tasks object; the Acquia API is deprecating this as well; the Notifications object should be used insteadExamplesPlease bear in mind that the library is being actively developed and most of its functionality is just a reduced set of what it should be.Minimal requestacquia=Acquia(api_key,api_secret)application=acquia.application("a47ac10b-58cc-4372-a567-0e02b2c3d470")print(application["name"])Using filterssubscription_name="MySubsName"filters="name="+subscription_nameapplication=acquia.applications(filters=filters).first()dev_environment=application.environments()["dev"]print(dev_environment["id"])dev_environment.set_php_version("7.0")more_settings={"max_execution_time":10,"memory_limit":192,"apc":128,"max_input_vars":1000,"max_post_size":256,"sendmail_path":"/usr/bin/sendmail","varnish_over_ssl":False}dev_environment.configure(more_settings)Notificationsacapi2 now supportsthe notifications endpointWhenever an action is executed (e.g. a code import), the API will return a uuid for its corresponding task status (notification); this can be used to check on the status of the task itself.notif_uuid="d82a122d-b7b8-46fc-9999-39cb824fac8d"notification=acquia.notification(notif_uuid)print(notification.data)You can also check on thecurrent notifications for a specific applicationfilters="name=@*myapp*"app=acquia.applications(filters=filters).first()notifications=app.notifications()foruuid,notificationinnotifications.items():print(notification.data)RoadmapCurrent version:2.0.32.0.12.x becomes the default repository branch,Out of the beta status,Notifications support,Code coverage increase,Clean up the original code a bit.Support for backups.2.0.2Small release to put back support of credentials in environment variables, which is now being announced as deprecated.2.0.3Tasks endpoint removal (you should use notifications),Credential environment variables removal,Wait until a notification completes,More support for log forwarding2.0.4Minor release: Added support for DB Backup Downloads2.0.5Credential environment variables removal (now for real :) )I've decided not to remove this, for now,Taken overhttps://pypi.org/project/http-hmac-pythonsince it disappeared.Support for environment cron operations2.0.6Distributions endpoint support,Messages endpoint support,Better exceptions handling.CreditsThis library was originally based on the Acquia API Python Library created by Dave Hall (http://github.com/skwashd/python-acquia-cloud)
acappella-info
No description available on PyPI.
acaps
No description available on PyPI.
acaptain
This package provides local database integration, Arduino sensor handling, and a UI for remote access to sensors
acapture
https://github.com/aieater/python_acaptureacapture (async capture python library)Descriptionacapture is a Python camera/video capture library for realtime use. When Python apps implement video, webcam, or screenshot capture, the frame rate is often too low and performance suffers. In addition, Python is not an event-driven architecture and always runs into I/O blocking problems, which impedes parallelism. The acapture (AsynchronousCapture) library provides an async video/camera capture implementation that solves those blocking and performance problems.The acapture library is a useful replacement for the OpenCV VideoCapture API.OpenCV has a blocking problem:import cv2 cap = cv2.VideoCapture(0) check,frame = cap.read() # blocking!! and depends on camera FPS.The acapture library solves that blocking problem in realtime apps:import acapture cap = acapture.open(0) check,frame = cap.read() # non-blockingAlso see the 'pyglview' package. The OpenCV3 renderer is too slow due to cv2.waitKey(1). If you want more performance, you should use OpenCV4+ or the 'pyglview' package:https://github.com/aieater/python_glview.gitThat package supports both a fast OpenGL direct viewer and an OpenCV renderer. If your environment does not support OpenGL, it switches to the CPU renderer (OpenCV) automatically and also works over remote desktop (X server) setups such as VNC.Getting StartedBase libraries on Ubuntu 16.04LibraryinstallationCamerasudo apt install -y libv4l-dev libdc1394-22 libdc1394-22-dev v4l-utilsVideosudo apt install -y ffmpeg libavcodec-dev libavformat-dev libswscale-dev libxine2-dev libfaac-dev libmp3lame-dev mplayerBase libraries on MacOSXLibraryinstallationBrew/usr/bin/ruby -e "$(curl -fsSLhttps://raw.githubusercontent.com/Homebrew/install/master/install)"Camera-Videobrew install ffmpeg mplayerPython package dependenciesVersionLibraryinstallationv3.x/v4.xOpenCVpip3 install opencv-pythonv4.xmsspip3 install mssv1.1x.xnumpypip3 install numpyv1.9.xpygamepip3 install pygamev3.7.xconfigparserpip3 install configparserFinally, install acapture.pip3 install acaptureExamplesVideo stream (Async)import acapture import cv2 cap = acapture.open("test.mp4") while True: check,frame = cap.read() # non-blocking if check: frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) cv2.imshow("test",frame) cv2.waitKey(1)Video frames (Async)cap = acapture.open("test.mp4",frame_capture=True)Camera stream (Async)import acapture import cv2 cap = acapture.open(0) # /dev/video0 while True: check,frame = cap.read() # non-blocking if check: frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) cv2.imshow("test",frame) cv2.waitKey(1)Screenshot stream (Sync)import acapture import cv2 cap = acapture.open(-1) while True: check,frame = cap.read() # blocking if check: frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) cv2.imshow("test",frame) cv2.waitKey(1)Directory images (Sync)import acapture import cv2 cap = acapture.open("images/") while True: check,frame = cap.read() # blocking if check: frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) cv2.imshow("test",frame) cv2.waitKey(1)Unit image (Preloaded)import acapture import cv2 cap = acapture.open("images/test.jpg") while True: check,frame = cap.read() if check: frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) cv2.imshow("test",frame) cv2.waitKey(1)Extract video to jpg images.import acapture acapture.extract_video2images("test.mp4",format="jpg",quality=2)APIsVersionFunctionRequiredDescriptionv1.0open(f,**kargs)fOpen stream. [-1=>screenshot], [0=>camera0], [1=>camera1], [dirpath=>images], [path=>image],[path=>video]v1.0extract_video2images(path,**kargs)pathExtract video to images.v1.0camera_info()Display camera information on Ubuntu.v1.0compress_images2video(path,**kargs)pathMake video from images.v1.0extract_video2audio(f)pathExtract audio file as mp3.v1.0join_audio_with_video(vf,af)vf, afJoin video file and audio file.Also see the 'pyglview' package (https://github.com/aieater/python_glview.git), noted above: it supports both a fast OpenGL viewer and an OpenCV renderer, falling back to the CPU renderer (OpenCV) when OpenGL is unavailable, including over remote desktop (X server) setups like VNC.acapture + pyglview + webcamera example.import cv2 import acapture import pyglview viewer = pyglview.Viewer() cap = acapture.open(0) # Camera 0, /dev/video0 def loop(): check,frame = cap.read() # non-blocking if check: frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB) viewer.set_image(frame) viewer.set_loop(loop) viewer.start()The Logicool C922 supports 1280x720 (HD) at 60 FPS; that camera with the OpenGL direct renderer is best practice. The Logicool BRIO 90 FPS camera is also good, but a little expensive.LicenseThis project is licensed under the MIT License - see theLICENSEfile for details
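The non-blocking read() that acapture provides can be pictured as a background thread that keeps grabbing frames while read() simply hands back the most recent one. Below is a minimal sketch of that pattern with a stand-in frame source — the AsyncCapture class is illustrative only, not acapture's actual implementation:

```python
import threading
import time

class AsyncCapture:
    """Background grabber: read() never blocks on the source's frame rate."""
    def __init__(self, source):
        self._source = source
        self._latest = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        # The grabber thread pulls frames as fast as the source produces them.
        for frame in self._source:
            if not self._running:
                break
            self._latest = frame  # overwrite: readers always see the newest frame

    def read(self):
        # Returns immediately, regardless of how slow the source is.
        frame = self._latest
        return frame is not None, frame

    def release(self):
        self._running = False
        self._thread.join(timeout=1)

# Stand-in "camera" producing numbered frames instead of images.
cap = AsyncCapture(iter(range(1000)))
time.sleep(0.05)            # give the grabber thread a moment to run
check, frame = cap.read()   # non-blocking, like acapture's read()
cap.release()
```

This is why a slow camera cannot stall the render loop: the blocking wait happens in the grabber thread, and the consumer only ever copies out the latest frame.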
acapy
acapy package (Python bindings for AVALDATA AcapLib2 )IndexInstallationRequirementLICENSEAcaPy ClassExamplesConstructorsPropertiesMethodsGraphicsBox ClassConstructorsPropertiesMethodsChangelogInstallationpip install acapyRequirementAcapLib2 (AVALDATA SDK)numpyPillowNote: OpenCV is required to run the samples.LICENSEBSD 3-Clause License Copyright (c) 2021, AVALDATA All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.AcaPy ClassA class that makes AcapLib2 usable from Python.ExamplesSnap sampleRepeatedly acquires a single image. For fast continuous image capture, see the Grab sample.import sys import cv2 import acapy # Instance of the AcaPy class capture=acapy.AcaPy() if capture.is_opened==False: # Exit if the capture board is not found or another application is using it del(capture) sys.exit(0) # Load the ini file (board configuration file) # Specify the ini file for the camera actually in use. capture.load_inifile("./AreaSensor_mono.ini") while(True): # Acquire one image ret,frame=capture.snap() # For color, the order is BGR (compatible with OpenCV image data) # Display the image cv2.imshow("Image",frame) if cv2.waitKey(1)>0: # Wait for key input break cv2.destroyAllWindows()Grab sampleCaptures images continuously.import sys import cv2 import acapy # Instance of the AcaPy class capture=acapy.AcaPy() if capture.is_opened==False: # Exit if the capture board is not found or another application is using it del(capture) sys.exit(0) # Load the ini file (board configuration file) # Specify the ini file for the camera actually in use. capture.load_inifile("./AreaSensor_mono.ini") # Example of changing mem_num #capture.mem_num = 10 # Start grab (continuous image capture) capture.grab_start() while(True): # Get frames from the one after the previously acquired frame up to the current frame ret,frames,count,frame_no=capture.read_frames() # For color, the order is BGR (compatible with OpenCV image data) if ret<0: # Image memory has been overwritten # Increase USER_MEMORY_NUM in the ini file or the mem_num property of the AcaPy class pass elif ret==acapy.AcaPy.ERROR: # Stop the grab loop when a frame-end timeout occurs break # Display the last acquired image cv2.imshow("Image",frames[count-1]) if cv2.waitKey(1)>0: # Wait for key input break # Stop grab (continuous image capture) capture.grab_stop() cv2.destroyAllWindows()ConstructorsOpens the image input board for the given board number and channel number. Check the is_opened property to confirm whether the board actually opened.def __init__(self, board_id: int = 0, ch: int = 1, debug_print: bool =
True):パラメータ名前型説明board_idint0以上のボード番号を指定します。chint1以上のチャンネル番号を指定します。debug_printboolTrueを指定した場合、AcaPyクラスメソッドにエラー等が発生した場合、コンソールにエラー情報が表示されます。Falseを指定すると何も表示されません。Propertiesプロパティの値は、値を設定後、refrect_param()メソッドを実行すると設定が画像入力ボードに反映されます。 refrect_param()メソッドを実行し忘れた場合、grab_start()メソッド内でrefrect_param()メソッドが実行されます。PropertiesDescriptoinGet/Setini parameteracaplib2_versionAcapLib2のDLLバージョンを取得します。●/-a_cw_ccwエンコーダA相の回転方向を取得します。●/-b_cw_ccwエンコーダB相の回転方向を取得します。●/-bayer_enableベイヤー変換の有効/無効を取得・設定します。●/●USER_BAYER_ENABLEbayer_gridベイヤー変換開始位置のパターンを取得・設定します。●/●USER_BAYER_GRIDbayer_input_bitBayer画像のBit数を取得・設定します。●/●USER_BAYER_INPUT_BITbayer_lut_data編集するLUTデータ配列(リスト)を取得・設定します。●/●bayer_lut_editBayerLUTの編集を開始・停止します。●/●bayer_output_bitBayer変換後のBit数を取得・設定します。●/●USER_BAYER_OUTPUT_BITboard_bitボードで確保する画像用メモリのビット幅を取得します。●/-USER_GRABBER_BIT_DEPTHboard_id初期化されたボードIDを取得します。●/-board_errorボードエラーを取得・クリアします。●/●board_name初期化されたボード名を取得します。●/-board_temp基板温度を取得します。●/-buffer_zero_fill初期化時にバッファをゼロクリアするかを取得・設定します。●/●camera_bitカメラから入力する画像のビット数を取得・設定します。●/●USER_CAMERA_BIT_DEPTHcamera_stateカメラの接続状態を取得します。●/-capture_flagキャプチャ状況を取得します。●/-cc_cycleCC信号の周期[usec]を取得・設定します。●/●USER_SW_TRIGGER_CYCLEcc_cycle_exCC信号の周期[100nsec]を取得・設定します。●/●USER_SW_TRIGGER_CYCLEcc_delayCC信号の出力遅延時間[usec]を取得・設定します。●/●USER_CC_DELAYcc_enableCC信号出力の有効(1)/無効(0)を取得・設定します。●/●USER_SW_TRIGGER_ENABLEcc_out_noCC信号の番号(1~4)を取得・設定します。●/●USER_CL_CC_OUT_NOcc_polarityCC信号の論理を取得・設定します。●/●USER_SW_TRIGGER_POLARITYcc1_polarityCC1信号の出力レベルを取得・設定します。●/●USER_CL_CC1_POLARITYcc2_polarityCC2信号の出力レベルを取得・設定します。●/●USER_CL_CC2_POLARITYcc3_polarityCC3信号の出力レベルを取得・設定します。●/●USER_CL_CC3_POLARITYcc4_polarityCC4信号の出力レベルを取得・設定します。●/●USER_CL_CC4_POLARITYcc_stop画像入力停止後、CC出力を行うかの設定を取得・設定します。●/●USER_CC_STOPcancel_initialize初期化と内部バッファ確保の設定を取得・設定します。●/●USER_CANCEL_INITIALIZEch初期化されたチャンネル番号を取得します。●/-channel_numボードの総チャンネル数を取得します。●/-chatter_separate外部トリガ検出無効時間の設定方法を取得・設定します。●/●USER_EXTERNAL_TRIGGER 
_CHATTER_SEPARATEcount_ccCC信号の出力回数を取得します。●/-count_exttrigEXTTRIG(外部トリガ)信号の入力回数を取得します。●/-count_fvalFVAL信号の入力回数を取得します。●/-count_lvalLVAL信号の入力回数を取得します。●/-customFPGAカスタマイズが行われた番号を取得します。●/-cxp_acquision _start_addressCXP規格のAcquisitionStartのアドレスを取得・設定します。●/●USER_CXP_ACQUISION _START_ADDRESScxp_acquision _start_valueCXP規格のAcquisitionStartのアドレスに指定する値を取得・設定します。●/●USER_CXP_ACQUISION _START_VALUEcxp_acquision _stop_addressCXP規格のAcquisitionStopのアドレスを取得・設定します。●/●USER_CXP_ACQUISION _STOP_ADDRESScxp_acquision _stop_valueCXP規格のAcquisitionStopのアドレスに指定する値を取得・設定します。●/●USER_CXP_ACQUISION _STOP_VALUEcxp_bitrateCXPのビットレートを取得・設定します。●/●USER_CXP_PHY_BITRATEcxp_link_speedCXPのリンク速度を取得・設定します。●/●USER_CXP_PHY_LINKSPEEDcxp_pixel_formatCXP規格のPixelFormatアドレスに指定する値を取得・設定します。●/●USER_CXP_PIXEL_FORMATcxp_pixel _format_addressCXP規格のPixelFormatアドレス値を取得・設定します。●/●USER_CXP_PIXEL _FORMAT_ADDRESSdata_mask_lowerカメラリンクポートA~Dのマスク値を取得・設定します。●/●USER_DATA_MASK_LOWERdata_mask_upperカメラリンクポートE~Hのマスク値を取得・設定します。●/●USER_DATA_MASK_UPPERdevice実装されているデバイス名を取得します。●/-dval_enableカメラデータ入力時のDVAL信号参照の有効(1)/無効(0)を取得・設定します。●/●USER_DVAL_SIGNAL_ENABLEencoder_abs_countエンコーダ絶対カウンタ値を取得します。●/-encoder_abs_start絶多値エンコーダの開始/停止の制御値を取得・設定します。●/●encoder_agr_count相対位置エンコーダ使用時の一致パルス数を取得します。●/-encoder_all_count相対位置エンコーダ使用時の総カウント値を取得します。●/-encoder_compare_reg_1エンコーダ比較レジスタ1の値を取得・設定します。●/●USER_ENCODER _COMPARE_REG_1encoder_compare_reg_2エンコーダ比較レジスタ2の値を取得・設定します。●/●USER_ENCODER 
_COMPARE_REG_2encoder_countエンコーダ相対カウンタ値を取得します。●/-encoder_directionエンコーダ入力パルス方向を取得・設定します。●/●USER_ENCODER_PULSEencoder_enableエンコーダの有効/無効/使用方法を取得・設定します。●/●USER_ENCODER_ENABLEencoder_modeエンコーダ動作モードを取得・設定します。●/●USER_ENCODER_MODEencoder_phaseエンコーダ入力パルス選択を取得・設定します。●/●USER_ENCODER_PHASEencoder_start外部トリガをエンコーダで使用する方法を取得・設定します。●/●USER_ENCODER_STARTencoder_z_phaseエンコーダ起動にZ相を使用するかの設定を取得・設定します。●/●USER_ENCODER_Z_PHASEexposureCC信号の出力幅(露光時間)[usec]を取得・設定します。●/●USER_SW_TRIGGER_WIDTHexposure_exCC信号の出力幅(露光時間)[100nsec]を取得・設定します。●/●USER_SW_TRIGGER_WIDTHexpress_linkPCI-Expressバスに接続されているリンク幅を取得します。●/-external_trigger_chatter外部トリガ検出無効時間[usec]を取得・設定します。●/●USER_EXTERNAL_TRIGGER _CHATTERexternal_trigger_delay外部トリガ検出遅延時間[usec]を取得・設定します。●/●USER_EXTERNAL_TRIGGER _DELAYexternal_trigger_enable外部トリガに使用する信号選択を取得・設定します。●/●USER_EXTERNAL_TRIGGER _ENABLEexternal_trigger_mode外部トリガを使用してCC信号を出力する方法を取得・設定します。●/●USER_EXTERNAL_TRIGGER _POLARITYfifo_fullFIFOステータスを取得します。●/-fpga_tempFPGA温度を取得します。●/-fpga_versionFPGAバージョンを取得します。●/-freq_aエンコーダA相の周波数(Hz)を取得します。●/-freq_bエンコーダB相の周波数(Hz)を取得します。●/-freq_fvalFVALの周波数(kHz)を取得します。●/-freq_lvalLVALの周波数(kHz)を取得します。●/-freq_ttl1TTL1の周波数(Hz)を取得します。●/-freq_ttl2TTL2の周波数(Hz)を取得します。●/-freq_zエンコーダZ相の周波数(Hz)を取得します。●/-gpin_polGP_INのレベルをBit情報で取得します。●/-gpin_pin_selGP_INピン割込みピンを取得・設定します。●/●USER_GPIN_SELgpout_polGP_OUTピンの出力レベルを取得・設定します。●/●USER_GPOUT_POLgpout_selGP_OUTピンの出力設定を取得・設定します。●/●USER_GPOUT_SELhandle初期化されたボードハンドルを取得します。●/-height画像入力サイズの高さを取得・設定します。●/●USER_Y_SIZE, USER_Y_TOTAL_SIZEinfrared_enableRGBIの有効/無効を取得・設定します。●/●USER_INFRARED_ENABLEinterval_exttrig_1認識した外部トリガ間隔の時間(カウント値)の最新の値を取得します。●/-interval_exttrig_2認識した外部トリガ間隔の時間(カウント値)の2番目に新しい値を取得します。●/-interval_exttrig_3認識した外部トリガ間隔の時間(カウント値)の3番目に新しい値を取得します。●/-interval_exttrig_4認識した外部トリガ間隔の時間(カウント値)の4番目に新しい値を取得します。●/-is_grabGrab中かどうかを取得します。●/-is_openedボードのOpenに成功したかどうかを取得します。●/-line_reverseカメラから入力したラインデータの左右反転設定を取得・設定します。●/●USER_CAMERALINK 
_LINE_REVERSEinterrupt_line1フレーム入力ライン数カウント間隔を取得・設定します。●/●USER_DATA_INTERRUT_LINElval_delayカメラから入力するLVALのX方向遅延量(clock)を取得・設定します。●/●USER_CAMERALINK _LVAL_DELAYlvds_cclk_selカメラ駆動クロックを取得・設定します。●/●USER_LVDS_CCLK_SELlvds_phase_sel入力サンプリングの位相を取得・設定します。●/●USER_LVDS_PAHSE_SELlvds_synclt_selSYNCLTピンの入出力を取得・設定します。●/●mem_numリングバッファの画像面数を取得・設定します。●/●USER_MEMORY_NUMnarrow10bit_enableデータ詰め転送の有効(1)/無効(0)を取得・設定します。●/●USER_NARROW10BIT_ENABLEpocl_lite_enablePoCL-Liteカメラ用設定の有効(1)/無効(0)を取得・設定します。●/●USER_POCL_LITE_ENABLEpower_stateカメラ電源エラー状態を取得・クリアします。●/●power_supplyカメラへの給電状態を取得・設定します。●/●pix_shiftカメラから入力するデータを右シフトするビット数を取得・設定します。●/●USER_PIXEL_DATA_SHIFTreverse_dma_enable上下反転DMAの有効(1)/無効(0)を取得・設定します。●/●USER_REVERSE_DMA_ENABLErolling_shutterローリングシャッタの有効(1)/無効(0)を取得・設定します。●/●USER_ROLLING_SHUTTER _TRIGGER_ENABLEscan_system取込を行うカメラの種類を取得・設定します。●/●USER_INTERLACE_TYPEserial_noボードのシリアル番号を取得します。●/-start_frame_no取込を開始するフレーム番号(1, 2, 3…)を取得・設定します。●/●strobe_delayストロボ信号出力遅延時間[usec]を取得・設定します。●/●USER_STROBE_DELAY_COUNTstrobe_enableストロボ出力信号の有効(1)/無効(0)を取得・設定します。●/●USER_STROBE_ENABLEstrobe_polストロボの極性を取得・設定します。●/●USER_STROBE_POLALITYstrobe_timeストロボ信号出力時間[usec]を取得・設定します。●/●USER_STROBE_TIME_COUNTsync_ch指定チャンネルの取込をどのchに同期されるかを取得・設定します。●/●USER_SYNC_CHsync_ltCC信号の出力をSYNCLT入力に同期されるかどうかを取得・設定します。●/●USER_SYNC_LTtap_arrangeカメラ入力タップの並べ替え方法を取得・設定します。●/●USER_CAMERALINK _REARRANGEMENT_ENABLEtap_arrange_x_sizeカメラが1ラインとして出力する総画素数を取得・設定します。●/●USER_CAMERALINK 
_REARRANGEMENT_XSIZEtap_direction1複数のタップ入力を行う時、Tap1の入力方向を取得・設定します。●/●USER_CAMERALINK_TAP_DIRECTON_1tap_direction2複数のタップ入力を行う時、Tap2の入力方向を取得・設定します。●/●USER_CAMERALINK_TAP_DIRECTON_2tap_direction3複数のタップ入力を行う時、Tap3の入力方向を取得・設定します。●/●USER_CAMERALINK_TAP_DIRECTON_3tap_direction4複数のタップ入力を行う時、Tap4の入力方向を取得・設定します。●/●USER_CAMERALINK_TAP_DIRECTON_4tap_direction5複数のタップ入力を行う時、Tap5の入力方向を取得・設定します。●/●USER_CAMERALINK_TAP_DIRECTON_5tap_direction6複数のタップ入力を行う時、Tap6の入力方向を取得・設定します。●/●USER_CAMERALINK_TAP_DIRECTON_6tap_direction7複数のタップ入力を行う時、Tap7の入力方向を取得・設定します。●/●USER_CAMERALINK_TAP_DIRECTON_7tap_direction8複数のタップ入力を行う時、Tap8の入力方向を取得・設定します。●/●USER_CAMERALINK_TAP_DIRECTON_8tap_num入力タップ数を取得・設定します。●/●USER_X_TAPS_PER_CHtimeoutgrab_stop()メソッドのフレーム入力待ちタイムアウト時間をミリ秒単位で取得・設定します。●/●USER_TIMEOUT1trigger_enableCC信号出力の有効(1)/無効(0)を取得・設定します。●/●USER_SW_TRIGGER_ENABLEvertical_remapY方向特殊DMAの設定/無効(0)を取得・設定します。●/●USER_VERTICAL_REMAP _ENABLEvirtual_comport仮想COMポート番号を取得します。●/-width画像入力サイズの幅を取得・設定します。●/●USER_X_SIZEx_delayカメラ入力のX方向遅延量を取得・設定します。●/●USER_X_FRONT_PORCHy_delayカメラ入力のY方向遅延行数を取得・設定します。●/●USER_Y_FRONT_PORCHy_totalカメラから入力する行数を取得・設定します。y_totalを設定する場合は、必ずwidth→y_totalの順に設定してください。●/●USER_Y_TOTAL_SIZE※設定値の詳細は、AcapLib2のマニュアル「SDK-AcapLib2 Library Manual for Windows(OM15018*).pdf」を参照願います。MethodsMethodsDescriptoinAcapLib2 Functionbgr2rgb(image)カラー画像のとき、BGRからRGBへ並びを入れ替えます。count_reset()すべてのカウンタをリセットします。cxp_link_reset()CoaXPressカメラ、ボードに対してリンクリセットを行います。get_boardInfo()PCに接続されているボード一覧の情報を取得します。(static method)get_encoder()エンコーダの設定を取得します。AcapGetEncoderget_encoder_abs_multipoint(point_no)指定したエンコーダ絶対カウントのポイント番号の絶対カウント値を取得します。get_external_trigger()外部トリガの設定を取得します。AcapGetExternalTriggerget_frame_no()現在の画像取得枚数(1,2,3・・・)を取得します。get_gpout()汎用出力(GPOUT)のレベルを取得します。AcapGetGPOutget_image_data(index)リングバッファのIndex番号を指定して、画像データを取得します。get_last_error([error_reset])最後に発生したエラー情報を取得します。get_line_trigger()ライントリガの設定を取得します。AcapGetLineTriggerget_info(value_id[, 
memnum])設定IDを指定してボードの設定値を取得します。get_shutter_trigger()エリアセンサシャッタトリガの設定を取得します。AcapGetShutterTriggerget_strobe()ストロボの設定を取得します。AcapGetStrobegrab_abort()画像入力を強制停止します。grab_start([input_num])画像入力枚数を指定し、画像入力を開始します。grab_stop()画像入力を停止します。load_inifile(filename)ボード設定ファイル(*.ini)を読込、ボードの設定を行います。opt_link_reset()Opt-C:Linkボードに対してリンクリセットを行います。print_acapy_values()ボードに設定されている値をコマンドプロンプトへ表示します。print_last_error()最後に発生したエラー情報をコマンドプロンプトへ表示します。read([copy, wait_frame])grab_start()後、現在のフレーム画像を取得します。read_frames([copy])前回取得したフレームの次から、現在のフレームまでの画像を取得します。refrect_param([force_execution])プロパティで設定した値をボードへ反映させます。save_inifile(inifilename)ボードに設定されている値をiniファイルに保存します。serial_close()シリアル通信ポートを閉じます。AcapSerialCloseserial_get_parameter()シリアル通信のパラメータを取得します。AcapSerialGetParameterserial_open([baud_rate, data_bit, parity, stop_bit, flow])シリアルポートをオープンし、シリアル通信のパラメータを設定します。AcapSerialOpen AcapSerialSetParameterserial_read([asc, time_out, buffer_size, end_str])シリアル通信コマンドを受信します。受信バッファが空になるまで受信します。AcapSerialReadserial_write(write_command[, asc, start_str, end_str])シリアル通信コマンドを送信します。AcapSerialWriteset_encoder(enc_enable, enc_mode, enc_start, enc_phase, enc_direction, z_phase_enable, compare1, compare2)エンコーダを設定します。AcapSetEncoderset_encoder_abs_multipoint(point_no, abs_count)エンコーダ絶対カウントのポイント番号を指定して、絶対カウント値を設定します。set_external_trigger( exp_trg_en, ext_trg_mode, ext_trg_dly, ext_trg_chatter, timeout)外部トリガを設定します。AcapSetExternalTriggerset_gpout(output_level)汎用出力(GPOUT)のレベルを設定します。AcapSetGPOutset_info(value_id, value[, memnum])ボードへ設定IDを指定して値を設定します。プロパティに設定値がある場合は設定しないでください。AcapSetInfoset_line_trigger(exp_cycle, exposure, exp_pol)ライントリガを設定します。AcapSetLineTriggerset_power_supply(wait_time, value)カメラクロック確認待機時間(waite_time)をmsecで指定し、給電のON(1)/OFF(0)をvalueに設定します。set_shutter_trigger(exp_cycle, exposure, exp_pol, exp_unitt, cc_sel)エリアセンサシャッタトリガを設定します。AcapSetShutterTriggerset_strobe(strobe_en, strobe_delay, 
strobe_time)ストロボを設定します。AcapSetStrobesnap([copy])1フレームの取込を行います。wait_grab_start([timeout])Grab開始を待ちます。wait_frame_end([timeout])現在のフレームの入力完了を待ちます。wait_gpin([timeout])GPIN割込みを待ちます。wait_grab_end([timeout])Grab(連続入力)の完了を待ちます。※コールバック関数は対応していません。bgr2rgbメソッドカラーデータのRGBの並びをB,G,RからR,G,Bへ入れ替えます。def bgr2rgb(bgr_image : np.ndarray) -> np.ndarray:パラメータ名前型説明bgr_imagenp.ndarraynumpyのndarray形式のB,G,R順のカラー画像を指定します。戻り値rgb_image名前型説明rgb_imagenp.ndarraynumpyのndarray形式のR,G,B順のカラー画像を取得します。count_resetメソッドFVAL/LVAL/外部トリガ/CC カウント回数及び、外部トリガ間隔カウンタ1~4 のカウンタをリセットしま す。def count_reset(self) -> bool:戻り値ret名前型説明retbool成功時: 1失敗時: 0cxp_link_resetメソッド対象のポートに接続されているカメラに対し、カメラ接続確立(再初期化)を行います。 ※カメラ設定はデフォルトに戻りますので、LinkSpeed 等カメラ設定を再度実施する必要があります。def cxp_link_reset(self) -> bool:戻り値ret名前型説明retbool成功時: 1失敗時: 0get_boardInfoメソッド接続されているボード情報をACAPBOARDINFOEX構造体で取得します。AcapLib2のAcapGetBoardInfoEx相当def get_boardInfo() -> Tuple[int, acaplib2.ACAPBOARDINFOEX]:戻り値(ret, board_info)名前型説明retbool成功時: 1失敗時: 0board_infoacaplib2.ACAPBOARDINFOEX接続されたボード情報が格納された構造体get_encoderメソッド接続されているボード情報をACAPBOARDINFOEX構造体で取得します。AcapLib2のAcapGetEncoder相当def get_encoder(self) -> Tuple[int, int, int, int, int, int, int, int, int, int]:戻り値(ret, enc_enable, enc_mode, enc_start, enc_phase, enc_direction, z_phase_enable, compare1, compare2, comp2_count)名前型説明retint成功時: 1失敗時: 0enc_enableintエンコーダ使用設定enc_modeintエンコーダモードenc_startintエンコーダ起動方法enc_phaseintエンコーダパルスenc_directionintエンコーダ回転方向z_phase_enableintZ 相使用設定compare1int比較レジスタ1(遅延パルス設定)compare2int比較レジスタ2(間隔パルス設定)comp2_countintエンコーダカウント値相対カウント、有効時:総カウント数絶対カウント、有効時:絶対カウント値get_encoder_abs_multipointメソッド絶対カウント・マルチポイント値を取得します。def get_encoder_abs_multipoint(self, point_no : int) -> Tuple[int, int]:パラメータ名前型説明point_noint設定するマルチポイントの番号を指定します。(0~255)戻り値(ret, abs_count)名前型説明retint成功時: 1失敗時: 0abs_countint絶対カウント・マルチポイント値get_external_triggerメソッド外部トリガの設定を取得します。AcapLib2のAcapGetExternalTrigger相当def get_external_trigger(self) -> Tuple[int, int, int, int, int, int]:戻り値(ret, exp_trg_en, ext_trg_mode, ext_trg_dly, 
ext_trg_chatter, timeout)名前型説明retint成功時: 1失敗時: 0exp_trg_enint外部トリガ 使用設定ext_trg_modeint外部トリガモードext_trg_dlyint外部トリガ 検出遅延時間 (1us 単位)ext_trg_chatterint外部トリガ 検出無効時間 (1us 単位)timeoutint検出待機時間 (1ms 単位)get_frame_noメソッド入力フレームの情報を取得します。AcapLib2のAcapGetFrameNo相当def get_frame_no(self) -> Tuple[int, int, int , int]:戻り値(ret, frame_no, line, index)名前型説明retint成功時: 1失敗時: 0frame_noint入力が完了したフレーム数を取得します。lineintフレーム内の入力が完了したライン数を取得します。indexint最後に入力が完了したメモリ番号(1,2,3...)を取得します。get_gpoutメソッドGPOUTの状態を取得します。AcapLib2のAcapGetGPOut相当def get_gpout(self) -> Tuple[int, int]:戻り値(ret, dwOutputLevel)名前型説明retint成功時: 1失敗時: 0dwOutputLevelintGPOUT 出力設定状態を対応するBit で格納します。0:OFF/1:ONBit0:GPOUT[1]ピン (GP_OUT1)Bit1:GPOUT[2]ピン (GP_OUT2):Bit7:GPOUT[8]ピンget_image_dataメソッドリングバッファのIndex番号(0, 1, 2...)を指定し、画像データを取得します。def get_image_data(self, index : int) -> Union[np.ndarray, None]:パラメータ名前型説明indexintリングバッファのIndex番号(0, 1, 2...)を指定しリます。戻り値image名前型説明imagenp.ndarray画像データはnumpyのndarrayで取得します。get_last_errorメソッド最後に発生したエラー情報を取得します。AcapLib2のAcapGetLastErrorCode相当def get_last_error(self, error_reset : bool = False) -> acaplib2.ACAPERRORINFO:パラメータ名前型説明error_resetboolTrue: エラー情報をリセットFalse: エラー情報をリセットしない戻り値error_info名前型説明error_infoacaplib2.ACAPERRORINFOエラーの内容をACAPERRORINFO構造体で取得します。get_line_triggerメソッドライントリガの設定を取得します。AcapLib2のAcapGetLineTrigger相当def get_line_trigger(self) -> Tuple[int, int, int, int]:戻り値(ret, exp_cycle, exposure, exp_pol)名前型説明retint成功時: 1失敗時: 0exp_cycleint露光周期を1μsec単位で取得します。exposureint露光時間を1μsec単位で取得します。exp_polint出力論理を取得します。1: 正論理0: 負論理get_infoメソッド設定IDを指定して、設定値を取得します。AcapLib2のAcapGetInfo相当def get_info(self, value_id : int, mem_num : int = 0) -> Tuple[int, int]:パラメータ名前型説明value_idint設定IDmem_numintvalue_idの値により使用方法が異なります。詳細はAcapLib2マニュアルのAcapGetInfo関数を参照ください。戻り値(ret, value)名前型説明retint成功時: 1失敗時: 0valueint取得した設定値get_shutter_triggerメソッドシャッタトリガの設定を取得します。AcapLib2のAcapGetShutterTrigger相当def get_shutter_trigger(self) -> Tuple[int, int, int, int, int, int]:戻り値(ret, exp_cycle, exposure, exp_pol, exp_unit, 
cc_sel)名前型説明retint成功時: 1失敗時: 0exp_cycleintCC周期を1μsec単位で取得します。exposureintCC出力幅を1μsec単位で取得します。exp_polint出力論理を取得します。exp_unitint未サポートcc_selint露光信号を出力するCC 番号を取得します。get_strobeメソッドストロボの設定を取得します。AcapLib2のAcapGetStrobe相当def get_strobe(self) -> Tuple[int, int, int, int]:戻り値(ret, strobe_en, strobe_delay, strobe_time)名前型説明retint成功時: 1失敗時: 0strobe_enintストロボの有効(1)/無効(0)状態を取得します。strobe_delayintストロボパルスが出力されるまでの遅延時間(1μsec単位)を取得します。strobe_timeintストロボを出力する時間(1μsec単位)を取得します。grab_abortメソッド画像入力を強制停止します。AcapLib2のAcapGrabAbort相当def grab_abort(self) -> int:戻り値ret名前型説明retint成功時: 1失敗時: 0grab_startメソッド画像入力を開始します。AcapLib2のAcapGrabStart相当def grab_start(self, input_num : int = 0) -> int:パラメータ名前型説明input_numint0: 連続入力1: 一画面入力2~0xFFFFFFFF: 指定枚入力戻り値ret名前型説明retint成功時: 1失敗時: 0grab_stopメソッド画像入力を停止します。AcapLib2のAcapGrabStop相当def grab_stop(self) -> int:戻り値ret名前型説明retint成功時: 1失敗時: 0load_inifileメソッドボード設定ファイル(iniファイル)を読込みます。AcapLib2のAcapSelectFile相当def load_inifile(self, inifilename : str) -> int:パラメータ名前型説明inifilenamestriniファイルのファイル名を指定します。戻り値ret名前型説明retint成功時: 1失敗時: 0opt_link_resetメソッドOpt-C:Link ボードに対してリンクリセット(再初期化)を行います。def opt_link_reset(self) -> bool:戻り値ret名前型説明retint成功時: 1失敗時: 0print_acapy_valuesメソッドAcaPyクラスに設定されている値をコマンドラインに表示します。def print_acapy_values(self):print_last_errorメソッド最後に発生したエラー情報をコマンドラインに表示します。def print_last_error(self) -> acaplib2.ACAPERRORINFO:戻り値error_info名前型説明error_infoacaplib2.ACAPERRORINFOエラーの内容をACAPERRORINFO構造体で取得します。readメソッドgrab_startメソッドを実行後、grab_stopメソッドを実行するまでの間で実行し、現在の画像データを1フレーム分取得します。def read(self, copy : bool = False, wait_frame : bool = True) -> Tuple[int, Union[np.ndarray, None], int, int]:パラメータ名前型説明copyboolTrue: リングバッファから画像データをコピーして取得します。False:リングバッファの画像データを取得します。この場合、画像データが上書きされる場合があります。wait_frameboolTrue: 1フレーム分の画像データ取得完了を待ってから画像データを取得します。False:1フレーム分の画像データ取得完了を待たずに画像データを取得します。戻り値(ret, frame, frame_no, line)名前型説明retint成功時: 1失敗時: 0framenp.ndarray取得した画像データカラー画像の場合、データの並びはB, G, Rとなります。(OpenCVと同等)frame_nointgrab_startから入力が完了したフレーム数(1, 2,
3...)lineint入力が完了したライン数read_framesメソッドgrab_startメソッドを実行後、grab_stopメソッドを実行するまでの間で実行し、前回取得したフレームの次のフレームから現在取得したフレームまでの画像データをリストで取得します。カメラのフレームレートが速い場合、取得したフレーム画像が上書きされる場合があります。 その場合、mem_numプロパティの値(リングバッファの画像面数)を大きくしてください。def read_frames(self, copy : bool = False) -> Tuple[int, Union[List[np.ndarray], None], int, int]:パラメータ名前型説明copyboolTrue: リングバッファから画像データをコピーして取得します。False:リングバッファの画像データを取得します。戻り値(ret, frames, count, frame_no)名前型説明retint成功時: 1失敗時: 0データ上書き時: 上書きされた画像枚数を負にした値framesnp.ndarray前回取得したフレームの次のフレームから現在取得したフレームまでの画像データをリストで取得します。カラー画像の場合、データの並びはB, G, Rとなります。(OpenCVと同等)countint前回取得したフレームの次のフレームから現在取得したフレームまでのフレーム数frame_nointgrab_startから入力が完了したフレーム数(1, 2, 3...)refrect_paramメソッドプロパティで設定した値を画像入力ボードに反映します。def refrect_param(self, force_execution : bool = False) -> int:パラメータ名前型説明force_executionboolTrue: 強制的に実行します。False:反映が必要な場合に、実行します。戻り値ret名前型説明retint成功時: 1失敗時: 0save_inifileメソッドボードに設定されている値をiniファイルに保存します。AcapLib2のAcapSelectFile相当def save_inifile(self, inifilename : str) -> int:パラメータ名前型説明inifilenamestr保存するiniファイルのファイルパス戻り値ret名前型説明retint成功時: 1失敗時: 0serial_closeメソッドシリアル通信のポートをクローズします。AcapLib2のAcapSerialClose相当def serial_close(self) -> int:戻り値ret名前型説明retint成功時: 1失敗時: 0serial_get_parameterメソッドシリアル通信の設定を取得します。AcapLib2のAcapSerialGetParameter相当def serial_get_parameter(self) -> Tuple[int, int, int, int, int, int]:戻り値(ret, baud_rate, data_bit, parity, stop_bit, flow)名前型説明retint成功時: 1失敗時: 0baud_rateintボーレートを取得します。data_bitintデータビットを取得します。parityintパリティを取得します。stop_bitintストップビットを取得します。flowintフロー制御を取得します。serial_openメソッドシリアル通信のポートをオープンします。AcapLib2のAcapSerialOpen、AcapSerialSetParameter相当def serial_open(self, baud_rate : int = 9600, data_bit : int = 8, parity : int = 0, stop_bit : int = 0, flow : int = 0) -> int:パラメータ名前型説明baud_rateintボーレートを以下の値の中から設定します。9600, 19200, 38400, 57600, 115200data_bitintデータビットを指定します。(現在、8 のみ設定可)parityintパリティを指定します。(現在、「0 :なし」 のみ設定可)stop_bitintストップビットを指定します。(現在、「0: 1bit」 のみ設定可)flowintフロー制御を指定します。(現在、「0 :なし」 のみ設定可)戻り値ret名前型説明retint成功時: 1失敗時:
0serial_readメソッドシリアル通信コマンドの受信を行います。 受信はデータが空になるまで、終端文字列(end_str)を受信するまで、タイムアウトになるまで行われます。AcapLib2のAcapSerialRead相当def serial_read(self, asc : bool = True, time_out : int = 100, buffer_size : int = 511, end_str : Union[str, None] = None) -> Tuple[int, str, int]:パラメータ名前型説明ascboolシリアルに読込む(受信)文字のコードを指定します。False: 16 進数(HEX)表記True: ASCIItime_outint受信のタイムアウト時間をmsec単位で指定します。buffer_sizeint受信バッファのサイズを指定します。end_strstrコマンドの終端文字列を指定します。使用しない場合は「None」を指定して下さい戻り値(ret, read_command, read_bytes)名前型説明retint成功時: 1失敗時: 0read_commandstr受信した文字列read_bytesint受信したデータのバイト数serial_writeメソッド指定した文字コードでコマンドの書込み(送信)を行います。 AcapLib2のAcapSerialWrite相当def serial_write(self, write_command : str, asc : bool = True, start_str : Union[str, None] = None, end_str : Union[str, None] = "\r") -> int:パラメータ名前型説明write_commandstrシリアルに送信するコマンドascboolシリアルに書込む(送信)文字のコードを指定します。False: 16 進数(HEX)表記True: ASCIIstart_strstrasc がTRUE の場合に指定できます。コマンドの開始文字列(ASCII 表記)使用しない場合は「None」を指定して下さいend_strstrasc がTRUE の場合に指定できます。コマンドの終端文字列(ASCII 表記)使用しない場合は「None」を指定して下さい戻り値ret名前型説明retint成功時: 1失敗時: 0set_encoderメソッドエンコーダの設定を行います。 AcapLib2のAcapSetEncoder相当def set_encoder(self, enc_enable : int, enc_mode : int, enc_start : int, enc_phase : int, enc_direction : int, z_phase_enable : int, compare1 : int, compare2 : int) -> int:パラメータ名前型説明enc_enableintエンコーダ使用設定0: 無効1: 有効(相対カウント)2: 有効(絶対カウント)enc_modeintエンコーダモード0: エンコーダスキャンモード1: エンコーダライン選択モードenc_startintエンコーダの起動方法0: エンコーダをCPU で起動1: エンコーダを外部トリガで起動2: エンコーダをCPU で起動して、外部トリガを一致パルスとして使用enc_phaseintエンコーダパルス0: AB相1: A相enc_directionintエンコーダ回転方向0: CW1: CCWz_phase_enableintZ 相使用設定0: 使用しない1: 使用するcompare1int比較レジスタ1(遅延パルス設定)0 ~ 4,294,967,295compare2int比較レジスタ2(間隔パルス設定)1 ~ 4,294,967,295戻り値ret名前型説明retint成功時: 1失敗時: 0set_encoder_abs_multipointメソッド絶対カウントマルチポイントの値を設定します。def set_encoder_abs_multipoint(self, point_no : int, abs_count : int) -> int:パラメータ名前型説明point_noint設定するマルチポイントの番号を指定します。(0~255)abs_countint絶対カウント・マルチポイント値の設定を行います。-2147483648 ~ 2147483647戻り値ret名前型説明retint成功時: 1失敗時:
0set_external_triggerメソッド外部トリガの種別、検出方法などを設定します。 AcapLib2のAcapSetExternalTrigger相当def set_external_trigger(self, exp_trg_en : int, ext_trg_mode : int, ext_trg_dly : int, ext_trg_chatter : int, timeout : int) -> int:パラメータ名前型説明exp_trg_enint外部トリガとして使用する信号の選択0 : 無効1 : TTL トリガ2 : 差動トリガ(エンコーダと共用)3 : 新規差動トリガ4 : OPT トリガext_trg_modeint外部トリガモード0 : 外部トリガ1 回でCC が1 回出力するモード(連続外部トリガモード)1 : 外部トリガ1 回でCC が周期出力するモード(単発外部トリガモード)ext_trg_dlyint外部トリガ 検出遅延時間 (1us 単位)ext_trg_chatterint外部トリガ 検出無効時間 (1us 単位)timeoutint検出待機時間 (1ms 単位)1 ~ 4,294,967,295戻り値ret名前型説明retint成功時: 1失敗時: 0set_gpoutメソッドGPOUT ピンの出力を制御します。 AcapLib2のAcapSetGPOut相当def set_gpout(self, output_level : int) -> int:パラメータ名前型説明output_levelint対応するBit のGPOUT ピンをON(High)/OFF(Low)します。0:OFF/1:ONBit0:GPOUT[1]ピン (GP_OUT1)Bit1:GPOUT[2]ピン (GP_OUT2):Bit7:GPOUT[8]ピン戻り値ret名前型説明retint成功時: 1失敗時: 0set_infoメソッド設定IDを指定して、設定値を設定します。 プロパティに同様の設定値がある場合は設定しないでください。AcapLib2のAcapSetInfo相当def set_info(self, value_id : int, value : int, mem_num : int = -1) -> int:パラメータ名前型説明value_idint設定IDvalueint設定値mem_numintvalue_idの値により使用方法が異なります。詳細はAcapLib2マニュアルのAcapSetInfo関数を参照ください。戻り値ret名前型説明retint成功時: 1失敗時: 0set_line_triggerメソッドラインセンサへ出力するCC 信号の周期・幅を設定します。 AcapLib2のAcapSetLineTrigger相当def set_line_trigger(self, exp_cycle : int, exposure : int, exp_pol : int) -> int:パラメータ名前型説明exp_cycleintCC 出力周期 (1us 単位) ※CoaXPress の場合はトリガパケットです。0 ~ 4,294,967,295exposureintCC 出力幅 (1us 単位)0 ~ 4,294,967,295exp_polint出力論理0 : 負論理1 : 正論理戻り値ret名前型説明retint成功時: 1失敗時: 0set_power_supplyメソッドカメラへの電源供給を制御します。def set_power_supply(self, value : int, wait_time : int = 3000) -> int:パラメータ名前型説明valueint電源のON/OFF0: OFF1: ONwait_timeintタイムアウト時間をmsで指定します。戻り値ret名前型説明retint成功時: 1失敗時: 0set_shutter_triggerメソッドエリアセンサシャッタトリガを設定します。AcapLib2のAcapSetShutterTrigger相当def set_shutter_trigger(self, exp_cycle : int, exposure : int, exp_pol : int, exp_unit : int, cc_sel : int) -> int:パラメータ名前型説明exp_cycleintCC 出力周期 (1us 単位)0 ~ 429,496,729exposureintCC 出力幅 (1us 単位)0 ~ 429,496,729exp_polint出力論理0 : 負論理1 : 
正論理exp_unitint未サポートcc_selint出力する番号1 : CC1 / 2 : CC2 / 3 : CC3 / 4 : CC4戻り値ret名前型説明retint成功時: 1失敗時: 0set_strobeメソッドストロボを設定します。AcapLib2のAcapSetStrobe相当def set_strobe(self, strobe_en : int, strobe_delay : int, strobe_time : int) -> int:パラメータ名前型説明strobe_enintストロボ 使用設定0 : 無効 / 1 : 有効strobe_delayintストロボパルスが出力されるまでの遅延時間 (1us 単位)0 ~ 65,535strobe_timeintストロボパルスを出力する時間 (1us 単位)0 ~ 65,535戻り値ret名前型説明retint成功時: 1失敗時: 0snapメソッド画像を1枚取込みます。高速に連続取込する場合は、grabをお使い下さい。(参照:Grab sample)def snap(self, copy : bool = False) -> Tuple[int, Union[np.ndarray, None]]:パラメータ名前型説明copyboolTrue: リングバッファから画像データをコピーして取得します。False:リングバッファの画像データを取得します。この場合、画像データが上書きされる場合があります。戻り値(ret, frame)名前型説明retint成功時: 1失敗時: 0framenp.ndarray取得した画像データカラー画像の場合、データの並びはB, G, Rとなります。(OpenCVと同等)wait_frame_endメソッド1フレーム分の画像取込完了を待ちます。AcapLib2のAcapWaitEventのACL_INT_FRAMEEND相当def wait_frame_end(self, timeout = -1) -> int:パラメータ名前型説明timeoutint待機時間をmsec単位で指定します。値が負の場合、timeoutプロパティで指定された時間分待機します。戻り値ret名前型説明retint成功時: 1失敗時: 0wait_gpinメソッドGPIN 割り込みを待ちます。AcapLib2のAcapWaitEventのACL_INT_GPIN相当def wait_gpin(self, timeout = -1) -> int:パラメータ名前型説明timeoutint待機時間をmsec単位で指定します。値が負の場合、timeoutプロパティで指定された時間分待機します。戻り値ret名前型説明retint成功時: 1失敗時: 0wait_grab_endメソッドgrabの入力停止を待ちます。AcapLib2のAcapWaitEventのACL_INT_GRABEND相当def wait_grab_end(self, timeout = -1) -> int:パラメータ名前型説明timeoutint待機時間をmsec単位で指定します。値が負の場合、timeoutプロパティで指定された時間分待機します。戻り値ret名前型説明retint成功時: 1失敗時: 0wait_grab_startメソッドgrabの入力開始を待ちます。AcapLib2のAcapWaitEventのACL_INT_GRABSTART相当def wait_grab_start(self, timeout = -1) -> int:パラメータ名前型説明timeoutint待機時間をmsec単位で指定します。値が負の場合、timeoutプロパティで指定された時間分待機します。戻り値ret名前型説明retint成功時: 1失敗時: 0GraphicsBox ClasstkinterのCanvasクラスを継承した、tkinter用画像表示ウィジェットとなります。ConstructorsConstructorsDescriptoinGraphicsBox(parent, option = value, 
...)option設定はTkinterのCanvasクラスと同じPropertiesPropertiesDescriptoinGet/Setaffine_matrix画像表示に使用している3x3のアフィン変換行列を取得・設定します。●/●bright_enabled輝度値の表示(True)/非表示(False)の設定を取得・設定します。●/●cross_beam_colorプロファイル用十字線の色を取得・設定します。●/●disp_scale画像表示倍率を取得します。●/-grid_colorグリッド線の色を取得・設定します。●/●grid_disp_scaleグリッド線を表示する画像の最小倍率を取得・設定します。●/●grid_enabled画像拡大時のグリッド線の表示(True)/非表示(False)の設定を取得・設定します。●/●max_scale画像の拡大時の最大倍率を取得・設定します。●/●min_scale画像の縮小時の最小倍率を取得・設定します。●/●profile_enabledプロファイルの表示(True)/非表示(False)の設定を取得・設定します。●/●profile_hightプロファイルグラフの高さを取得・設定します。●/●profile_xプロファイルを表示するX座標(ウィジェットの座標)を取得・設定します。●/●profile_x_colorモノクロ画像時の横方向のプロファイルの線色を取得・設定します。●/●profile_yプロファイルを表示するY座標(ウィジェットの座標)を取得・設定します。●/●profile_y_colorモノクロ画像時の縦方向のプロファイルの線色を取得・設定します。●/●zoomup_direction画像拡大時のホイールの回転方向を取得・設定します。-1:下へ回転、1:上へ回転●/●MethodsMethodsDescriptoindest_to_src_xy(xy)現在の表示状態において、ウィジェット上の座標を画像上の座標へ変換します。draw_image(image)PillowのImage形式もしくはnumpyのndarray形式の画像をGraphicsBoxへ表示します。表示可能なのは8bit,24bit,32bitの画像のみです。redraw_image()画像を再描画します。reset_transform()画像表示を初期状態(左上に等倍率)に戻します。scale_at(scale, cx, cy)指定した点を中心に拡大縮小します。scale_transform(scale)画像表示の相対倍率を指定し拡大縮小します。src_to_dest_xy(xy)現在の表示状態において、画像上の座標をウィジェット上の座標へ変換します。translate(offset_x, offset_y)画像表示位置を平行移動します。zoom_fit(image_width, image_height)画像の幅と高さを指定し、ウィジェット全体に画像を表示します。dest_to_src_xyメソッド表示している画像のウィジェット上の座標(ウィジェットの左上が原点)から、画像の座標へ変換します。def dest_to_src_xy(self, xy : Tuple[float, float]) -> Tuple[float, float]:パラメータ名前型説明xyTuple[float, float]表示している画像のウィジェット上のxy座標(ウィジェットの左上が原点)をfloat型のタプルで指定します。戻り値(x, y)名前型説明(x, y)Tuple[float, float]画像上のxy座標がfloat型のタプルで取得されます。draw_imageメソッド画像データをウィジェットに表示します。def draw_image(self, image : Union[Image.Image, np.ndarray]) -> int:パラメータ名前型説明imageUnion[Image.Image, np.ndarray]表示する画像データデータはPillowのImageオブジェクトもしくはnumpyのndarrayオブジェクトとなります。カラー画像の場合、R,G,Bの順で指定してください。戻り値id名前型説明idinttkinterのcreate_imageの戻り値を取得します。redraw_imageメソッド画像を再描画します。def redraw_image(self):reset_transformメソッドクラス内部で使用している画像表示用アフィン変換行列をセット(単位行列にする)します。 
リセット後に画像を表示すると、画像はウィジェットの左上に等倍(倍率が1)で表示されます。def reset_transform(self):scale_atメソッド指定した座標(ウィジェットの座標)を中心に画像を拡大縮小します。def scale_at(self, scale:float, cx:float, cy:float):パラメータ名前型説明scalefloat画像の表示倍率を指定します。倍率は現在の表示状態からの相対倍率となります。cxfloat拡大縮小の基点となる x 座標(ウィジェットの座標)を指定します。cyfloat拡大縮小の基点となる y 座標(ウィジェットの座標)を指定します。scale_transformメソッド原点(ウィジェットの左上)を中心に画像を拡大縮小します。def scale_transform(self, scale:float):パラメータ名前型説明scalefloat画像の表示倍率を指定します。倍率は現在の表示状態からの相対倍率となります。src_to_dest_xyメソッド表示している画像上の座標から、ウィジェット上の座標へ変換します。def src_to_dest_xy(self, xy : Tuple[float, float]) -> Tuple[float, float]:パラメータ名前型説明xyTuple[float, float]表示している画像上のxy座標をfloat型のタプルで指定します。戻り値(x, y)名前型説明(x, y)Tuple[float, float]ウィジェット上のxy座標がfloat型のタプルで取得されます。translateメソッド画像の表示位置を指定した大きさ(ウィジェット上の距離)で平行移動します。def translate(self, offset_x : float, offset_y : float):パラメータ名前型説明offset_xfloat画像のX方向の移動量をウィジェット上の距離で指定します。offset_yfloat画像のY方向の移動量をウィジェット上の距離で指定します。zoom_fitメソッド画像全体がウィジェット全体に表示されるように表示位置、倍率を調整します。def zoom_fit(self, image_width : int, image_height : int):パラメータ名前型説明image_widthint画像の幅の画素数を指定します。image_heightint画像の高さの画素数を指定します。ChangelogVer.1.0.0モジュールバージョン備考acaplib2Ver.1.0.0初版acapyVer.1.0.0初版graphicsboxVer.1.0.0初版初版
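上記のdest_to_src_xy / src_to_dest_xyのようなウィジェット座標と画像座標の相互変換は、拡大縮小と平行移動のアフィン変換として表せます。以下は変換の考え方を示す純Pythonのスケッチです(倍率・オフセットの値は説明用の仮定であり、GraphicsBoxの内部実装そのものではありません。GraphicsBox本体は3x3のアフィン変換行列を使用します)。

```python
# ズーム・パン表示における「画像座標 <-> ウィジェット座標」変換のスケッチ。
# ここでは倍率(scale)と平行移動(offset)のみの簡略化したアフィン変換を仮定。

def src_to_dest_xy(xy, scale, offset):
    """画像座標 -> ウィジェット座標: dest = src * scale + offset"""
    x, y = xy
    return (x * scale + offset[0], y * scale + offset[1])

def dest_to_src_xy(xy, scale, offset):
    """ウィジェット座標 -> 画像座標(上記の逆変換)"""
    x, y = xy
    return ((x - offset[0]) / scale, (y - offset[1]) / scale)

# 例: 画像座標(10, 20)の画素を、2倍表示・(5, 7)だけ移動した状態で変換
dest = src_to_dest_xy((10, 20), 2.0, (5, 7))  # (25.0, 47.0)
src = dest_to_src_xy(dest, 2.0, (5, 7))       # (10.0, 20.0) に戻る
print(dest, src)
```

2つの変換が互いに逆写像になっているため、往復変換で元の座標に戻ります。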
acapy-client
acapy-clientA client library for accessing Aries Cloud AgentUsageFirst, create a client:fromacapy_clientimportClientclient=Client(base_url="https://api.example.com")If the endpoints you're going to hit require authentication, useAuthenticatedClientinstead:fromacapy_clientimportAuthenticatedClientclient=AuthenticatedClient(base_url="https://api.example.com",token="SuperSecretToken")Now call your endpoint and use your models:fromacapy_client.modelsimportMyDataModelfromacapy_client.api.my_tagimportget_my_data_modelfromacapy_client.typesimportResponsemy_data:MyDataModel=get_my_data_model.sync(client=client)# or if you need more info (e.g. status_code)response:Response[MyDataModel]=get_my_data_model.sync_detailed(client=client)Or do the same thing with an async version:fromacapy_client.modelsimportMyDataModelfromacapy_client.api.my_tagimportget_my_data_modelfromacapy_client.typesimportResponsemy_data:MyDataModel=awaitget_my_data_model.asyncio(client=client)response:Response[MyDataModel]=awaitget_my_data_model.asyncio_detailed(client=client)By default, when you're calling an HTTPS API it will attempt to verify that SSL is working correctly. 
Using certificate verification is highly recommended most of the time, but sometimes you may need to authenticate to a server (especially an internal server) using a custom certificate bundle.client=AuthenticatedClient(base_url="https://internal_api.example.com",token="SuperSecretToken",verify_ssl="/path/to/certificate_bundle.pem",)You can also disable certificate validation altogether, but beware thatthis is a security risk.client=AuthenticatedClient(base_url="https://internal_api.example.com",token="SuperSecretToken",verify_ssl=False)Things to know:Every path/method combo becomes a Python module with four functions:sync: Blocking request that returns parsed data (if successful) orNonesync_detailed: Blocking request that always returns aResponse, optionally withparsedset if the request was successful.asyncio: Likesyncbut async instead of blockingasyncio_detailed: Likesync_detailedbut async instead of blockingAll path/query params and bodies become method arguments.If your endpoint had any tags on it, the first tag will be used as a module name for the function (my_tag above)Any endpoint which did not have a tag will be inacapy_client.api.defaultBuilding / publishing this ClientThis project usesPoetryto manage dependencies and packaging. Here are the basics:Update the metadata in pyproject.toml (e.g. authors, version)If you're using a private repository, configure it with Poetrypoetry config repositories.<your-repository-name> <url-to-your-repository>poetry config http-basic.<your-repository-name> <username> <password>Publish the client withpoetry publish --build -r <your-repository-name>or, if for public PyPI, justpoetry publish --buildIf you want to install this client into another project without publishing it (e.g.
for development) then:If that project is using Poetry, you can simply dopoetry add <path-to-this-client>from that projectIf that project is not using Poetry:Build a wheel withpoetry build -f wheelInstall that wheel from the other projectpip install <path-to-wheel>
acapy-mydata-did-protocol
ACA-Py plugin for MyData DID DIDComm protocolAcknowledgementsThis repository was originally created as a deliverable for Automated Data Agreement (ADA) Project. ADA project is part of NGI-eSSIF-Lab project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871932.The lead developer of this project is iGrant.io (Sweden), supported by Linaltec (Sweden) and PrivacyAnt (Finland).ACA-Py Version CompatibilityThis plugin is compatible with ACA-Py version 0.5.6.InstallationRequirements:Python 3.6 or higherACA-Py 0.5.6Setup Aries Cloud Agent - PythonIf you already have an existing installation of ACA-Py, you can skip these steps and move on toplugin installation. It is also worth noting that this is not the only way to setup an ACA-Py instance. For more setup configurations, see theAries Cloud Agent - Python repository.First, prepare a virtual environment:$python3-mvenvenv $sourceenv/bin/activateInstall ACA-Py 0.5.6 into the virtual environment:$pipinstallaries-cloudagent==0.5.6Plugin InstallationInstall this plugin into the virtual environment:$pipinstallacapy-mydata-did-protocolNote:Depending on your version ofpip, you may need to drop or add#egg=mydata_didto install the plugin with the above command.Plugin LoadingStart up ACA-Py with the plugin parameter:$aca-pystart\-ithttp0.0.0.08002\-othttp\-e"http://localhost:8002/"\--label"Agent"\--admin0.0.0.08001\--admin-insecure-mode\--auto-accept-requests\--auto-ping-connection\--auto-respond-credential-offer\--auto-respond-credential-request\--auto-store-credential\--auto-respond-presentation-proposal\--auto-respond-presentation-request\--auto-verify-presentation\--genesis-urlhttps://indy.igrant.io/genesis\--wallet-typeindy\--wallet-name"agent_wallet"\--log-levelinfo\--wallet-key"wallet@123"\--plugin"mydata_did"LicensingCopyright (c) 2021-23 LCubed AB (iGrant.io), SwedenLicensed under the Apache License, Version 2.0 (the "License"); you may not use this
file except in compliance with the License.You may obtain a copy of the License athttps://www.apache.org/licenses/LICENSE-2.0.Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the LICENSE for the specific language governing permissions and limitations under the License.
acapy-patched
Aries Cloudagent Python (Patched)This is a patched fork of Aries Cloudagent Python v0.5.6LicenseApache License Version 2.0
acapy-patched-old
Hyperledger Aries Cloud Agent - PythonAn easy to use Aries agent for building SSI services using any language that supports sending/receiving HTTP requests.Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building self-sovereign identity (SSI) / decentralized identity services running in non-mobile environments using DIDcomm messaging, the did:peer DID method, and verifiable credentials. With ACA-Py, SSI developers can focus on building services using familiar web development technologies instead of trying to learn the nuts and bolts of low-level SDKs.As we create ACA-Py, we're also building resources so that developers with a wide-range of backgrounds can get productive with ACA-Py in a hurry. Checkout theresourcessection below and jump in.The "cloud" in Aries Cloud Agent - Python doesNOTmean that ACA-Py cannot be used as an edge agent. ACA-Py is suitable for use in any non-mobile agent scenario, including as an enterprise edge agent for issuing, verifying and holding verifiable credentials.Table of ContentsBackgroundInstallUsageSecurityAPIResourcesQuickstartArchitectural Deep DiveGetting Started GuideRead the DocsWhat to Focus On?CreditContributingLicenseBackgroundDeveloping an ACA-Py-based application is pretty straight forward for those familiar with web development. An ACA-Py instance is always deployed with a paired "controller" application that provides the business logic for that ACA-Py agent. The controller receives webhook event notifications from its instance of ACA-Py and uses an HTTP API exposed by the ACA-Py instance to provide direction on how to respond to those events. No ACA-Py/Python development is needed--just deploy an ACA-Py instance from PyPi (examples available). The source of the business logic is your imagination. An interface to a legacy system? A user interface for a person? Custom code to implement a new service? You can build your controller in any language that supports making and receiving HTTP requests. 
Wait...that's every language!ACA-Py currently supports "only" Hyperledger Indy's verifiable credentials scheme (which is pretty powerful). We are experimenting with adding support to ACA-Py for other DID Ledgers and verifiable credential schemes.ACA-Py is built on the Aries concepts and features defined in theAries RFCrepository.This documentcontains a (reasonably up to date) list of supported Aries RFCs by the current ACA-Py implementation.InstallACA-Py can be run with docker without installation, or can be installedfrom PyPi. Use the following command to install it locally:pipinstallaries-cloudagentUsageInstructions for running ACA-Py can befound here.SecurityThe administrative API exposed by the agent for the controller to use must be protected with an API key (using the--admin-api-keycommand line arg) or deliberately left unsecured using the--admin-insecure-modecommand line arg. The latter should not be used other than in development if the API is not otherwise secured.APIA deployed instance of an ACA-Py agent assembles an OpenAPI-documented REST interface from the protocols loaded with the agent. This is used by a controller application (written in any language) to manage the behaviour of the agent. The controller can initiate agent actions such as issuing a credential, and can respond to agent events, such as sending a presentation request after a new pairwise DID Exchange connection has been accepted. Agent events are delivered to the controller as webhooks to a configured URL. More information on the administration API and webhooks can be foundhere.ResourcesQuickstartIf you are an experienced decentralized identity developer that knows Indy, are already familiar with the concepts behind Aries, want to play with the code, and perhaps even start contributing to the project, an "install and go" page for developers can be foundhere.Architectural Deep DiveThe ACA-Py team presented an architectural deep dive webinar that can be viewedhere. 
Slides from the webinar can be foundhere.Getting Started GuideFor everyone those new to SSI, Indy and Aries, we've created aGetting Started Guidethat will take you from knowing next to nothing about decentralized identity to developing Aries-based business apps and services in a hurry. Along the way, you'll run some early Indy apps, apps built on ACA-Py and developer-oriented demos for interacting with ACA-Py. The guide has a good table of contents so that you can skip the parts you already know.Read the DocsThe ACA-Py Python docstrings are used as the source of aRead the Docscode overview site. Want to review the modules that make up ACA-Py? This is the best place to go.What to Focus On?Not sure where your focus should be? Building apps? Aries? Indy? Indy's Blockchain? Ursa? Here is adocumentthat goes through the technical stack to show how the projects fit together, so you can decide where you want to focus your efforts.CreditThe initial implementation of ACA-Py was developed by the Verifiable Organizations Network (VON) team based at the Province of British Columbia. To learn more about VON and what's happening with decentralized identity in British Columbia, please go tohttps://vonx.io.ContributingPull requests are welcome! Please read ourcontributions guideand submit your PRs. We enforcedeveloper certificate of origin(DCO) commit signing. See guidancehere.We also welcome issues submitted about problems you encounter in using ACA-Py.LicenseApache License Version 2.0
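The controller pattern described in the API section above can be sketched with nothing but the standard library. This is an illustrative skeleton, not an official ACA-Py component: ACA-Py delivers webhook events as JSON POSTs to `<webhook-url>/topic/<topic>/`, but the port, the handler logic, and the `run` helper below are assumptions for the sketch.

```python
# Minimal sketch of an ACA-Py "controller": a web server that receives
# webhook events from the agent and reacts to them. Illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """ACA-Py POSTs events to <webhook-url>/topic/<topic>/ as JSON."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        topic = self.path.strip("/").split("/")[-1]   # e.g. "connections"
        # Business logic goes here: inspect the event, then drive the
        # agent through its admin REST API as needed.
        print(f"webhook: topic={topic} state={event.get('state')}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence the default per-request logging

def run(port=8022):
    # Start ACA-Py with e.g.: --webhook-url http://localhost:8022
    HTTPServer(("0.0.0.0", port), WebhookHandler).serve_forever()
```

In a real deployment the handler would call back into the agent's admin API (protected by `--admin-api-key`) to, for example, send a presentation request once a connection event reports state "active".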
acapy-peopledata-did
ACA-Py plugin for Peopledata DID DIDComm protocolACA-Py Version CompatibilityThis plugin is compatible with ACA-Py version 0.5.6.InstallationRequirements:Python 3.6 or higherACA-Py 0.5.6Setup Aries Cloud Agent - PythonIf you already have an existing installation of ACA-Py, you can skip these steps and move on toplugin installation. It is also worth noting that this is not the only way to setup an ACA-Py instance. For more setup configurations, see theAries Cloud Agent - Python repository.First, prepare a virtual environment:$python3-mvenvenv $sourceenv/bin/activateInstall ACA-Py 0.5.6 into the virtual environment:$pipinstallaries-cloudagent==0.5.6Plugin InstallationInstall this plugin into the virtual environment:$pipinstallacapy-peopledata-didNote:Depending on your version ofpip, you may need to drop or add#egg=peopledata_didto install the plugin with the above command.Plugin LoadingStart up ACA-Py with the plugin parameter:$aca-pystart\-ithttp0.0.0.08002\-othttp\-e"http://localhost:8002/"\--label"Agent"\--admin0.0.0.08001\--admin-insecure-mode\--auto-accept-requests\--auto-ping-connection\--auto-respond-credential-offer\--auto-respond-credential-request\--auto-store-credential\--auto-respond-presentation-proposal\--auto-respond-presentation-request\--auto-verify-presentation\--genesis-urlhttps://indy.igrant.io/genesis\--wallet-typeindy\--wallet-name"agent_wallet"\--log-levelinfo\--wallet-key"wallet@123"\--plugin"Peopledata_did"LicensingLicensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.You may obtain a copy of the License athttps://www.apache.org/licenses/LICENSE-2.0.
acapy-plugin-pickup
No description available on PyPI.
acapy-revocation-demo
No description available on PyPI.
aca-py-taurien
No description available on PyPI.
acapy_wallet_groups_plugin
No description available on PyPI.
acas-auth
No description available on PyPI.
acasclient
acasclientACAS API ClientFree software: GNU General Public License v3Documentation:https://acasclient.readthedocs.io.FeaturesTODOCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.Installationpip install acasclientHistory0.1.0 (2020-01-29)First release on PyPI.
acasio9
This is a simple calculator.Change Log0.0.1 (10/06/2022)First Release
acat
ACAT:AlloyCatalysisAutomatedToolkitACAT is a Python package for atomistic modelling of metal (alloy) (oxide) catalysts used in heterogeneous catalysis. The package is based on automatic identification of adsorption sites and adsorbate coverages on surface slabs and nanoparticles. Synergizing with ASE, ACAT provides useful tools to build atomistic models and perform global optimization tasks for alloy surfaces and nanoparticles with and without adsorbates. The goal is to automate workflows for the high-throughput screening of alloy catalysts.ACAT has been developed by Shuang Han at the Section of Atomic Scale Materials Modelling, Department of Energy Conversion and Storage, Technical University of Denmark (DTU) in Lyngby, Denmark.To use ACAT, please readACAT documentation(alsonotebook tutorialandexamples). For all symmetry-inequivalent adsorption sites on the surfaces (and nanoparticles) supported in ACAT, please refer to thetable of adsorption sites.Developers:Shuang Han ([email protected]) - current maintainerDependenciespython>=3.6networkx>=2.4aseasap3 (strongly recommended but not required, since asap3 does not support Windows)InstallationInstall via pip:pip3 install acatAlternatively, you can clone the repository:git clone https://gitlab.com/asm-dtu/acat.gitthen go to the installed path and install all dependencies:pip3 install -r requirements.txtFinally, install the main package:python3 setup.py installAcknowledgementsI gratefully acknowledge the support from the BIKE project, which has received funding from the European Union’s Horizon 2020 Research and Innovation programme under the Marie Skłodowska-Curie Action – International Training Network (MSCA-ITN), grant agreement 813748.I also want to thank Dr. Steen Lysgaard for the useful scripts and Dr. Giovanni Barcaro, Dr. Alessandro Fortunelli for the useful discussions.How to cite ACATIf you find ACAT useful in your research, please citethis paper:[1] Han, S., Lysgaard, S., Vegge, T. et al.
Rapid mapping of alloy surface phase diagrams via Bayesian evolutionary multitasking. npj Comput. Mater. 9, 139 (2023).If you use ACAT's modules related to symmetric nanoalloy, please also citethis paper:[2] S. Han, G. Barcaro, A. Fortunelli et al. Unfolding the structural stability of nanoalloys via symmetry-constrained genetic algorithm and neural network potential. npj Comput. Mater. 8, 121 (2022).NotesACAT was originally developed for metal (alloy) surface slabs and nanoparticles. Therefore H, C, N, O, F, S and Cl atoms are treated as adsorbate molecules and metals are treated as catalyst by default. Now ACAT is generalized for any given surface structure throughacat.settings.CustomSurface, which means (mixed) metal oxide surfaces are also allowed. However, note that the H, C, N, O, F, S and Cl atoms at the surface are still always treated as adsorbates.Some functions distinguish between nanoparticles and surface slabs based on periodic boundary condition (PBC). Therefore, before using the code, it is recommended (but not required) to set all directions as non-periodic for nanoparticles and at least one direction periodic for surface slabs, and also add vacuum layers to all non-periodic directions. For periodic surface slabs the code will not work if the number of layers is less than 3 (which should be avoided anyways).Each layer always has the same number of atoms as the surface atoms. For stepped surface slabs one layer will have atoms at different z coordinates. However, note that there is no limitation to the size of the cell in the x and y directions.ACAT is able to identify adsorption sites for even a 1x1x3 cell with only one surface atom.ACAT uses a regularized adsorbate string representation. In each adsorbate string, the first element must be set to the bonded atom (i.e. the closest non-hydrogen atom to the surface).
Hydrogen should always follow the element that it bonds to.For example, water should be written as 'OH2', hydrogen peroxide should be written as 'OHOH', ethanol should be written as 'CH3CH2OH', formyl should be written as 'CHO', hydroxymethylidyne should be written as 'COH', sulfur dioxide can be written either as 'SO2' (S down) or 'O2S' (S up). If the string is not supported by the code, it will return the ase.build.molecule instead, which could result in a weird orientation. If the string is not supported by this code nor ASE, you can make your own molecules in theadsorbate_moleculefunction inacat.settings.There is a bug that causesget_neighbor_site_list()to not return the correct neighbor site indices with ASE version <= 3.18. This is most likely due to shuffling of indices in some ASE functions, which is solved after the release of ASE 3.19.0.If the adsorption site identification bySlabAdsorptionSitesis unsatisfactory, it is most likely due to a lattice mismatch of the input surface with the reference surface used for code parameterization. Most of the time this can be resolved by settingoptimize_surrogate_cell=True. If the result is still not good enough, you can always use theacat.settings.CustomSurfaceclass to represent your surface.
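The adsorbate string convention above — the first element is the surface-bonded atom, and each hydrogen count attaches to the element written immediately before it — can be illustrated with a tiny parser. This is a sketch of the naming rule only; the function name and output format are mine, not part of ACAT's API.

```python
import re

def parse_adsorbate(symbol: str):
    """Split an ACAT-style adsorbate string such as 'CH3CH2OH' into
    (heavy-element, n_hydrogens) groups: [('C', 3), ('C', 2), ('O', 1)].
    Illustrative sketch of the convention, not ACAT's own parser;
    pure-hydrogen species ('H', 'H2') are not handled here."""
    groups = []  # one entry per non-hydrogen atom, in string order
    for elem, count in re.findall(r"([A-Z][a-z]?)(\d*)", symbol):
        n = int(count) if count else 1
        if elem == "H":
            # hydrogens always belong to the element written before them
            name, n_h = groups[-1]
            groups[-1] = (name, n_h + n)
        else:
            groups.extend([(elem, 0)] * n)  # e.g. the two O atoms in 'SO2'
    return groups

print(parse_adsorbate("OH2"))       # [('O', 2)] -> water, O is the bonded atom
print(parse_adsorbate("CH3CH2OH"))  # [('C', 3), ('C', 2), ('O', 1)] -> ethanol
print(parse_adsorbate("SO2"))       # [('S', 0), ('O', 0), ('O', 0)] -> S down
```

The first tuple in each result is the bonded atom, which is exactly why 'SO2' (S down) and 'O2S' (S up) denote different adsorption orientations of the same molecule.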
acAutoMechine
A fast Python implementation of Ac Auto MechineInstallationTo install:$pipinstallacAutoMechineQuickstartfromacAutoMechineimportAc_mechine### usage one:actree=Ac_mechine()actree.add_keys('he')actree.add_keys('her')actree.add_keys('here')actree.build_actree()### all matchprint(actree.match("he here her"))### long matchprint(actree.match_long("he here her"))### all match with match pathprint(actree.match("he here her",True))### long match with match pathprint(actree.match_long("he here her",True))
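Under the hood an Aho-Corasick machine is a keyword trie plus BFS-built failure links, which lets a single pass over the text report every keyword occurrence. A compact pure-Python sketch of that algorithm follows; the class and method names mirror the README above, but the return format — (start_index, keyword) pairs — is an illustrative assumption, not acAutoMechine's actual output.

```python
from collections import deque

class AhoCorasick:
    """Minimal Aho-Corasick automaton sketch: add keys, build failure
    links, then find all keyword occurrences in one pass over the text."""

    def __init__(self):
        self.goto = [{}]   # per-node transition table; node 0 is the root
        self.fail = [0]    # failure links
        self.out = [[]]    # keywords ending at each node

    def add_keys(self, word):
        node = 0
        for ch in word:
            if ch not in self.goto[node]:
                self.goto.append({})
                self.fail.append(0)
                self.out.append([])
                self.goto[node][ch] = len(self.goto) - 1
            node = self.goto[node][ch]
        self.out[node].append(word)

    def build_actree(self):
        # BFS from the root: a node's failure link is the longest proper
        # suffix of its path that is also a path in the trie.
        q = deque(self.goto[0].values())
        while q:
            node = q.popleft()
            for ch, nxt in self.goto[node].items():
                q.append(nxt)
                f = self.fail[node]
                while f and ch not in self.goto[f]:
                    f = self.fail[f]
                self.fail[nxt] = self.goto[f].get(ch, 0)
                self.out[nxt] += self.out[self.fail[nxt]]  # inherit matches

    def match(self, text):
        hits, node = [], 0
        for i, ch in enumerate(text):
            while node and ch not in self.goto[node]:
                node = self.fail[node]       # fall back on a mismatch
            node = self.goto[node].get(ch, 0)
            hits += [(i - len(w) + 1, w) for w in self.out[node]]
        return hits

ac = AhoCorasick()
for w in ("he", "her", "here"):
    ac.add_keys(w)
ac.build_actree()
print(ac.match("he here her"))
# [(0, 'he'), (3, 'he'), (3, 'her'), (3, 'here'), (8, 'he'), (8, 'her')]
```

Because failure links are followed instead of restarting the scan, matching runs in time linear in the text length plus the number of matches, which is what makes the automaton "fast".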
acb
Asynchronous Component Base

Asynchronous Component Base, or 'acb', is a collection of modular components (actions / adapters) that provide the building blocks for rapid, asynchronous application development. This codebase should be considered alpha right now as it is under heavy development. A majority of the components are currently being back-ported from other apps and may not yet work as intended.

More documentation is on the way!

Installation

```shell
pdm add acb
```

Actions

Adapters

Configuration

Dependency Injection

Debug

Acknowledgements

License

BSD-3-Clause
acb-mse
ACB-MSE

Automatic Class Balanced MSE loss function for PyTorch (ACB-MSE) to combat class-imbalanced datasets.

Table of Contents

- Introduction
- Installation
- Usage
- Methodology
- Benefits
- License
- Contributions
- Contact

Introduction

This repository contains the PyTorch implementation of the ACB-MSE loss function, which stands for Automatic Class Balanced Mean Squared Error, originally developed for the DEEPCLEAN3D denoiser to combat class imbalance and stabilise loss-gradient fluctuation caused by dramatically varying class frequencies.

Installation

Available on PyPI:

```shell
pip install acb_mse
```

Requirements

- Python 3.x
- PyTorch (tested with version 2.0.1)

Usage

Class Parameters

- zero_weighting (float, optional): Weighting coefficient for the MSE loss of zero-valued pixels. Default is 1.
- nonzero_weighting (float, optional): Weighting coefficient for the MSE loss of non-zero pixels. Default is 1.

Inputs

- Input (torch.Tensor): $(*)$, where $(*)$ means any number of dimensions.
- Target (torch.Tensor): $(*)$, same shape as the input.

Returns

- Output (float): Calculated loss value.

Example Code

```python
import torch
from acb_mse import ACBLoss

# Select a weighting for each class if not wanting to use the default 1:1 weighting
zero_weighting = 1.0
nonzero_weighting = 1.2

# Create an instance of the ACB-MSE loss function with the specified weighting coefficients
loss_function = ACBLoss(zero_weighting, nonzero_weighting)

# Dummy target image and reconstructed image tensors (assuming B=10, C=3, H=256, W=256)
target_image = torch.rand(10, 3, 256, 256)
reconstructed_image = torch.rand(10, 3, 256, 256)

# Calculate the ACB-MSE loss
loss = loss_function(reconstructed_image, target_image)
print("ACB-MSE Loss:", loss)
```

Methodology and Equations

Two masks are created from the target (label) image:

- zero_mask: A boolean mask where elements are True for zero-valued pixels in the target image.
- nonzero_mask: A boolean mask where elements are True for non-zero-valued pixels in the target image.

The pixel values from both the target image and the reconstructed image corresponding to the zero and non-zero masks are
extracted. The mean squared error loss is then calculated between the target and the input for each mask. The two loss values are multiplied by the corresponding weighting coefficients (zero_weighting and nonzero_weighting), allowing the user to adjust the balance away from the default 1:1. The weighted, balanced MSE loss is returned as the final value.

The function relies on knowing the indices of all hits and non-hits in the true label image, which are then compared to the values at the corresponding indices in the recovered image. Therefore, ACB-MSE is unsuitable for unsupervised learning tasks. The ACB-MSE loss function is given by:

$$ \text{Loss} = A\left(\frac{1}{N_h}\sum_{i=1}^{N_h}(y_i - \hat{y}_i)^2\right) + B\left(\frac{1}{N_n}\sum_{i=1}^{N_n}(y_i - \hat{y}_i)^2\right) $$

where $y_i$ is the true value of the $i$-th pixel in the class, $\hat{y}_i$ is the predicted value of the $i$-th pixel in the class, and $n$ is the total number of pixels in the class (in our case labelled $N_h$ and $N_n$ for the 'hits' and 'no hits' classes, but this can be extended to $n$ classes). This approach takes the mean squared error of each class separately; when the separate class errors are summed back together they are automatically scaled by the inverse of the class frequency, normalising the class balance to 1:1. The additional coefficients $A$ and $B$ allow the user to manually fine-tune the balance.

Benefits

The ACB-MSE loss function was designed for data taken from particle detectors, which often have a majority of unlit 'pixels' and a very sparse pattern of lit pixels. In this scenario ACB-MSE provides two main benefits: it addresses the class imbalance between lit and unlit pixels while also stabilising the loss gradient during training.
Additional parameters, $A$ and $B$, are provided to allow the user to set a custom balance between the classes.

Variable Class Size - Training Stability

Fluctuations in the number of hit pixels across images during training can disrupt loss stability. ACB-MSE remedies this by dynamically adjusting the loss-function weights to reflect the class frequencies in the target.

The repository's plot demonstrates how each loss function (ACB-MSE, MSE and MAE) behaves as the number of hits in the true signal changes. Two dummy images were created: the first contains a simulated signal, and the second is created with 50% of that signal correctly identified, simulating a 50% signal recovery by the network. To generate the plot, the first image was filled in two-pixel increments with the second image following at a constant 50% recovery, and at each iteration the loss was calculated for the pair of images. The MSE and MAE losses vary as the size of the signal increases with the recovery percentage fixed at 50%, whereas the ACB-MSE loss stays constant regardless of the frequency of the signal class.

Class Imbalance - Local Minima

Class imbalance is an issue that can arise when the interesting features are contained in the minority class. In the case of the DEEPCLEAN3D data, the input images contained 11,264 pixels in total, with only around 200 of them being hits. For the network, guessing that all pixels are non-hits (zero-valued) yields a very respectable reconstruction loss and is a simple transfer function to learn; this local minimum proved hard for the network to escape. Class balancing based on class frequency is a simple solution that shifts the loss landscape, making it less favourable for the network to guess all pixels as non-hits.
This enabled the DEEPCLEAN3D network to escape the local minimum and begin to learn a useful transfer function for the input features.

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Contributions

Contributions to this codebase are welcome! If you encounter any issues or have suggestions for improvements, please open an issue or a pull request on the GitHub repository.

Contact

For any inquiries, feel free to reach out to me at [email protected].
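The class-frequency normalisation described in the methodology section can be sanity-checked with a few lines of plain Python. This is a sketch of the formula only (the package itself operates on PyTorch tensors); it reproduces the constant-loss behaviour at a fixed 50% recovery rate:

```python
def acb_mse(pred, target, A=1.0, B=1.0):
    """Per-class MSE with weights A (zero class) and B (non-zero class),
    following the ACB-MSE formula; plain-Python sketch, not the package code."""
    zero_errs = [(p - t) ** 2 for p, t in zip(pred, target) if t == 0]
    hit_errs = [(p - t) ** 2 for p, t in zip(pred, target) if t != 0]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return A * mean(zero_errs) + B * mean(hit_errs)

# 50% recovery of a 4-hit signal and of a 40-hit signal in a 100-pixel image:
small = acb_mse([1.0] * 2 + [0.0] * 98, [1.0] * 4 + [0.0] * 96)
large = acb_mse([1.0] * 20 + [0.0] * 80, [1.0] * 40 + [0.0] * 60)
# Both give 0.5: the loss is independent of the hit-class frequency.
```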
acb-py
acb.py

For all your ACB extracting needs. Based on VGMToolbox. HCA decryption is based on the 2ch HCA decoder. Thanks also to Headcrabbed, who documented the new extra key here.

Usage:

```shell
pip install acb-py
python3 -m acb somefile.acb output
# equivalent
acbextract somefile.acb output
```

You can also pass --disarm-with=key1,key2 to have the library decrypt (but not decode) files for you. The key format --disarm-with=k1,k2 is equivalent to hca_decoder -a k1 -b k2, but you can also combine them into a 64-bit hex integer. This also supports AWB embedded keys (see here). If you use disarm heavily, you should also install the _acb_speedup C extension in the fast_sub directory. It will substantially speed up the decryption process.
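The relationship between the two-part key format and the single 64-bit form can be sketched as follows. This assumes the common hca_decoder convention that -a carries the low 32 bits and -b the high 32 bits; verify against your decoder before relying on it:

```python
def combine_hca_keys(key_a, key_b):
    """Pack hca_decoder-style parts (-a low 32 bits, -b high 32 bits)
    into one 64-bit integer of the kind accepted by --disarm-with.
    Assumes the -a/-b bit layout described above."""
    return ((key_b & 0xFFFFFFFF) << 32) | (key_a & 0xFFFFFFFF)

def split_hca_key(key64):
    """Inverse operation: 64-bit key back into (key_a, key_b) halves."""
    return key64 & 0xFFFFFFFF, (key64 >> 32) & 0xFFFFFFFF
```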
acbrlib-python
An abstraction layer for accessing ACBrLib from Python.

ACBrLib is a set of libraries aimed at the retail-automation market, offering a rich set of abstractions that ease the development of applications such as points of sale (POS) and related software. This library provides a layer that makes it trivial to use ACBrLib in your own applications written in the Python language.

Note: This library is in its early stages of development, so do not expect to find, for now, all the richness that the ACBr components offer. But we are fully open to suggestions and we are accepting pull requests.

Installation

Install, preferably in a virtual environment, using pip:

```shell
pip install acbrlib-python
```

ACBrLibCEP

Gives access to CEP (Brazilian postal code) lookups using dozens of available lookup services. Some of these services may be free, or free for development. See this link to get an idea of the services that can be used.

To do a lookup based on a CEP:

```python
from acbrlib_python import ACBrLibCEP

with ACBrLibCEP.usando('/caminho/para/libacbrcep64.so') as cep:
    enderecos = cep.buscar_por_cep('18270170')
    for endereco in enderecos:
        print(endereco)
```

The snippet above produces a list of Endereco objects as the result of the search, ready to be used. The query above brings back a single result (using the ViaCEP service):

```python
Endereco(
    tipo_logradouro='',
    logradouro='Rua Coronel Aureliano de Camargo',
    complemento='',
    bairro='Centro',
    municipio='Tatuí',
    uf='SP',
    cep='18270-170',
    ibge_municipio='3554003',
    ibge_uf='35')
```

For more usage examples, see the exemplos folder in this repository.

About Naming and Code Style

A very relevant issue is how this abstraction refers to the names of the methods available in the ACBrLib library, which uses a naming convention for variables and for function argument or parameter names known as Hungarian Notation.
In Python, however, the snake case convention is used, as described in PEP 8. So, to keep the Python style, variable and function-argument names drop the prefix that indicates the data type and convert the rest to snake case, and method and function names do likewise, for example:

- eArqConfig becomes arq_config;
- ConfigLerValor becomes config_ler_valor;
- eArquivoXmlEvento becomes arquivo_xml_evento;
- etc.

Library methods that are prefixed with the library name have the prefix dropped and the rest of the method name converted to snake case, for example:

- (ACBrLibNFe) NFE_CarregarINI becomes carregar_ini;
- (ACBrLibNFe) NFE_ValidarRegrasdeNegocios becomes validar_regras_de_negocios (note the correction of the connector "de", which is lowercase in the original);
- (ACBrLibCEP) CEP_BuscarPorLogradouro becomes buscar_por_logradouro;
- etc.

We hope this explanation makes sense; if not, send us your suggestions.

Development

You are welcome to help develop this library by sending your contributions through pull requests. Fork this repository and run the tests before starting to implement anything. Dependency management is done via Poetry, and we recommend using pyenv:

```shell
$ git clone https://github.com/base4sistemas/acbrlib-python.git
$ cd acbrlib-python
$ poetry install
$ poetry run pytest
```
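The renaming rules from the naming-and-style section above can be sketched mechanically. This is an illustrative helper only, not something the library ships, and the set of Hungarian type prefixes it strips is an assumption:

```python
import re

# Assumed set of Hungarian type prefixes; purely illustrative.
HUNGARIAN_PREFIXES = ('dbl', 'e', 's', 'n', 'b')
LIB_PREFIX = re.compile(r'^[A-Z]+_')  # e.g. 'NFE_' or 'CEP_'

def to_snake_case(name):
    """'eArqConfig' -> 'arq_config'; 'CEP_BuscarPorLogradouro' -> 'buscar_por_logradouro'."""
    name = LIB_PREFIX.sub('', name)          # drop the library-name prefix
    for prefix in HUNGARIAN_PREFIXES:        # drop the Hungarian type prefix
        if name.startswith(prefix) and name[len(prefix):][:1].isupper():
            name = name[len(prefix):]
            break
    # Split on CamelCase boundaries, keeping acronym runs like 'INI' together.
    parts = re.findall(r'[A-Z]+(?![a-z])|[A-Z][a-z]*', name)
    return '_'.join(p.lower() for p in parts)
```

Note that it cannot recover the lowercase connector in NFE_ValidarRegrasdeNegocios; that case needs the manual correction mentioned above.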