/concise-0.6.9.tar.gz/concise-0.6.9/docs/templates/getting-started/getting_started.md
# **TODO**

- how to nicely show and compile a Jupyter notebook as markdown?
- [ ] This notebook (add to issues):
    - get the DeepBind data - maybe use RNACompete?
    - show one example
    - write down where to get this notebook (concise/nbs/getting_started.ipynb)
- [ ] (add to issues) Notebook: hyper-parameter optimization
    - quick introduction to hyperopt (write as a blog post?)
    - how to use it with keras
    - how to deploy it
    - sample hyper-param optimizer
- [ ] (add to issues) Notebook: RBP model
    - show the external positional effect

--------------------------------------------

# Getting started with concise

## Become familiar with Keras

In order to use concise successfully, please make sure you are familiar with Keras. I strongly advise everyone to read the excellent [Keras documentation](http://keras.io) first. Being a Keras extension, concise closely follows the Keras API.

## DeepBind model in concise

The DeepBind model is extremely straightforward. Its main architecture:

- conv
- maxpool
- dense
- dense

can be expressed in concise in the following way. Note that I prefer to use the functional API of Keras.

```python
import concise.layers as cl
import keras.layers as kl
from concise.preprocessing import encodeDNA
from keras.models import Model, load_model

# get the data
seq_list = read_fasta  # TODO
seq_list[:5]
y = ...as_matrix()  # TODO

# encode sequences as one-hot encoded arrays
x_seq = encodeDNA(seq_list)

# specify the model
in_dna = cl.InputDNA(seq_length=100, name="seq")
x = cl.ConvDNA(filters=15, kernel_width=12, activation="relu")(in_dna)
x = kl.MaxPool1D(pool_size=4)(x)  # TODO - check
x = kl.Flatten()(x)
x = kl.Dense(100, activation="relu")(x)
out = kl.Dense(1, activation="sigmoid")(x)
m = Model(in_dna, out)
m.compile("adam", loss="binary_crossentropy", metrics=["acc"])

# train the model
m.fit(x_seq, y, epochs=3)

# save the model
m.save("/tmp/model.h5")

# load the model
m2 = load_model("/tmp/model.h5")

# visualize the filters
m2.layers[1].plot_weights()
m2.layers[1].plot_weights("pwm_info")
```

## Initializing filters on known motifs

In the scenario where data is scarce, it is often useful to initialize the filters to some known position weight matrices (PWMs). That way, the model already starts with a parameter configuration much closer to the 'right' solution. Concise provides access to (TODO how many) transcription factor PWMs from ENCODE and RNA-binding protein PWMs from ATtRACT (1000 - TODO - show numbers).

### List all the motifs

```python
from concise.data import attract

dfa = attract.get_metadata()
dfa
```

Let's choose the PWMs of the following transcription factors:

```python
from concise.data import attract

dfa = attract.get_metadata()
dfa[dfa.name.str.contains("asdas|asdasdas")]  # placeholder names - TODO
```

```python
from concise.data import attract
from concise.utils.pwm import PWM

pwm_list = attract.get_pwm_list([123, 3213, 312])  # TODO - check ids
for pwm in pwm_list:
    pwm.plotPWM()
```

Initializing on the known PWMs and training the model:

...

Show again the PWMs.

Note that with this initialization, the filters can be interpreted more easily as motifs.

--------------------------------------------

- TODO - show how to do this in concise easily (see the sketch below)
    - write the data function
    - write the model function
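A minimal sketch of the idea: seed the convolutional filters with the PWMs via a Keras-style initializer, reusing `in_dna` from the model above. The initializer name below (`PWMKernelInitializer`) is an assumption about the concise API and may need adjusting:

```python
import concise.layers as cl
from concise.data import attract

# Assumed API: a PWM-based Keras kernel initializer shipped with concise.
# The name is an assumption and may differ in your concise version.
from concise.initializers import PWMKernelInitializer

pwm_list = attract.get_pwm_list([123, 3213, 312])  # hypothetical motif ids

# seed the conv filters with the PWMs instead of random weights
x = cl.ConvDNA(filters=len(pwm_list),
               kernel_width=12,
               activation="relu",
               kernel_initializer=PWMKernelInitializer(pwm_list))(in_dna)
```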
/pycap-2.4.0-py3-none-any.whl/redcap/methods/users.py
from typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional, Union, cast

from redcap.methods.base import Base, Json

if TYPE_CHECKING:
    import pandas as pd


class Users(Base):
    """Responsible for all API methods under 'Users & User Privileges' in the API Playground"""

    def export_users(
        self,
        format_type: Literal["json", "csv", "xml", "df"] = "json",
        df_kwargs: Optional[Dict[str, Any]] = None,
    ):
        """
        Export the users of the Project

        Args:
            format_type: Response return format
            df_kwargs:
                Passed to `pandas.read_csv` to control construction of
                returned DataFrame. By default, nothing

        Returns:
            Union[List[Dict[str, Any]], str, pandas.DataFrame]: List of users with metadata

        Examples:
            >>> proj.export_users()
            [{'username': ..., 'email': ..., 'expiration': '',
            'data_access_group': '', 'data_access_group_id': '',
            'design': 1, 'user_rights': 1, 'data_access_groups': 1,
            'reports': 1, ...}]
        """
        payload = self._initialize_payload(content="user", format_type=format_type)
        return_type = self._lookup_return_type(format_type, request_type="export")
        response = cast(Union[Json, str], self._call_api(payload, return_type))

        return self._return_data(
            response=response,
            content="user",
            format_type=format_type,
            df_kwargs=df_kwargs,
        )

    def import_users(
        self,
        to_import: Union[str, List[Dict[str, Any]], "pd.DataFrame"],
        return_format_type: Literal["json", "csv", "xml"] = "json",
        import_format: Literal["json", "csv", "xml", "df"] = "json",
    ):
        """
        Import users/user rights into the REDCap Project

        Args:
            to_import: array of dicts, csv/xml string, `pandas.DataFrame`

                Note:
                    If you pass a csv or xml string, you should use the
                    `import_format` parameter appropriately.
            return_format_type:
                Response format. By default, response will be json-decoded.
            import_format:
                Format of incoming data. By default, to_import will be json-encoded

        Returns:
            Union[int, str]: Number of users added or updated

        Examples:
            Add test user. Only username is required

            >>> test_user = [{"username": "[email protected]"}]
            >>> proj.import_users(test_user)
            1

            All currently valid options for user rights

            >>> test_user = [
            ...     {"username": "[email protected]", "email": "[email protected]",
            ...     "firstname": "REDCap Trial", "lastname": "User", "expiration": "",
            ...     "data_access_group": "", "data_access_group_id": "", "design": 0,
            ...     "user_rights": 0, "data_export": 2, "reports": 1, "stats_and_charts": 1,
            ...     "manage_survey_participants": 1, "calendar": 1, "data_access_groups": 0,
            ...     "data_import_tool": 0, "data_comparison_tool": 0, "logging": 0,
            ...     "file_repository": 1, "data_quality_create": 0, "data_quality_execute": 0,
            ...     "api_export": 0, "api_import": 0, "mobile_app": 0,
            ...     "mobile_app_download_data": 0, "record_create": 1, "record_rename": 0,
            ...     "record_delete": 0, "lock_records_all_forms": 0, "lock_records": 0,
            ...     "lock_records_customization": 0, "forms": {"form_1": 3}}
            ... ]
            >>> proj.import_users(test_user)
            1
        """
        payload = self._initialize_import_payload(
            to_import=to_import,
            import_format=import_format,
            return_format_type=return_format_type,
            content="user",
        )
        return_type = self._lookup_return_type(
            format_type=return_format_type, request_type="import"
        )
        response = cast(Union[Json, str], self._call_api(payload, return_type))

        return response

    def delete_users(
        self,
        users: List[str],
        return_format_type: Literal["json", "csv", "xml"] = "json",
    ):
        """
        Delete users from the project.

        Args:
            users: List of usernames to delete from the project
            return_format_type:
                Response format. By default, response will be json-decoded.

        Returns:
            Union[int, str]: Number of users deleted

        Examples:
            >>> new_user = [{"username": "[email protected]"}]
            >>> proj.import_users(new_user)
            1
            >>> proj.delete_users(["[email protected]"], return_format_type="xml")
            '1'
        """
        payload = self._initialize_payload(
            content="user", return_format_type=return_format_type
        )
        payload["action"] = "delete"
        # Turn the list of users into payload entries of the form users[idx]
        users_dict = {f"users[{idx}]": user for idx, user in enumerate(users)}
        payload.update(users_dict)
        return_type = self._lookup_return_type(
            format_type=return_format_type, request_type="delete"
        )
        response = cast(Union[Json, str], self._call_api(payload, return_type))

        return response
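
# Usage sketch (illustrative): calling these methods through PyCap's public
# ``Project`` entry point. The URL, token and username are hypothetical
# placeholders, and the calls need a reachable REDCap instance, so the
# example is left commented out.
#
# from redcap import Project
#
# proj = Project("https://redcap.example.org/api/", "MY_API_TOKEN")
# proj.import_users([{"username": "user@example.org"}])  # number added/updated
# print(proj.export_users())                             # list of user dicts
# proj.delete_users(["user@example.org"])                # number deleted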
/arginfer-0.0.1.tar.gz/arginfer-0.0.1/docs/source/tutorial.rst
.. _sec_tutorial:

========
Tutorial
========

*********************
Sampling ARGs
*********************

As a simple example, we will first simulate sample data with
`msprime <https://tskit.dev/msprime/docs/stable/>`_. We will then run
`arginfer` on the simulated dataset.

The following code simulates a tree sequence and the sequences for a sample
size of `10` and a sequence length of `1e5`.

.. code-block:: python

    import msprime
    import os
    ts_full = msprime.simulate(sample_size=10, Ne=5000, length=1e5,
                               mutation_rate=1e-8, recombination_rate=0.5e-8,
                               record_full_arg=True, random_seed=2)
    os.makedirs(os.getcwd() + "/out")
    ts_full.dump(os.getcwd() + "/out/" + "ts_full.args")

The output of this code is a ``tree sequence`` stored in the "out/" directory
under the name `ts_full.args`.

Next, the following command can be used to run 200 MCMC iterations with a
burn-in of 5, retaining every 10th sample (thinning interval = 10). Here
``sample_size = n = 10`` is the number of sequences, each of length
``seq_length = L = 1e5``, evolving in a population of effective size
``Ne = 5000``, with mutation rate ``1e-8`` mutations/generation/site and
recombination rate ``0.5e-8`` recombinations/generation/site.

.. code-block:: python

    import arginfer
    arginfer.infer_sim(
        ts_full="out/ts_full.args",       # path to simulated ts
        sample_size=10,                   # sample size
        iteration=200,                    # number of mcmc iterations
        thin=10,                          # thinning interval, retaining every kth sample
        burn=5,                           # burn-in period to discard
        Ne=5000,                          # effective population size
        seq_length=1e5,                   # sequence length in bases
        mutation_rate=1e-8,               # mutation rate per site per generation
        recombination_rate=0.5e-8,        # recombination rate per site per generation
        outpath=os.getcwd() + "/output",  # output path
        plot=True)                        # plot traces

or equivalently in the terminal:

.. code-block:: RST

    arginfer infer --tsfull "out/ts_full.args" \
        -I 200 --thin 10 -b 5 \
        -n 10 -L 1e5 --Ne 5000 \
        -r 0.5e-8 -mu 1e-8 \
        -O output \
        --plot

The output of the above command is as follows:

* ``summary.h5``: A summary of some ARG properties recorded in a ``pandas dataframe`` with columns:

    .. code-block:: python

        pd.DataFrame(columns=('likelihood', 'prior', "posterior",
                              'ancestral recomb', 'non ancestral recomb',
                              'branch length'))

* ``.arg`` file: The sampled ARGs, which are pickled ``ATS`` objects.

    * See here for more information on how to manipulate these files (TODO).

* ``arginfer*.pdf``: if ``plot=True``, this `pdf` file will be generated, containing trace plots for the log(posterior), ARG total branch length, number of ancestral recombinations, and number of non-ancestral recombinations.

*********************************
Working with ``arginfer`` outputs
*********************************
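
As a minimal sketch (assuming ``summary.h5`` was written with pandas' HDF5
writer, so that ``pandas.read_hdf`` can load it back), the recorded ARG
properties can be inspected directly:

.. code-block:: python

    import pandas as pd

    # load the per-iteration summary produced by arginfer
    summary = pd.read_hdf("output/summary.h5")
    print(summary[["posterior", "branch length"]].describe())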
/python_microscopy-20.12.8-cp36-cp36m-win_amd64.whl/PYME/Acquire/Hardware/Simulator/rend_im.py
##################
# rend_im.py
#
# Copyright David Baddeley, 2009
# [email protected]
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
##################

from PYME.Analysis.PSFGen import *
from scipy import *
from numpy.fft import ifftshift, fftn, ifftn
from . import fluor
from PYME.Analysis import MetaData
from PYME.localization import cInterp

try:
    import cPickle as pickle
except ImportError:
    import pickle

from scipy import ndimage
import numpy as np
from PYME.IO.FileUtils.nameUtils import getFullExistingFilename
import multiprocessing
import threading

from PYME.Deconv.wiener import resizePSF

#import threading
#tLock = threading.Lock()


def renderIm(X, Y, z, points, roiSize, A):
    #X = mgrid[xImSlice]
    #Y = mgrid[yImSlice]
    im = zeros((len(X), len(Y)), 'f')
    P = arange(0, 1.01, .1)

    for (x0, y0, z0) in points:
        ix = abs(X - x0).argmin()
        iy = abs(Y - y0).argmin()

        imp = genWidefieldPSF(X[(ix - roiSize):(ix + roiSize + 1)], Y[(iy - roiSize):(iy + roiSize + 1)], z, P, A*1e3, x0, y0, z0, depthInSample=0)
        im[(ix - roiSize):(ix + roiSize + 1), (iy - roiSize):(iy + roiSize + 1)] += imp[:, :, 0]

    return im


IntXVals = None
IntYVals = None
IntZVals = None

#interpModel = None
interpModel_by_chan = [None, None, None, None]


def interpModel(chan=0):
    """Return the interpolation model for the given channel, falling back to
    channel 0 if the requested channel has not been generated."""
    im = interpModel_by_chan[chan]
    if im is None and not chan == 0:
        return interpModel_by_chan[0]
    else:
        return im


dx = None
dy = None
dz = None

mdh = MetaData.NestedClassMDHandler(MetaData.TIRFDefault)


def set_pixelsize_nm(pixelsize):
    mdh['voxelsize.x'] = 1e-3*pixelsize
    mdh['voxelsize.y'] = 1e-3*pixelsize


def genTheoreticalModel(md, zernikes={}, **kwargs):
    from PYME.Analysis.PSFGen import fourierHNA
    global IntXVals, IntYVals, IntZVals, dx, dy, dz

    if True:  #not dx == md.voxelsize.x*1e3 or not dy == md.voxelsize.y*1e3 or not dz == md.voxelsize.z*1e3:
        vs = md.voxelsize_nm
        IntXVals = vs.x*mgrid[-150:150]
        IntYVals = vs.y*mgrid[-150:150]
        IntZVals = vs.z*mgrid[-30:30]

        dx, dy, dz = vs

        P = arange(0, 1.01, .01)

        #interpModel = genWidefieldPSF(IntXVals, IntYVals, IntZVals, P, 1e3, 0, 0, 0, 2*pi/525, 1.47, 10e3).astype('f')
        im = fourierHNA.GenZernikeDPSF(IntZVals, zernikes, X=IntXVals, Y=IntYVals, dx=vs.x, **kwargs)

        for i in range(1, len(interpModel_by_chan)):
            interpModel_by_chan[i] = None

        interpModel_by_chan[0] = np.maximum(im/im[:, :, int(len(IntZVals)/2)].sum(), 0)  #normalise to 1 and clip


def genTheoreticalModel4Pi(md, zernikes=[{}, {}], phases=[0, np.pi/2, np.pi, 3*np.pi/2], **kwargs):
    from PYME.Analysis.PSFGen import fourierHNA
    global IntXVals, IntYVals, IntZVals, dx, dy, dz

    if True:  #not dx == md.voxelsize.x*1e3 or not dy == md.voxelsize.y*1e3 or not dz == md.voxelsize.z*1e3:
        vs = md.voxelsize_nm
        IntXVals = vs.x*mgrid[-150:150]
        IntYVals = vs.y*mgrid[-150:150]
        IntZVals = 20*mgrid[-60:60]

        dx, dy = vs.x, vs.y
        dz = 20.  #md.voxelsize.z*1e3

        for i, phase in enumerate(phases):
            print('Simulating 4Pi PSF for channel %d' % i)
            #interpModel = genWidefieldPSF(IntXVals, IntYVals, IntZVals, P, 1e3, 0, 0, 0, 2*pi/525, 1.47, 10e3).as
            im = fourierHNA.Gen4PiPSF(IntZVals, phi=phase, zernikeCoeffs=zernikes, X=IntXVals, Y=IntYVals, dx=vs.x, **kwargs)

            zm = int(len(IntZVals)/2)
            # due to interference we can have slices with a really low sum
            norm = im[:, :, (zm-10):(zm+10)].sum(1).sum(0).max()
            interpModel_by_chan[i] = np.maximum(im/norm, 0)  #normalise to 1 and clip


def get_psf():
    from PYME.IO.image import ImageStack
    from PYME.IO.MetaDataHandler import NestedClassMDHandler
    mdh = NestedClassMDHandler()
    mdh['ImageType'] = 'PSF'
    mdh['voxelsize.x'] = dx/1e3
    mdh['voxelsize.y'] = dy/1e3
    mdh['voxelsize.z'] = dz/1e3

    im = ImageStack(data=[c for c in interpModel_by_chan if not c is None], mdh=mdh, titleStub='Simulated PSF')
    return im


def setModel(modName, md):
    global IntXVals, IntYVals, IntZVals, dx, dy, dz

    from PYME.IO import load_psf

    mod, vs_nm = load_psf.load_psf(modName)
    mod = resizePSF(mod, interpModel().shape)

    IntXVals = vs_nm.x*mgrid[-(mod.shape[0]/2.):(mod.shape[0]/2.)]
    IntYVals = vs_nm.y*mgrid[-(mod.shape[1]/2.):(mod.shape[1]/2.)]
    IntZVals = vs_nm.z*mgrid[-(mod.shape[2]/2.):(mod.shape[2]/2.)]

    dx, dy, dz = vs_nm

    #interpModel = np.maximum(mod/mod.max(), 0)  #normalise to 1
    # integer index needed under Python 3
    interpModel_by_chan[0] = np.maximum(mod/mod[:, :, len(IntZVals)//2].sum(), 0)  #normalise to 1 and clip


def interp(X, Y, Z):
    """Trilinear interpolation of the PSF model at the given x, y, z offsets."""
    X = atleast_1d(X)
    Y = atleast_1d(Y)
    Z = atleast_1d(Z)

    ox = X[0]
    oy = Y[0]
    oz = Z[0]

    # fractional offsets within a voxel
    rx = (ox % dx)/dx
    ry = (oy % dy)/dy
    rz = (oz % dz)/dz

    # integer voxel coordinates of the base corner
    fx = int(len(IntXVals)/2) + int(ox/dx)
    fy = int(len(IntYVals)/2) + int(oy/dy)
    fz = int(len(IntZVals)/2) + int(oz/dz)

    xl = len(X)
    yl = len(Y)
    zl = len(Z)

    im = interpModel()

    m000 = im[fx:(fx+xl), fy:(fy+yl), fz:(fz+zl)]
    m100 = im[(fx+1):(fx+xl+1), fy:(fy+yl), fz:(fz+zl)]
    m010 = im[fx:(fx+xl), (fy+1):(fy+yl+1), fz:(fz+zl)]
    m110 = im[(fx+1):(fx+xl+1), (fy+1):(fy+yl+1), fz:(fz+zl)]

    m001 = im[fx:(fx+xl), fy:(fy+yl), (fz+1):(fz+zl+1)]
    m101 = im[(fx+1):(fx+xl+1), fy:(fy+yl), (fz+1):(fz+zl+1)]
    m011 = im[fx:(fx+xl), (fy+1):(fy+yl+1), (fz+1):(fz+zl+1)]
    m111 = im[(fx+1):(fx+xl+1), (fy+1):(fy+yl+1), (fz+1):(fz+zl+1)]

    # weighted sum of the 8 surrounding voxels
    m = ((1-rx)*(1-ry)*(1-rz))*m000 + ((rx)*(1-ry)*(1-rz))*m100 + ((1-rx)*(ry)*(1-rz))*m010 + ((rx)*(ry)*(1-rz))*m110 + \
        ((1-rx)*(1-ry)*(rz))*m001 + ((rx)*(1-ry)*(rz))*m101 + ((1-rx)*(ry)*(rz))*m011 + ((rx)*(ry)*(rz))*m111

    return m


def interp2(X, Y, Z):
    """Trilinear interpolation with explicit corner weights (same result as interp)."""
    X = atleast_1d(X)
    Y = atleast_1d(Y)
    Z = atleast_1d(Z)

    ox = X[0]
    oy = Y[0]
    oz = Z[0]

    rx = (ox % dx)/dx
    ry = (oy % dy)/dy
    rz = (oz % dz)/dz

    fx = int(len(IntXVals)/2) + int(ox/dx)
    fy = int(len(IntYVals)/2) + int(oy/dy)
    fz = int(len(IntZVals)/2) + int(oz/dz)

    xl = len(X)
    yl = len(Y)
    zl = len(Z)

    im = interpModel()

    m000 = im[fx:(fx+xl), fy:(fy+yl), fz:(fz+zl)]
    m100 = im[(fx+1):(fx+xl+1), fy:(fy+yl), fz:(fz+zl)]
    m010 = im[fx:(fx+xl), (fy+1):(fy+yl+1), fz:(fz+zl)]
    m110 = im[(fx+1):(fx+xl+1), (fy+1):(fy+yl+1), fz:(fz+zl)]

    m001 = im[fx:(fx+xl), fy:(fy+yl), (fz+1):(fz+zl+1)]
    m101 = im[(fx+1):(fx+xl+1), fy:(fy+yl), (fz+1):(fz+zl+1)]
    m011 = im[fx:(fx+xl), (fy+1):(fy+yl+1), (fz+1):(fz+zl+1)]
    m111 = im[(fx+1):(fx+xl+1), (fy+1):(fy+yl+1), (fz+1):(fz+zl+1)]

    r000 = (1-rx)*(1-ry)*(1-rz)
    r100 = (rx)*(1-ry)*(1-rz)
    r010 = (1-rx)*(ry)*(1-rz)
    r110 = (rx)*(ry)*(1-rz)
    r001 = (1-rx)*(1-ry)*(rz)
    r101 = (rx)*(1-ry)*(rz)  # fixed: was (1-rx)*(ry)*(rz), duplicating r011
    r011 = (1-rx)*(ry)*(rz)
    r111 = (rx)*(ry)*(rz)

    m = r000*m000 + r100*m100 + r010*m010 + r110*m110 + r001*m001 + r101*m101 + r011*m011 + r111*m111

    return m


def interp3(X, Y, Z):
    """Interpolate the PSF model using the compiled cInterp routine."""
    X = atleast_1d(X)
    Y = atleast_1d(Y)
    Z = atleast_1d(Z)

    ox = X[0]
    oy = Y[0]
    oz = Z[0]

    xl = len(X)
    yl = len(Y)
    zl = len(Z)

    return cInterp.Interpolate(interpModel(), ox, oy, oz, xl, yl, dx, dy, dz)[:, :, None]


@fluor.registerIllumFcn
def PSFIllumFunction(fluors, position):
    im = interpModel()
    xi = maximum(minimum(round_((fluors['x'] - position[0])/dx + im.shape[0]/2).astype('i'), im.shape[0]-1), 0)
    yi = maximum(minimum(round_((fluors['y'] - position[1])/dy + im.shape[1]/2).astype('i'), im.shape[1]-1), 0)
    zi = maximum(minimum(round_((fluors['z'] - position[2])/dz + im.shape[2]/2).astype('i'), im.shape[2]-1), 0)

    return im[xi, yi, zi]


illPattern = None
illZOffset = 0
illPCache = None
illPKey = None


def setIllumPattern(pattern, z0):
    global illPattern, illZOffset, illPCache
    sx, sy = pattern.shape
    im = interpModel()
    psx, psy, sz = im.shape
    il = np.zeros([sx, sy, sz], 'f')
    # integer indices/slices needed under Python 3
    il[:, :, sz//2] = pattern
    ps = np.zeros_like(il)

    if sx > psx:
        ps[(sx//2 - psx//2):(sx//2 + psx//2), (sy//2 - psy//2):(sy//2 + psy//2), :] = im
    else:
        ps[:, :, :] = im[(psx//2 - sx//2):(psx//2 + sx//2), (psy//2 - sy//2):(psy//2 + sy//2), :]

    ps = ps/ps[:, :, sz//2].sum()

    illPattern = abs(ifftshift(ifftn(fftn(il)*fftn(ps)))).astype('f')
    illPCache = None


@fluor.registerIllumFcn
def patternIllumFcn(fluors, position):
    global illPKey, illPCache
    key = hash((fluors[0]['x'], fluors[0]['y'], fluors[0]['z']))
    if not illPCache is None and illPKey == key:
        return illPCache
    else:
        illPKey = key
        x = fluors['x']/dx + illPattern.shape[0]/2
        y = fluors['y']/dy + illPattern.shape[1]/2
        z = (fluors['z'] - illZOffset)/dz + illPattern.shape[2]/2

        illPCache = ndimage.map_coordinates(illPattern, [x, y, z], order=1, mode='nearest')
        return illPCache


SIM_k = pi/180.
#SIM_ky = 0  # 2*pi/180.
SIM_theta = 0
SIM_phi = 0


@fluor.registerIllumFcn
def SIMIllumFcn(fluors, position):
    x = fluors['x']  #/dx + illPattern.shape[0]/2
    y = fluors['y']  #/dy + illPattern.shape[1]/2
    #z = (fluors['z'] - illZOffset)/dz + illPattern.shape[2]/2

    #return ndimage.map_coordinates(illPattern, [x, y, z], order=1, mode='nearest')

    kx = np.cos(SIM_theta)*SIM_k
    ky = np.sin(SIM_theta)*SIM_k

    return (1 + np.cos(x*kx + y*ky + SIM_phi))/2


# def simPalmIm(X, Y, z, fluors, intTime=.1, numSubSteps=10, roiSize=10, laserPowers=[.1, 1]):
#     im = zeros((len(X), len(Y)), 'f')
#
#     if fluors is None:
#         return im
#
#     P = arange(0, 1.01, .1)
#
#     for f in fluors:
#         A = array([f.illuminate(laserPowers, intTime/numSubSteps) for n in range(numSubSteps)]).sum()
#         if (A > 0):
#             ix = abs(X - f.x).argmin()
#             iy = abs(Y - f.y).argmin()
#
#             imp = genWidefieldPSF(X[(ix - roiSize):(ix + roiSize + 1)], Y[(iy - roiSize):(iy + roiSize + 1)], z, P, A*1e3, f.x, f.y, f.z, depthInSample=0)
#             im[(ix - roiSize):(ix + roiSize + 1), (iy - roiSize):(iy + roiSize + 1)] += imp[:, :, 0]
#
#     return im


# def simPalmImF(X, Y, z, fluors, intTime=.1, numSubSteps=10, roiSize=10, laserPowers=[.1, 1]):
#     im = zeros((len(X), len(Y)), 'f')
#
#     if fluors is None:
#         return im
#
#     P = arange(0, 1.01, .1)
#
#     A = zeros(len(fluors.fl))
#
#     #tLock.acquire()
#     for n in range(numSubSteps):
#         A += fluors.illuminate(laserPowers, intTime/numSubSteps)
#     #tLock.release()
#
#     flOn = where(A > 0)[0]
#
#     for i in flOn:
#         ix = abs(X - fluors.fl['x'][i]).argmin()
#         iy = abs(Y - fluors.fl['y'][i]).argmin()
#
#         imp = genWidefieldPSF(X[(ix - roiSize):(ix + roiSize + 1)], Y[(iy - roiSize):(iy + roiSize + 1)], z, P, A[i]*1e3, fluors.fl['x'][i], fluors.fl['y'][i], fluors.fl['z'][i], depthInSample=50e3)
#         im[(ix - roiSize):(ix + roiSize + 1), (iy - roiSize):(iy + roiSize + 1)] += imp[:, :, 0]
#
#     return im


# def simPalmImFI_(X, Y, z, fluors, intTime=.1, numSubSteps=10, roiSize=15, laserPowers=[.1, 1], position=[0, 0, 0], illuminationFunction='ConstIllum'):
#     if interpModel is None:
#         genTheoreticalModel(MetaData.TIRFDefault)
#
#     im = zeros((len(X), len(Y)), 'f')
#
#     if fluors is None:
#         return im
#
#     #P = arange(0, 1.01, .1)
#
#     A = zeros(len(fluors.fl))
#
#     for n in range(numSubSteps):
#         A += fluors.illuminate(laserPowers, intTime/numSubSteps, position=position, illuminationFunction=illuminationFunction)
#
#     flOn = where(A > 0.1)[0]
#
#     dx = X[1] - X[0]
#     dy = Y[1] - Y[0]
#
#     maxz = dz*interpModel.shape[2]/2.
#     #s = min(roiSize, 20 - roiSize)*dx
#     s1 = min(roiSize, 20 - roiSize)
#
#     x0 = X[0]
#     y0 = Y[0]
#     ix_l = -s1
#     ix_h = len(X) + s1
#     iy_l = -s1
#     iy_h = len(Y) + s1
#
#     for i in flOn:
#         x = fluors.fl['x'][i]  #+ position[0]
#         y = fluors.fl['y'][i]  #+ position[1]
#
#         #delX = abs(X - x)
#         #delY = abs(Y - y)
#         #ix = delX.argmin()
#         #iy = delY.argmin()
#
#         ix = int((x - x0)/dx)
#         iy = int((y - y0)/dy)
#
#         #if delX[ix] < s and delY[iy] < s:
#         if (ix > ix_l) and (ix < ix_h) and (iy > iy_l) and (iy < iy_h):
#             ix0 = max(ix - roiSize, 0)
#             ix1 = min(ix + roiSize + 1, im.shape[0])
#             iy0 = max(iy - roiSize, 0)
#             iy1 = min(iy + roiSize + 1, im.shape[1])
#             #imp = interp3(X[max(ix - roiSize, 0):(ix + roiSize + 1)] - x, Y[max(iy - roiSize, 0):(iy + roiSize + 1)] - y, z - fluors.fl['z'][i])*A[i]
#             imp = cInterp.Interpolate(interpModel, -(X[ix0] - x), -(Y[iy0] - y), min(max(z - fluors.fl['z'][i], -maxz), maxz), ix1-ix0, iy1-iy0, dx, dy, dz)*A[i]
#
#             #if imp.min() < 0 or isnan(A[i]):
#             #    print ix0, ix1, iy0, iy1, (X[ix0] - x)/dx, (Y[iy0] - y)/dx, A[i], imp.min()
#             im[ix0:ix1, iy0:iy1] += imp[:, :, 0]
#
#     return im


def _rFluorSubset(im, fl, A, x0, y0, z, dx, dy, dz, maxz, ChanXOffsets=[0, ], ChanZOffsets=[0, ], ChanSpecs=None):
    if ChanSpecs is None:
        z_ = np.clip(z - fl['z'], -maxz, maxz).astype('f')
        roiSize = np.minimum(8 + np.abs(z_)*(2.5/dx), 140).astype('i')
        cInterp.InterpolateInplaceM(interpModel(), im, (fl['x'] - x0), (fl['y'] - y0), z_, A, roiSize, dx, dy, dz)
    else:
        for x_offset, z_offset, spec_chan, chan in zip(ChanXOffsets, ChanZOffsets, ChanSpecs, range(len(ChanSpecs))):
            z_ = np.clip(z - fl['z'] + z_offset, -maxz, maxz).astype('f')
            roiSize = np.minimum(8 + np.abs(z_)*(2.5/dx), 140).astype('i')
            cInterp.InterpolateInplaceM(interpModel(chan), im, (fl['x'] - x0 + x_offset), (fl['y'] - y0), z_,
                                        A*fl['spec'][:, spec_chan], roiSize, dx, dy, dz)


def simPalmImFI(X, Y, z, fluors, intTime=.1, numSubSteps=10, roiSize=100, laserPowers=[.1, 1], position=[0, 0, 0],
                illuminationFunction='ConstIllum', ChanXOffsets=[0, ], ChanZOffsets=[0, ], ChanSpecs=None):
    if interpModel() is None:
        genTheoreticalModel(mdh)

    im = zeros((len(X), len(Y)), 'f')

    if fluors is None:
        return im

    A = zeros(len(fluors.fl), 'f')

    for n in range(numSubSteps):
        A += fluors.illuminate(laserPowers, intTime/numSubSteps, position=position, illuminationFunction=illuminationFunction)

    flOn = where(A > 0.1)[0]

    dx = X[1] - X[0]
    dy = Y[1] - Y[0]

    maxz = dz*(interpModel().shape[2]/2 - 1)

    x0 = X[0]
    y0 = Y[0]

    m = A > .1
    fl = fluors.fl[m]
    A2 = A[m]

    #z2 = np.minimum(np.maximum(z - fl['z'], -maxz), maxz)  #.astype('f')
    #roiS = np.minimum(3 + np.abs(z2)*(2.5/70), 100).astype('i')
    #roiS = np.minimum(8 + np.abs(z2)*(2.5/70), 140).astype('i')

    # split the active fluorophores across one rendering thread per CPU
    nCPUs = int(min(multiprocessing.cpu_count(), len(flOn)))

    if nCPUs > 0:
        threads = [threading.Thread(target=_rFluorSubset,
                                    args=(im, fl[i::nCPUs], A2[i::nCPUs], x0, y0, z, dx, dy, dz, maxz,
                                          ChanXOffsets, ChanZOffsets, ChanSpecs)) for i in range(nCPUs)]

        for p in threads:
            p.start()

        for p in threads:
            p.join()

    #for i in flOn:
    #    x = fluors.fl['x'][i]  #+ position[0]
    #    y = fluors.fl['y'][i]  #+ position[1]
    #    cInterp.InterpolateInplace(interpModel, im, x - x0, y - y0, min(max(z - fluors.fl['z'][i], -maxz), maxz-dz), roiSize, roiSize, dx, dy, dz, A[i])

    return im


# def simPalmImFSpec(X, Y, z, fluors, intTime=.1, numSubSteps=10, roiSize=10, laserPowers=[.1, 1], deltaY=64, deltaZ=300):
#     im = zeros((len(X), len(Y)), 'f')
#
#     deltaY = (Y[1] - Y[0])*deltaY  #convert to nm
#
#     if fluors is None:
#         return im
#
#     P = arange(0, 1.01, .1)
#
#     A = zeros(len(fluors.fl))
#
#     for n in range(numSubSteps):
#         A += fluors.illuminate(laserPowers, intTime/numSubSteps)
#
#     flOn = where(A > 0)[0]
#
#     for i in flOn:
#         ix = abs(X - fluors.fl['x'][i]).argmin()
#         iy = abs(Y - deltaY - fluors.fl['y'][i]).argmin()
#
#         imp = fluors.fl[i]['spec'][0]*genWidefieldPSF(X[(ix - roiSize):(ix + roiSize + 1)], Y[(iy - roiSize):(iy + roiSize + 1)], z, P, A[i]*1e3, fluors.fl['x'][i], fluors.fl['y'][i] + deltaY, fluors.fl['z'][i])
#         im[(ix - roiSize):(ix + roiSize + 1), (iy - roiSize):(iy + roiSize + 1)] += imp[:, :, 0]
#
#         ix = abs(X - fluors.fl['x'][i]).argmin()
#         iy = abs(flipud(Y) - deltaY - fluors.fl['y'][i]).argmin()
#         imp = fluors.fl[i]['spec'][1]*genWidefieldPSF(X[(ix - roiSize):(ix + roiSize + 1)], flipud(Y)[(iy - roiSize):(iy + roiSize + 1)], z, P, A[i]*1e3, fluors.fl['x'][i], fluors.fl['y'][i] + deltaY, fluors.fl['z'][i])
#         im[(ix - roiSize):(ix + roiSize + 1), (iy - roiSize):(iy + roiSize + 1)] += imp[:, :, 0]
#
#     return im


# def simPalmImFSpecI(X, Y, z, fluors, intTime=.1, numSubSteps=10, roiSize=10, laserPowers=[.1, 1], deltaY=64, deltaZ=300):
#     if interpModel is None:
#         genTheoreticalModel(MetaData.TIRFDefault)
#
#     im = zeros((len(X), len(Y)), 'f')
#
#     deltaY = (Y[1] - Y[0])*deltaY  #convert to nm
#
#     if fluors is None:
#         return im
#
#     P = arange(0, 1.01, .1)
#
#     A = zeros(len(fluors.fl))
#
#     for n in range(numSubSteps):
#         A += fluors.illuminate(laserPowers, intTime/numSubSteps)
#
#     flOn = where(A > 0)[0]
#
#     for i in flOn:
#         ix = abs(X - fluors.fl['x'][i]).argmin()
#         iy = abs(Y - deltaY - fluors.fl['y'][i]).argmin()
#
#         imp = fluors.fl[i]['spec'][0]*A[i]*1e3*interp3(X[(ix - roiSize):(ix + roiSize + 1)] - fluors.fl['x'][i], Y[(iy - roiSize):(iy + roiSize + 1)] - (fluors.fl['y'][i] + deltaY), z - fluors.fl['z'][i])
#
#         if not imp.shape[2] == 0:
#             im[(ix - roiSize):(ix + roiSize + 1), (iy - roiSize):(iy + roiSize + 1)] += imp[:, :, 0]
#
#         iy2 = abs(flipud(Y) - deltaY - fluors.fl['y'][i]).argmin()
#
#         imp = fluors.fl[i]['spec'][1]*A[i]*1e3*interp3(X[(ix - roiSize):(ix + roiSize + 1)] - fluors.fl['x'][i], Y[(iy - roiSize):(iy + roiSize + 1)] - (fluors.fl['y'][i] + deltaY), z - fluors.fl['z'][i] + deltaZ)
#
#         if not imp.shape[2] == 0:
#             im[(ix - roiSize):(ix + roiSize + 1), (iy2 - roiSize):(iy2 + roiSize + 1)] += imp[:, ::-1, 0]
#
#     return im


# def simPalmImFBP(X, Y, z, fluors, intTime=.1, numSubSteps=10, roiSize=10, laserPowers=[.1, 1], deltaY=64, deltaZ=500):
#     im = zeros((len(X), len(Y)), 'f')
#
#     deltaY = (Y[1] - Y[0])*deltaY  #convert to nm
#
#     if fluors is None:
#         return im
#
#     P = arange(0, 1.01, .1)
#
#     A = zeros(len(fluors.fl))
#
#     for n in range(numSubSteps):
#         A += fluors.illuminate(laserPowers, intTime/numSubSteps)
#
#     flOn = where(A > 0)[0]
#
#     for i in flOn:
#         ix = abs(X - fluors.fl['x'][i]).argmin()
#         iy = abs(Y - deltaY - fluors.fl['y'][i]).argmin()
#
#         imp = genWidefieldPSF(X[(ix - roiSize):(ix + roiSize + 1)], Y[(iy - roiSize):(iy + roiSize + 1)], z, P, A[i]*1e3, fluors.fl['x'][i], fluors.fl['y'][i] + deltaY, fluors.fl['z'][i])
#         im[(ix - roiSize):(ix + roiSize + 1), (iy - roiSize):(iy + roiSize + 1)] += imp[:, :, 0]
#
#         ix = abs(X - fluors.fl['x'][i]).argmin()
#         iy = abs(flipud(Y) - deltaY - fluors.fl['y'][i]).argmin()
#         imp = genWidefieldPSF(X[(ix - roiSize):(ix + roiSize + 1)], flipud(Y)[(iy - roiSize):(iy + roiSize + 1)], z + deltaZ, P, A[i]*1e3, fluors.fl['x'][i], fluors.fl['y'][i] + deltaY, fluors.fl['z'][i])
#         im[(ix - roiSize):(ix + roiSize + 1), (iy - roiSize):(iy + roiSize + 1)] += imp[:, :, 0]
#
#     return im
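
# Illustrative sketch (plain NumPy, not part of the PYME API): the weighting
# scheme used by interp()/interp2() above is standard trilinear interpolation
# of the eight voxels surrounding a fractional coordinate. A minimal
# standalone version, for reference:
def _trilinear_demo():
    vol = np.arange(27, dtype='f').reshape(3, 3, 3)

    def trilinear(vol, x, y, z):
        fx, fy, fz = int(x), int(y), int(z)
        rx, ry, rz = x - fx, y - fy, z - fz
        val = 0.0
        for ix, wx in ((0, 1 - rx), (1, rx)):
            for iy, wy in ((0, 1 - ry), (1, ry)):
                for iz, wz in ((0, 1 - rz), (1, rz)):
                    val += wx*wy*wz*vol[fx + ix, fy + iy, fz + iz]
        return val

    # halfway between voxels the result is the mean of the 8 neighbours
    print(trilinear(vol, 0.5, 0.5, 0.5))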
/spotify_confidence-3.0.2-py3-none-any.whl/spotify_confidence/analysis/abstract_base_classes/confidence_abc.py
from abc import ABC, abstractmethod
from typing import Union, Iterable, Tuple, List

from pandas import DataFrame

from spotify_confidence.chartgrid import ChartGrid
from .confidence_computer_abc import ConfidenceComputerABC
from .confidence_grapher_abc import ConfidenceGrapherABC
from ..constants import NIM_TYPE


class ConfidenceABC(ABC):
    @property
    def _confidence_computer(self) -> ConfidenceComputerABC:
        return self._computer

    @_confidence_computer.setter
    def _confidence_computer(self, computer: ConfidenceComputerABC):
        self._computer = computer

    @property
    def _confidence_grapher(self) -> ConfidenceGrapherABC:
        return self._grapher

    @_confidence_grapher.setter
    def _confidence_grapher(self, grapher: ConfidenceGrapherABC):
        self._grapher = grapher

    @abstractmethod
    def __init__(
        self,
        data_frame: DataFrame,
        numerator_column: str,
        numerator_sum_squares_column: Union[str, None],
        denominator_column: str,
        categorical_group_columns: Union[str, Iterable, None],
        ordinal_group_column: Union[str, None],
        interval_size: float,
        correction_method: str,
        metric_column: Union[str, None],
        treatment_column: Union[str, None],
        power: float,
    ):
        pass

    @abstractmethod
    def summary(self, verbose: bool) -> DataFrame:
        """Args:
            verbose (bool): include columns used in intermediate steps in the
                calculations in the returned dataframe.

        Returns:
            Dataframe containing summary statistics
        """
        pass

    @abstractmethod
    def difference(
        self,
        level_1: Union[str, Tuple],
        level_2: Union[str, Tuple],
        absolute: bool,
        groupby: Union[str, Iterable],
        non_inferiority_margins: NIM_TYPE,
        final_expected_sample_size_column: str,
        verbose: bool,
        minimum_detectable_effects_column: str,
    ) -> DataFrame:
        """Args:
            groupby (str): Name of column. If specified, will plot a separate
                chart for each level of the grouping.
            non_inferiority_margins (Union[Tuple[float, str], Dict[str, Tuple[float, str]], bool]):
                Pass tuple(non_inferiority_margin, preferred direction) to use the same
                NIM for all comparisons, e.g. (0.01, 'increase'), which means that we want
                level_2 to be greater than the average of level_1 times (1-0.01), or
                (0.05, 'decrease'), which means that we want level_2 to be smaller than the
                average of level_1 times (1+0.05).
                Pass a dictionary {group: tuple(non_inferiority_margin, preferred direction)}
                to use different non-inferiority margins for different values of the groupby
                column. To perform a one-sided test without a NIM, use
                (None, preferred direction).
                Alternatively, pass True to use the "non_inferiority_margin" and
                "preferred_direction" columns of the dataframe that was passed to the
                constructor as the source of NIMs.
            final_expected_sample_size_column (str): Column in source data frame containing
                expected number of observations at end of experiment. Use in combination
                with ordinal groupby to perform a sequential test. See
                https://cran.r-project.org/web/packages/ldbounds/index.html for details.
            verbose (bool): include columns used in intermediate steps in the
                calculations in the returned dataframe.
            minimum_detectable_effects_column (str): The minimum detectable effect,
                used for calculating required sample size.

        Returns:
            Dataframe containing the difference in means between group 1 and 2,
            p-values and confidence intervals for each value in the groupby column
        """
        pass

    @abstractmethod
    def differences(
        self,
        levels: List[Tuple],
        absolute: bool,
        groupby: Union[str, Iterable],
        non_inferiority_margins: NIM_TYPE,
        final_expected_sample_size_column: str,
        verbose: bool,
        minimum_detectable_effects_column: str,
    ) -> DataFrame:
        """Args:
            levels (list(tuple)): list of levels to compare
            groupby (str): Name of column. If specified, will plot a separate
                chart for each level of the grouping.
            non_inferiority_margins (Union[Tuple[float, str], Dict[str, Tuple[float, str]], bool]):
                Pass tuple(non_inferiority_margin, preferred direction) to use the same
                NIM for all comparisons, e.g. (0.01, 'increase'), which means that we want
                level_2 to be greater than the average of level_1 times (1-0.01), or
                (0.05, 'decrease'), which means that we want level_2 to be smaller than the
                average of level_1 times (1+0.05).
                Pass a dictionary {group: tuple(non_inferiority_margin, preferred direction)}
                to use different non-inferiority margins for different values of the groupby
                column. To perform a one-sided test without a NIM, use
                (None, preferred direction).
                Alternatively, pass True to use the "non_inferiority_margin" and
                "preferred_direction" columns of the dataframe that was passed to the
                constructor as the source of NIMs.
            final_expected_sample_size_column (str): Column in source data frame containing
                expected number of observations at end of experiment. Use in combination
                with ordinal groupby to perform a sequential test. See
                https://cran.r-project.org/web/packages/ldbounds/index.html for details.
            verbose (bool): include columns used in intermediate steps in the
                calculations in the returned dataframe.
            minimum_detectable_effects_column (str): The minimum detectable effect,
                used for calculating required sample size.

        Returns:
            Dataframe containing the difference in means between group 1 and 2,
            p-values and confidence intervals for each value in the groupby column
        """
        pass

    @abstractmethod
    def multiple_difference(
        self,
        level: Union[str, Tuple],
        absolute: bool,
        groupby: Union[str, Iterable],
        level_as_reference: bool,
        non_inferiority_margins: NIM_TYPE,
        final_expected_sample_size_column: str,
        verbose: bool,
        minimum_detectable_effects_column: str,
    ) -> DataFrame:
        """Args:
            groupby (str): Name of column. If specified, will plot a separate
                chart for each level of the grouping.
            level_as_reference (bool): If false, compare level to all other groups.
                If true, compare all other groups to level.
            non_inferiority_margins (Union[Tuple[float, str], Dict[str, Tuple[float, str]]]):
                Pass tuple(nim, preferred direction) to use the same NIM for all
                comparisons, e.g. (0.01, 'increase'), which means that we want level_2
                to be greater than the average of level_1 times (1-0.01), or
                (0.05, 'decrease'), which means that we want level_2 to be smaller than
                the average of level_1 times (1+0.05).
                Pass a dictionary {group: tuple(nim, preferred direction)} to use
                different non-inferiority margins for different values of the groupby
                column. To perform a one-sided test without a NIM, use
                (None, preferred direction).
                Alternatively, pass True to use the "non_inferiority_margin" and
                "preferred_direction" columns of the dataframe that was passed to the
                constructor as the source of NIMs.
            final_expected_sample_size_column (str): Column in source data frame containing
                expected number of observations at end of experiment. Use in combination
                with ordinal groupby to perform a sequential test. See
                https://cran.r-project.org/web/packages/ldbounds/index.html for details.
            verbose (bool): include columns used in intermediate steps in the
                calculations in the returned dataframe.
            minimum_detectable_effects_column (str): The minimum detectable effect,
                used for calculating required sample size.

        Returns:
            Dataframe containing the difference in means between group 1 and 2,
            p-values and confidence intervals for each value in the groupby column
        """
        pass

    @abstractmethod
    def summary_plot(self, groupby: Union[str, Iterable]) -> ChartGrid:
        """Plot for each group in the data_frame:

        if an ordinal level exists: line graph with an area to represent the
        confidence interval

        if categorical levels: interval plots of confidence intervals by group

        Args:
            groupby (str): Name of column. If specified, will plot a separate
                chart for each level of the grouping.

        Returns:
            ChartGrid object and a DataFrame with numerical results.
        """
        pass

    @abstractmethod
    def difference_plot(
        self,
        level_1: Union[str, Tuple],
        level_2: Union[str, Tuple],
        absolute: bool,
        groupby: Union[str, Iterable],
        non_inferiority_margins: NIM_TYPE,
        use_adjusted_intervals: bool,
        final_expected_sample_size_column: str,
        split_plot_by_groups: bool,
    ) -> ChartGrid:
        """Plot representing the difference between group 1 and 2.

        - Difference in means or proportions, depending on the response variable type.
        - Plot interval plot with confidence interval of the difference between groups.

        Args:
            level_1 (str, tuple of str): Name of first level.
            level_2 (str, tuple of str): Name of second level.
            absolute (bool): If True then return the absolute difference
                (level2 - level1), otherwise return the relative difference
                (level2 / level1 - 1).
            groupby (str): Name of column, or list of columns. If specified, will
                return an interval for each level of the grouped dimension, or a
                confidence band if the grouped dimension is ordinal.
            non_inferiority_margins (Union[Tuple[float, str], Dict[str, Tuple[float, str]]]):
                Pass tuple(nim, preferred direction) to use the same NIM for all
                comparisons, e.g. (0.01, 'increase'), which means that we want level_2
                to be greater than the average of level_1 times (1-0.01), or
                (0.05, 'decrease'), which means that we want level_2 to be smaller than
                the average of level_1 times (1+0.05).
                Pass a dictionary {group: tuple(nim, preferred direction)} to use
                different non-inferiority margins for different values of the groupby
                column. To perform a one-sided test without a NIM, use
                (None, preferred direction).
            use_adjusted_intervals (bool): If true, use e.g. Bonferroni-corrected
                (or other method provided) confidence intervals.
            final_expected_sample_size_column (str): Column in source data frame containing
                expected number of observations at end of experiment. Use in combination
                with ordinal groupby to perform a sequential test. See
                https://cran.r-project.org/web/packages/ldbounds/index.html for details.
            split_plot_by_groups (bool): create a separate chart for each group in the
                groupby argument.

        Returns:
            Chartify Chart object and a DataFrame with numerical results.
        """

    @abstractmethod
    def differences_plot(
        self,
        levels: List[Tuple],
        absolute: bool,
        groupby: Union[str, Iterable],
        non_inferiority_margins: NIM_TYPE,
        use_adjusted_intervals: bool,
        final_expected_sample_size_column: str,
        split_plot_by_groups: bool,
    ) -> ChartGrid:
        """Plot representing the difference between group 1 and 2.

        - Difference in means or proportions, depending on the response variable type.
        - Plot interval plot with confidence interval of the difference between groups.

        Args:
            levels (list(tuple)): list of levels to compare
            absolute (bool): If True then return the absolute difference
                (level2 - level1), otherwise return the relative difference
                (level2 / level1 - 1).
            groupby (str): Name of column, or list of columns. If specified, will
                return an interval for each level of the grouped dimension, or a
                confidence band if the grouped dimension is ordinal.
            non_inferiority_margins (Union[Tuple[float, str], Dict[str, Tuple[float, str]]]):
                Pass tuple(nim, preferred direction) to use the same NIM for all
                comparisons, e.g. (0.01, 'increase'), which means that we want level_2
                to be greater than the average of level_1 times (1-0.01), or
                (0.05, 'decrease'), which means that we want level_2 to be smaller than
                the average of level_1 times (1+0.05).
                Pass a dictionary {group: tuple(nim, preferred direction)} to use
                different non-inferiority margins for different values of the groupby
                column. To perform a one-sided test without a NIM, use
                (None, preferred direction).
            use_adjusted_intervals (bool): If true, use e.g. Bonferroni-corrected
                (or other method provided) confidence intervals.
            final_expected_sample_size_column (str): Column in source data frame containing
                expected number of observations at end of experiment. Use in combination
                with ordinal groupby to perform a sequential test. See
                https://cran.r-project.org/web/packages/ldbounds/index.html for details.
            split_plot_by_groups (bool): create a separate chart for each group in the
                groupby argument.

        Returns:
            Chartify Chart object and a DataFrame with numerical results.
        """

    @abstractmethod
    def multiple_difference_plot(
        self,
        level: Union[str, Tuple],
        absolute: bool,
        groupby: Union[str, Iterable],
        level_as_reference: bool,
        non_inferiority_margins: NIM_TYPE,
        use_adjusted_intervals: bool,
        final_expected_sample_size_column: str,
        split_plot_by_groups: bool,
    ) -> ChartGrid:
        """Compare level to all other groups or, if level_as_reference = True,
        all other groups to level.

        Args:
            level (str, tuple of str): Name of level.
            absolute (bool): If True then return the absolute difference
                (level2 - level1), otherwise return the relative difference
                (level2 / level1 - 1).
            groupby (str): Name of column, or list of columns. If specified, will
                return an interval for each level of the grouped dimension, or a
                confidence band if the grouped dimension is ordinal.
            level_as_reference: If false, compare level to all other groups.
                If true, compare all other groups to level.
            non_inferiority_margins (Union[Tuple[float, str], Dict[str, Tuple[float, str]]]):
                Pass tuple(nim, preferred direction) to use the same NIM for all
                comparisons, e.g. (0.01, 'increase'), which means that we want level_2
                to be greater than the average of level_1 times (1-0.01), or
                (0.05, 'decrease'), which means that we want level_2 to be smaller than
                the average of level_1 times (1+0.05).
                Pass a dictionary {group: tuple(nim, preferred direction)} to use
                different non-inferiority margins for different values of the groupby
                column. To perform a one-sided test without a NIM, use
                (None, preferred direction).
            use_adjusted_intervals (bool): If true, use e.g. Bonferroni-corrected
                (or other method provided) confidence intervals.
            final_expected_sample_size_column (str): Column in source data frame containing
                expected number of observations at end of experiment. Use in combination
                with ordinal groupby to perform a sequential test. See
                https://cran.r-project.org/web/packages/ldbounds/index.html for details.
            split_plot_by_groups (bool): create a separate chart for each group in the
                groupby argument.

        Returns:
            ChartGrid object and a DataFrame with numerical results.
        """
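
# Usage sketch (illustrative): the methods above are implemented by concrete
# classes such as ``spotify_confidence.ZTest``. A minimal, hypothetical
# example of the intended call pattern; left commented out to avoid a hard
# import dependency in this abstract module.
#
# import pandas as pd
# import spotify_confidence
#
# df = pd.DataFrame(
#     {"group": ["control", "treatment"],
#      "converted": [120, 150],
#      "total": [1000, 1000]}
# )
# test = spotify_confidence.ZTest(
#     df,
#     numerator_column="converted",
#     numerator_sum_squares_column="converted",
#     denominator_column="total",
#     categorical_group_columns="group",
# )
# test.summary()
# test.difference(level_1="control", level_2="treatment", absolute=False)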
/jarvis-sdk-python-1.5.0.tar.gz/jarvis-sdk-python-1.5.0/CHANGELOG.md
# Changelog

## [1.5.0](https://github.com/indykite/jarvis-sdk-python/compare/v1.4.0...v1.5.0) (2023-01-03)

### Features

* uuid to gid and files renaming ([a4ffd9e](https://github.com/indykite/jarvis-sdk-python/commit/a4ffd9e992c41374ee150c36015b029a61c065bb))

## [1.4.0](https://github.com/indykite/jarvis-sdk-python/compare/v1.3.0...v1.4.0) (2022-12-09)

### Features

* config nodes and oauth2 ([5c3147f](https://github.com/indykite/jarvis-sdk-python/commit/5c3147fae5e08de1255c68f96321f2a25e675af5))
* config nodes and oauth2 ([0d90b42](https://github.com/indykite/jarvis-sdk-python/commit/0d90b426c59293bdb3f29d46acdeea7c0cd4d9d2))
* update documentation ([0ba7528](https://github.com/indykite/jarvis-sdk-python/commit/0ba7528a134a72bcc48de14fcb168bdd99540607))

## [1.3.0](https://github.com/indykite/jarvis-sdk-python/compare/v1.2.0...v1.3.0) (2022-11-11)

### Features

* add config application application agent ([e2c1174](https://github.com/indykite/jarvis-sdk-python/commit/e2c1174ae8cab979b13fdab33041c4c77065fb49))
* add config credentials ([4eb114a](https://github.com/indykite/jarvis-sdk-python/commit/4eb114a36d4724f9192150cc480ff1bdb981b612))
* add config credentials ([d7317eb](https://github.com/indykite/jarvis-sdk-python/commit/d7317eb67a0671173c828d317a6f9b1cb9ff174c))
* add config methods ([82b7505](https://github.com/indykite/jarvis-sdk-python/commit/82b7505334950ae6f00202f2937fa285ddfe7a99))
* add config methods 2798 ([3648f19](https://github.com/indykite/jarvis-sdk-python/commit/3648f199a60d2e08edab9d28d6b1b91a5e0ecd7b))
* add config methods appspaces and tenants ([bdbfa3c](https://github.com/indykite/jarvis-sdk-python/commit/bdbfa3ce68854f59dfe75d1c1d2041bf4f80c3fb))
* add config methods appspaces tenants ([92b6c14](https://github.com/indykite/jarvis-sdk-python/commit/92b6c149131754cee234b0dff2aa880a829dcba7))
* add ingest api ([39576d9](https://github.com/indykite/jarvis-sdk-python/commit/39576d99966e0bf1666a5526a8bcb545582fda69))
* add service account credential ([9b73e92](https://github.com/indykite/jarvis-sdk-python/commit/9b73e923d0102657273753e2d5083f25e4cc3007))
* add service_accounts ([facf4bd](https://github.com/indykite/jarvis-sdk-python/commit/facf4bd07e3e726e99e833c85d73d9754ac97c98))
* update documentation ([9fe3b71](https://github.com/indykite/jarvis-sdk-python/commit/9fe3b71c05cf15cd2f14074bebae6837c665c16d))

### Bug Fixes

* return response in stream_records() ([6c4f09d](https://github.com/indykite/jarvis-sdk-python/commit/6c4f09daf2364c82efb48bce31f6aaea29b79f55))

## [1.2.0](https://github.com/indykite/jarvis-sdk-python/compare/v1.1.0...v1.2.0) (2022-05-25)

### Features

* get digital twin with properties ([c515e56](https://github.com/indykite/jarvis-sdk-python/commit/c515e56e052ec8d06824eae39540653c5eaf0145))

## [1.1.0](https://github.com/indykite/jarvis-sdk-python/compare/v1.0.2...v1.1.0) (2022-05-04)

### Features

* add enrich token method ([1298a5e](https://github.com/indykite/jarvis-sdk-python/commit/1298a5e3e0fbbd1fe5804a3ce4389145df0e9798))

### [1.0.2](https://github.com/indykite/jarvis-sdk-python/compare/v1.0.1...v1.0.2) (2022-04-29)

### Bug Fixes

* add imports from root package ([9ea91f6](https://github.com/indykite/jarvis-sdk-python/commit/9ea91f63eaa490ce173d9a23243dd13ad5bbe6c6))
* do not include tests in build files ([5d06b72](https://github.com/indykite/jarvis-sdk-python/commit/5d06b7280323cc5a81c1e5464e8504c1be0d3b8d))

### [1.0.1](https://github.com/indykite/jarvis-sdk-python/compare/v1.0.0...v1.0.1) (2022-04-27)

### Bug Fixes

* add encoding when open files ([a5406ea](https://github.com/indykite/jarvis-sdk-python/commit/a5406ea86069992ae745a9a2891430634121e952))

### Miscellaneous Chores

* add missing __init__.py files ([c2d45b1](https://github.com/indykite/jarvis-sdk-python/commit/c2d45b10d8859ddbb711d1b550a94df20c5a0cc5))

## [1.0.0](https://github.com/indykite/jarvis-sdk-python/compare/v1.0.0...v1.0.0) (2022-04-26)

### Miscellaneous Chores

* release 1.0.0 ([23325c5](https://github.com/indykite/jarvis-sdk-python/commit/23325c5c56234956d399f7426ff29a0082f4ae92))
/python_zwave-0.2.1-py3-none-any.whl/pyzwave/adapter.py
import abc
import asyncio
import enum
import logging

from pyzwave.commandclass import NetworkManagementInclusion, NetworkManagementProxy, Zip

from .util import Listenable, MessageWaiter
from .types import dsk_t
from .message import Message

_LOGGER = logging.getLogger(__name__)


class TxOptions(enum.IntFlag):
    """TX options used for adding nodes to the network"""

    NULL = 0x00
    TRANSMIT_OPTION_LOW_POWER = 0x02
    TRANSMIT_OPTION_EXPLORE = 0x20


class Ack:
    """Class for holding session information"""

    class Status(enum.Enum):
        """Ack status"""

        PENDING = enum.auto()
        QUEUED = enum.auto()
        RECEIVED = enum.auto()

    def __init__(self):
        self._event = asyncio.Event()
        self._status = Ack.Status.PENDING
        self._expectedDelay = 0

    def received(self):
        """Call this function when this ack has been received"""
        self._status = Ack.Status.RECEIVED
        self._event.set()

    def queued(self, expectedDelay: int):
        """Call this function when the message cannot be delivered right now"""
        if expectedDelay < 0:
            # Node should have been awake by now. Wait 2 minutes to allow
            # the node to be manually woken
            expectedDelay = 120
        # Add some "wiggle room" for the wakeup
        self._expectedDelay = expectedDelay + 60
        self._status = Ack.Status.QUEUED
        self._event.set()

    async def wait(self, timeout):
        """Wait until the node acks the message"""
        await asyncio.wait_for(self._event.wait(), timeout)
        # Message was queued for a sleeping node. Wait longer!
        while self._status == Ack.Status.QUEUED:
            self._event.clear()
            await asyncio.wait_for(self._event.wait(), self._expectedDelay)


class Adapter(Listenable, MessageWaiter, metaclass=abc.ABCMeta):
    """Abstract class for implementing communication with a Z-Wave chip"""

    def __init__(self):
        super().__init__()
        self._ackQueue = {}
        self._nodeId = 0

    def ackReceived(self, zipPkt: Zip.ZipPacket):
        """Call this method when an ack message has been received"""
        ackId = zipPkt.seqNo
        if zipPkt.nackResponse and zipPkt.nackWaiting:
            # Message was queued, signal this and keep the ack in the queue
            ack = self._ackQueue.get(ackId, None)
            if not ack:
                return False
            ack.queued(zipPkt.headerExtension.expectedDelay)
            return True
        ack = self._ackQueue.pop(ackId, None)
        if not ack:
            _LOGGER.warning("Received ack %d for a command we are not waiting for", ackId)
            return False
        ack.received()
        return True

    @abc.abstractmethod
    async def addNode(self, txOptions: TxOptions) -> bool:
        """Start inclusion mode in the controller"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def addNodeDSKSet(
        self, accept: bool, inputDSKLength: int, dsk: dsk_t
    ) -> bool:
        """
        This command is used to indicate to the S2 bootstrapping controller
        whether the DSK is accepted and to report the user input when needed.
        """
        raise NotImplementedError()

    @abc.abstractmethod
    async def addNodeKeysSet(
        self, grantCSA: bool, accept: bool, grantedKeys: NetworkManagementInclusion.Keys
    ) -> bool:
        """
        This command is used to inform an S2 bootstrapping controller which
        keys must be granted to the node being bootstrapped.
        """
        raise NotImplementedError()

    @abc.abstractmethod
    async def addNodeStop(self) -> bool:
        """Stop inclusion mode in the controller"""
        raise NotImplementedError()

    def commandReceived(self, cmd: Message):
        """Call this method when a command has been received"""
        if isinstance(cmd, Zip.ZipPacket):
            msg = cmd.command
        else:
            msg = cmd
        if not self.messageReceived(msg):
            self.speak("onMessageReceived", cmd)

    @abc.abstractmethod
    async def connect(self):
        """Connect the adapter. Must be implemented by subclass"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def getFailedNodeList(self) -> list:
        """Return a list of failing nodes"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def getMultiChannelCapability(
        self, nodeId: int, endpoint: int
    ) -> NetworkManagementProxy.MultiChannelCapabilityReport:
        """Return the multi channel capabilities for an endpoint in a node"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def getMultiChannelEndPoints(self, nodeId: int) -> int:
        """Return the number of multi channel end points implemented by a node"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def getNodeInfo(
        self, nodeId: int
    ) -> NetworkManagementProxy.NodeInfoCachedReport:
        """Return the node info from this node. Possibly cached"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def getNodeList(self) -> set:
        """Return a list of nodes included in the network"""
        raise NotImplementedError()

    @property
    def nodeId(self) -> int:
        """Return the node id of the controller"""
        return self._nodeId

    @nodeId.setter
    def nodeId(self, nodeId: int):
        self._nodeId = nodeId

    @abc.abstractmethod
    async def removeFailedNode(
        self, nodeId: int
    ) -> NetworkManagementInclusion.FailedNodeRemoveStatus.Status:
        """Remove a non-responding node"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def removeNode(self) -> bool:
        """Start exclusion mode in the controller"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def removeNodeStop(self) -> bool:
        """Stop exclusion mode in the controller"""
        raise NotImplementedError()

    @abc.abstractmethod
    async def send(
        self, cmd: Message, sourceEP: int = 0, destEP: int = 0, timeout: int = 3
    ) -> bool:
        """
        Send message to Z-Wave chip. Must be implemented in subclass.

        .. warning::
          This command will block until the message has been ACKed by the node.

          When talking to battery operated (sleeping) nodes this command will
          block until the node wakes up or is considered dead. This can be a
          long time (weeks or even months). Please make sure the code can
          handle this.
        """
        raise NotImplementedError()

    async def sendToNode(self, nodeId: int, cmd: Message, **kwargs) -> bool:
        """Send message to node. Must be implemented in subclass"""
        raise NotImplementedError()

    async def sendAndReceive(
        self, cmd: Message, waitFor: Message, timeout: int = 3, **kwargs
    ) -> Message:
        """Send a message and wait for the response"""
        session = self.addWaitingSession(waitFor)
        await self.send(cmd, **kwargs)
        return await self.waitForMessage(waitFor, session=session, timeout=timeout)

    @abc.abstractmethod
    async def setNodeInfo(self, generic, specific, cmdClasses):
        """
        Set the application NIF (Node Information Frame). This method
        should not be called directly. Use the corresponding function
        in Application instead.
        """
        raise NotImplementedError()

    async def waitForAck(self, ackId: int, timeout: int = 3):
        """Async method for waiting for the adapter to receive a specific ack id"""
        if ackId in self._ackQueue:
            raise Exception("Duplicate ackid used!")
        ack = Ack()
        self._ackQueue[ackId] = ack
        try:
            await ack.wait(timeout)
        except asyncio.TimeoutError:
            _LOGGER.warning("Timeout waiting for response for ack %s", ackId)
            del self._ackQueue[ackId]
            raise
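
if __name__ == "__main__":
    # Illustrative sketch (not part of the pyzwave API): exercising the Ack
    # helper on its own, showing the received()/wait() handshake that
    # ackReceived()/waitForAck() build on above.
    async def _demo():
        ack = Ack()
        # Simulate the radio delivering the ack shortly after the send
        asyncio.get_running_loop().call_later(0.1, ack.received)
        await ack.wait(timeout=3)
        print("ack received")

    asyncio.run(_demo())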
/eureka-opensource-1.0.3.tar.gz/eureka-opensource-1.0.3/static/yui-3.4.1/event-custom-base/event-custom-base-debug.js
YUI.add('event-custom-base', function(Y) {

/**
 * Custom event engine, DOM event listener abstraction layer, synthetic DOM
 * events.
 * @module event-custom
 */

Y.Env.evt = {
    handles: {},
    plugins: {}
};

/**
 * Custom event engine, DOM event listener abstraction layer, synthetic DOM
 * events.
 * @module event-custom
 * @submodule event-custom-base
 */

/**
 * Allows for the insertion of methods that are executed before or after
 * a specified method
 * @class Do
 * @static
 */

var DO_BEFORE = 0,
    DO_AFTER = 1,

DO = {

    /**
     * Cache of objects touched by the utility
     * @property objs
     * @static
     */
    objs: {},

    /**
     * <p>Execute the supplied method before the specified function. Wrapping
     * function may optionally return an instance of the following classes to
     * further alter runtime behavior:</p>
     * <dl>
     *     <dt><code>Y.Do.Halt(message, returnValue)</code></dt>
     *         <dd>Immediately stop execution and return
     *         <code>returnValue</code>. No other wrapping functions will be
     *         executed.</dd>
     *     <dt><code>Y.Do.AlterArgs(message, newArgArray)</code></dt>
     *         <dd>Replace the arguments that the original function will be
     *         called with.</dd>
     *     <dt><code>Y.Do.Prevent(message)</code></dt>
     *         <dd>Don't execute the wrapped function. Other before phase
     *         wrappers will be executed.</dd>
     * </dl>
     *
     * @method before
     * @param fn {Function} the function to execute
     * @param obj the object hosting the method to displace
     * @param sFn {string} the name of the method to displace
     * @param c The execution context for fn
     * @param arg* {mixed} 0..n additional arguments to supply to the subscriber
     * when the event fires.
     * @return {string} handle for the subscription
     * @static
     */
    before: function(fn, obj, sFn, c) {
        // Y.log('Do before: ' + sFn, 'info', 'event');
        var f = fn, a;
        if (c) {
            a = [fn, c].concat(Y.Array(arguments, 4, true));
            f = Y.rbind.apply(Y, a);
        }

        return this._inject(DO_BEFORE, f, obj, sFn);
    },

    /**
     * <p>Execute the supplied method after the specified function. Wrapping
     * function may optionally return an instance of the following classes to
     * further alter runtime behavior:</p>
     * <dl>
     *     <dt><code>Y.Do.Halt(message, returnValue)</code></dt>
     *         <dd>Immediately stop execution and return
     *         <code>returnValue</code>. No other wrapping functions will be
     *         executed.</dd>
     *     <dt><code>Y.Do.AlterReturn(message, returnValue)</code></dt>
     *         <dd>Return <code>returnValue</code> instead of the wrapped
     *         method's original return value. This can be further altered by
     *         other after phase wrappers.</dd>
     * </dl>
     *
     * <p>The static properties <code>Y.Do.originalRetVal</code> and
     * <code>Y.Do.currentRetVal</code> will be populated for reference.</p>
     *
     * @method after
     * @param fn {Function} the function to execute
     * @param obj the object hosting the method to displace
     * @param sFn {string} the name of the method to displace
     * @param c The execution context for fn
     * @param arg* {mixed} 0..n additional arguments to supply to the subscriber
     * @return {string} handle for the subscription
     * @static
     */
    after: function(fn, obj, sFn, c) {
        var f = fn, a;
        if (c) {
            a = [fn, c].concat(Y.Array(arguments, 4, true));
            f = Y.rbind.apply(Y, a);
        }

        return this._inject(DO_AFTER, f, obj, sFn);
    },

    /**
     * Execute the supplied method before or after the specified function.
     * Used by <code>before</code> and <code>after</code>.
     *
     * @method _inject
     * @param when {string} before or after
     * @param fn {Function} the function to execute
     * @param obj the object hosting the method to displace
     * @param sFn {string} the name of the method to displace
     * @param c The execution context for fn
     * @return {string} handle for the subscription
     * @private
     * @static
     */
    _inject: function(when, fn, obj, sFn) {
        // object id
        var id = Y.stamp(obj), o, sid;

        if (! this.objs[id]) {
            // create a map entry for the obj if it doesn't exist
            this.objs[id] = {};
        }

        o = this.objs[id];

        if (! o[sFn]) {
            // create a map entry for the method if it doesn't exist
            o[sFn] = new Y.Do.Method(obj, sFn);

            // re-route the method to our wrapper
            obj[sFn] = function() {
                return o[sFn].exec.apply(o[sFn], arguments);
            };
        }

        // subscriber id
        sid = id + Y.stamp(fn) + sFn;

        // register the callback
        o[sFn].register(sid, fn, when);

        return new Y.EventHandle(o[sFn], sid);
    },

    /**
     * Detach a before or after subscription.
     *
     * @method detach
     * @param handle {string} the subscription handle
     * @static
     */
    detach: function(handle) {
        if (handle.detach) {
            handle.detach();
        }
    },

    _unload: function(e, me) {
    }
};

Y.Do = DO;

//////////////////////////////////////////////////////////////////////////

/**
 * Contains the return value from the wrapped method, accessible
 * by 'after' event listeners.
 *
 * @property originalRetVal
 * @static
 * @since 3.2.0
 */

/**
 * Contains the current state of the return value, consumable by
 * 'after' event listeners, and updated if an after subscriber
 * changes the return value generated by the wrapped function.
 *
 * @property currentRetVal
 * @static
 * @since 3.2.0
 */

//////////////////////////////////////////////////////////////////////////

/**
 * Wrapper for a displaced method with aop enabled
 * @class Do.Method
 * @constructor
 * @param obj The object to operate on
 * @param sFn The name of the method to displace
 */
DO.Method = function(obj, sFn) {
    this.obj = obj;
    this.methodName = sFn;
    this.method = obj[sFn];
    this.before = {};
    this.after = {};
};

/**
 * Register an aop subscriber
 * @method register
 * @param sid {string} the subscriber id
 * @param fn {Function} the function to execute
 * @param when {string} when to execute the function
 */
DO.Method.prototype.register = function (sid, fn, when) {
    if (when) {
        this.after[sid] = fn;
    } else {
        this.before[sid] = fn;
    }
};

/**
 * Unregister an aop subscriber
 * @method delete
 * @param sid {string} the subscriber id
 * @param fn {Function} the function to execute
 * @param when {string} when to execute the function
 */
DO.Method.prototype._delete = function (sid) {
    // Y.log('Y.Do._delete: ' + sid, 'info', 'Event');
    delete this.before[sid];
    delete this.after[sid];
};

/**
 * <p>Execute the wrapped method. All arguments are passed into the wrapping
 * functions. If any of the before wrappers return an instance of
 * <code>Y.Do.Halt</code> or <code>Y.Do.Prevent</code>, neither the wrapped
 * function nor any after phase subscribers will be executed.</p>
 *
 * <p>The return value will be the return value of the wrapped function or one
 * provided by a wrapper function via an instance of <code>Y.Do.Halt</code> or
 * <code>Y.Do.AlterReturn</code>.</p>
 *
 * @method exec
 * @param arg* {any} Arguments are passed to the wrapping and wrapped functions
 * @return {any} Return value of wrapped function unless overwritten (see above)
 */
DO.Method.prototype.exec = function () {

    var args = Y.Array(arguments, 0, true),
        i, ret, newRet,
        bf = this.before,
        af = this.after,
        prevented = false;

    // execute before
    for (i in bf) {
        if (bf.hasOwnProperty(i)) {
            ret = bf[i].apply(this.obj, args);
            if (ret) {
                switch (ret.constructor) {
                    case DO.Halt:
                        return ret.retVal;
                    case DO.AlterArgs:
                        args = ret.newArgs;
                        break;
                    case DO.Prevent:
                        prevented = true;
                        break;
                    default:
                }
            }
        }
    }

    // execute method
    if (!prevented) {
        ret = this.method.apply(this.obj, args);
    }

    DO.originalRetVal = ret;
    DO.currentRetVal = ret;

    // execute after methods.
    for (i in af) {
        if (af.hasOwnProperty(i)) {
            newRet = af[i].apply(this.obj, args);

            // Stop processing if a Halt object is returned
            if (newRet && newRet.constructor == DO.Halt) {
                return newRet.retVal;

            // Check for a new return value
            } else if (newRet && newRet.constructor == DO.AlterReturn) {
                ret = newRet.newRetVal;
                // Update the static retval state
                DO.currentRetVal = ret;
            }
        }
    }

    return ret;
};

//////////////////////////////////////////////////////////////////////////

/**
 * Return an AlterArgs object when you want to change the arguments that
 * were passed into the function. Useful for Do.before subscribers. An
 * example would be a service that scrubs out illegal characters prior to
 * executing the core business logic.
 * @class Do.AlterArgs
 * @constructor
 * @param msg {String} (optional) Explanation of the altered return value
 * @param newArgs {Array} Call parameters to be used for the original method
 * instead of the arguments originally passed in.
 */
DO.AlterArgs = function(msg, newArgs) {
    this.msg = msg;
    this.newArgs = newArgs;
};

/**
 * Return an AlterReturn object when you want to change the result returned
 * from the core method to the caller. Useful for Do.after subscribers.
 * @class Do.AlterReturn
 * @constructor
 * @param msg {String} (optional) Explanation of the altered return value
 * @param newRetVal {any} Return value passed to code that invoked the wrapped
 * function.
 */
DO.AlterReturn = function(msg, newRetVal) {
    this.msg = msg;
    this.newRetVal = newRetVal;
};

/**
 * Return a Halt object when you want to terminate the execution
 * of all subsequent subscribers as well as the wrapped method
 * if it has not executed yet. Useful for Do.before subscribers.
 * @class Do.Halt
 * @constructor
 * @param msg {String} (optional) Explanation of why the termination was done
 * @param retVal {any} Return value passed to code that invoked the wrapped
 * function.
 */
DO.Halt = function(msg, retVal) {
    this.msg = msg;
    this.retVal = retVal;
};

/**
 * Return a Prevent object when you want to prevent the wrapped function
 * from executing, but want the remaining listeners to execute. Useful
 * for Do.before subscribers.
 * @class Do.Prevent
 * @constructor
 * @param msg {String} (optional) Explanation of why the termination was done
 */
DO.Prevent = function(msg) {
    this.msg = msg;
};

/**
 * Return an Error object when you want to terminate the execution
 * of all subsequent method calls.
 * @class Do.Error
 * @constructor
 * @param msg {String} (optional) Explanation of the altered return value
 * @param retVal {any} Return value passed to code that invoked the wrapped
 * function.
* @deprecated use Y.Do.Halt or Y.Do.Prevent */ DO.Error = DO.Halt; ////////////////////////////////////////////////////////////////////////// // Y["Event"] && Y.Event.addListener(window, "unload", Y.Do._unload, Y.Do); /** * Custom event engine, DOM event listener abstraction layer, synthetic DOM * events. * @module event-custom * @submodule event-custom-base */ // var onsubscribeType = "_event:onsub", var AFTER = 'after', CONFIGS = [ 'broadcast', 'monitored', 'bubbles', 'context', 'contextFn', 'currentTarget', 'defaultFn', 'defaultTargetOnly', 'details', 'emitFacade', 'fireOnce', 'async', 'host', 'preventable', 'preventedFn', 'queuable', 'silent', 'stoppedFn', 'target', 'type' ], YUI3_SIGNATURE = 9, YUI_LOG = 'yui:log'; /** * The CustomEvent class lets you define events for your application * that can be subscribed to by one or more independent component. * * @param {String} type The type of event, which is passed to the callback * when the event fires. * @param {object} o configuration object. * @class CustomEvent * @constructor */ Y.CustomEvent = function(type, o) { // if (arguments.length > 2) { // this.log('CustomEvent context and silent are now in the config', 'warn', 'Event'); // } o = o || {}; this.id = Y.stamp(this); /** * The type of event, returned to subscribers when the event fires * @property type * @type string */ this.type = type; /** * The context the the event will fire from by default. Defaults to the YUI * instance. * @property context * @type object */ this.context = Y; /** * Monitor when an event is attached or detached. * * @property monitored * @type boolean */ // this.monitored = false; this.logSystem = (type == YUI_LOG); /** * If 0, this event does not broadcast. If 1, the YUI instance is notified * every time this event fires. If 2, the YUI instance and the YUI global * (if event is enabled on the global) are notified every time this event * fires. * @property broadcast * @type int */ // this.broadcast = 0; /** * By default all custom events are logged in the debug build, set silent * to true to disable debug outpu for this event. * @property silent * @type boolean */ this.silent = this.logSystem; /** * Specifies whether this event should be queued when the host is actively * processing an event. This will effect exectution order of the callbacks * for the various events. * @property queuable * @type boolean * @default false */ // this.queuable = false; /** * The subscribers to this event * @property subscribers * @type Subscriber {} */ this.subscribers = {}; /** * 'After' subscribers * @property afters * @type Subscriber {} */ this.afters = {}; /** * This event has fired if true * * @property fired * @type boolean * @default false; */ // this.fired = false; /** * An array containing the arguments the custom event * was last fired with. * @property firedWith * @type Array */ // this.firedWith; /** * This event should only fire one time if true, and if * it has fired, any new subscribers should be notified * immediately. * * @property fireOnce * @type boolean * @default false; */ // this.fireOnce = false; /** * fireOnce listeners will fire syncronously unless async * is set to true * @property async * @type boolean * @default false */ //this.async = false; /** * Flag for stopPropagation that is modified during fire() * 1 means to stop propagation to bubble targets. 2 means * to also stop additional subscribers on this target. * @property stopped * @type int */ // this.stopped = 0; /** * Flag for preventDefault that is modified during fire(). 
* if it is not 0, the default behavior for this event * @property prevented * @type int */ // this.prevented = 0; /** * Specifies the host for this custom event. This is used * to enable event bubbling * @property host * @type EventTarget */ // this.host = null; /** * The default function to execute after event listeners * have fire, but only if the default action was not * prevented. * @property defaultFn * @type Function */ // this.defaultFn = null; /** * The function to execute if a subscriber calls * stopPropagation or stopImmediatePropagation * @property stoppedFn * @type Function */ // this.stoppedFn = null; /** * The function to execute if a subscriber calls * preventDefault * @property preventedFn * @type Function */ // this.preventedFn = null; /** * Specifies whether or not this event's default function * can be cancelled by a subscriber by executing preventDefault() * on the event facade * @property preventable * @type boolean * @default true */ this.preventable = true; /** * Specifies whether or not a subscriber can stop the event propagation * via stopPropagation(), stopImmediatePropagation(), or halt() * * Events can only bubble if emitFacade is true. * * @property bubbles * @type boolean * @default true */ this.bubbles = true; /** * Supports multiple options for listener signatures in order to * port YUI 2 apps. * @property signature * @type int * @default 9 */ this.signature = YUI3_SIGNATURE; this.subCount = 0; this.afterCount = 0; // this.hasSubscribers = false; // this.hasAfters = false; /** * If set to true, the custom event will deliver an EventFacade object * that is similar to a DOM event object. * @property emitFacade * @type boolean * @default false */ // this.emitFacade = false; this.applyConfig(o, true); // this.log("Creating " + this.type); }; Y.CustomEvent.prototype = { constructor: Y.CustomEvent, /** * Returns the number of subscribers for this event as the sum of the on() * subscribers and after() subscribers. * * @method hasSubs * @return Number */ hasSubs: function(when) { var s = this.subCount, a = this.afterCount, sib = this.sibling; if (sib) { s += sib.subCount; a += sib.afterCount; } if (when) { return (when == 'after') ? a : s; } return (s + a); }, /** * Monitor the event state for the subscribed event. The first parameter * is what should be monitored, the rest are the normal parameters when * subscribing to an event. * @method monitor * @param what {string} what to monitor ('detach', 'attach', 'publish'). * @return {EventHandle} return value from the monitor event subscription. */ monitor: function(what) { this.monitored = true; var type = this.id + '|' + this.type + '_' + what, args = Y.Array(arguments, 0, true); args[0] = type; return this.host.on.apply(this.host, args); }, /** * Get all of the subscribers to this event and any sibling event * @method getSubs * @return {Array} first item is the on subscribers, second the after. */ getSubs: function() { var s = Y.merge(this.subscribers), a = Y.merge(this.afters), sib = this.sibling; if (sib) { Y.mix(s, sib.subscribers); Y.mix(a, sib.afters); } return [s, a]; }, /** * Apply configuration properties. Only applies the CONFIG whitelist * @method applyConfig * @param o hash of properties to apply. * @param force {boolean} if true, properties that exist on the event * will be overwritten. */ applyConfig: function(o, force) { if (o) { Y.mix(this, o, force, CONFIGS); } }, /** * Create the Subscription for subscribing function, context, and bound * arguments. 
If this is a fireOnce event, the subscriber is immediately * notified. * * @method _on * @param fn {Function} Subscription callback * @param [context] {Object} Override `this` in the callback * @param [args] {Array} bound arguments that will be passed to the callback after the arguments generated by fire() * @param [when] {String} "after" to slot into after subscribers * @return {EventHandle} * @protected */ _on: function(fn, context, args, when) { if (!fn) { this.log('Invalid callback for CE: ' + this.type); } var s = new Y.Subscriber(fn, context, args, when); if (this.fireOnce && this.fired) { if (this.async) { setTimeout(Y.bind(this._notify, this, s, this.firedWith), 0); } else { this._notify(s, this.firedWith); } } if (when == AFTER) { this.afters[s.id] = s; this.afterCount++; } else { this.subscribers[s.id] = s; this.subCount++; } return new Y.EventHandle(this, s); }, /** * Listen for this event * @method subscribe * @param {Function} fn The function to execute. * @return {EventHandle} Unsubscribe handle. * @deprecated use on. */ subscribe: function(fn, context) { Y.log('ce.subscribe deprecated, use "on"', 'warn', 'deprecated'); var a = (arguments.length > 2) ? Y.Array(arguments, 2, true) : null; return this._on(fn, context, a, true); }, /** * Listen for this event * @method on * @param {Function} fn The function to execute. * @param {object} context optional execution context. * @param {mixed} arg* 0..n additional arguments to supply to the subscriber * when the event fires. * @return {EventHandle} An object with a detach method to detch the handler(s). */ on: function(fn, context) { var a = (arguments.length > 2) ? Y.Array(arguments, 2, true) : null; if (this.host) { this.host._monitor('attach', this.type, { args: arguments }); } return this._on(fn, context, a, true); }, /** * Listen for this event after the normal subscribers have been notified and * the default behavior has been applied. If a normal subscriber prevents the * default behavior, it also prevents after listeners from firing. * @method after * @param {Function} fn The function to execute. * @param {object} context optional execution context. * @param {mixed} arg* 0..n additional arguments to supply to the subscriber * when the event fires. * @return {EventHandle} handle Unsubscribe handle. */ after: function(fn, context) { var a = (arguments.length > 2) ? Y.Array(arguments, 2, true) : null; return this._on(fn, context, a, AFTER); }, /** * Detach listeners. * @method detach * @param {Function} fn The subscribed function to remove, if not supplied * all will be removed. * @param {Object} context The context object passed to subscribe. * @return {int} returns the number of subscribers unsubscribed. */ detach: function(fn, context) { // unsubscribe handle if (fn && fn.detach) { return fn.detach(); } var i, s, found = 0, subs = Y.merge(this.subscribers, this.afters); for (i in subs) { if (subs.hasOwnProperty(i)) { s = subs[i]; if (s && (!fn || fn === s.fn)) { this._delete(s); found++; } } } return found; }, /** * Detach listeners. * @method unsubscribe * @param {Function} fn The subscribed function to remove, if not supplied * all will be removed. * @param {Object} context The context object passed to subscribe. * @return {int|undefined} returns the number of subscribers unsubscribed. * @deprecated use detach. */ unsubscribe: function() { return this.detach.apply(this, arguments); }, /** * Notify a single subscriber * @method _notify * @param {Subscriber} s the subscriber. 
* @param {Array} args the arguments array to apply to the listener. * @protected */ _notify: function(s, args, ef) { this.log(this.type + '->' + 'sub: ' + s.id); var ret; ret = s.notify(args, this); if (false === ret || this.stopped > 1) { this.log(this.type + ' cancelled by subscriber'); return false; } return true; }, /** * Logger abstraction to centralize the application of the silent flag * @method log * @param {string} msg message to log. * @param {string} cat log category. */ log: function(msg, cat) { if (!this.silent) { Y.log(this.id + ': ' + msg, cat || 'info', 'event'); } }, /** * Notifies the subscribers. The callback functions will be executed * from the context specified when the event was created, and with the * following parameters: * <ul> * <li>The type of event</li> * <li>All of the arguments fire() was executed with as an array</li> * <li>The custom object (if any) that was passed into the subscribe() * method</li> * </ul> * @method fire * @param {Object*} arguments an arbitrary set of parameters to pass to * the handler. * @return {boolean} false if one of the subscribers returned false, * true otherwise. * */ fire: function() { if (this.fireOnce && this.fired) { this.log('fireOnce event: ' + this.type + ' already fired'); return true; } else { var args = Y.Array(arguments, 0, true); // this doesn't happen if the event isn't published // this.host._monitor('fire', this.type, args); this.fired = true; this.firedWith = args; if (this.emitFacade) { return this.fireComplex(args); } else { return this.fireSimple(args); } } }, /** * Set up for notifying subscribers of non-emitFacade events. * * @method fireSimple * @param args {Array} Arguments passed to fire() * @return Boolean false if a subscriber returned false * @protected */ fireSimple: function(args) { this.stopped = 0; this.prevented = 0; if (this.hasSubs()) { // this._procSubs(Y.merge(this.subscribers, this.afters), args); var subs = this.getSubs(); this._procSubs(subs[0], args); this._procSubs(subs[1], args); } this._broadcast(args); return this.stopped ? false : true; }, // Requires the event-custom-complex module for full funcitonality. fireComplex: function(args) { Y.log('Missing event-custom-complex needed to emit a facade for: ' + this.type); args[0] = args[0] || {}; return this.fireSimple(args); }, /** * Notifies a list of subscribers. * * @method _procSubs * @param subs {Array} List of subscribers * @param args {Array} Arguments passed to fire() * @param ef {} * @return Boolean false if a subscriber returns false or stops the event * propagation via e.stopPropagation(), * e.stopImmediatePropagation(), or e.halt() * @private */ _procSubs: function(subs, args, ef) { var s, i; for (i in subs) { if (subs.hasOwnProperty(i)) { s = subs[i]; if (s && s.fn) { if (false === this._notify(s, args, ef)) { this.stopped = 2; } if (this.stopped == 2) { return false; } } } } return true; }, /** * Notifies the YUI instance if the event is configured with broadcast = 1, * and both the YUI instance and Y.Global if configured with broadcast = 2. * * @method _broadcast * @param args {Array} Arguments sent to fire() * @private */ _broadcast: function(args) { if (!this.stopped && this.broadcast) { var a = Y.Array(args); a.unshift(this.type); if (this.host !== Y) { Y.fire.apply(Y, a); } if (this.broadcast == 2) { Y.Global.fire.apply(Y.Global, a); } } }, /** * Removes all listeners * @method unsubscribeAll * @return {int} The number of listeners unsubscribed. * @deprecated use detachAll. 
*/ unsubscribeAll: function() { return this.detachAll.apply(this, arguments); }, /** * Removes all listeners * @method detachAll * @return {int} The number of listeners unsubscribed. */ detachAll: function() { return this.detach(); }, /** * Deletes the subscriber from the internal store of on() and after() * subscribers. * * @method _delete * @param subscriber object. * @private */ _delete: function(s) { if (s) { if (this.subscribers[s.id]) { delete this.subscribers[s.id]; this.subCount--; } if (this.afters[s.id]) { delete this.afters[s.id]; this.afterCount--; } } if (this.host) { this.host._monitor('detach', this.type, { ce: this, sub: s }); } if (s) { // delete s.fn; // delete s.context; s.deleted = true; } } }; /** * Stores the subscriber information to be used when the event fires. * @param {Function} fn The wrapped function to execute. * @param {Object} context The value of the keyword 'this' in the listener. * @param {Array} args* 0..n additional arguments to supply the listener. * * @class Subscriber * @constructor */ Y.Subscriber = function(fn, context, args) { /** * The callback that will be execute when the event fires * This is wrapped by Y.rbind if obj was supplied. * @property fn * @type Function */ this.fn = fn; /** * Optional 'this' keyword for the listener * @property context * @type Object */ this.context = context; /** * Unique subscriber id * @property id * @type String */ this.id = Y.stamp(this); /** * Additional arguments to propagate to the subscriber * @property args * @type Array */ this.args = args; /** * Custom events for a given fire transaction. * @property events * @type {EventTarget} */ // this.events = null; /** * This listener only reacts to the event once * @property once */ // this.once = false; }; Y.Subscriber.prototype = { constructor: Y.Subscriber, _notify: function(c, args, ce) { if (this.deleted && !this.postponed) { if (this.postponed) { delete this.fn; delete this.context; } else { delete this.postponed; return null; } } var a = this.args, ret; switch (ce.signature) { case 0: ret = this.fn.call(c, ce.type, args, c); break; case 1: ret = this.fn.call(c, args[0] || null, c); break; default: if (a || args) { args = args || []; a = (a) ? args.concat(a) : args; ret = this.fn.apply(c, a); } else { ret = this.fn.call(c); } } if (this.once) { ce._delete(this); } return ret; }, /** * Executes the subscriber. * @method notify * @param args {Array} Arguments array for the subscriber. * @param ce {CustomEvent} The custom event that sent the notification. */ notify: function(args, ce) { var c = this.context, ret = true; if (!c) { c = (ce.contextFn) ? ce.contextFn() : ce.context; } // only catch errors if we will not re-throw them. if (Y.config.throwFail) { ret = this._notify(c, args, ce); } else { try { ret = this._notify(c, args, ce); } catch (e) { Y.error(this + ' failed: ' + e.message, e); } } return ret; }, /** * Returns true if the fn and obj match this objects properties. * Used by the unsubscribe method to match the right subscriber. * * @method contains * @param {Function} fn the function to execute. * @param {Object} context optional 'this' keyword for the listener. * @return {boolean} true if the supplied arguments match this * subscriber's signature. */ contains: function(fn, context) { if (context) { return ((this.fn == fn) && this.context == context); } else { return (this.fn == fn); } } }; /** * Return value from all subscribe operations * @class EventHandle * @constructor * @param {CustomEvent} evt the custom event. 
* @param {Subscriber} sub the subscriber. */ Y.EventHandle = function(evt, sub) { /** * The custom event * * @property evt * @type CustomEvent */ this.evt = evt; /** * The subscriber object * * @property sub * @type Subscriber */ this.sub = sub; }; Y.EventHandle.prototype = { batch: function(f, c) { f.call(c || this, this); if (Y.Lang.isArray(this.evt)) { Y.Array.each(this.evt, function(h) { h.batch.call(c || h, f); }); } }, /** * Detaches this subscriber * @method detach * @return {int} the number of detached listeners */ detach: function() { var evt = this.evt, detached = 0, i; if (evt) { // Y.log('EventHandle.detach: ' + this.sub, 'info', 'Event'); if (Y.Lang.isArray(evt)) { for (i = 0; i < evt.length; i++) { detached += evt[i].detach(); } } else { evt._delete(this.sub); detached = 1; } } return detached; }, /** * Monitor the event state for the subscribed event. The first parameter * is what should be monitored, the rest are the normal parameters when * subscribing to an event. * @method monitor * @param what {string} what to monitor ('attach', 'detach', 'publish'). * @return {EventHandle} return value from the monitor event subscription. */ monitor: function(what) { return this.evt.monitor.apply(this.evt, arguments); } }; /** * Custom event engine, DOM event listener abstraction layer, synthetic DOM * events. * @module event-custom * @submodule event-custom-base */ /** * EventTarget provides the implementation for any object to * publish, subscribe and fire to custom events, and also * alows other EventTargets to target the object with events * sourced from the other object. * EventTarget is designed to be used with Y.augment to wrap * EventCustom in an interface that allows events to be listened to * and fired by name. This makes it possible for implementing code to * subscribe to an event that either has not been created yet, or will * not be created at all. * @class EventTarget * @param opts a configuration object * @config emitFacade {boolean} if true, all events will emit event * facade payloads by default (default false) * @config prefix {String} the prefix to apply to non-prefixed event names */ var L = Y.Lang, PREFIX_DELIMITER = ':', CATEGORY_DELIMITER = '|', AFTER_PREFIX = '~AFTER~', YArray = Y.Array, _wildType = Y.cached(function(type) { return type.replace(/(.*)(:)(.*)/, "*$2$3"); }), /** * If the instance has a prefix attribute and the * event type is not prefixed, the instance prefix is * applied to the supplied type. * @method _getType * @private */ _getType = Y.cached(function(type, pre) { if (!pre || !L.isString(type) || type.indexOf(PREFIX_DELIMITER) > -1) { return type; } return pre + PREFIX_DELIMITER + type; }), /** * Returns an array with the detach key (if provided), * and the prefixed event name from _getType * Y.on('detachcategory| menu:click', fn) * @method _parseType * @private */ _parseType = Y.cached(function(type, pre) { var t = type, detachcategory, after, i; if (!L.isString(t)) { return t; } i = t.indexOf(AFTER_PREFIX); if (i > -1) { after = true; t = t.substr(AFTER_PREFIX.length); // Y.log(t); } i = t.indexOf(CATEGORY_DELIMITER); if (i > -1) { detachcategory = t.substr(0, (i)); t = t.substr(i+1); if (t == '*') { t = null; } } // detach category, full type with instance prefix, is this an after listener, short type return [detachcategory, (pre) ? _getType(t, pre) : t, after, t]; }), ET = function(opts) { // Y.log('EventTarget constructor executed: ' + this._yuid); var o = (L.isObject(opts)) ? 
opts : {}; this._yuievt = this._yuievt || { id: Y.guid(), events: {}, targets: {}, config: o, chain: ('chain' in o) ? o.chain : Y.config.chain, bubbling: false, defaults: { context: o.context || this, host: this, emitFacade: o.emitFacade, fireOnce: o.fireOnce, queuable: o.queuable, monitored: o.monitored, broadcast: o.broadcast, defaultTargetOnly: o.defaultTargetOnly, bubbles: ('bubbles' in o) ? o.bubbles : true } }; }; ET.prototype = { constructor: ET, /** * Listen to a custom event hosted by this object one time. * This is the equivalent to <code>on</code> except the * listener is immediatelly detached when it is executed. * @method once * @param {String} type The name of the event * @param {Function} fn The callback to execute in response to the event * @param {Object} [context] Override `this` object in callback * @param {Any} [arg*] 0..n additional arguments to supply to the subscriber * @return {EventHandle} A subscription handle capable of detaching the * subscription */ once: function() { var handle = this.on.apply(this, arguments); handle.batch(function(hand) { if (hand.sub) { hand.sub.once = true; } }); return handle; }, /** * Listen to a custom event hosted by this object one time. * This is the equivalent to <code>after</code> except the * listener is immediatelly detached when it is executed. * @method onceAfter * @param {String} type The name of the event * @param {Function} fn The callback to execute in response to the event * @param {Object} [context] Override `this` object in callback * @param {Any} [arg*] 0..n additional arguments to supply to the subscriber * @return {EventHandle} A subscription handle capable of detaching that * subscription */ onceAfter: function() { var handle = this.after.apply(this, arguments); handle.batch(function(hand) { if (hand.sub) { hand.sub.once = true; } }); return handle; }, /** * Takes the type parameter passed to 'on' and parses out the * various pieces that could be included in the type. If the * event type is passed without a prefix, it will be expanded * to include the prefix one is supplied or the event target * is configured with a default prefix. * @method parseType * @param {String} type the type * @param {String} [pre=this._yuievt.config.prefix] the prefix * @since 3.3.0 * @return {Array} an array containing: * * the detach category, if supplied, * * the prefixed event type, * * whether or not this is an after listener, * * the supplied event type */ parseType: function(type, pre) { return _parseType(type, pre || this._yuievt.config.prefix); }, /** * Subscribe a callback function to a custom event fired by this object or * from an object that bubbles its events to this object. * * Callback functions for events published with `emitFacade = true` will * receive an `EventFacade` as the first argument (typically named "e"). * These callbacks can then call `e.preventDefault()` to disable the * behavior published to that event's `defaultFn`. See the `EventFacade` * API for all available properties and methods. Subscribers to * non-`emitFacade` events will receive the arguments passed to `fire()` * after the event name. * * To subscribe to multiple events at once, pass an object as the first * argument, where the key:value pairs correspond to the eventName:callback, * or pass an array of event names as the first argument to subscribe to * all listed events with the same callback. * * Returning `false` from a callback is supported as an alternative to * calling `e.preventDefault(); e.stopPropagation();`. 
However, it is * recommended to use the event methods whenever possible. * * @method on * @param {String} type The name of the event * @param {Function} fn The callback to execute in response to the event * @param {Object} [context] Override `this` object in callback * @param {Any} [arg*] 0..n additional arguments to supply to the subscriber * @return {EventHandle} A subscription handle capable of detaching that * subscription */ on: function(type, fn, context) { var parts = _parseType(type, this._yuievt.config.prefix), f, c, args, ret, ce, detachcategory, handle, store = Y.Env.evt.handles, after, adapt, shorttype, Node = Y.Node, n, domevent, isArr; // full name, args, detachcategory, after this._monitor('attach', parts[1], { args: arguments, category: parts[0], after: parts[2] }); if (L.isObject(type)) { if (L.isFunction(type)) { return Y.Do.before.apply(Y.Do, arguments); } f = fn; c = context; args = YArray(arguments, 0, true); ret = []; if (L.isArray(type)) { isArr = true; } after = type._after; delete type._after; Y.each(type, function(v, k) { if (L.isObject(v)) { f = v.fn || ((L.isFunction(v)) ? v : f); c = v.context || c; } var nv = (after) ? AFTER_PREFIX : ''; args[0] = nv + ((isArr) ? v : k); args[1] = f; args[2] = c; ret.push(this.on.apply(this, args)); }, this); return (this._yuievt.chain) ? this : new Y.EventHandle(ret); } detachcategory = parts[0]; after = parts[2]; shorttype = parts[3]; // extra redirection so we catch adaptor events too. take a look at this. if (Node && Y.instanceOf(this, Node) && (shorttype in Node.DOM_EVENTS)) { args = YArray(arguments, 0, true); args.splice(2, 0, Node.getDOMNode(this)); // Y.log("Node detected, redirecting with these args: " + args); return Y.on.apply(Y, args); } type = parts[1]; if (Y.instanceOf(this, YUI)) { adapt = Y.Env.evt.plugins[type]; args = YArray(arguments, 0, true); args[0] = shorttype; if (Node) { n = args[2]; if (Y.instanceOf(n, Y.NodeList)) { n = Y.NodeList.getDOMNodes(n); } else if (Y.instanceOf(n, Node)) { n = Node.getDOMNode(n); } domevent = (shorttype in Node.DOM_EVENTS); // Captures both DOM events and event plugins. if (domevent) { args[2] = n; } } // check for the existance of an event adaptor if (adapt) { Y.log('Using adaptor for ' + shorttype + ', ' + n, 'info', 'event'); handle = adapt.on.apply(Y, args); } else if ((!type) || domevent) { handle = Y.Event._attach(args); } } if (!handle) { ce = this._yuievt.events[type] || this.publish(type); handle = ce._on(fn, context, (arguments.length > 3) ? YArray(arguments, 3, true) : null, (after) ? 'after' : true); } if (detachcategory) { store[detachcategory] = store[detachcategory] || {}; store[detachcategory][type] = store[detachcategory][type] || []; store[detachcategory][type].push(handle); } return (this._yuievt.chain) ? this : handle; }, /** * subscribe to an event * @method subscribe * @deprecated use on */ subscribe: function() { Y.log('EventTarget subscribe() is deprecated, use on()', 'warn', 'deprecated'); return this.on.apply(this, arguments); }, /** * Detach one or more listeners the from the specified event * @method detach * @param type {string|Object} Either the handle to the subscriber or the * type of event. If the type * is not specified, it will attempt to remove * the listener from all hosted events. * @param fn {Function} The subscribed function to unsubscribe, if not * supplied, all subscribers will be removed. * @param context {Object} The custom object passed to subscribe. 
This is * optional, but if supplied will be used to * disambiguate multiple listeners that are the same * (e.g., you subscribe many object using a function * that lives on the prototype) * @return {EventTarget} the host */ detach: function(type, fn, context) { var evts = this._yuievt.events, i, Node = Y.Node, isNode = Node && (Y.instanceOf(this, Node)); // detachAll disabled on the Y instance. if (!type && (this !== Y)) { for (i in evts) { if (evts.hasOwnProperty(i)) { evts[i].detach(fn, context); } } if (isNode) { Y.Event.purgeElement(Node.getDOMNode(this)); } return this; } var parts = _parseType(type, this._yuievt.config.prefix), detachcategory = L.isArray(parts) ? parts[0] : null, shorttype = (parts) ? parts[3] : null, adapt, store = Y.Env.evt.handles, detachhost, cat, args, ce, keyDetacher = function(lcat, ltype, host) { var handles = lcat[ltype], ce, i; if (handles) { for (i = handles.length - 1; i >= 0; --i) { ce = handles[i].evt; if (ce.host === host || ce.el === host) { handles[i].detach(); } } } }; if (detachcategory) { cat = store[detachcategory]; type = parts[1]; detachhost = (isNode) ? Y.Node.getDOMNode(this) : this; if (cat) { if (type) { keyDetacher(cat, type, detachhost); } else { for (i in cat) { if (cat.hasOwnProperty(i)) { keyDetacher(cat, i, detachhost); } } } return this; } // If this is an event handle, use it to detach } else if (L.isObject(type) && type.detach) { type.detach(); return this; // extra redirection so we catch adaptor events too. take a look at this. } else if (isNode && ((!shorttype) || (shorttype in Node.DOM_EVENTS))) { args = YArray(arguments, 0, true); args[2] = Node.getDOMNode(this); Y.detach.apply(Y, args); return this; } adapt = Y.Env.evt.plugins[shorttype]; // The YUI instance handles DOM events and adaptors if (Y.instanceOf(this, YUI)) { args = YArray(arguments, 0, true); // use the adaptor specific detach code if if (adapt && adapt.detach) { adapt.detach.apply(Y, args); return this; // DOM event fork } else if (!type || (!adapt && Node && (type in Node.DOM_EVENTS))) { args[0] = type; Y.Event.detach.apply(Y.Event, args); return this; } } // ce = evts[type]; ce = evts[parts[1]]; if (ce) { ce.detach(fn, context); } return this; }, /** * detach a listener * @method unsubscribe * @deprecated use detach */ unsubscribe: function() { Y.log('EventTarget unsubscribe() is deprecated, use detach()', 'warn', 'deprecated'); return this.detach.apply(this, arguments); }, /** * Removes all listeners from the specified event. If the event type * is not specified, all listeners from all hosted custom events will * be removed. * @method detachAll * @param type {String} The type, or name of the event */ detachAll: function(type) { return this.detach(type); }, /** * Removes all listeners from the specified event. If the event type * is not specified, all listeners from all hosted custom events will * be removed. * @method unsubscribeAll * @param type {String} The type, or name of the event * @deprecated use detachAll */ unsubscribeAll: function() { Y.log('EventTarget unsubscribeAll() is deprecated, use detachAll()', 'warn', 'deprecated'); return this.detachAll.apply(this, arguments); }, /** * Creates a new custom event of the specified type. If a custom event * by that name already exists, it will not be re-created. In either * case the custom event is returned. * * @method publish * * @param type {String} the type, or name of the event * @param opts {object} optional config params. 
Valid properties are: * * <ul> * <li> * 'broadcast': whether or not the YUI instance and YUI global are notified when the event is fired (false) * </li> * <li> * 'bubbles': whether or not this event bubbles (true) * Events can only bubble if emitFacade is true. * </li> * <li> * 'context': the default execution context for the listeners (this) * </li> * <li> * 'defaultFn': the default function to execute when this event fires if preventDefault was not called * </li> * <li> * 'emitFacade': whether or not this event emits a facade (false) * </li> * <li> * 'prefix': the prefix for this targets events, e.g., 'menu' in 'menu:click' * </li> * <li> * 'fireOnce': if an event is configured to fire once, new subscribers after * the fire will be notified immediately. * </li> * <li> * 'async': fireOnce event listeners will fire synchronously if the event has already * fired unless async is true. * </li> * <li> * 'preventable': whether or not preventDefault() has an effect (true) * </li> * <li> * 'preventedFn': a function that is executed when preventDefault is called * </li> * <li> * 'queuable': whether or not this event can be queued during bubbling (false) * </li> * <li> * 'silent': if silent is true, debug messages are not provided for this event. * </li> * <li> * 'stoppedFn': a function that is executed when stopPropagation is called * </li> * * <li> * 'monitored': specifies whether or not this event should send notifications about * when the event has been attached, detached, or published. * </li> * <li> * 'type': the event type (valid option if not provided as the first parameter to publish) * </li> * </ul> * * @return {CustomEvent} the custom event * */ publish: function(type, opts) { var events, ce, ret, defaults, edata = this._yuievt, pre = edata.config.prefix; type = (pre) ? _getType(type, pre) : type; this._monitor('publish', type, { args: arguments }); if (L.isObject(type)) { ret = {}; Y.each(type, function(v, k) { ret[k] = this.publish(k, v || opts); }, this); return ret; } events = edata.events; ce = events[type]; if (ce) { // ce.log("publish applying new config to published event: '"+type+"' exists", 'info', 'event'); if (opts) { ce.applyConfig(opts, true); } } else { defaults = edata.defaults; // apply defaults ce = new Y.CustomEvent(type, (opts) ? Y.merge(defaults, opts) : defaults); events[type] = ce; } // make sure we turn the broadcast flag off if this // event was published as a result of bubbling // if (opts instanceof Y.CustomEvent) { // events[type].broadcast = false; // } return events[type]; }, /** * This is the entry point for the event monitoring system. * You can monitor 'attach', 'detach', 'fire', and 'publish'. * When configured, these events generate an event. click -> * click_attach, click_detach, click_publish -- these can * be subscribed to like other events to monitor the event * system. Inividual published events can have monitoring * turned on or off (publish can't be turned off before it * it published) by setting the events 'monitor' config. 
* * @method _monitor * @param what {String} 'attach', 'detach', 'fire', or 'publish' * @param type {String} Name of the event being monitored * @param o {Object} Information about the event interaction, such as * fire() args, subscription category, publish config * @private */ _monitor: function(what, type, o) { var monitorevt, ce = this.getEvent(type); if ((this._yuievt.config.monitored && (!ce || ce.monitored)) || (ce && ce.monitored)) { monitorevt = type + '_' + what; // Y.log('monitoring: ' + monitorevt); o.monitored = what; this.fire.call(this, monitorevt, o); } }, /** * Fire a custom event by name. The callback functions will be executed * from the context specified when the event was created, and with the * following parameters. * * If the custom event object hasn't been created, then the event hasn't * been published and it has no subscribers. For performance sake, we * immediate exit in this case. This means the event won't bubble, so * if the intention is that a bubble target be notified, the event must * be published on this object first. * * The first argument is the event type, and any additional arguments are * passed to the listeners as parameters. If the first of these is an * object literal, and the event is configured to emit an event facade, * that object is mixed into the event facade and the facade is provided * in place of the original object. * * @method fire * @param type {String|Object} The type of the event, or an object that contains * a 'type' property. * @param arguments {Object*} an arbitrary set of parameters to pass to * the handler. If the first of these is an object literal and the event is * configured to emit an event facade, the event facade will replace that * parameter after the properties the object literal contains are copied to * the event facade. * @return {EventTarget} the event host * */ fire: function(type) { var typeIncluded = L.isString(type), t = (typeIncluded) ? type : (type && type.type), ce, ret, pre = this._yuievt.config.prefix, ce2, args = (typeIncluded) ? YArray(arguments, 1, true) : arguments; t = (pre) ? _getType(t, pre) : t; this._monitor('fire', t, { args: args }); ce = this.getEvent(t, true); ce2 = this.getSibling(t, ce); if (ce2 && !ce) { ce = this.publish(t); } // this event has not been published or subscribed to if (!ce) { if (this._yuievt.hasTargets) { return this.bubble({ type: t }, args, this); } // otherwise there is nothing to be done ret = true; } else { ce.sibling = ce2; ret = ce.fire.apply(ce, args); } return (this._yuievt.chain) ? this : ret; }, getSibling: function(type, ce) { var ce2; // delegate to *:type events if there are subscribers if (type.indexOf(PREFIX_DELIMITER) > -1) { type = _wildType(type); // console.log(type); ce2 = this.getEvent(type, true); if (ce2) { // console.log("GOT ONE: " + type); ce2.applyConfig(ce); ce2.bubbles = false; ce2.broadcast = 0; // ret = ce2.fire.apply(ce2, a); } } return ce2; }, /** * Returns the custom event of the provided type has been created, a * falsy value otherwise * @method getEvent * @param type {String} the type, or name of the event * @param prefixed {String} if true, the type is prefixed already * @return {CustomEvent} the custom event or null */ getEvent: function(type, prefixed) { var pre, e; if (!prefixed) { pre = this._yuievt.config.prefix; type = (pre) ? _getType(type, pre) : type; } e = this._yuievt.events; return e[type] || null; }, /** * Subscribe to a custom event hosted by this object. 
The * supplied callback will execute after any listeners add * via the subscribe method, and after the default function, * if configured for the event, has executed. * * @method after * @param {String} type The name of the event * @param {Function} fn The callback to execute in response to the event * @param {Object} [context] Override `this` object in callback * @param {Any} [arg*] 0..n additional arguments to supply to the subscriber * @return {EventHandle} A subscription handle capable of detaching the * subscription */ after: function(type, fn) { var a = YArray(arguments, 0, true); switch (L.type(type)) { case 'function': return Y.Do.after.apply(Y.Do, arguments); case 'array': // YArray.each(a[0], function(v) { // v = AFTER_PREFIX + v; // }); // break; case 'object': a[0]._after = true; break; default: a[0] = AFTER_PREFIX + type; } return this.on.apply(this, a); }, /** * Executes the callback before a DOM event, custom event * or method. If the first argument is a function, it * is assumed the target is a method. For DOM and custom * events, this is an alias for Y.on. * * For DOM and custom events: * type, callback, context, 0-n arguments * * For methods: * callback, object (method host), methodName, context, 0-n arguments * * @method before * @return detach handle */ before: function() { return this.on.apply(this, arguments); } }; Y.EventTarget = ET; // make Y an event target Y.mix(Y, ET.prototype); ET.call(Y, { bubbles: false }); YUI.Env.globalEvents = YUI.Env.globalEvents || new ET(); /** * Hosts YUI page level events. This is where events bubble to * when the broadcast config is set to 2. This property is * only available if the custom event module is loaded. * @property Global * @type EventTarget * @for YUI */ Y.Global = YUI.Env.globalEvents; // @TODO implement a global namespace function on Y.Global? /** `Y.on()` can do many things: <ul> <li>Subscribe to custom events `publish`ed and `fire`d from Y</li> <li>Subscribe to custom events `publish`ed with `broadcast` 1 or 2 and `fire`d from any object in the YUI instance sandbox</li> <li>Subscribe to DOM events</li> <li>Subscribe to the execution of a method on any object, effectively treating that method as an event</li> </ul> For custom event subscriptions, pass the custom event name as the first argument and callback as the second. The `this` object in the callback will be `Y` unless an override is passed as the third argument. Y.on('io:complete', function () { Y.MyApp.updateStatus('Transaction complete'); }); To subscribe to DOM events, pass the name of a DOM event as the first argument and a CSS selector string as the third argument after the callback function. Alternately, the third argument can be a `Node`, `NodeList`, `HTMLElement`, array, or simply omitted (the default is the `window` object). Y.on('click', function (e) { e.preventDefault(); // proceed with ajax form submission var url = this.get('action'); ... }, '#my-form'); The `this` object in DOM event callbacks will be the `Node` targeted by the CSS selector or other identifier. `on()` subscribers for DOM events or custom events `publish`ed with a `defaultFn` can prevent the default behavior with `e.preventDefault()` from the event object passed as the first parameter to the subscription callback. To subscribe to the execution of an object method, pass arguments corresponding to the call signature for <a href="../classes/Do.html#methods_before">`Y.Do.before(...)`</a>. NOTE: The formal parameter list below is for events, not for function injection. 
See `Y.Do.before` for that signature. @method on @param {String} type DOM or custom event name @param {Function} fn The callback to execute in response to the event @param {Object} [context] Override `this` object in callback @param {Any} [arg*] 0..n additional arguments to supply to the subscriber @return {EventHandle} A subscription handle capable of detaching the subscription @see Do.before @for YUI **/ /** Listen for an event one time. Equivalent to `on()`, except that the listener is immediately detached when executed. See the <a href="#methods_on">`on()` method</a> for additional subscription options. @see on @method once @param {String} type DOM or custom event name @param {Function} fn The callback to execute in response to the event @param {Object} [context] Override `this` object in callback @param {Any} [arg*] 0..n additional arguments to supply to the subscriber @return {EventHandle} A subscription handle capable of detaching the subscription @for YUI **/ /** Listen for an event one time. Equivalent to `once()`, except, like `after()`, the subscription callback executes after all `on()` subscribers and the event's `defaultFn` (if configured) have executed. Like `after()` if any `on()` phase subscriber calls `e.preventDefault()`, neither the `defaultFn` nor the `after()` subscribers will execute. The listener is immediately detached when executed. See the <a href="#methods_on">`on()` method</a> for additional subscription options. @see once @method onceAfter @param {String} type The custom event name @param {Function} fn The callback to execute in response to the event @param {Object} [context] Override `this` object in callback @param {Any} [arg*] 0..n additional arguments to supply to the subscriber @return {EventHandle} A subscription handle capable of detaching the subscription @for YUI **/ /** Like `on()`, this method creates a subscription to a custom event or to the execution of a method on an object. For events, `after()` subscribers are executed after the event's `defaultFn` unless `e.preventDefault()` was called from an `on()` subscriber. See the <a href="#methods_on">`on()` method</a> for additional subscription options. NOTE: The subscription signature shown is for events, not for function injection. See <a href="../classes/Do.html#methods_after">`Y.Do.after`</a> for that signature. @see on @see Do.after @method after @param {String} type The custom event name @param {Function} fn The callback to execute in response to the event @param {Object} [context] Override `this` object in callback @param {Any} [args*] 0..n additional arguments to supply to the subscriber @return {EventHandle} A subscription handle capable of detaching the subscription @for YUI **/ }, '3.4.1' ,{requires:['oop']});
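
/*
 * Illustrative sketch only -- not part of the YUI distribution. A typical
 * use of the custom-event engine defined above from inside a YUI sandbox;
 * the event name 'app:ready' and the payload are hypothetical.
 *
 *   YUI().use('event-custom-base', function (Y) {
 *       Y.publish('app:ready', { fireOnce: true });
 *       Y.on('app:ready', function (user) {
 *           Y.log('ready for ' + user);
 *       });
 *       Y.fire('app:ready', 'alice');
 *       // fireOnce: subscribers attached after fire() are notified
 *       // immediately with the original arguments ('alice').
 *   });
 */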
PypiClean
/uberspace_takeout-0.3.0.tar.gz/uberspace_takeout-0.3.0/uberspace_takeout/storage.py
import datetime import errno import os import tarfile try: from BytesIO import BytesIO except ImportError: from io import BytesIO class Storage: def __init__(self, destination, mode): if mode not in ("takein", "takeout"): raise Exception( 'Invalid mode {}, expected "takein" or "takeout".'.format(mode) ) self.destination = str(destination) self.mode = mode def __enter__(self): raise NotImplementedError() def __exit__(self, exception_type, exception_value, traceback): raise NotImplementedError() def list_files(self, storage_path): raise NotImplementedError() def store_text(self, content, storage_path): raise NotImplementedError() def unstore_text(self, storage_path): raise NotImplementedError() def store_file(self, system_path, storage_path): raise NotImplementedError() def unstore_file(self, storage_path, system_path): raise NotImplementedError() def store_directory(self, system_path, storage_path): return self.store_file(system_path, storage_path) def unstore_directory(self, storage_path, system_path): return self.unstore_file(storage_path, system_path) class TarStorage(Storage): def __enter__(self): mode = "w:bz2" if self.mode == "takeout" else "r:bz2" self.tar = tarfile.open(self.destination, mode) return self def __exit__(self, exception_type, exception_value, traceback): self.tar.close() def _check_member_type(self, member): if member.type not in (tarfile.REGTYPE, tarfile.SYMTYPE, tarfile.DIRTYPE): raise Exception( "tar member has illegal type: {}. " "Must be tarfile.REGTYPE/file, SYMTYPE/symlink or DIRTYPE/directory, " "but is {}".format(member.name, member.type) ) def clone_tarinfo(self, tarinfo): # "clone" the object so we don't modify names inside the tar tarinfo2 = tarfile.TarInfo() for attr in (*tarinfo.get_info().keys(), "offset", "offset_data"): setattr(tarinfo2, attr, getattr(tarinfo, attr)) return tarinfo2 def get_members_in(self, directory): directory = directory.rstrip("/") + "/" for m in self.tar.getmembers(): if ".." in m.name: raise Exception( 'tar member has illegal name (contains ".."): ' + m.name ) if m.name.startswith("/"): raise Exception( 'tar member has illegal name (starts with "/"): ' + m.name ) if m.name.startswith("./"): raise Exception( 'tar member has illegal name (starts with "./"): ' + m.name ) self._check_member_type(m) if m.name.startswith(directory): m = self.clone_tarinfo(m) # files might be stored as /www/domain.com/something.html, but need to be extracted # as domain.com/something.html. m.name = m.name[len(directory) :] yield m def has_member(self, path): for m in self.tar.getmembers(): self._check_member_type(m) if m.name == path: return True return False def get_member(self, path): matching = [] for m in self.tar.getmembers(): self._check_member_type(m) if m.name == path: matching.append(m) if len(matching) == 0: raise FileNotFoundError() if len(matching) > 1: raise Exception( "There are {} files matching the path {}. 
Expected only one.".format( len(matching), path ) ) return self.clone_tarinfo(matching[0]) @classmethod def _len(cls, f): old_position = f.tell() f.seek(0, os.SEEK_END) length = f.tell() f.seek(old_position) return length def list_files(self, storage_path): members = list(self.get_members_in(storage_path)) if not members: raise FileNotFoundError() return [m.name for m in members if "/" not in m.name] def store_text(self, content, storage_path): storage_path = str(storage_path).lstrip("/") content = BytesIO(content.encode("utf-8")) info = tarfile.TarInfo(storage_path) info.size = self._len(content) info.mtime = int(datetime.datetime.now().strftime("%s")) self.tar.addfile(info, content) def unstore_text(self, storage_path): storage_path = str(storage_path).lstrip("/") if not self.has_member(storage_path): raise FileNotFoundError() return self.tar.extractfile(storage_path).read().decode("utf-8") def store_file(self, system_path, storage_path): storage_path = str(storage_path).lstrip("/") if self.has_member(storage_path): raise FileExistsError() self.tar.add(str(system_path), storage_path) def unstore_directory(self, storage_path, system_path): storage_path = str(storage_path).lstrip("/") members = list(self.get_members_in(storage_path)) if not members: raise FileNotFoundError() self.tar.extractall(system_path, members) def unstore_file(self, storage_path, system_path): storage_path = str(storage_path).lstrip("/") member = self.get_member(storage_path) member.name = os.path.basename(system_path) self.tar.extractall(os.path.dirname(system_path), [member]) class LocalMoveStorage(Storage): def __enter__(self): return self def __exit__(self, exception_type, exception_value, traceback): pass def _storage_path(self, storage_path): storage_path = str(storage_path).lstrip("/") return self.destination + "/" + storage_path def _mkdir_p(self, path): if not path: return try: os.makedirs(path) except OSError as exc: if exc.errno == errno.EEXIST and os.path.isdir(path): pass else: raise def list_files(self, storage_path): storage_path = self._storage_path(storage_path) if not os.path.exists(storage_path): raise FileNotFoundError() return os.listdir(storage_path) def store_text(self, content, storage_path): storage_path = self._storage_path(storage_path) if os.path.exists(storage_path): raise FileExistsError() self._mkdir_p(os.path.dirname(storage_path)) with open(storage_path, "w") as f: f.write(content) def unstore_text(self, storage_path): with open(self._storage_path(storage_path)) as f: return f.read() def store_file(self, system_path, storage_path): storage_path = self._storage_path(storage_path) if os.path.exists(storage_path): raise FileExistsError() self._mkdir_p(os.path.dirname(storage_path)) os.rename(system_path, storage_path) def unstore_file(self, storage_path, system_path): storage_path = self._storage_path(storage_path) if not os.path.exists(storage_path): raise FileNotFoundError() self._mkdir_p(os.path.dirname(system_path)) os.rename(storage_path, system_path)
PypiClean
/collective.js.angular-1.5.5.0.tar.gz/collective.js.angular-1.5.5.0/src/collective/js/angular/resources/i18n/angular-locale_fr-ga.js
'use strict'; angular.module("ngLocale", [], ["$provide", function($provide) { var PLURAL_CATEGORY = {ZERO: "zero", ONE: "one", TWO: "two", FEW: "few", MANY: "many", OTHER: "other"}; $provide.value("$locale", { "DATETIME_FORMATS": { "AMPMS": [ "AM", "PM" ], "DAY": [ "dimanche", "lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi" ], "ERANAMES": [ "avant J\u00e9sus-Christ", "apr\u00e8s J\u00e9sus-Christ" ], "ERAS": [ "av. J.-C.", "ap. J.-C." ], "FIRSTDAYOFWEEK": 0, "MONTH": [ "janvier", "f\u00e9vrier", "mars", "avril", "mai", "juin", "juillet", "ao\u00fbt", "septembre", "octobre", "novembre", "d\u00e9cembre" ], "SHORTDAY": [ "dim.", "lun.", "mar.", "mer.", "jeu.", "ven.", "sam." ], "SHORTMONTH": [ "janv.", "f\u00e9vr.", "mars", "avr.", "mai", "juin", "juil.", "ao\u00fbt", "sept.", "oct.", "nov.", "d\u00e9c." ], "STANDALONEMONTH": [ "Janvier", "F\u00e9vrier", "Mars", "Avril", "Mai", "Juin", "Juillet", "Ao\u00fbt", "Septembre", "Octobre", "Novembre", "D\u00e9cembre" ], "WEEKENDRANGE": [ 5, 6 ], "fullDate": "EEEE d MMMM y", "longDate": "d MMMM y", "medium": "d MMM y HH:mm:ss", "mediumDate": "d MMM y", "mediumTime": "HH:mm:ss", "short": "dd/MM/y HH:mm", "shortDate": "dd/MM/y", "shortTime": "HH:mm" }, "NUMBER_FORMATS": { "CURRENCY_SYM": "FCFA", "DECIMAL_SEP": ",", "GROUP_SEP": "\u00a0", "PATTERNS": [ { "gSize": 3, "lgSize": 3, "maxFrac": 3, "minFrac": 0, "minInt": 1, "negPre": "-", "negSuf": "", "posPre": "", "posSuf": "" }, { "gSize": 3, "lgSize": 3, "maxFrac": 2, "minFrac": 2, "minInt": 1, "negPre": "-", "negSuf": "\u00a0\u00a4", "posPre": "", "posSuf": "\u00a0\u00a4" } ] }, "id": "fr-ga", "localeID": "fr_GA", "pluralCat": function(n, opt_precision) { var i = n | 0; if (i == 0 || i == 1) { return PLURAL_CATEGORY.ONE; } return PLURAL_CATEGORY.OTHER;} }); }]);
PypiClean
/isc-py-common-0.1.27.tar.gz/isc-py-common-0.1.27/react/views/fragment_param_types.py
from isc_common.http.DSResponse import DSResponseUpdate, DSResponseAdd, DSResponse, JsonResponseWithException from isc_common.http.RPCResponse import RPCResponseConstant from isc_common.http.response import JsonResponse from react.models.fragment_param_types import Fragment_param_types, Fragment_param_typesManager @JsonResponseWithException() def Fragment_param_types_Fetch(request): return JsonResponse( DSResponse( request=request, data=Fragment_param_types.objects. select_related(). get_range_rows1( request=request, function=Fragment_param_typesManager.getRecord ), status=RPCResponseConstant.statusSuccess).response) @JsonResponseWithException() def Fragment_param_types_Add(request): return JsonResponse(DSResponseAdd(data=Fragment_param_types.objects.createFromRequest(request=request), status=RPCResponseConstant.statusSuccess).response) @JsonResponseWithException() def Fragment_param_types_Update(request): return JsonResponse(DSResponseUpdate(data=Fragment_param_types.objects.updateFromRequest(request), status=RPCResponseConstant.statusSuccess).response) @JsonResponseWithException() def Fragment_param_types_Remove(request): return JsonResponse(DSResponse(request=request, data=Fragment_param_types.objects.deleteFromRequest(request=request), status=RPCResponseConstant.statusSuccess).response) @JsonResponseWithException() def Fragment_param_types_Lookup(request): return JsonResponse(DSResponse(request=request, data=Fragment_param_types.objects.lookupFromRequest(request=request), status=RPCResponseConstant.statusSuccess).response) @JsonResponseWithException() def Fragment_param_types_Info(request): return JsonResponse(DSResponse(request=request, data=Fragment_param_types.objects.get_queryset().get_info(request=request), status=RPCResponseConstant.statusSuccess).response) @JsonResponseWithException() def Fragment_param_types_Copy(request): return JsonResponse(DSResponse(request=request, data=Fragment_param_types.objects.copyFromRequest(request=request), status=RPCResponseConstant.statusSuccess).response)
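These views are plain Django function views, so they only become reachable once mapped to URLs. A hypothetical `urls.py` wiring is sketched below; the route strings are assumptions and not part of the original module.

```python
# Hypothetical URL configuration for the views above; paths are placeholders.
from django.urls import path

from react.views import fragment_param_types as views

urlpatterns = [
    path('fragment_param_types/fetch/', views.Fragment_param_types_Fetch),
    path('fragment_param_types/add/', views.Fragment_param_types_Add),
    path('fragment_param_types/update/', views.Fragment_param_types_Update),
    path('fragment_param_types/remove/', views.Fragment_param_types_Remove),
    path('fragment_param_types/lookup/', views.Fragment_param_types_Lookup),
    path('fragment_param_types/info/', views.Fragment_param_types_Info),
    path('fragment_param_types/copy/', views.Fragment_param_types_Copy),
]
```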
PypiClean
/dtscalibration-2.0.0.tar.gz/dtscalibration-2.0.0/docs/notebooks/15Matching_sections.ipynb
# 15. Calibration using matching sections

In notebook 14 we showed how you can take splices or connectors within your calibration into account. To then calibrate the cable we used reference sections on both sides of the splice. If these are not available, or in other cases where you have a lack of reference sections, matching sections can be used to improve the calibration.

For matching sections you need two sections of fiber that you know will be at the exact same temperature. This can be, for example, in duplex cables or twisted pairs of cable.

### Demonstration

To demonstrate matching sections, we'll load the same dataset that was used in previous notebooks, and modify the data to simulate a lossy splice, just as in notebook 14.

```
import os

from dtscalibration import read_silixa_files
import matplotlib.pyplot as plt

%matplotlib inline

filepath = os.path.join("..", "..", "tests", "data", "double_ended2")
ds_ = read_silixa_files(directory=filepath, timezone_netcdf="UTC", file_ext="*.xml")

ds = ds_.sel(x=slice(0, 110))  # only calibrate parts of the fiber

sections = {
    "probe1Temperature": [slice(7.5, 17.0)],  # cold bath
    "probe2Temperature": [slice(24.0, 34.0)],  # warm bath
}
ds.sections = sections
```

Again, we introduce a step loss in the signal strength at x = 50 m. For the forward channel, this means all data beyond 50 meters is reduced with a 'random' factor. For the backward channel, this means all data up to 50 meters is reduced with a 'random' factor.

```
ds["st"] = ds.st.where(ds.x < 50, ds.st * 0.8)
ds["ast"] = ds.ast.where(ds.x < 50, ds.ast * 0.82)

ds["rst"] = ds.rst.where(ds.x > 50, ds.rst * 0.85)
ds["rast"] = ds.rast.where(ds.x > 50, ds.rast * 0.81)
```

We will first run a calibration without adding the transient attenuation location or matching sections. A big jump in the calibrated temperature is visible at x = 50. As all calibration sections are before 50 meters, the first 50 m will be calibrated correctly.

```
ds_a = ds.copy(deep=True)

st_var, resid = ds_a.variance_stokes(st_label="st")
ast_var, _ = ds_a.variance_stokes(st_label="ast")
rst_var, _ = ds_a.variance_stokes(st_label="rst")
rast_var, _ = ds_a.variance_stokes(st_label="rast")

ds_a.calibration_double_ended(
    st_var=st_var,
    ast_var=ast_var,
    rst_var=rst_var,
    rast_var=rast_var,
    store_tmpw="tmpw",
    method="wls",
    solver="sparse",
)

ds_a.isel(time=0).tmpw.plot(label="calibrated")
```

Now we run a calibration, adding the keyword argument '**trans_att**' and providing a list of floats containing the locations of the splices. In this case we only add a single one at x = 50 m.

We will also define the matching sections of cable. The matching sections have to be provided as a list of tuples, one tuple per matching section. Each tuple has three items: the first two items are the slices of the sections that are matching. The third item is a bool and is True if the two sections have a reverse direction (as in the "J-configuration"). In this example we match the two cold baths to each other.

After running the calibration you will see that by adding the transient attenuation and matching sections, the calibration returns the correct temperature, without the big jump.
*In single-ended calibration the keyword is called '**trans_att**'.* ``` matching_sections = [(slice(7.5, 17.6), slice(69, 79.1), False)] st_var, resid = ds.variance_stokes(st_label="st") ast_var, _ = ds.variance_stokes(st_label="ast") rst_var, _ = ds.variance_stokes(st_label="rst") rast_var, _ = ds.variance_stokes(st_label="rast") ds.calibration_double_ended( st_var=st_var, ast_var=ast_var, rst_var=rst_var, rast_var=rast_var, trans_att=[50.0], matching_sections=matching_sections, store_tmpw="tmpw", method="wls", solver="sparse", ) ds_a.isel(time=0).tmpw.plot(label="normal calibration") ds.isel(time=0).tmpw.plot(label="matching sections") plt.legend() ```
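For completeness, here is a hedged sketch of the equivalent single-ended call. It assumes a single-ended dataset `ds_se` with its `sections` already set, and a dtscalibration version whose `calibration_single_ended` accepts the same `trans_att` and `matching_sections` keywords as the double-ended call above.

```
# Hedged sketch: `ds_se` is an assumed single-ended dataset with sections set.
st_var, resid = ds_se.variance_stokes(st_label="st")
ast_var, _ = ds_se.variance_stokes(st_label="ast")

ds_se.calibration_single_ended(
    st_var=st_var,
    ast_var=ast_var,
    trans_att=[50.0],
    matching_sections=matching_sections,
    method="wls",
    solver="sparse",
)
```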
PypiClean
/azure-mgmt-appcontainers-3.0.0b1.zip/azure-mgmt-appcontainers-3.0.0b1/azure/mgmt/appcontainers/operations/_container_apps_revisions_operations.py
from typing import Any, Callable, Dict, Iterable, Optional, TypeVar import urllib.parse from azure.core.exceptions import ( ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, ResourceNotModifiedError, map_error, ) from azure.core.paging import ItemPaged from azure.core.pipeline import PipelineResponse from azure.core.pipeline.transport import HttpResponse from azure.core.rest import HttpRequest from azure.core.tracing.decorator import distributed_trace from azure.core.utils import case_insensitive_dict from azure.mgmt.core.exceptions import ARMErrorFormat from .. import models as _models from .._serialization import Serializer from .._vendor import _convert_request, _format_url_section T = TypeVar("T") ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]] _SERIALIZER = Serializer() _SERIALIZER.client_side_validation = False def build_list_revisions_request( resource_group_name: str, container_app_name: str, subscription_id: str, *, filter: Optional[str] = None, **kwargs: Any ) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", "2022-11-01-preview")) accept = _headers.pop("Accept", "application/json") # Construct URL _url = kwargs.pop( "template_url", "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions", ) # pylint: disable=line-too-long path_format_arguments = { "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str", min_length=1), "resourceGroupName": _SERIALIZER.url( "resource_group_name", resource_group_name, "str", max_length=90, min_length=1 ), "containerAppName": _SERIALIZER.url("container_app_name", container_app_name, "str"), } _url: str = _format_url_section(_url, **path_format_arguments) # type: ignore # Construct parameters _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str") if filter is not None: _params["$filter"] = _SERIALIZER.query("filter", filter, "str") # Construct headers _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) def build_get_revision_request( resource_group_name: str, container_app_name: str, revision_name: str, subscription_id: str, **kwargs: Any ) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", "2022-11-01-preview")) accept = _headers.pop("Accept", "application/json") # Construct URL _url = kwargs.pop( "template_url", "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions/{revisionName}", ) # pylint: disable=line-too-long path_format_arguments = { "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str", min_length=1), "resourceGroupName": _SERIALIZER.url( "resource_group_name", resource_group_name, "str", max_length=90, min_length=1 ), "containerAppName": _SERIALIZER.url("container_app_name", container_app_name, "str"), "revisionName": _SERIALIZER.url("revision_name", revision_name, "str"), } _url: str = _format_url_section(_url, **path_format_arguments) # type: ignore # Construct parameters _params["api-version"] = 
_SERIALIZER.query("api_version", api_version, "str") # Construct headers _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) def build_activate_revision_request( resource_group_name: str, container_app_name: str, revision_name: str, subscription_id: str, **kwargs: Any ) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", "2022-11-01-preview")) accept = _headers.pop("Accept", "application/json") # Construct URL _url = kwargs.pop( "template_url", "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions/{revisionName}/activate", ) # pylint: disable=line-too-long path_format_arguments = { "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str", min_length=1), "resourceGroupName": _SERIALIZER.url( "resource_group_name", resource_group_name, "str", max_length=90, min_length=1 ), "containerAppName": _SERIALIZER.url("container_app_name", container_app_name, "str"), "revisionName": _SERIALIZER.url("revision_name", revision_name, "str"), } _url: str = _format_url_section(_url, **path_format_arguments) # type: ignore # Construct parameters _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str") # Construct headers _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) def build_deactivate_revision_request( resource_group_name: str, container_app_name: str, revision_name: str, subscription_id: str, **kwargs: Any ) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", "2022-11-01-preview")) accept = _headers.pop("Accept", "application/json") # Construct URL _url = kwargs.pop( "template_url", "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions/{revisionName}/deactivate", ) # pylint: disable=line-too-long path_format_arguments = { "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str", min_length=1), "resourceGroupName": _SERIALIZER.url( "resource_group_name", resource_group_name, "str", max_length=90, min_length=1 ), "containerAppName": _SERIALIZER.url("container_app_name", container_app_name, "str"), "revisionName": _SERIALIZER.url("revision_name", revision_name, "str"), } _url: str = _format_url_section(_url, **path_format_arguments) # type: ignore # Construct parameters _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str") # Construct headers _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) def build_restart_revision_request( resource_group_name: str, container_app_name: str, revision_name: str, subscription_id: str, **kwargs: Any ) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", "2022-11-01-preview")) accept = _headers.pop("Accept", "application/json") # Construct URL 
_url = kwargs.pop( "template_url", "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions/{revisionName}/restart", ) # pylint: disable=line-too-long path_format_arguments = { "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str", min_length=1), "resourceGroupName": _SERIALIZER.url( "resource_group_name", resource_group_name, "str", max_length=90, min_length=1 ), "containerAppName": _SERIALIZER.url("container_app_name", container_app_name, "str"), "revisionName": _SERIALIZER.url("revision_name", revision_name, "str"), } _url: str = _format_url_section(_url, **path_format_arguments) # type: ignore # Construct parameters _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str") # Construct headers _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs) class ContainerAppsRevisionsOperations: """ .. warning:: **DO NOT** instantiate this class directly. Instead, you should access the following operations through :class:`~azure.mgmt.appcontainers.ContainerAppsAPIClient`'s :attr:`container_apps_revisions` attribute. """ models = _models def __init__(self, *args, **kwargs): input_args = list(args) self._client = input_args.pop(0) if input_args else kwargs.pop("client") self._config = input_args.pop(0) if input_args else kwargs.pop("config") self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer") self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer") @distributed_trace def list_revisions( self, resource_group_name: str, container_app_name: str, filter: Optional[str] = None, **kwargs: Any ) -> Iterable["_models.Revision"]: """Get the Revisions for a given Container App. Get the Revisions for a given Container App. :param resource_group_name: The name of the resource group. The name is case insensitive. Required. :type resource_group_name: str :param container_app_name: Name of the Container App for which Revisions are needed. Required. :type container_app_name: str :param filter: The filter to apply on the operation. Default value is None. 
:type filter: str :keyword callable cls: A custom type or function that will be passed the direct response :return: An iterator like instance of either Revision or the result of cls(response) :rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.appcontainers.models.Revision] :raises ~azure.core.exceptions.HttpResponseError: """ _headers = kwargs.pop("headers", {}) or {} _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", self._config.api_version)) cls: ClsType[_models.RevisionCollection] = kwargs.pop("cls", None) error_map = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, 304: ResourceNotModifiedError, } error_map.update(kwargs.pop("error_map", {}) or {}) def prepare_request(next_link=None): if not next_link: request = build_list_revisions_request( resource_group_name=resource_group_name, container_app_name=container_app_name, subscription_id=self._config.subscription_id, filter=filter, api_version=api_version, template_url=self.list_revisions.metadata["url"], headers=_headers, params=_params, ) request = _convert_request(request) request.url = self._client.format_url(request.url) else: # make call to next link with the client's api-version _parsed_next_link = urllib.parse.urlparse(next_link) _next_request_params = case_insensitive_dict( { key: [urllib.parse.quote(v) for v in value] for key, value in urllib.parse.parse_qs(_parsed_next_link.query).items() } ) _next_request_params["api-version"] = self._config.api_version request = HttpRequest( "GET", urllib.parse.urljoin(next_link, _parsed_next_link.path), params=_next_request_params ) request = _convert_request(request) request.url = self._client.format_url(request.url) request.method = "GET" return request def extract_data(pipeline_response): deserialized = self._deserialize("RevisionCollection", pipeline_response) list_of_elem = deserialized.value if cls: list_of_elem = cls(list_of_elem) # type: ignore return deserialized.next_link or None, iter(list_of_elem) def get_next(next_link=None): request = prepare_request(next_link) _stream = False pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access request, stream=_stream, **kwargs ) response = pipeline_response.http_response if response.status_code not in [200]: map_error(status_code=response.status_code, response=response, error_map=error_map) error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response) raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat) return pipeline_response return ItemPaged(get_next, extract_data) list_revisions.metadata = { "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions" } @distributed_trace def get_revision( self, resource_group_name: str, container_app_name: str, revision_name: str, **kwargs: Any ) -> _models.Revision: """Get a revision of a Container App. Get a revision of a Container App. :param resource_group_name: The name of the resource group. The name is case insensitive. Required. :type resource_group_name: str :param container_app_name: Name of the Container App. Required. :type container_app_name: str :param revision_name: Name of the Container App Revision. Required. 
:type revision_name: str :keyword callable cls: A custom type or function that will be passed the direct response :return: Revision or the result of cls(response) :rtype: ~azure.mgmt.appcontainers.models.Revision :raises ~azure.core.exceptions.HttpResponseError: """ error_map = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, 304: ResourceNotModifiedError, } error_map.update(kwargs.pop("error_map", {}) or {}) _headers = kwargs.pop("headers", {}) or {} _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", self._config.api_version)) cls: ClsType[_models.Revision] = kwargs.pop("cls", None) request = build_get_revision_request( resource_group_name=resource_group_name, container_app_name=container_app_name, revision_name=revision_name, subscription_id=self._config.subscription_id, api_version=api_version, template_url=self.get_revision.metadata["url"], headers=_headers, params=_params, ) request = _convert_request(request) request.url = self._client.format_url(request.url) _stream = False pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access request, stream=_stream, **kwargs ) response = pipeline_response.http_response if response.status_code not in [200]: map_error(status_code=response.status_code, response=response, error_map=error_map) error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response) raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat) deserialized = self._deserialize("Revision", pipeline_response) if cls: return cls(pipeline_response, deserialized, {}) return deserialized get_revision.metadata = { "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions/{revisionName}" } @distributed_trace def activate_revision( # pylint: disable=inconsistent-return-statements self, resource_group_name: str, container_app_name: str, revision_name: str, **kwargs: Any ) -> None: """Activates a revision for a Container App. Activates a revision for a Container App. :param resource_group_name: The name of the resource group. The name is case insensitive. Required. :type resource_group_name: str :param container_app_name: Name of the Container App. Required. :type container_app_name: str :param revision_name: Name of the Container App Revision. Required. 
:type revision_name: str :keyword callable cls: A custom type or function that will be passed the direct response :return: None or the result of cls(response) :rtype: None :raises ~azure.core.exceptions.HttpResponseError: """ error_map = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, 304: ResourceNotModifiedError, } error_map.update(kwargs.pop("error_map", {}) or {}) _headers = kwargs.pop("headers", {}) or {} _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", self._config.api_version)) cls: ClsType[None] = kwargs.pop("cls", None) request = build_activate_revision_request( resource_group_name=resource_group_name, container_app_name=container_app_name, revision_name=revision_name, subscription_id=self._config.subscription_id, api_version=api_version, template_url=self.activate_revision.metadata["url"], headers=_headers, params=_params, ) request = _convert_request(request) request.url = self._client.format_url(request.url) _stream = False pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access request, stream=_stream, **kwargs ) response = pipeline_response.http_response if response.status_code not in [200]: map_error(status_code=response.status_code, response=response, error_map=error_map) error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response) raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat) if cls: return cls(pipeline_response, None, {}) activate_revision.metadata = { "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions/{revisionName}/activate" } @distributed_trace def deactivate_revision( # pylint: disable=inconsistent-return-statements self, resource_group_name: str, container_app_name: str, revision_name: str, **kwargs: Any ) -> None: """Deactivates a revision for a Container App. Deactivates a revision for a Container App. :param resource_group_name: The name of the resource group. The name is case insensitive. Required. :type resource_group_name: str :param container_app_name: Name of the Container App. Required. :type container_app_name: str :param revision_name: Name of the Container App Revision. Required. 
:type revision_name: str :keyword callable cls: A custom type or function that will be passed the direct response :return: None or the result of cls(response) :rtype: None :raises ~azure.core.exceptions.HttpResponseError: """ error_map = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, 304: ResourceNotModifiedError, } error_map.update(kwargs.pop("error_map", {}) or {}) _headers = kwargs.pop("headers", {}) or {} _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", self._config.api_version)) cls: ClsType[None] = kwargs.pop("cls", None) request = build_deactivate_revision_request( resource_group_name=resource_group_name, container_app_name=container_app_name, revision_name=revision_name, subscription_id=self._config.subscription_id, api_version=api_version, template_url=self.deactivate_revision.metadata["url"], headers=_headers, params=_params, ) request = _convert_request(request) request.url = self._client.format_url(request.url) _stream = False pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access request, stream=_stream, **kwargs ) response = pipeline_response.http_response if response.status_code not in [200]: map_error(status_code=response.status_code, response=response, error_map=error_map) error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response) raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat) if cls: return cls(pipeline_response, None, {}) deactivate_revision.metadata = { "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions/{revisionName}/deactivate" } @distributed_trace def restart_revision( # pylint: disable=inconsistent-return-statements self, resource_group_name: str, container_app_name: str, revision_name: str, **kwargs: Any ) -> None: """Restarts a revision for a Container App. Restarts a revision for a Container App. :param resource_group_name: The name of the resource group. The name is case insensitive. Required. :type resource_group_name: str :param container_app_name: Name of the Container App. Required. :type container_app_name: str :param revision_name: Name of the Container App Revision. Required. 
:type revision_name: str :keyword callable cls: A custom type or function that will be passed the direct response :return: None or the result of cls(response) :rtype: None :raises ~azure.core.exceptions.HttpResponseError: """ error_map = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError, 304: ResourceNotModifiedError, } error_map.update(kwargs.pop("error_map", {}) or {}) _headers = kwargs.pop("headers", {}) or {} _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", self._config.api_version)) cls: ClsType[None] = kwargs.pop("cls", None) request = build_restart_revision_request( resource_group_name=resource_group_name, container_app_name=container_app_name, revision_name=revision_name, subscription_id=self._config.subscription_id, api_version=api_version, template_url=self.restart_revision.metadata["url"], headers=_headers, params=_params, ) request = _convert_request(request) request.url = self._client.format_url(request.url) _stream = False pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access request, stream=_stream, **kwargs ) response = pipeline_response.http_response if response.status_code not in [200]: map_error(status_code=response.status_code, response=response, error_map=error_map) error = self._deserialize.failsafe_deserialize(_models.DefaultErrorResponse, pipeline_response) raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat) if cls: return cls(pipeline_response, None, {}) restart_revision.metadata = { "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.App/containerApps/{containerAppName}/revisions/{revisionName}/restart" }
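As the class docstring notes, these operations are reached through the client's `container_apps_revisions` attribute rather than by instantiating the class directly. A minimal usage sketch follows; the resource names are placeholders and the credential comes from the separate `azure-identity` package, which is one of several possible credential sources.

```python
# Hypothetical usage of the operations class above; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient

client = ContainerAppsAPIClient(
    credential=DefaultAzureCredential(),
    subscription_id="00000000-0000-0000-0000-000000000000",
)

# Page through every revision of a container app.
for revision in client.container_apps_revisions.list_revisions(
    resource_group_name="my-rg", container_app_name="my-app"
):
    print(revision.name)

# Restart a specific revision.
client.container_apps_revisions.restart_revision(
    resource_group_name="my-rg",
    container_app_name="my-app",
    revision_name="my-app--rev1",
)
```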
PypiClean
/django_parcel_ssr-0.7.0-py3-none-any.whl/ssr/template.py
from os.path import relpath, splitext, basename, join, dirname, exists from urllib.parse import urljoin from collections import namedtuple from django.template import TemplateSyntaxError from django.http import HttpRequest from ssr.settings import ENTRIES_DIR, BUNDLES_DIR, CACHE_DIR, SOCKETS_DIR from ssr.server import Server class Template: Env = namedtuple('Env', [ 'entry', 'out_file', 'out_dir', 'script_relpath', 'socket', 'cache_dir' ]) def __init__(self, path: str, root: str, production_mode: bool, server: Server, server_script: str, client_script: str, build_hash: str, static_url: str, dist_dir: str) -> None: self.render_server = server self.relpath = relpath(path, root) root_relpath = splitext(self.relpath)[0] socket_relpath = root_relpath + '-bundler.sock' out_relpath = root_relpath + '.js' out_file = basename(out_relpath) if production_mode: hashed_out_relpath = root_relpath hashed_out_relpath += '-' + build_hash + '.js' else: hashed_out_relpath = out_relpath hashed_out_file = basename(hashed_out_relpath) out_dir = dirname(self.relpath) self.url = urljoin(static_url, out_dir) self.component_relpath = relpath(path, ENTRIES_DIR) self.server = self.Env( entry=join(ENTRIES_DIR, 'server.js'), out_file=out_file, out_dir=join(BUNDLES_DIR, out_dir), script_relpath=relpath(server_script, ENTRIES_DIR), socket=join(SOCKETS_DIR, 'client', socket_relpath), cache_dir=join(CACHE_DIR, 'server') ) self.client = self.Env( entry=join(ENTRIES_DIR, 'client.js'), out_file=hashed_out_file, out_dir=join(dist_dir, out_dir), script_relpath=relpath(client_script, ENTRIES_DIR), socket=join(SOCKETS_DIR, 'server', socket_relpath), cache_dir=join(CACHE_DIR, 'client') ) self.script = urljoin(static_url, hashed_out_relpath) stylesheet_relpath = splitext(out_relpath)[0] + '.css' hashed_stylesheet_relpath = splitext(hashed_out_relpath)[0] + '.css' if exists(join(BUNDLES_DIR, stylesheet_relpath)): self.stylesheet = urljoin(static_url, hashed_stylesheet_relpath) else: self.stylesheet = '' def render(self, context: dict = None, request: HttpRequest = None) -> str: template_path = join(self.server.out_dir, self.server.out_file) try: return self.render_server.render( template_path, self.script, self.stylesheet, context) except Exception as exception: raise TemplateSyntaxError(exception)
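A hypothetical construction of this `Template` follows; every path, the build hash, and the `Server` setup are placeholder assumptions, since in practice the surrounding SSR framework supplies them (including starting the render server).

```python
# Sketch only: the Server constructor arguments and all paths are assumptions.
server = Server()  # ssr.server.Server instance, assumed to be started elsewhere

template = Template(
    path="/project/components/index.js",
    root="/project/components",
    production_mode=True,
    server=server,
    server_script="/project/scripts/server.js",
    client_script="/project/scripts/client.js",
    build_hash="abc123",
    static_url="/static/",
    dist_dir="/project/dist",
)

html = template.render(context={"title": "Home"})
```

The design point worth noting: in production mode the client bundle name gets the build hash appended, so `template.script` points at a cache-busted URL while the server bundle keeps its plain name.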
PypiClean
/onshape_client-1.6.3-py3-none-any.whl/onshape_client/oas/models/btm_parameter_configured2222_all_of.py
from __future__ import absolute_import import re # noqa: F401 import sys # noqa: F401 import six # noqa: F401 import nulltype # noqa: F401 from onshape_client.oas.model_utils import ( # noqa: F401 ModelComposed, ModelNormal, ModelSimple, date, datetime, file_type, int, none_type, str, validate_get_composed_info, ) try: from onshape_client.oas.models import btm_configured_value1341 except ImportError: btm_configured_value1341 = sys.modules[ "onshape_client.oas.models.btm_configured_value1341" ] class BTMParameterConfigured2222AllOf(ModelNormal): """NOTE: This class is auto generated by OpenAPI Generator. Ref: https://openapi-generator.tech Do not edit the class manually. Attributes: allowed_values (dict): The key is the tuple path to the attribute and the for var_name this is (var_name,). The value is a dict with a capitalized key describing the allowed value and an allowed value. These dicts store the allowed enum values. attribute_map (dict): The key is attribute name and the value is json key in definition. discriminator_value_class_map (dict): A dict to go from the discriminator variable value to the discriminator class name. validations (dict): The key is the tuple path to the attribute and the for var_name this is (var_name,). The value is a dict that stores validations for max_length, min_length, max_items, min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum, inclusive_minimum, and regex. additional_properties_type (tuple): A tuple of classes accepted as additional properties values. """ allowed_values = {} validations = {} additional_properties_type = None @staticmethod def openapi_types(): """ This must be a class method so a model may have properties that are of type self, this ensures that we don't create a cyclic import Returns openapi_types (dict): The key is attribute name and the value is attribute type. """ return { "bt_type": (str,), # noqa: E501 "configuration_parameter_id": (str,), # noqa: E501 "configuration_parameter_id_field_index": (int,), # noqa: E501 "values": ( [btm_configured_value1341.BTMConfiguredValue1341], ), # noqa: E501 "values_field_index": (int,), # noqa: E501 } @staticmethod def discriminator(): return None attribute_map = { "bt_type": "btType", # noqa: E501 "configuration_parameter_id": "configurationParameterId", # noqa: E501 "configuration_parameter_id_field_index": "configurationParameterIdFieldIndex", # noqa: E501 "values": "values", # noqa: E501 "values_field_index": "valuesFieldIndex", # noqa: E501 } @staticmethod def _composed_schemas(): return None required_properties = set( [ "_data_store", "_check_type", "_from_server", "_path_to_item", "_configuration", ] ) def __init__( self, _check_type=True, _from_server=False, _path_to_item=(), _configuration=None, **kwargs ): # noqa: E501 """btm_parameter_configured2222_all_of.BTMParameterConfigured2222AllOf - a model defined in OpenAPI Keyword Args: _check_type (bool): if True, values for parameters in openapi_types will be type checked and a TypeError will be raised if the wrong type is input. Defaults to True _path_to_item (tuple/list): This is a list of keys or values to drill down to the model in received_data when deserializing a response _from_server (bool): True if the data is from the server False if the data is from the client (default) _configuration (Configuration): the instance to use when deserializing a file_type parameter. If passed, type conversion is attempted If omitted no type conversion is done. 
bt_type (str): [optional] # noqa: E501 configuration_parameter_id (str): [optional] # noqa: E501 configuration_parameter_id_field_index (int): [optional] # noqa: E501 values ([btm_configured_value1341.BTMConfiguredValue1341]): [optional] # noqa: E501 values_field_index (int): [optional] # noqa: E501 """ self._data_store = {} self._check_type = _check_type self._from_server = _from_server self._path_to_item = _path_to_item self._configuration = _configuration for var_name, var_value in six.iteritems(kwargs): if ( var_name not in self.attribute_map and self._configuration is not None and self._configuration.discard_unknown_keys and self.additional_properties_type is None ): # discard variable. continue setattr(self, var_name, var_value)
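Since this is a generated OpenAPI model, instances are built via keyword arguments matching `openapi_types`, and `__setattr__` rejects any key missing from `attribute_map`. A small illustrative sketch with assumed values:

```python
# Illustrative only: the bt_type string and parameter id are assumed values.
param = BTMParameterConfigured2222AllOf(
    bt_type="BTMParameterConfigured-2222",
    configuration_parameter_id="myConfigParam",
    values=[],
)
print(param.configuration_parameter_id)  # 'myConfigParam'
```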
PypiClean
/micral_name_core-2.0.tar.gz/micral_name_core-2.0/README.rst
Micral (MICROstructure anALysis) Name module - core module

-----

Description :

The aim is to assign one of 7 available names.
This tool outputs the name of the sample, plus some other details such as the accuracy.

-----

Installation :

pip install micral_name

-----

Usage :

In a python file or directly in the interpreter, type :

import micral_name
print(micral_name.analyse(<your image>))

Where <your image> is the name of the image to analyse (with extension, e.g. jpg, png, bmp, etc.)

The output is a unique value.

It's also possible to send multiple images at once : in that case, provide a list of names :

<your image> = [<image1>, <image2>, ...]

And the return value will also be a list of all the results.

Note that the image must be in the same directory as the python file (or in the current directory of the command)
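For example, to analyse two images at once (the file names below are placeholders) :

import micral_name
results = micral_name.analyse(["sample1.jpg", "sample2.png"])
print(results)

The result is a list with one entry per image.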
PypiClean
/py-pure-client-1.38.0.tar.gz/py-pure-client-1.38.0/pypureclient/flashblade/FB_2_8/models/relationship_performance_replication_get_resp.py
import pprint import re import six import typing from ....properties import Property if typing.TYPE_CHECKING: from pypureclient.flashblade.FB_2_8 import models class RelationshipPerformanceReplicationGetResp(object): """ Attributes: swagger_types (dict): The key is attribute name and the value is attribute type. attribute_map (dict): The key is attribute name and the value is json key in definition. """ swagger_types = { 'continuation_token': 'str', 'total_item_count': 'int', 'items': 'list[RelationshipPerformanceReplication]', 'total': 'list[RelationshipPerformanceReplication]' } attribute_map = { 'continuation_token': 'continuation_token', 'total_item_count': 'total_item_count', 'items': 'items', 'total': 'total' } required_args = { } def __init__( self, continuation_token=None, # type: str total_item_count=None, # type: int items=None, # type: List[models.RelationshipPerformanceReplication] total=None, # type: List[models.RelationshipPerformanceReplication] ): """ Keyword args: continuation_token (str): Continuation token that can be provided in the `continuation_token` query param to get the next page of data. If you use the `continuation_token` to page through data you are guaranteed to get all items exactly once regardless of how items are modified. If an item is added or deleted during the pagination then it may or may not be returned. The `continuation_token` is generated if the `limit` is less than the remaining number of items, and the default sort is used (no sort is specified). total_item_count (int): Total number of items after applying `filter` params. items (list[RelationshipPerformanceReplication]) total (list[RelationshipPerformanceReplication]): Total of all records after filtering. If `total_only` query param is `true`, then no items will be returned. """ if continuation_token is not None: self.continuation_token = continuation_token if total_item_count is not None: self.total_item_count = total_item_count if items is not None: self.items = items if total is not None: self.total = total def __setattr__(self, key, value): if key not in self.attribute_map: raise KeyError("Invalid key `{}` for `RelationshipPerformanceReplicationGetResp`".format(key)) self.__dict__[key] = value def __getattribute__(self, item): value = object.__getattribute__(self, item) if isinstance(value, Property): return None else: return value def to_dict(self): """Returns the model properties as a dict""" result = {} for attr, _ in six.iteritems(self.swagger_types): if hasattr(self, attr): value = getattr(self, attr) if isinstance(value, list): result[attr] = list(map( lambda x: x.to_dict() if hasattr(x, "to_dict") else x, value )) elif hasattr(value, "to_dict"): result[attr] = value.to_dict() elif isinstance(value, dict): result[attr] = dict(map( lambda item: (item[0], item[1].to_dict()) if hasattr(item[1], "to_dict") else item, value.items() )) else: result[attr] = value if issubclass(RelationshipPerformanceReplicationGetResp, dict): for key, value in self.items(): result[key] = value return result def to_str(self): """Returns the string representation of the model""" return pprint.pformat(self.to_dict()) def __repr__(self): """For `print` and `pprint`""" return self.to_str() def __eq__(self, other): """Returns true if both objects are equal""" if not isinstance(other, RelationshipPerformanceReplicationGetResp): return False return self.__dict__ == other.__dict__ def __ne__(self, other): """Returns true if both objects are not equal""" return not self == other
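A short sketch of how the serialization helpers above behave; the field values are placeholders. Attributes left unset are omitted from `to_dict()`, because only instance attributes that were actually assigned are serialized.

```python
# Illustrative values only.
resp = RelationshipPerformanceReplicationGetResp(
    total_item_count=0,
    items=[],
)
print(resp.to_dict())  # {'total_item_count': 0, 'items': []}
print(resp.to_str())   # pretty-printed version of the same dict
```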
PypiClean
/rainforest_mch-1.3.3.tar.gz/rainforest_mch-1.3.3/rainforest/database/db_populate.py
# Global imports
import sys
import os
import datetime
import logging
from pathlib import Path
logging.basicConfig(level=logging.INFO)

dir_path = os.path.dirname(os.path.realpath(__file__))
sys.path.append(dir_path)

from optparse import OptionParser

# Local imports
from rainforest.database.database import Database
from rainforest.common import constants


def main():
    parser = OptionParser()

    parser.add_option("-t", "--type", dest = "type", type = str,
                      help="Type of table to populate, either 'gauge', 'reference' or 'radar'",
                      metavar="TYPE")

    parser.add_option("-o", "--outputfolder", dest = "outputfolder", type = str,
                      default = None,
                      help="Path of the output folder, default is /store/msrad/radar/radar_database/<type>",
                      metavar="OUTPUT")

    parser.add_option("-s", "--start", dest = "start", type = str,
                      help="Specify the start time in the format YYYYMMDDHHMM; it is mandatory only if type == 'gauge', otherwise, if not provided, it will be inferred from the gauge data",
                      metavar = "START", default = None)

    parser.add_option("-e", "--end", dest = "end", type = str,
                      help="Specify the end time in the format YYYYMMDDHHMM; it is mandatory only if type == 'gauge', otherwise, if not provided, it will be inferred from the gauge data",
                      metavar = "END", default = None)

    parser.add_option("-c", "--config", dest = "config", type = str,
                      default = None,
                      help="Path of the config file, the default will be default_config.yml in the database module",
                      metavar="CONFIG")

    parser.add_option("-g", "--gauge", dest = "gauge", type = str,
                      default = '/store/msrad/radar/radar_database/gauge/*.csv.gz',
                      help="Needed only if type == 'reference' or 'radar': path pattern (with wildcards) of the gauge data (from the database) to be used, " +
                      "default = '/store/msrad/radar/radar_database/gauge/*.csv.gz'. IMPORTANT: you have to put this argument in quotes (due to the wildcard)!")

    (options, args) = parser.parse_args()

    if options.type not in ['gauge', 'radar', 'reference']:
        raise ValueError("Type (-t) must be either 'radar', 'gauge' or 'reference'")

    if options.type == 'gauge' and (options.end == None or options.start == None):
        raise ValueError("Please enter both start and end times when type == 'gauge'")

    if options.start != None:
        options.start = datetime.datetime.strptime(options.start, '%Y%m%d%H%M')
    if options.end != None:
        options.end = datetime.datetime.strptime(options.end, '%Y%m%d%H%M')

    if options.outputfolder == None:
        options.outputfolder = str(Path(constants.FOLDER_DATABASE, options.type))
    if not os.path.exists(options.outputfolder):
        os.makedirs(options.outputfolder)

    if options.config == None:
        script_path = os.path.dirname(os.path.realpath(__file__))
        default_config_path = str(Path(script_path, 'default_config.yml'))
        options.config = default_config_path

    dbase = Database(config_file = options.config)

    if options.type != 'gauge':
        logging.info('Trying to read gauge table...')
        try:
            dbase.add_tables({'gauge': options.gauge})
        except:
            logging.error('Could not read gauge table, please check -g pattern')
            raise

    logging.info('Starting database update, leave the script running!')
    if options.type == 'gauge':
        dbase.update_station_data(options.start, options.end, options.outputfolder)
    elif options.type == 'reference':
        dbase.update_reference_data('gauge', options.outputfolder,
                                    options.start, options.end)
    elif options.type == 'radar':
        dbase.update_radar_data('gauge', options.outputfolder,
                                options.start, options.end)
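The same update can be driven from Python directly, using the `Database` calls this script wraps; the config path, time range, and output folder below are placeholders.

```python
# Programmatic equivalent of `db_populate -t gauge`; paths/times are placeholders.
import datetime
from rainforest.database.database import Database

dbase = Database(config_file="default_config.yml")
dbase.update_station_data(
    datetime.datetime(2020, 1, 1, 0, 0),
    datetime.datetime(2020, 1, 2, 0, 0),
    "/tmp/gauge_output",
)
```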
PypiClean
/architect-api-0.4.tar.gz/architect-api-0.4/architect/utils.py
import os
import re
import json
import yaml
import importlib
import datetime
import rfc3339
import iso8601
from django.conf import settings
from architect import exceptions

_schema_dir = os.path.join(
    os.path.dirname(os.path.realpath(__file__)), 'schemas')


def load_yaml_json_file(path):
    if os.path.exists(path):
        with open(path, 'r') as f:
            if path.endswith('json'):
                return json.load(f)
            else:
                return yaml.safe_load(f)
    return {}


def get_node_icon(icon):
    family, character = icon.split(":")
    icon_file = os.path.join(_schema_dir, '_icon.yaml')
    icon_mapping = load_yaml_json_file(icon_file)
    output = icon_mapping['character'][family][character].copy()
    output["family"] = icon_mapping['family'][family]
    output['name'] = character
    output["char"] = int("0x{}".format(output["char"]), 0)
    return output


def get_resource_schema(name):
    schema_file = os.path.join(_schema_dir, '{}.yaml'.format(name))
    return load_yaml_json_file(schema_file)


def to_camel_case(snake_str, first=True):
    components = snake_str.split('_')
    if first:
        return "".join(x.title() for x in components)
    else:
        return components[0] + "".join(x.title() for x in components[1:])


def to_snake_case(name):
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()


def get_date_object(date_string):
    return iso8601.parse_date(date_string)


def get_date_string(date_object):
    return rfc3339.rfc3339(date_object)


seconds_per_unit_time = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}


def unit_time_to_seconds(s):
    return int(s[:-1]) * seconds_per_unit_time[s[-1]]


def get_module(module_key, module_type='manager'):
    if module_type == 'manager':
        class_mapping = settings.MANAGER_CLASS_MAPPINGS
    elif module_type == 'inventory':
        class_mapping = settings.INVENTORY_CLASS_MAPPINGS
    elif module_type == 'monitor':
        class_mapping = settings.MONITOR_CLASS_MAPPINGS
    elif module_type == 'repository':
        class_mapping = settings.REPOSITORY_CLASS_MAPPINGS
    if module_key not in class_mapping:
        raise exceptions.ArchitectException(
            "Service {module_key} is unknown. Please pass in a client"
            " constructor or submit a patch to Architect".format(
                module_key=module_key))
    mod_name, ctr_name = class_mapping[module_key].rsplit('.', 1)
    lib_name = mod_name.split('.')[0]
    try:
        mod = importlib.import_module(mod_name)
    except ImportError:
        raise exceptions.ArchitectException(
            "Client for '{module_key}' was requested, but"
            " {mod_name} was unable to be imported. Either import"
            " the module yourself and pass the constructor in as an argument,"
            " or perhaps you do not have module {lib_name} installed.".format(
                module_key=module_key,
                mod_name=mod_name,
                lib_name=lib_name))
    try:
        ctr = getattr(mod, ctr_name)
    except AttributeError:
        raise exceptions.ArchitectException(
            "Client for '{module_key}' was requested, but although"
            " {mod_name} imported fine, the constructor at {fullname}"
            " was not found.".format(
                module_key=module_key,
                mod_name=mod_name,
                fullname=class_mapping[module_key]))
    return ctr


class ClassRegistry:

    def __init__(self):
        self._classes = {}

    def add(self, cls):
        self._classes[cls.__name__] = cls

    def get_type(self, name):
        return self._classes.get(name)
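A few illustrative calls to the pure helper functions above (`get_module` is omitted here because it needs configured Django settings):

```python
print(to_camel_case("node_sync_status"))               # 'NodeSyncStatus'
print(to_camel_case("node_sync_status", first=False))  # 'nodeSyncStatus'
print(to_snake_case("NodeSyncStatus"))                 # 'node_sync_status'
print(unit_time_to_seconds("2h"))                      # 7200

registry = ClassRegistry()
registry.add(dict)
print(registry.get_type("dict"))                       # <class 'dict'>
```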
PypiClean
/idealoom-0.1.0-py3-none-any.whl/assembl/static/widget/card/assembl/static/js/bower/angular-route/angular-route.js
(function(window, angular, undefined) {'use strict'; /** * @ngdoc module * @name ngRoute * @description * * # ngRoute * * The `ngRoute` module provides routing and deeplinking services and directives for angular apps. * * ## Example * See {@link ngRoute.$route#example $route} for an example of configuring and using `ngRoute`. * * * <div doc-module-components="ngRoute"></div> */ /* global -ngRouteModule */ var ngRouteModule = angular.module('ngRoute', ['ng']). provider('$route', $RouteProvider), $routeMinErr = angular.$$minErr('ngRoute'); /** * @ngdoc provider * @name $routeProvider * * @description * * Used for configuring routes. * * ## Example * See {@link ngRoute.$route#example $route} for an example of configuring and using `ngRoute`. * * ## Dependencies * Requires the {@link ngRoute `ngRoute`} module to be installed. */ function $RouteProvider() { function inherit(parent, extra) { return angular.extend(Object.create(parent), extra); } var routes = {}; /** * @ngdoc method * @name $routeProvider#when * * @param {string} path Route path (matched against `$location.path`). If `$location.path` * contains redundant trailing slash or is missing one, the route will still match and the * `$location.path` will be updated to add or drop the trailing slash to exactly match the * route definition. * * * `path` can contain named groups starting with a colon: e.g. `:name`. All characters up * to the next slash are matched and stored in `$routeParams` under the given `name` * when the route matches. * * `path` can contain named groups starting with a colon and ending with a star: * e.g.`:name*`. All characters are eagerly stored in `$routeParams` under the given `name` * when the route matches. * * `path` can contain optional named groups with a question mark: e.g.`:name?`. * * For example, routes like `/color/:color/largecode/:largecode*\/edit` will match * `/color/brown/largecode/code/with/slashes/edit` and extract: * * * `color: brown` * * `largecode: code/with/slashes`. * * * @param {Object} route Mapping information to be assigned to `$route.current` on route * match. * * Object properties: * * - `controller` – `{(string|function()=}` – Controller fn that should be associated with * newly created scope or the name of a {@link angular.Module#controller registered * controller} if passed as a string. * - `controllerAs` – `{string=}` – An identifier name for a reference to the controller. * If present, the controller will be published to scope under the `controllerAs` name. * - `template` – `{string=|function()=}` – html template as a string or a function that * returns an html template as a string which should be used by {@link * ngRoute.directive:ngView ngView} or {@link ng.directive:ngInclude ngInclude} directives. * This property takes precedence over `templateUrl`. * * If `template` is a function, it will be called with the following parameters: * * - `{Array.<Object>}` - route parameters extracted from the current * `$location.path()` by applying the current route * * - `templateUrl` – `{string=|function()=}` – path or function that returns a path to an html * template that should be used by {@link ngRoute.directive:ngView ngView}. * * If `templateUrl` is a function, it will be called with the following parameters: * * - `{Array.<Object>}` - route parameters extracted from the current * `$location.path()` by applying the current route * * - `resolve` - `{Object.<string, function>=}` - An optional map of dependencies which should * be injected into the controller. 
If any of these dependencies are promises, the router * will wait for them all to be resolved or one to be rejected before the controller is * instantiated. * If all the promises are resolved successfully, the values of the resolved promises are * injected and {@link ngRoute.$route#$routeChangeSuccess $routeChangeSuccess} event is * fired. If any of the promises are rejected the * {@link ngRoute.$route#$routeChangeError $routeChangeError} event is fired. The map object * is: * * - `key` – `{string}`: a name of a dependency to be injected into the controller. * - `factory` - `{string|function}`: If `string` then it is an alias for a service. * Otherwise if function, then it is {@link auto.$injector#invoke injected} * and the return value is treated as the dependency. If the result is a promise, it is * resolved before its value is injected into the controller. Be aware that * `ngRoute.$routeParams` will still refer to the previous route within these resolve * functions. Use `$route.current.params` to access the new route parameters, instead. * * - `redirectTo` – {(string|function())=} – value to update * {@link ng.$location $location} path with and trigger route redirection. * * If `redirectTo` is a function, it will be called with the following parameters: * * - `{Object.<string>}` - route parameters extracted from the current * `$location.path()` by applying the current route templateUrl. * - `{string}` - current `$location.path()` * - `{Object}` - current `$location.search()` * * The custom `redirectTo` function is expected to return a string which will be used * to update `$location.path()` and `$location.search()`. * * - `[reloadOnSearch=true]` - {boolean=} - reload route when only `$location.search()` * or `$location.hash()` changes. * * If the option is set to `false` and url in the browser changes, then * `$routeUpdate` event is broadcasted on the root scope. * * - `[caseInsensitiveMatch=false]` - {boolean=} - match routes without being case sensitive * * If the option is set to `true`, then the particular route can be matched without being * case sensitive * * @returns {Object} self * * @description * Adds a new route definition to the `$route` service. */ this.when = function(path, route) { //copy original route object to preserve params inherited from proto chain var routeCopy = angular.copy(route); if (angular.isUndefined(routeCopy.reloadOnSearch)) { routeCopy.reloadOnSearch = true; } if (angular.isUndefined(routeCopy.caseInsensitiveMatch)) { routeCopy.caseInsensitiveMatch = this.caseInsensitiveMatch; } routes[path] = angular.extend( routeCopy, path && pathRegExp(path, routeCopy) ); // create redirection for trailing slashes if (path) { var redirectPath = (path[path.length - 1] == '/') ? path.substr(0, path.length - 1) : path + '/'; routes[redirectPath] = angular.extend( {redirectTo: path}, pathRegExp(redirectPath, routeCopy) ); } return this; }; /** * @ngdoc property * @name $routeProvider#caseInsensitiveMatch * @description * * A boolean property indicating if routes defined * using this provider should be matched using a case insensitive * algorithm. Defaults to `false`. */ this.caseInsensitiveMatch = false; /** * @param path {string} path * @param opts {Object} options * @return {?Object} * * @description * Normalizes the given path, returning a regular expression * and the original path. * * Inspired by pathRexp in visionmedia/express/lib/utils.js. 
*/ function pathRegExp(path, opts) { var insensitive = opts.caseInsensitiveMatch, ret = { originalPath: path, regexp: path }, keys = ret.keys = []; path = path .replace(/([().])/g, '\\$1') .replace(/(\/)?:(\w+)([\?\*])?/g, function(_, slash, key, option) { var optional = option === '?' ? option : null; var star = option === '*' ? option : null; keys.push({ name: key, optional: !!optional }); slash = slash || ''; return '' + (optional ? '' : slash) + '(?:' + (optional ? slash : '') + (star && '(.+?)' || '([^/]+)') + (optional || '') + ')' + (optional || ''); }) .replace(/([\/$\*])/g, '\\$1'); ret.regexp = new RegExp('^' + path + '$', insensitive ? 'i' : ''); return ret; } /** * @ngdoc method * @name $routeProvider#otherwise * * @description * Sets route definition that will be used on route change when no other route definition * is matched. * * @param {Object|string} params Mapping information to be assigned to `$route.current`. * If called with a string, the value maps to `redirectTo`. * @returns {Object} self */ this.otherwise = function(params) { if (typeof params === 'string') { params = {redirectTo: params}; } this.when(null, params); return this; }; this.$get = ['$rootScope', '$location', '$routeParams', '$q', '$injector', '$templateRequest', '$sce', function($rootScope, $location, $routeParams, $q, $injector, $templateRequest, $sce) { /** * @ngdoc service * @name $route * @requires $location * @requires $routeParams * * @property {Object} current Reference to the current route definition. * The route definition contains: * * - `controller`: The controller constructor as define in route definition. * - `locals`: A map of locals which is used by {@link ng.$controller $controller} service for * controller instantiation. The `locals` contain * the resolved values of the `resolve` map. Additionally the `locals` also contain: * * - `$scope` - The current route scope. * - `$template` - The current route template HTML. * * @property {Object} routes Object with all route configuration Objects as its properties. * * @description * `$route` is used for deep-linking URLs to controllers and views (HTML partials). * It watches `$location.url()` and tries to map the path to an existing route definition. * * Requires the {@link ngRoute `ngRoute`} module to be installed. * * You can define routes through {@link ngRoute.$routeProvider $routeProvider}'s API. * * The `$route` service is typically used in conjunction with the * {@link ngRoute.directive:ngView `ngView`} directive and the * {@link ngRoute.$routeParams `$routeParams`} service. * * @example * This example shows how changing the URL hash causes the `$route` to match a route against the * URL, and the `ngView` pulls in the partial. 
* * <example name="$route-service" module="ngRouteExample" * deps="angular-route.js" fixBase="true"> * <file name="index.html"> * <div ng-controller="MainController"> * Choose: * <a href="Book/Moby">Moby</a> | * <a href="Book/Moby/ch/1">Moby: Ch1</a> | * <a href="Book/Gatsby">Gatsby</a> | * <a href="Book/Gatsby/ch/4?key=value">Gatsby: Ch4</a> | * <a href="Book/Scarlet">Scarlet Letter</a><br/> * * <div ng-view></div> * * <hr /> * * <pre>$location.path() = {{$location.path()}}</pre> * <pre>$route.current.templateUrl = {{$route.current.templateUrl}}</pre> * <pre>$route.current.params = {{$route.current.params}}</pre> * <pre>$route.current.scope.name = {{$route.current.scope.name}}</pre> * <pre>$routeParams = {{$routeParams}}</pre> * </div> * </file> * * <file name="book.html"> * controller: {{name}}<br /> * Book Id: {{params.bookId}}<br /> * </file> * * <file name="chapter.html"> * controller: {{name}}<br /> * Book Id: {{params.bookId}}<br /> * Chapter Id: {{params.chapterId}} * </file> * * <file name="script.js"> * angular.module('ngRouteExample', ['ngRoute']) * * .controller('MainController', function($scope, $route, $routeParams, $location) { * $scope.$route = $route; * $scope.$location = $location; * $scope.$routeParams = $routeParams; * }) * * .controller('BookController', function($scope, $routeParams) { * $scope.name = "BookController"; * $scope.params = $routeParams; * }) * * .controller('ChapterController', function($scope, $routeParams) { * $scope.name = "ChapterController"; * $scope.params = $routeParams; * }) * * .config(function($routeProvider, $locationProvider) { * $routeProvider * .when('/Book/:bookId', { * templateUrl: 'book.html', * controller: 'BookController', * resolve: { * // I will cause a 1 second delay * delay: function($q, $timeout) { * var delay = $q.defer(); * $timeout(delay.resolve, 1000); * return delay.promise; * } * } * }) * .when('/Book/:bookId/ch/:chapterId', { * templateUrl: 'chapter.html', * controller: 'ChapterController' * }); * * // configure html5 to get links working on jsfiddle * $locationProvider.html5Mode(true); * }); * * </file> * * <file name="protractor.js" type="protractor"> * it('should load and compile correct template', function() { * element(by.linkText('Moby: Ch1')).click(); * var content = element(by.css('[ng-view]')).getText(); * expect(content).toMatch(/controller\: ChapterController/); * expect(content).toMatch(/Book Id\: Moby/); * expect(content).toMatch(/Chapter Id\: 1/); * * element(by.partialLinkText('Scarlet')).click(); * * content = element(by.css('[ng-view]')).getText(); * expect(content).toMatch(/controller\: BookController/); * expect(content).toMatch(/Book Id\: Scarlet/); * }); * </file> * </example> */ /** * @ngdoc event * @name $route#$routeChangeStart * @eventType broadcast on root scope * @description * Broadcasted before a route change. At this point the route services starts * resolving all of the dependencies needed for the route change to occur. * Typically this involves fetching the view template as well as any dependencies * defined in `resolve` route property. Once all of the dependencies are resolved * `$routeChangeSuccess` is fired. * * The route change (and the `$location` change that triggered it) can be prevented * by calling `preventDefault` method of the event. See {@link ng.$rootScope.Scope#$on} * for more details about event object. * * @param {Object} angularEvent Synthetic event object. * @param {Route} next Future route information. * @param {Route} current Current route information. 
*/ /** * @ngdoc event * @name $route#$routeChangeSuccess * @eventType broadcast on root scope * @description * Broadcasted after a route change has happened successfully. * The `resolve` dependencies are now available in the `current.locals` property. * * {@link ngRoute.directive:ngView ngView} listens for the directive * to instantiate the controller and render the view. * * @param {Object} angularEvent Synthetic event object. * @param {Route} current Current route information. * @param {Route|Undefined} previous Previous route information, or undefined if current is * first route entered. */ /** * @ngdoc event * @name $route#$routeChangeError * @eventType broadcast on root scope * @description * Broadcasted if any of the resolve promises are rejected. * * @param {Object} angularEvent Synthetic event object * @param {Route} current Current route information. * @param {Route} previous Previous route information. * @param {Route} rejection Rejection of the promise. Usually the error of the failed promise. */ /** * @ngdoc event * @name $route#$routeUpdate * @eventType broadcast on root scope * @description * The `reloadOnSearch` property has been set to false, and we are reusing the same * instance of the Controller. * * @param {Object} angularEvent Synthetic event object * @param {Route} current Current/previous route information. */ var forceReload = false, preparedRoute, preparedRouteIsUpdateOnly, $route = { routes: routes, /** * @ngdoc method * @name $route#reload * * @description * Causes `$route` service to reload the current route even if * {@link ng.$location $location} hasn't changed. * * As a result of that, {@link ngRoute.directive:ngView ngView} * creates new scope and reinstantiates the controller. */ reload: function() { forceReload = true; $rootScope.$evalAsync(function() { // Don't support cancellation of a reload for now... prepareRoute(); commitRoute(); }); }, /** * @ngdoc method * @name $route#updateParams * * @description * Causes `$route` service to update the current URL, replacing * current route parameters with those specified in `newParams`. * Provided property names that match the route's path segment * definitions will be interpolated into the location's path, while * remaining properties will be treated as query params. * * @param {!Object<string, string>} newParams mapping of URL parameter names to values */ updateParams: function(newParams) { if (this.current && this.current.$$route) { newParams = angular.extend({}, this.current.params, newParams); $location.path(interpolate(this.current.$$route.originalPath, newParams)); // interpolate modifies newParams, only query params are left $location.search(newParams); } else { throw $routeMinErr('norout', 'Tried updating route when with no current route'); } } }; $rootScope.$on('$locationChangeStart', prepareRoute); $rootScope.$on('$locationChangeSuccess', commitRoute); return $route; ///////////////////////////////////////////////////// /** * @param on {string} current url * @param route {Object} route regexp to match the url against * @return {?Object} * * @description * Check if the route matches the current url. * * Inspired by match in * visionmedia/express/lib/router/router.js. 
*/ function switchRouteMatcher(on, route) { var keys = route.keys, params = {}; if (!route.regexp) return null; var m = route.regexp.exec(on); if (!m) return null; for (var i = 1, len = m.length; i < len; ++i) { var key = keys[i - 1]; var val = m[i]; if (key && val) { params[key.name] = val; } } return params; } function prepareRoute($locationEvent) { var lastRoute = $route.current; preparedRoute = parseRoute(); preparedRouteIsUpdateOnly = preparedRoute && lastRoute && preparedRoute.$$route === lastRoute.$$route && angular.equals(preparedRoute.pathParams, lastRoute.pathParams) && !preparedRoute.reloadOnSearch && !forceReload; if (!preparedRouteIsUpdateOnly && (lastRoute || preparedRoute)) { if ($rootScope.$broadcast('$routeChangeStart', preparedRoute, lastRoute).defaultPrevented) { if ($locationEvent) { $locationEvent.preventDefault(); } } } } function commitRoute() { var lastRoute = $route.current; var nextRoute = preparedRoute; if (preparedRouteIsUpdateOnly) { lastRoute.params = nextRoute.params; angular.copy(lastRoute.params, $routeParams); $rootScope.$broadcast('$routeUpdate', lastRoute); } else if (nextRoute || lastRoute) { forceReload = false; $route.current = nextRoute; if (nextRoute) { if (nextRoute.redirectTo) { if (angular.isString(nextRoute.redirectTo)) { $location.path(interpolate(nextRoute.redirectTo, nextRoute.params)).search(nextRoute.params) .replace(); } else { $location.url(nextRoute.redirectTo(nextRoute.pathParams, $location.path(), $location.search())) .replace(); } } } $q.when(nextRoute). then(function() { if (nextRoute) { var locals = angular.extend({}, nextRoute.resolve), template, templateUrl; angular.forEach(locals, function(value, key) { locals[key] = angular.isString(value) ? $injector.get(value) : $injector.invoke(value, null, null, key); }); if (angular.isDefined(template = nextRoute.template)) { if (angular.isFunction(template)) { template = template(nextRoute.params); } } else if (angular.isDefined(templateUrl = nextRoute.templateUrl)) { if (angular.isFunction(templateUrl)) { templateUrl = templateUrl(nextRoute.params); } if (angular.isDefined(templateUrl)) { nextRoute.loadedTemplateUrl = $sce.valueOf(templateUrl); template = $templateRequest(templateUrl); } } if (angular.isDefined(template)) { locals['$template'] = template; } return $q.all(locals); } }). 
then(function(locals) { // after route change if (nextRoute == $route.current) { if (nextRoute) { nextRoute.locals = locals; angular.copy(nextRoute.params, $routeParams); } $rootScope.$broadcast('$routeChangeSuccess', nextRoute, lastRoute); } }, function(error) { if (nextRoute == $route.current) { $rootScope.$broadcast('$routeChangeError', nextRoute, lastRoute, error); } }); } } /** * @returns {Object} the current active route, by matching it against the URL */ function parseRoute() { // Match a route var params, match; angular.forEach(routes, function(route, path) { if (!match && (params = switchRouteMatcher($location.path(), route))) { match = inherit(route, { params: angular.extend({}, $location.search(), params), pathParams: params}); match.$$route = route; } }); // No route matched; fallback to "otherwise" route return match || routes[null] && inherit(routes[null], {params: {}, pathParams:{}}); } /** * @returns {string} interpolation of the redirect path with the parameters */ function interpolate(string, params) { var result = []; angular.forEach((string || '').split(':'), function(segment, i) { if (i === 0) { result.push(segment); } else { var segmentMatch = segment.match(/(\w+)(?:[?*])?(.*)/); var key = segmentMatch[1]; result.push(params[key]); result.push(segmentMatch[2] || ''); delete params[key]; } }); return result.join(''); } }]; } ngRouteModule.provider('$routeParams', $RouteParamsProvider); /** * @ngdoc service * @name $routeParams * @requires $route * * @description * The `$routeParams` service allows you to retrieve the current set of route parameters. * * Requires the {@link ngRoute `ngRoute`} module to be installed. * * The route parameters are a combination of {@link ng.$location `$location`}'s * {@link ng.$location#search `search()`} and {@link ng.$location#path `path()`}. * The `path` parameters are extracted when the {@link ngRoute.$route `$route`} path is matched. * * In case of parameter name collision, `path` params take precedence over `search` params. * * The service guarantees that the identity of the `$routeParams` object will remain unchanged * (but its properties will likely change) even when a route change occurs. * * Note that the `$routeParams` are only updated *after* a route change completes successfully. * This means that you cannot rely on `$routeParams` being correct in route resolve functions. * Instead you can use `$route.current.params` to access the new route's parameters. * * @example * ```js * // Given: * // URL: http://server.com/index.html#/Chapter/1/Section/2?search=moby * // Route: /Chapter/:chapterId/Section/:sectionId * // * // Then * $routeParams ==> {chapterId:'1', sectionId:'2', search:'moby'} * ``` */ function $RouteParamsProvider() { this.$get = function() { return {}; }; } ngRouteModule.directive('ngView', ngViewFactory); ngRouteModule.directive('ngView', ngViewFillContentFactory); /** * @ngdoc directive * @name ngView * @restrict ECA * * @description * # Overview * `ngView` is a directive that complements the {@link ngRoute.$route $route} service by * including the rendered template of the current route into the main layout (`index.html`) file. * Every time the current route changes, the included view changes with it according to the * configuration of the `$route` service. * * Requires the {@link ngRoute `ngRoute`} module to be installed. * * @animations * enter - animation is used to bring new content into the browser. * leave - animation is used to animate existing content away. 
* * The enter and leave animation occur concurrently. * * @scope * @priority 400 * @param {string=} onload Expression to evaluate whenever the view updates. * * @param {string=} autoscroll Whether `ngView` should call {@link ng.$anchorScroll * $anchorScroll} to scroll the viewport after the view is updated. * * - If the attribute is not set, disable scrolling. * - If the attribute is set without value, enable scrolling. * - Otherwise enable scrolling only if the `autoscroll` attribute value evaluated * as an expression yields a truthy value. * @example <example name="ngView-directive" module="ngViewExample" deps="angular-route.js;angular-animate.js" animations="true" fixBase="true"> <file name="index.html"> <div ng-controller="MainCtrl as main"> Choose: <a href="Book/Moby">Moby</a> | <a href="Book/Moby/ch/1">Moby: Ch1</a> | <a href="Book/Gatsby">Gatsby</a> | <a href="Book/Gatsby/ch/4?key=value">Gatsby: Ch4</a> | <a href="Book/Scarlet">Scarlet Letter</a><br/> <div class="view-animate-container"> <div ng-view class="view-animate"></div> </div> <hr /> <pre>$location.path() = {{main.$location.path()}}</pre> <pre>$route.current.templateUrl = {{main.$route.current.templateUrl}}</pre> <pre>$route.current.params = {{main.$route.current.params}}</pre> <pre>$routeParams = {{main.$routeParams}}</pre> </div> </file> <file name="book.html"> <div> controller: {{book.name}}<br /> Book Id: {{book.params.bookId}}<br /> </div> </file> <file name="chapter.html"> <div> controller: {{chapter.name}}<br /> Book Id: {{chapter.params.bookId}}<br /> Chapter Id: {{chapter.params.chapterId}} </div> </file> <file name="animations.css"> .view-animate-container { position:relative; height:100px!important; background:white; border:1px solid black; height:40px; overflow:hidden; } .view-animate { padding:10px; } .view-animate.ng-enter, .view-animate.ng-leave { transition:all cubic-bezier(0.250, 0.460, 0.450, 0.940) 1.5s; display:block; width:100%; border-left:1px solid black; position:absolute; top:0; left:0; right:0; bottom:0; padding:10px; } .view-animate.ng-enter { left:100%; } .view-animate.ng-enter.ng-enter-active { left:0; } .view-animate.ng-leave.ng-leave-active { left:-100%; } </file> <file name="script.js"> angular.module('ngViewExample', ['ngRoute', 'ngAnimate']) .config(['$routeProvider', '$locationProvider', function($routeProvider, $locationProvider) { $routeProvider .when('/Book/:bookId', { templateUrl: 'book.html', controller: 'BookCtrl', controllerAs: 'book' }) .when('/Book/:bookId/ch/:chapterId', { templateUrl: 'chapter.html', controller: 'ChapterCtrl', controllerAs: 'chapter' }); $locationProvider.html5Mode(true); }]) .controller('MainCtrl', ['$route', '$routeParams', '$location', function($route, $routeParams, $location) { this.$route = $route; this.$location = $location; this.$routeParams = $routeParams; }]) .controller('BookCtrl', ['$routeParams', function($routeParams) { this.name = "BookCtrl"; this.params = $routeParams; }]) .controller('ChapterCtrl', ['$routeParams', function($routeParams) { this.name = "ChapterCtrl"; this.params = $routeParams; }]); </file> <file name="protractor.js" type="protractor"> it('should load and compile correct template', function() { element(by.linkText('Moby: Ch1')).click(); var content = element(by.css('[ng-view]')).getText(); expect(content).toMatch(/controller\: ChapterCtrl/); expect(content).toMatch(/Book Id\: Moby/); expect(content).toMatch(/Chapter Id\: 1/); element(by.partialLinkText('Scarlet')).click(); content = element(by.css('[ng-view]')).getText(); 
expect(content).toMatch(/controller\: BookCtrl/); expect(content).toMatch(/Book Id\: Scarlet/); }); </file> </example> */ /** * @ngdoc event * @name ngView#$viewContentLoaded * @eventType emit on the current ngView scope * @description * Emitted every time the ngView content is reloaded. */ ngViewFactory.$inject = ['$route', '$anchorScroll', '$animate']; function ngViewFactory($route, $anchorScroll, $animate) { return { restrict: 'ECA', terminal: true, priority: 400, transclude: 'element', link: function(scope, $element, attr, ctrl, $transclude) { var currentScope, currentElement, previousLeaveAnimation, autoScrollExp = attr.autoscroll, onloadExp = attr.onload || ''; scope.$on('$routeChangeSuccess', update); update(); function cleanupLastView() { if (previousLeaveAnimation) { $animate.cancel(previousLeaveAnimation); previousLeaveAnimation = null; } if (currentScope) { currentScope.$destroy(); currentScope = null; } if (currentElement) { previousLeaveAnimation = $animate.leave(currentElement); previousLeaveAnimation.then(function() { previousLeaveAnimation = null; }); currentElement = null; } } function update() { var locals = $route.current && $route.current.locals, template = locals && locals.$template; if (angular.isDefined(template)) { var newScope = scope.$new(); var current = $route.current; // Note: This will also link all children of ng-view that were contained in the original // html. If that content contains controllers, ... they could pollute/change the scope. // However, using ng-view on an element with additional content does not make sense... // Note: We can't remove them in the cloneAttchFn of $transclude as that // function is called before linking the content, which would apply child // directives to non existing elements. var clone = $transclude(newScope, function(clone) { $animate.enter(clone, null, currentElement || $element).then(function onNgViewEnter() { if (angular.isDefined(autoScrollExp) && (!autoScrollExp || scope.$eval(autoScrollExp))) { $anchorScroll(); } }); cleanupLastView(); }); currentElement = clone; currentScope = current.scope = newScope; currentScope.$emit('$viewContentLoaded'); currentScope.$eval(onloadExp); } else { cleanupLastView(); } } } }; } // This directive is called during the $transclude call of the first `ngView` directive. // It will replace and compile the content of the element with the loaded template. // We need this directive so that the element content is already filled when // the link function of another directive on the same element as ngView // is called. ngViewFillContentFactory.$inject = ['$compile', '$controller', '$route']; function ngViewFillContentFactory($compile, $controller, $route) { return { restrict: 'ECA', priority: -400, link: function(scope, $element) { var current = $route.current, locals = current.locals; $element.html(locals.$template); var link = $compile($element.contents()); if (current.controller) { locals.$scope = scope; var controller = $controller(current.controller, locals); if (current.controllerAs) { scope[current.controllerAs] = controller; } $element.data('$ngControllerController', controller); $element.children().data('$ngControllerController', controller); } link(scope); } }; } })(window, window.angular);
/odoo13_addon_stock_request-13.0.1.7.3-py3-none-any.whl/odoo/addons/stock_request/models/stock_request_abstract.py
from odoo import _, api, fields, models from odoo.exceptions import ValidationError class StockRequest(models.AbstractModel): _name = "stock.request.abstract" _description = "Stock Request Template" _inherit = ["mail.thread", "mail.activity.mixin"] @api.model def default_get(self, fields): res = super(StockRequest, self).default_get(fields) warehouse = None if "warehouse_id" not in res and res.get("company_id"): warehouse = self.env["stock.warehouse"].search( [("company_id", "=", res["company_id"])], limit=1 ) if warehouse: res["warehouse_id"] = warehouse.id res["location_id"] = warehouse.lot_stock_id.id return res @api.depends( "product_id", "product_uom_id", "product_uom_qty", "product_id.product_tmpl_id.uom_id", ) def _compute_product_qty(self): for rec in self: rec.product_qty = rec.product_uom_id._compute_quantity( rec.product_uom_qty, rec.product_id.product_tmpl_id.uom_id ) name = fields.Char("Name", copy=False, required=True, readonly=True, default="/") warehouse_id = fields.Many2one( "stock.warehouse", "Warehouse", ondelete="cascade", required=True ) location_id = fields.Many2one( "stock.location", "Location", domain=[("usage", "in", ["internal", "transit"])], ondelete="cascade", required=True, ) product_id = fields.Many2one( "product.product", "Product", domain=[("type", "in", ["product", "consu"])], ondelete="cascade", required=True, ) allow_virtual_location = fields.Boolean( related="company_id.stock_request_allow_virtual_loc", readonly=True ) product_uom_id = fields.Many2one( "uom.uom", "Product Unit of Measure", required=True, default=lambda self: self._context.get("product_uom_id", False), ) product_uom_qty = fields.Float( "Quantity", digits="Product Unit of Measure", required=True, help="Quantity, specified in the unit of measure indicated in the request.", ) product_qty = fields.Float( "Real Quantity", compute="_compute_product_qty", store=True, copy=False, digits="Product Unit of Measure", help="Quantity in the default UoM of the product", ) procurement_group_id = fields.Many2one( "procurement.group", "Procurement Group", help="Moves created through this stock request will be put in this " "procurement group. 
If none is given, the moves generated by " "procurement rules will be grouped into one big picking.", ) company_id = fields.Many2one( "res.company", "Company", required=True, default=lambda self: self.env.company ) route_id = fields.Many2one( "stock.location.route", string="Route", domain="[('id', 'in', route_ids)]", ondelete="restrict", ) route_ids = fields.Many2many( "stock.location.route", string="Routes", compute="_compute_route_ids", readonly=True, ) _sql_constraints = [ ("name_uniq", "unique(name, company_id)", "Name must be unique") ] @api.depends("product_id", "warehouse_id", "location_id") def _compute_route_ids(self): route_obj = self.env["stock.location.route"] routes = route_obj.search( [("warehouse_ids", "in", self.mapped("warehouse_id").ids)] ) routes_by_warehouse = {} for route in routes: for warehouse in route.warehouse_ids: routes_by_warehouse.setdefault( warehouse.id, self.env["stock.location.route"] ) routes_by_warehouse[warehouse.id] |= route for record in self: routes = route_obj if record.product_id: routes += record.product_id.mapped( "route_ids" ) | record.product_id.mapped("categ_id").mapped("total_route_ids") if record.warehouse_id and routes_by_warehouse.get(record.warehouse_id.id): routes |= routes_by_warehouse[record.warehouse_id.id] parents = record.get_parents().ids record.route_ids = routes.filtered( lambda r: any(p.location_id.id in parents for p in r.rule_ids) ) def get_parents(self): location = self.location_id result = location while location.location_id: location = location.location_id result |= location return result @api.constrains( "company_id", "product_id", "warehouse_id", "location_id", "route_id" ) def _check_company_constrains(self): """ Check if the related models have the same company """ for rec in self: if ( rec.product_id.company_id and rec.product_id.company_id != rec.company_id ): raise ValidationError( _( "You have entered a product that is assigned " "to another company." ) ) if ( rec.location_id.company_id and rec.location_id.company_id != rec.company_id ): raise ValidationError( _( "You have entered a location that is " "assigned to another company." ) ) if rec.warehouse_id.company_id != rec.company_id: raise ValidationError( _( "You have entered a warehouse that is " "assigned to another company." ) ) if ( rec.route_id and rec.route_id.company_id and rec.route_id.company_id != rec.company_id ): raise ValidationError( _( "You have entered a route that is " "assigned to another company." ) ) @api.constrains("product_id") def _check_product_uom(self): """ Check if the UoM has the same category as the product standard UoM """ if any( request.product_id.uom_id.category_id != request.product_uom_id.category_id for request in self ): raise ValidationError( _( "You have to select a product unit of measure in the " "same category than the default unit " "of measure of the product" ) ) @api.constrains("product_qty") def _check_qty(self): for rec in self: if rec.product_qty <= 0: raise ValidationError( _("Stock Request product quantity has to be strictly positive.") ) @api.onchange("warehouse_id") def onchange_warehouse_id(self): """ Finds location id for changed warehouse. """ res = {"domain": {}} if self._name == "stock.request" and self.order_id: # When the stock request is created from an order the wh and # location are taken from the order and we rely on it to change # all request associated. Thus, no need to apply # the onchange, as it could lead to inconsistencies. 
return res if self.warehouse_id: loc_wh = self.location_id.get_warehouse() if self.warehouse_id != loc_wh: self.location_id = self.warehouse_id.lot_stock_id.id if self.warehouse_id.company_id != self.company_id: self.company_id = self.warehouse_id.company_id return res @api.onchange("location_id") def onchange_location_id(self): if self.location_id: loc_wh = self.location_id.get_warehouse() if loc_wh and self.warehouse_id != loc_wh: self.warehouse_id = loc_wh self.with_context(no_change_childs=True).onchange_warehouse_id() @api.onchange("allow_virtual_location") def onchange_allow_virtual_location(self): if self.allow_virtual_location: return {"domain": {"location_id": []}} @api.onchange("company_id") def onchange_company_id(self): """ Sets a default warehouse when the company is changed and limits the user selection of warehouses. """ if self.company_id and ( not self.warehouse_id or self.warehouse_id.company_id != self.company_id ): self.warehouse_id = self.env["stock.warehouse"].search( [("company_id", "=", self.company_id.id)], limit=1 ) self.onchange_warehouse_id() return {"domain": {"warehouse_id": [("company_id", "=", self.company_id.id)]}} @api.onchange("product_id") def onchange_product_id(self): res = {"domain": {}} if self.product_id: self.product_uom_id = self.product_id.uom_id.id res["domain"]["product_uom_id"] = [ ("category_id", "=", self.product_id.uom_id.category_id.id) ] return res res["domain"]["product_uom_id"] = [] return res
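# ---------------------------------------------------------------------------
# Illustrative sketch (editor addition, not part of the upstream addon): a
# minimal concrete model built on the abstract template above. The model name
# "stock.request.demo" and the extra `notes` field are assumptions, used only
# to show how the defaults, constraints and onchange handlers defined on
# "stock.request.abstract" are inherited by a concrete model.
# ---------------------------------------------------------------------------
class StockRequestDemo(models.Model):
    _name = "stock.request.demo"
    _description = "Demo Stock Request (illustrative only)"
    _inherit = "stock.request.abstract"

    # Any extra business fields go here; everything else (warehouse/location
    # defaults, UoM category check, positive-quantity constraint) comes from
    # the abstract model.
    notes = fields.Text("Notes")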
/pyparsing-3.1.0b1.tar.gz/pyparsing-3.1.0b1/examples/number_words.py
import pyparsing as pp from operator import mul def define_numeric_word_range( names: str, from_: int, to_: int = None, step: int = 1 ) -> pp.MatchFirst: """ Compose a MatchFirst of CaselessKeywords, given their names and values, which when parsed, are converted to their value """ def define_numeric_word(nm: str, val: int): return pp.CaselessKeyword(nm).add_parse_action(lambda: val) names = names.split() if to_ is None: to_ = from_ values = range(from_, to_ + 1, step) ret = pp.MatchFirst( define_numeric_word(name, value) for name, value in zip(names, values) ) if len(names) == 1: ret.set_name(names[0]) else: ret.set_name(f"{names[0]}-{names[-1]}") return ret def multiply(t): """ Parse action for hundreds and thousands. """ return mul(*t) opt_dash = pp.Opt(pp.Suppress("-")).set_name("'-'") opt_and = pp.Opt((pp.CaselessKeyword("and") | "-").suppress()).set_name("'and/-'") units = define_numeric_word_range("one two three four five six seven eight nine", 1, 9) teens_only = define_numeric_word_range( "eleven twelve thirteen fourteen fifteen sixteen seventeen eighteen nineteen", 11, 19, ) ten = define_numeric_word_range("ten", 10) teens = ten | teens_only tens = define_numeric_word_range( "twenty thirty forty fifty sixty seventy eighty ninety", 20, 90, 10 ) one_to_99 = (units | teens | (tens + pp.Opt(opt_dash + units))).set_name("1-99") one_to_99.add_parse_action(sum) hundred = define_numeric_word_range("hundred", 100) thousand = define_numeric_word_range("thousand", 1000) hundreds = (units | teens_only | (tens + opt_dash + units)) + hundred hundreds.set_name("100s") one_to_999 = ( (pp.Opt(hundreds + opt_and) + one_to_99 | hundreds).add_parse_action(sum) ).set_name("1-999") thousands = one_to_999 + thousand thousands.set_name("1000s") # for hundreds and thousands, must scale up (multiply) accordingly hundreds.add_parse_action(multiply) thousands.add_parse_action(multiply) numeric_expression = ( pp.Opt(thousands + opt_and) + pp.Opt(hundreds + opt_and) + one_to_99 | pp.Opt(thousands + opt_and) + hundreds | thousands ).set_name("numeric_words") # sum all sub-results into total numeric_expression.add_parse_action(sum) if __name__ == "__main__": numeric_expression.run_tests( """ one seven twelve twenty six forty-two two hundred twelve hundred one hundred and eleven ninety nine thousand nine hundred and ninety nine nine hundred thousand nine hundred and ninety nine nine hundred and ninety nine thousand nine hundred and ninety nine nineteen hundred thousand nineteen hundred and ninety nine # invalid twenty hundred """, postParse=lambda _, s: "{:,}".format(s[0]), ) # create railroad diagram numeric_expression.create_diagram("numeric_words_diagram.html", vertical=5)
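# ---------------------------------------------------------------------------
# Illustrative sketch (editor addition): using the grammar above directly,
# outside of run_tests(). parse_string() raises a ParseException for phrases
# the grammar rejects, which is why "twenty hundred" appears under "# invalid"
# in the tests.
# ---------------------------------------------------------------------------
def words_to_number(phrase: str) -> int:
    """Convert an English number phrase covered by the grammar to an int."""
    # parse_all=True makes trailing unmatched text fail instead of being ignored.
    return numeric_expression.parse_string(phrase, parse_all=True)[0]

# e.g. words_to_number("nine hundred and ninety nine") == 999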
/criteo_api_marketingsolutions_sdk-2023.7.0.230831-py3-none-any.whl/criteo_api_marketingsolutions_v2023_07/model/create_ad_set_targeting.py
import re # noqa: F401 import sys # noqa: F401 from criteo_api_marketingsolutions_v2023_07.model_utils import ( # noqa: F401 ApiTypeError, ModelComposed, ModelNormal, ModelSimple, cached_property, change_keys_js_to_python, convert_js_args_to_python_args, date, datetime, file_type, none_type, validate_get_composed_info, OpenApiModel ) from criteo_api_marketingsolutions_v2023_07.exceptions import ApiAttributeError def lazy_import(): from criteo_api_marketingsolutions_v2023_07.model.ad_set_delivery_limitations import AdSetDeliveryLimitations from criteo_api_marketingsolutions_v2023_07.model.ad_set_frequency_capping import AdSetFrequencyCapping from criteo_api_marketingsolutions_v2023_07.model.create_ad_set_geo_location import CreateAdSetGeoLocation globals()['AdSetDeliveryLimitations'] = AdSetDeliveryLimitations globals()['AdSetFrequencyCapping'] = AdSetFrequencyCapping globals()['CreateAdSetGeoLocation'] = CreateAdSetGeoLocation class CreateAdSetTargeting(ModelNormal): """NOTE: This class is auto generated by OpenAPI Generator. Ref: https://openapi-generator.tech Do not edit the class manually. Attributes: allowed_values (dict): The key is the tuple path to the attribute and the for var_name this is (var_name,). The value is a dict with a capitalized key describing the allowed value and an allowed value. These dicts store the allowed enum values. attribute_map (dict): The key is attribute name and the value is json key in definition. discriminator_value_class_map (dict): A dict to go from the discriminator variable value to the discriminator class name. validations (dict): The key is the tuple path to the attribute and the for var_name this is (var_name,). The value is a dict that stores validations for max_length, min_length, max_items, min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum, inclusive_minimum, and regex. additional_properties_type (tuple): A tuple of classes accepted as additional properties values. """ allowed_values = { } validations = { } @cached_property def additional_properties_type(): """ This must be a method because a model may have properties that are of type self, this must run after the class is loaded """ lazy_import() return (bool, date, datetime, dict, float, int, list, str, none_type,) # noqa: E501 _nullable = False @cached_property def openapi_types(): """ This must be a method because a model may have properties that are of type self, this must run after the class is loaded Returns openapi_types (dict): The key is attribute name and the value is attribute type. """ lazy_import() return { 'frequency_capping': (AdSetFrequencyCapping,), # noqa: E501 'delivery_limitations': (AdSetDeliveryLimitations,), # noqa: E501 'geo_location': (CreateAdSetGeoLocation,), # noqa: E501 } @cached_property def discriminator(): return None attribute_map = { 'frequency_capping': 'frequencyCapping', # noqa: E501 'delivery_limitations': 'deliveryLimitations', # noqa: E501 'geo_location': 'geoLocation', # noqa: E501 } read_only_vars = { } _composed_schemas = {} @classmethod @convert_js_args_to_python_args def _from_openapi_data(cls, frequency_capping, *args, **kwargs): # noqa: E501 """CreateAdSetTargeting - a model defined in OpenAPI Args: frequency_capping (AdSetFrequencyCapping): Keyword Args: _check_type (bool): if True, values for parameters in openapi_types will be type checked and a TypeError will be raised if the wrong type is input. 
Defaults to True _path_to_item (tuple/list): This is a list of keys or values to drill down to the model in received_data when deserializing a response _spec_property_naming (bool): True if the variable names in the input data are serialized names, as specified in the OpenAPI document. False if the variable names in the input data are pythonic names, e.g. snake case (default) _configuration (Configuration): the instance to use when deserializing a file_type parameter. If passed, type conversion is attempted If omitted no type conversion is done. _visited_composed_classes (tuple): This stores a tuple of classes that we have traveled through so that if we see that class again we will not use its discriminator again. When traveling through a discriminator, the composed schema that is is traveled through is added to this set. For example if Animal has a discriminator petType and we pass in "Dog", and the class Dog allOf includes Animal, we move through Animal once using the discriminator, and pick Dog. Then in Dog, we will make an instance of the Animal class but this time we won't travel through its discriminator because we passed in _visited_composed_classes = (Animal,) delivery_limitations (AdSetDeliveryLimitations): [optional] # noqa: E501 geo_location (CreateAdSetGeoLocation): [optional] # noqa: E501 """ _check_type = kwargs.pop('_check_type', True) _spec_property_naming = kwargs.pop('_spec_property_naming', True) _path_to_item = kwargs.pop('_path_to_item', ()) _configuration = kwargs.pop('_configuration', None) _visited_composed_classes = kwargs.pop('_visited_composed_classes', ()) self = super(OpenApiModel, cls).__new__(cls) if args: for arg in args: if isinstance(arg, dict): kwargs.update(arg) else: raise ApiTypeError( "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % ( args, self.__class__.__name__, ), path_to_item=_path_to_item, valid_classes=(self.__class__,), ) self._data_store = {} self._check_type = _check_type self._spec_property_naming = _spec_property_naming self._path_to_item = _path_to_item self._configuration = _configuration self._visited_composed_classes = _visited_composed_classes + (self.__class__,) self.frequency_capping = frequency_capping for var_name, var_value in kwargs.items(): if var_name not in self.attribute_map and \ self._configuration is not None and \ self._configuration.discard_unknown_keys and \ self.additional_properties_type is None: # discard variable. continue setattr(self, var_name, var_value) return self required_properties = set([ '_data_store', '_check_type', '_spec_property_naming', '_path_to_item', '_configuration', '_visited_composed_classes', ]) @convert_js_args_to_python_args def __init__(self, frequency_capping, *args, **kwargs): # noqa: E501 """CreateAdSetTargeting - a model defined in OpenAPI Args: frequency_capping (AdSetFrequencyCapping): Keyword Args: _check_type (bool): if True, values for parameters in openapi_types will be type checked and a TypeError will be raised if the wrong type is input. Defaults to True _path_to_item (tuple/list): This is a list of keys or values to drill down to the model in received_data when deserializing a response _spec_property_naming (bool): True if the variable names in the input data are serialized names, as specified in the OpenAPI document. False if the variable names in the input data are pythonic names, e.g. snake case (default) _configuration (Configuration): the instance to use when deserializing a file_type parameter. 
If passed, type conversion is attempted If omitted no type conversion is done. _visited_composed_classes (tuple): This stores a tuple of classes that we have traveled through so that if we see that class again we will not use its discriminator again. When traveling through a discriminator, the composed schema that is is traveled through is added to this set. For example if Animal has a discriminator petType and we pass in "Dog", and the class Dog allOf includes Animal, we move through Animal once using the discriminator, and pick Dog. Then in Dog, we will make an instance of the Animal class but this time we won't travel through its discriminator because we passed in _visited_composed_classes = (Animal,) delivery_limitations (AdSetDeliveryLimitations): [optional] # noqa: E501 geo_location (CreateAdSetGeoLocation): [optional] # noqa: E501 """ _check_type = kwargs.pop('_check_type', True) _spec_property_naming = kwargs.pop('_spec_property_naming', False) _path_to_item = kwargs.pop('_path_to_item', ()) _configuration = kwargs.pop('_configuration', None) _visited_composed_classes = kwargs.pop('_visited_composed_classes', ()) if args: for arg in args: if isinstance(arg, dict): kwargs.update(arg) else: raise ApiTypeError( "Invalid positional arguments=%s passed to %s. Remove those invalid positional arguments." % ( args, self.__class__.__name__, ), path_to_item=_path_to_item, valid_classes=(self.__class__,), ) self._data_store = {} self._check_type = _check_type self._spec_property_naming = _spec_property_naming self._path_to_item = _path_to_item self._configuration = _configuration self._visited_composed_classes = _visited_composed_classes + (self.__class__,) self.frequency_capping = frequency_capping for var_name, var_value in kwargs.items(): if var_name not in self.attribute_map and \ self._configuration is not None and \ self._configuration.discard_unknown_keys and \ self.additional_properties_type is None: # discard variable. continue setattr(self, var_name, var_value) if var_name in self.read_only_vars: raise ApiAttributeError(f"`{var_name}` is a read-only attribute. Use `from_openapi_data` to instantiate " f"class with read only attributes.")
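# ---------------------------------------------------------------------------
# Illustrative sketch (editor addition): only `frequency_capping` is required
# by the constructor above; `delivery_limitations` and `geo_location` are
# optional. The helper below takes an AdSetFrequencyCapping built elsewhere,
# since that model's own fields live in a separate generated module. to_dict()
# is the serialization helper inherited from the generated OpenApiModel base
# class, not defined in this file.
# ---------------------------------------------------------------------------
def _example_build_targeting(capping):
    """Sketch: wrap a pre-built AdSetFrequencyCapping into a targeting payload."""
    targeting = CreateAdSetTargeting(frequency_capping=capping)
    return targeting.to_dict()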
/azure-mgmt-sql-4.0.0b3.zip/azure-mgmt-sql-4.0.0b3/azure/mgmt/sql/operations/_database_recommended_actions_operations.py
from typing import Any, Callable, Dict, List, Optional, TypeVar from msrest import Serializer from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error from azure.core.pipeline import PipelineResponse from azure.core.pipeline.transport import HttpResponse from azure.core.rest import HttpRequest from azure.core.tracing.decorator import distributed_trace from azure.core.utils import case_insensitive_dict from azure.mgmt.core.exceptions import ARMErrorFormat from .. import models as _models from .._vendor import _convert_request, _format_url_section T = TypeVar('T') ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]] _SERIALIZER = Serializer() _SERIALIZER.client_side_validation = False def build_list_by_database_advisor_request( resource_group_name: str, server_name: str, database_name: str, advisor_name: str, subscription_id: str, **kwargs: Any ) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version = kwargs.pop('api_version', _params.pop('api-version', "2020-11-01-preview")) # type: str accept = _headers.pop('Accept', "application/json") # Construct URL _url = kwargs.pop("template_url", "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/advisors/{advisorName}/recommendedActions") # pylint: disable=line-too-long path_format_arguments = { "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'), "serverName": _SERIALIZER.url("server_name", server_name, 'str'), "databaseName": _SERIALIZER.url("database_name", database_name, 'str'), "advisorName": _SERIALIZER.url("advisor_name", advisor_name, 'str'), "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'), } _url = _format_url_section(_url, **path_format_arguments) # Construct parameters _params['api-version'] = _SERIALIZER.query("api_version", api_version, 'str') # Construct headers _headers['Accept'] = _SERIALIZER.header("accept", accept, 'str') return HttpRequest( method="GET", url=_url, params=_params, headers=_headers, **kwargs ) def build_get_request( resource_group_name: str, server_name: str, database_name: str, advisor_name: str, recommended_action_name: str, subscription_id: str, **kwargs: Any ) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version = kwargs.pop('api_version', _params.pop('api-version', "2020-11-01-preview")) # type: str accept = _headers.pop('Accept', "application/json") # Construct URL _url = kwargs.pop("template_url", "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/advisors/{advisorName}/recommendedActions/{recommendedActionName}") # pylint: disable=line-too-long path_format_arguments = { "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'), "serverName": _SERIALIZER.url("server_name", server_name, 'str'), "databaseName": _SERIALIZER.url("database_name", database_name, 'str'), "advisorName": _SERIALIZER.url("advisor_name", advisor_name, 'str'), "recommendedActionName": _SERIALIZER.url("recommended_action_name", recommended_action_name, 'str'), "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'), } _url = 
_format_url_section(_url, **path_format_arguments) # Construct parameters _params['api-version'] = _SERIALIZER.query("api_version", api_version, 'str') # Construct headers _headers['Accept'] = _SERIALIZER.header("accept", accept, 'str') return HttpRequest( method="GET", url=_url, params=_params, headers=_headers, **kwargs ) def build_update_request( resource_group_name: str, server_name: str, database_name: str, advisor_name: str, recommended_action_name: str, subscription_id: str, *, json: Optional[_models.RecommendedAction] = None, content: Any = None, **kwargs: Any ) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version = kwargs.pop('api_version', _params.pop('api-version', "2020-11-01-preview")) # type: str content_type = kwargs.pop('content_type', _headers.pop('Content-Type', None)) # type: Optional[str] accept = _headers.pop('Accept', "application/json") # Construct URL _url = kwargs.pop("template_url", "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/advisors/{advisorName}/recommendedActions/{recommendedActionName}") # pylint: disable=line-too-long path_format_arguments = { "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'), "serverName": _SERIALIZER.url("server_name", server_name, 'str'), "databaseName": _SERIALIZER.url("database_name", database_name, 'str'), "advisorName": _SERIALIZER.url("advisor_name", advisor_name, 'str'), "recommendedActionName": _SERIALIZER.url("recommended_action_name", recommended_action_name, 'str'), "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'), } _url = _format_url_section(_url, **path_format_arguments) # Construct parameters _params['api-version'] = _SERIALIZER.query("api_version", api_version, 'str') # Construct headers if content_type is not None: _headers['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str') _headers['Accept'] = _SERIALIZER.header("accept", accept, 'str') return HttpRequest( method="PATCH", url=_url, params=_params, headers=_headers, json=json, content=content, **kwargs ) class DatabaseRecommendedActionsOperations: """ .. warning:: **DO NOT** instantiate this class directly. Instead, you should access the following operations through :class:`~azure.mgmt.sql.SqlManagementClient`'s :attr:`database_recommended_actions` attribute. """ models = _models def __init__(self, *args, **kwargs): input_args = list(args) self._client = input_args.pop(0) if input_args else kwargs.pop("client") self._config = input_args.pop(0) if input_args else kwargs.pop("config") self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer") self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer") @distributed_trace def list_by_database_advisor( self, resource_group_name: str, server_name: str, database_name: str, advisor_name: str, **kwargs: Any ) -> List[_models.RecommendedAction]: """Gets list of Database Recommended Actions. :param resource_group_name: The name of the resource group that contains the resource. You can obtain this value from the Azure Resource Manager API or the portal. :type resource_group_name: str :param server_name: The name of the server. :type server_name: str :param database_name: The name of the database. :type database_name: str :param advisor_name: The name of the Database Advisor. 
:type advisor_name: str :keyword api_version: Api Version. Default value is "2020-11-01-preview". Note that overriding this default value may result in unsupported behavior. :paramtype api_version: str :keyword callable cls: A custom type or function that will be passed the direct response :return: list of RecommendedAction, or the result of cls(response) :rtype: list[~azure.mgmt.sql.models.RecommendedAction] :raises: ~azure.core.exceptions.HttpResponseError """ error_map = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError } error_map.update(kwargs.pop('error_map', {}) or {}) _headers = kwargs.pop("headers", {}) or {} _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version = kwargs.pop('api_version', _params.pop('api-version', "2020-11-01-preview")) # type: str cls = kwargs.pop('cls', None) # type: ClsType[List[_models.RecommendedAction]] request = build_list_by_database_advisor_request( resource_group_name=resource_group_name, server_name=server_name, database_name=database_name, advisor_name=advisor_name, subscription_id=self._config.subscription_id, api_version=api_version, template_url=self.list_by_database_advisor.metadata['url'], headers=_headers, params=_params, ) request = _convert_request(request) request.url = self._client.format_url(request.url) # type: ignore pipeline_response = self._client._pipeline.run( # type: ignore # pylint: disable=protected-access request, stream=False, **kwargs ) response = pipeline_response.http_response if response.status_code not in [200]: map_error(status_code=response.status_code, response=response, error_map=error_map) raise HttpResponseError(response=response, error_format=ARMErrorFormat) deserialized = self._deserialize('[RecommendedAction]', pipeline_response) if cls: return cls(pipeline_response, deserialized, {}) return deserialized list_by_database_advisor.metadata = {'url': "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/advisors/{advisorName}/recommendedActions"} # type: ignore @distributed_trace def get( self, resource_group_name: str, server_name: str, database_name: str, advisor_name: str, recommended_action_name: str, **kwargs: Any ) -> _models.RecommendedAction: """Gets a database recommended action. :param resource_group_name: The name of the resource group that contains the resource. You can obtain this value from the Azure Resource Manager API or the portal. :type resource_group_name: str :param server_name: The name of the server. :type server_name: str :param database_name: The name of the database. :type database_name: str :param advisor_name: The name of the Database Advisor. :type advisor_name: str :param recommended_action_name: The name of Database Recommended Action. :type recommended_action_name: str :keyword api_version: Api Version. Default value is "2020-11-01-preview". Note that overriding this default value may result in unsupported behavior. 
:paramtype api_version: str :keyword callable cls: A custom type or function that will be passed the direct response :return: RecommendedAction, or the result of cls(response) :rtype: ~azure.mgmt.sql.models.RecommendedAction :raises: ~azure.core.exceptions.HttpResponseError """ error_map = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError } error_map.update(kwargs.pop('error_map', {}) or {}) _headers = kwargs.pop("headers", {}) or {} _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version = kwargs.pop('api_version', _params.pop('api-version', "2020-11-01-preview")) # type: str cls = kwargs.pop('cls', None) # type: ClsType[_models.RecommendedAction] request = build_get_request( resource_group_name=resource_group_name, server_name=server_name, database_name=database_name, advisor_name=advisor_name, recommended_action_name=recommended_action_name, subscription_id=self._config.subscription_id, api_version=api_version, template_url=self.get.metadata['url'], headers=_headers, params=_params, ) request = _convert_request(request) request.url = self._client.format_url(request.url) # type: ignore pipeline_response = self._client._pipeline.run( # type: ignore # pylint: disable=protected-access request, stream=False, **kwargs ) response = pipeline_response.http_response if response.status_code not in [200]: map_error(status_code=response.status_code, response=response, error_map=error_map) raise HttpResponseError(response=response, error_format=ARMErrorFormat) deserialized = self._deserialize('RecommendedAction', pipeline_response) if cls: return cls(pipeline_response, deserialized, {}) return deserialized get.metadata = {'url': "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/advisors/{advisorName}/recommendedActions/{recommendedActionName}"} # type: ignore @distributed_trace def update( self, resource_group_name: str, server_name: str, database_name: str, advisor_name: str, recommended_action_name: str, parameters: _models.RecommendedAction, **kwargs: Any ) -> _models.RecommendedAction: """Updates a database recommended action. :param resource_group_name: The name of the resource group that contains the resource. You can obtain this value from the Azure Resource Manager API or the portal. :type resource_group_name: str :param server_name: The name of the server. :type server_name: str :param database_name: The name of the database. :type database_name: str :param advisor_name: The name of the Database Advisor. :type advisor_name: str :param recommended_action_name: The name of Database Recommended Action. :type recommended_action_name: str :param parameters: The requested recommended action resource state. :type parameters: ~azure.mgmt.sql.models.RecommendedAction :keyword api_version: Api Version. Default value is "2020-11-01-preview". Note that overriding this default value may result in unsupported behavior. 
:paramtype api_version: str :keyword callable cls: A custom type or function that will be passed the direct response :return: RecommendedAction, or the result of cls(response) :rtype: ~azure.mgmt.sql.models.RecommendedAction :raises: ~azure.core.exceptions.HttpResponseError """ error_map = { 401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError } error_map.update(kwargs.pop('error_map', {}) or {}) _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version = kwargs.pop('api_version', _params.pop('api-version', "2020-11-01-preview")) # type: str content_type = kwargs.pop('content_type', _headers.pop('Content-Type', "application/json")) # type: Optional[str] cls = kwargs.pop('cls', None) # type: ClsType[_models.RecommendedAction] _json = self._serialize.body(parameters, 'RecommendedAction') request = build_update_request( resource_group_name=resource_group_name, server_name=server_name, database_name=database_name, advisor_name=advisor_name, recommended_action_name=recommended_action_name, subscription_id=self._config.subscription_id, api_version=api_version, content_type=content_type, json=_json, template_url=self.update.metadata['url'], headers=_headers, params=_params, ) request = _convert_request(request) request.url = self._client.format_url(request.url) # type: ignore pipeline_response = self._client._pipeline.run( # type: ignore # pylint: disable=protected-access request, stream=False, **kwargs ) response = pipeline_response.http_response if response.status_code not in [200]: map_error(status_code=response.status_code, response=response, error_map=error_map) raise HttpResponseError(response=response, error_format=ARMErrorFormat) deserialized = self._deserialize('RecommendedAction', pipeline_response) if cls: return cls(pipeline_response, deserialized, {}) return deserialized update.metadata = {'url': "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Sql/servers/{serverName}/databases/{databaseName}/advisors/{advisorName}/recommendedActions/{recommendedActionName}"} # type: ignore
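# ---------------------------------------------------------------------------
# Illustrative sketch (editor addition): reaching these operations through the
# client attribute named in the class docstring. The azure-identity credential
# and the resource names are placeholders; SqlManagementClient(credential,
# subscription_id) is the usual mgmt-plane constructor.
# ---------------------------------------------------------------------------
def _example_list_recommended_actions():
    from azure.identity import DefaultAzureCredential  # assumes azure-identity is installed
    from azure.mgmt.sql import SqlManagementClient

    client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
    # Returns a plain list of RecommendedAction models (see the method above).
    return client.database_recommended_actions.list_by_database_advisor(
        resource_group_name="my-resource-group",
        server_name="my-server",
        database_name="my-database",
        advisor_name="CreateIndex",  # placeholder advisor name
    )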
/podder-task-base-0.9.1.tar.gz/podder-task-base-0.9.1/podder_task_base/repositories/base/base_repository.py
from typing import Dict, List, Union

from podder_task_base.context import Context
from sqlalchemy.ext.declarative import DeclarativeMeta
from sqlalchemy.orm.session import Session


class BaseRepository(object):
    model_class = DeclarativeMeta

    @property
    def session(self) -> Session:
        raise NotImplementedError('session is not implemented. Please override.')

    @property
    def read_only_session(self) -> Session:
        raise NotImplementedError('read_only_session is not implemented. Please override.')

    def __init__(self, context: Context) -> None:
        self.context = context

    def all(self) -> List[DeclarativeMeta]:
        objects = self.read_only_session.query(self.model_class).all()
        self.session.commit()
        return objects

    def create(self, fields: Dict) -> DeclarativeMeta:
        try:
            model = self.model_class(**fields)
            model = self.session.merge(model)
            self.session.add(model)
            self.session.commit()
        except:
            self.session.rollback()
            raise
        return model

    def update(self, model: DeclarativeMeta, fields: Dict) -> DeclarativeMeta:
        try:
            for key in fields:
                setattr(model, key, fields[key])
            model = self.session.merge(model)
            self.session.add(model)
            self.session.commit()
        except:
            self.session.rollback()
            raise
        return model

    def delete(self, model: DeclarativeMeta) -> None:
        try:
            model = self.session.merge(model)
            self.session.delete(model)
            self.session.commit()
        except:
            self.session.rollback()
            raise

    def find(self, primary_id: int) -> DeclarativeMeta:
        return self.read_only_session.query(
            self.model_class).filter(self.model_class.id == primary_id).one_or_none()

    def exist(self, primary_id: Union[int, str]) -> bool:
        # Query.exists() returns a selectable, not a result, so it has to be
        # wrapped in another query before .scalar() can be called on it.
        query = self.read_only_session.query(
            self.model_class).filter(self.model_class.id == primary_id)
        return self.read_only_session.query(query.exists()).scalar()

    def save(self) -> None:
        self.session.commit()
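# ---------------------------------------------------------------------------
# Illustrative sketch (editor addition): what a concrete repository looks
# like. The base class above deliberately raises on `session` /
# `read_only_session`, so subclasses must override both and point
# `model_class` at a mapped model. The `context.session` attributes used here
# are assumptions for demonstration.
# ---------------------------------------------------------------------------
class ExampleRepository(BaseRepository):
    model_class = DeclarativeMeta  # stand-in; a real repository uses its mapped model class

    @property
    def session(self) -> Session:
        return self.context.session  # assumed writable session exposed by Context

    @property
    def read_only_session(self) -> Session:
        return self.context.read_only_session  # assumed read-only session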
/stf-client-0.1.0.tar.gz/stf-client-0.1.0/docs/DeviceResponse.md
# DeviceResponse ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **success** | **bool** | | **description** | **str** | | **device** | **bool, date, datetime, dict, float, int, list, str, none_type** | | **any string name** | **bool, date, datetime, dict, float, int, list, str, none_type** | any string name can be used but the value must be the correct type | [optional] [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
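## Example

A minimal construction sketch (editor addition). It assumes the keyword
constructor that OpenAPI-generated models expose and the package's usual
`stf_client.model` import path; all three required properties from the table
above are supplied:

```python
from stf_client.model.device_response import DeviceResponse

response = DeviceResponse(
    success=True,
    description="Device found",
    device={"serial": "emulator-5554"},  # `device` accepts any JSON-compatible value
)
```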
/python_telegram_bot_raw-20.4-py3-none-any.whl/telegram/_files/inputmedia.py
"""Base class for Telegram InputMedia Objects.""" from typing import Optional, Sequence, Tuple, Union from telegram._files.animation import Animation from telegram._files.audio import Audio from telegram._files.document import Document from telegram._files.inputfile import InputFile from telegram._files.photosize import PhotoSize from telegram._files.video import Video from telegram._messageentity import MessageEntity from telegram._telegramobject import TelegramObject from telegram._utils.argumentparsing import parse_sequence_arg from telegram._utils.defaultvalue import DEFAULT_NONE from telegram._utils.files import parse_file_input from telegram._utils.types import FileInput, JSONDict, ODVInput from telegram._utils.warnings_transition import ( warn_about_deprecated_attr_in_property, warn_about_thumb_return_thumbnail, ) from telegram.constants import InputMediaType MediaType = Union[Animation, Audio, Document, PhotoSize, Video] class InputMedia(TelegramObject): """ Base class for Telegram InputMedia Objects. .. versionchanged:: 20.0 Added arguments and attributes :attr:`type`, :attr:`media`, :attr:`caption`, :attr:`caption_entities`, :paramref:`parse_mode`. .. seealso:: :wiki:`Working with Files and Media <Working-with-Files-and-Media>` Args: media_type (:obj:`str`): Type of media that the instance represents. media (:obj:`str` | :term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | \ :class:`telegram.Animation` | :class:`telegram.Audio` | \ :class:`telegram.Document` | :class:`telegram.PhotoSize` | \ :class:`telegram.Video`): File to send. |fileinputnopath| Lastly you can pass an existing telegram media object of the corresponding type to send. caption (:obj:`str`, optional): Caption of the media to be sent, 0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after entities parsing. caption_entities (Sequence[:class:`telegram.MessageEntity`], optional): |caption_entities| .. versionchanged:: 20.0 |sequenceclassargs| parse_mode (:obj:`str`, optional): |parse_mode| Attributes: type (:obj:`str`): Type of the input media. media (:obj:`str` | :class:`telegram.InputFile`): Media to send. caption (:obj:`str`): Optional. Caption of the media to be sent, 0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after entities parsing. parse_mode (:obj:`str`): Optional. |parse_mode| caption_entities (Tuple[:class:`telegram.MessageEntity`]): Optional. |captionentitiesattr| .. versionchanged:: 20.0 * |tupleclassattrs| * |alwaystuple| """ __slots__ = ("caption", "caption_entities", "media", "parse_mode", "type") def __init__( self, media_type: str, media: Union[str, InputFile, MediaType], caption: Optional[str] = None, caption_entities: Optional[Sequence[MessageEntity]] = None, parse_mode: ODVInput[str] = DEFAULT_NONE, *, api_kwargs: Optional[JSONDict] = None, ): super().__init__(api_kwargs=api_kwargs) self.type: str = media_type self.media: Union[str, InputFile, Animation, Audio, Document, PhotoSize, Video] = media self.caption: Optional[str] = caption self.caption_entities: Tuple[MessageEntity, ...] = parse_sequence_arg(caption_entities) self.parse_mode: ODVInput[str] = parse_mode self._freeze() @staticmethod def _parse_thumb_input(thumb: Optional[FileInput]) -> Optional[Union[str, InputFile]]: # We use local_mode=True because we don't have access to the actual setting and want # things to work in local mode. 
        return (
            parse_file_input(thumb, attach=True, local_mode=True) if thumb is not None else thumb
        )


class InputMediaAnimation(InputMedia):
    """Represents an animation file (GIF or H.264/MPEG-4 AVC video without sound) to be sent.

    Note:
        When using a :class:`telegram.Animation` for the :attr:`media` attribute, it will take
        the width, height and duration from that animation, unless otherwise specified with the
        optional arguments.

    .. seealso:: :wiki:`Working with Files and Media <Working-with-Files-and-Media>`

    Args:
        media (:obj:`str` | :term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | \
            :class:`telegram.Animation`): File to send. |fileinputnopath|
            Lastly you can pass an existing :class:`telegram.Animation` object to send.

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.
        filename (:obj:`str`, optional): Custom file name for the animation, when uploading a
            new file. Convenience parameter, useful e.g. when sending files generated by the
            :obj:`tempfile` module.

            .. versionadded:: 13.1
        thumb (:term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | :obj:`str`, \
            optional): |thumbdocstringnopath|

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.

            .. deprecated:: 20.2
               |thumbargumentdeprecation| :paramref:`thumbnail`.
        caption (:obj:`str`, optional): Caption of the animation to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters
            after entities parsing.
        parse_mode (:obj:`str`, optional): |parse_mode|
        caption_entities (Sequence[:class:`telegram.MessageEntity`], optional): |caption_entities|

            .. versionchanged:: 20.0
                |sequenceclassargs|

        width (:obj:`int`, optional): Animation width.
        height (:obj:`int`, optional): Animation height.
        duration (:obj:`int`, optional): Animation duration in seconds.
        has_spoiler (:obj:`bool`, optional): Pass :obj:`True`, if the animation needs to be
            covered with a spoiler animation.

            .. versionadded:: 20.0
        thumbnail (:term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | :obj:`str`, \
            optional): |thumbdocstringnopath|

            .. versionadded:: 20.2

    Attributes:
        type (:obj:`str`): :tg-const:`telegram.constants.InputMediaType.ANIMATION`.
        media (:obj:`str` | :class:`telegram.InputFile`): Animation to send.
        caption (:obj:`str`): Optional. Caption of the animation to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters
            after entities parsing.
        parse_mode (:obj:`str`): Optional. The parse mode to use for text formatting.
        caption_entities (Tuple[:class:`telegram.MessageEntity`]): Optional. |captionentitiesattr|

            .. versionchanged:: 20.0

                * |tupleclassattrs|
                * |alwaystuple|
        width (:obj:`int`): Optional. Animation width.
        height (:obj:`int`): Optional. Animation height.
        duration (:obj:`int`): Optional. Animation duration in seconds.
        has_spoiler (:obj:`bool`): Optional. :obj:`True`, if the animation is covered with a
            spoiler animation.

            .. versionadded:: 20.0
        thumbnail (:class:`telegram.InputFile`): Optional. |thumbdocstringbase|

            .. versionadded:: 20.2
    """

    __slots__ = ("duration", "height", "width", "has_spoiler", "thumbnail")

    def __init__(
        self,
        media: Union[FileInput, Animation],
        thumb: Optional[FileInput] = None,
        caption: Optional[str] = None,
        parse_mode: ODVInput[str] = DEFAULT_NONE,
        width: Optional[int] = None,
        height: Optional[int] = None,
        duration: Optional[int] = None,
        caption_entities: Optional[Sequence[MessageEntity]] = None,
        filename: Optional[str] = None,
        has_spoiler: Optional[bool] = None,
        thumbnail: Optional[FileInput] = None,
        *,
        api_kwargs: Optional[JSONDict] = None,
    ):
        if isinstance(media, Animation):
            width = media.width if width is None else width
            height = media.height if height is None else height
            duration = media.duration if duration is None else duration
            media = media.file_id
        else:
            # We use local_mode=True because we don't have access to the actual setting and want
            # things to work in local mode.
            media = parse_file_input(media, filename=filename, attach=True, local_mode=True)

        thumbnail = warn_about_thumb_return_thumbnail(deprecated_arg=thumb, new_arg=thumbnail)
        super().__init__(
            InputMediaType.ANIMATION,
            media,
            caption,
            caption_entities,
            parse_mode,
            api_kwargs=api_kwargs,
        )
        with self._unfrozen():
            self.thumbnail: Optional[Union[str, InputFile]] = self._parse_thumb_input(thumbnail)
            self.width: Optional[int] = width
            self.height: Optional[int] = height
            self.duration: Optional[int] = duration
            self.has_spoiler: Optional[bool] = has_spoiler

    @property
    def thumb(self) -> Optional[Union[str, InputFile]]:
        """:class:`telegram.InputFile`: Optional. |thumbdocstringbase|

        .. deprecated:: 20.2
           |thumbattributedeprecation| :attr:`thumbnail`.
        """
        warn_about_deprecated_attr_in_property(
            deprecated_attr_name="thumb",
            new_attr_name="thumbnail",
            bot_api_version="6.6",
        )
        return self.thumbnail


class InputMediaPhoto(InputMedia):
    """Represents a photo to be sent.

    .. seealso:: :wiki:`Working with Files and Media <Working-with-Files-and-Media>`

    Args:
        media (:obj:`str` | :term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | \
            :class:`telegram.PhotoSize`): File to send. |fileinputnopath|
            Lastly you can pass an existing :class:`telegram.PhotoSize` object to send.

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.
        filename (:obj:`str`, optional): Custom file name for the photo, when uploading a
            new file. Convenience parameter, useful e.g. when sending files generated by the
            :obj:`tempfile` module.

            .. versionadded:: 13.1
        caption (:obj:`str`, optional): Caption of the photo to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after
            entities parsing.
        parse_mode (:obj:`str`, optional): |parse_mode|
        caption_entities (Sequence[:class:`telegram.MessageEntity`], optional): |caption_entities|

            .. versionchanged:: 20.0
                |sequenceclassargs|

        has_spoiler (:obj:`bool`, optional): Pass :obj:`True`, if the photo needs to be covered
            with a spoiler animation.

            .. versionadded:: 20.0

    Attributes:
        type (:obj:`str`): :tg-const:`telegram.constants.InputMediaType.PHOTO`.
        media (:obj:`str` | :class:`telegram.InputFile`): Photo to send.
        caption (:obj:`str`): Optional. Caption of the photo to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after
            entities parsing.
        parse_mode (:obj:`str`): Optional. |parse_mode|
        caption_entities (Tuple[:class:`telegram.MessageEntity`]): Optional. |captionentitiesattr|

            .. versionchanged:: 20.0

                * |tupleclassattrs|
                * |alwaystuple|
        has_spoiler (:obj:`bool`): Optional. :obj:`True`, if the photo is covered with a
            spoiler animation.

            .. versionadded:: 20.0
    """

    __slots__ = ("has_spoiler",)

    def __init__(
        self,
        media: Union[FileInput, PhotoSize],
        caption: Optional[str] = None,
        parse_mode: ODVInput[str] = DEFAULT_NONE,
        caption_entities: Optional[Sequence[MessageEntity]] = None,
        filename: Optional[str] = None,
        has_spoiler: Optional[bool] = None,
        *,
        api_kwargs: Optional[JSONDict] = None,
    ):
        # We use local_mode=True because we don't have access to the actual setting and want
        # things to work in local mode.
        media = parse_file_input(media, PhotoSize, filename=filename, attach=True, local_mode=True)
        super().__init__(
            InputMediaType.PHOTO,
            media,
            caption,
            caption_entities,
            parse_mode,
            api_kwargs=api_kwargs,
        )

        with self._unfrozen():
            self.has_spoiler: Optional[bool] = has_spoiler


class InputMediaVideo(InputMedia):
    """Represents a video to be sent.

    .. seealso:: :wiki:`Working with Files and Media <Working-with-Files-and-Media>`

    Note:
        * When using a :class:`telegram.Video` for the :attr:`media` attribute, it will take
          the width, height and duration from that video, unless otherwise specified with the
          optional arguments.
        * :paramref:`thumb` will be ignored for small video files, for which Telegram can
          easily generate thumbnails. However, this behaviour is undocumented and might be
          changed by Telegram.

    Args:
        media (:obj:`str` | :term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | \
            :class:`telegram.Video`): File to send. |fileinputnopath|
            Lastly you can pass an existing :class:`telegram.Video` object to send.

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.
        filename (:obj:`str`, optional): Custom file name for the video, when uploading a
            new file. Convenience parameter, useful e.g. when sending files generated by the
            :obj:`tempfile` module.

            .. versionadded:: 13.1
        caption (:obj:`str`, optional): Caption of the video to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after
            entities parsing.
        parse_mode (:obj:`str`, optional): |parse_mode|
        caption_entities (Sequence[:class:`telegram.MessageEntity`], optional): |caption_entities|

            .. versionchanged:: 20.0
                |sequenceclassargs|

        width (:obj:`int`, optional): Video width.
        height (:obj:`int`, optional): Video height.
        duration (:obj:`int`, optional): Video duration in seconds.
        supports_streaming (:obj:`bool`, optional): Pass :obj:`True`, if the uploaded video is
            suitable for streaming.
        thumb (:term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | :obj:`str`, \
            optional): |thumbdocstringnopath|

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.

            .. deprecated:: 20.2
               |thumbargumentdeprecation| :paramref:`thumbnail`.
        has_spoiler (:obj:`bool`, optional): Pass :obj:`True`, if the video needs to be covered
            with a spoiler animation.

            .. versionadded:: 20.0
        thumbnail (:term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | :obj:`str`, \
            optional): |thumbdocstringnopath|

            .. versionadded:: 20.2

    Attributes:
        type (:obj:`str`): :tg-const:`telegram.constants.InputMediaType.VIDEO`.
        media (:obj:`str` | :class:`telegram.InputFile`): Video file to send.
        caption (:obj:`str`): Optional. Caption of the video to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after
            entities parsing.
        parse_mode (:obj:`str`): Optional. |parse_mode|
        caption_entities (Tuple[:class:`telegram.MessageEntity`]): Optional. |captionentitiesattr|

            .. versionchanged:: 20.0

                * |tupleclassattrs|
                * |alwaystuple|
        width (:obj:`int`): Optional. Video width.
        height (:obj:`int`): Optional. Video height.
        duration (:obj:`int`): Optional. Video duration in seconds.
        supports_streaming (:obj:`bool`): Optional. :obj:`True`, if the uploaded video is
            suitable for streaming.
        has_spoiler (:obj:`bool`): Optional. :obj:`True`, if the video is covered with a
            spoiler animation.

            .. versionadded:: 20.0
        thumbnail (:class:`telegram.InputFile`): Optional. |thumbdocstringbase|

            .. versionadded:: 20.2
    """

    __slots__ = (
        "duration",
        "height",
        "supports_streaming",
        "width",
        "has_spoiler",
        "thumbnail",
    )

    def __init__(
        self,
        media: Union[FileInput, Video],
        caption: Optional[str] = None,
        width: Optional[int] = None,
        height: Optional[int] = None,
        duration: Optional[int] = None,
        supports_streaming: Optional[bool] = None,
        parse_mode: ODVInput[str] = DEFAULT_NONE,
        thumb: Optional[FileInput] = None,
        caption_entities: Optional[Sequence[MessageEntity]] = None,
        filename: Optional[str] = None,
        has_spoiler: Optional[bool] = None,
        thumbnail: Optional[FileInput] = None,
        *,
        api_kwargs: Optional[JSONDict] = None,
    ):
        if isinstance(media, Video):
            width = width if width is not None else media.width
            height = height if height is not None else media.height
            duration = duration if duration is not None else media.duration
            media = media.file_id
        else:
            # We use local_mode=True because we don't have access to the actual setting and want
            # things to work in local mode.
            media = parse_file_input(media, filename=filename, attach=True, local_mode=True)

        thumbnail = warn_about_thumb_return_thumbnail(deprecated_arg=thumb, new_arg=thumbnail)
        super().__init__(
            InputMediaType.VIDEO,
            media,
            caption,
            caption_entities,
            parse_mode,
            api_kwargs=api_kwargs,
        )

        with self._unfrozen():
            self.width: Optional[int] = width
            self.height: Optional[int] = height
            self.duration: Optional[int] = duration
            self.thumbnail: Optional[Union[str, InputFile]] = self._parse_thumb_input(thumbnail)
            self.supports_streaming: Optional[bool] = supports_streaming
            self.has_spoiler: Optional[bool] = has_spoiler

    @property
    def thumb(self) -> Optional[Union[str, InputFile]]:
        """:class:`telegram.InputFile`: Optional. |thumbdocstringbase|

        .. deprecated:: 20.2
           |thumbattributedeprecation| :attr:`thumbnail`.
        """
        warn_about_deprecated_attr_in_property(
            deprecated_attr_name="thumb",
            new_attr_name="thumbnail",
            bot_api_version="6.6",
        )
        return self.thumbnail


class InputMediaAudio(InputMedia):
    """Represents an audio file to be treated as music to be sent.

    .. seealso:: :wiki:`Working with Files and Media <Working-with-Files-and-Media>`

    Note:
        When using a :class:`telegram.Audio` for the :attr:`media` attribute, it will take the
        duration, performer and title from that audio, unless otherwise specified with the
        optional arguments.

    Args:
        media (:obj:`str` | :term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | \
            :class:`telegram.Audio`): File to send. |fileinputnopath|
            Lastly you can pass an existing :class:`telegram.Audio` object to send.

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.
        filename (:obj:`str`, optional): Custom file name for the audio, when uploading a
            new file. Convenience parameter, useful e.g. when sending files generated by the
            :obj:`tempfile` module.

            .. versionadded:: 13.1
        caption (:obj:`str`, optional): Caption of the audio to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after
            entities parsing.
        parse_mode (:obj:`str`, optional): |parse_mode|
        caption_entities (Sequence[:class:`telegram.MessageEntity`], optional): |caption_entities|

            .. versionchanged:: 20.0
                |sequenceclassargs|

        duration (:obj:`int`, optional): Duration of the audio in seconds as defined by sender.
        performer (:obj:`str`, optional): Performer of the audio as defined by sender or by
            audio tags.
        title (:obj:`str`, optional): Title of the audio as defined by sender or by audio tags.
        thumb (:term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | :obj:`str`, \
            optional): |thumbdocstringnopath|

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.

            .. deprecated:: 20.2
               |thumbargumentdeprecation| :paramref:`thumbnail`.
        thumbnail (:term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | :obj:`str`, \
            optional): |thumbdocstringnopath|

            .. versionadded:: 20.2

    Attributes:
        type (:obj:`str`): :tg-const:`telegram.constants.InputMediaType.AUDIO`.
        media (:obj:`str` | :class:`telegram.InputFile`): Audio file to send.
        caption (:obj:`str`): Optional. Caption of the audio to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after
            entities parsing.
        parse_mode (:obj:`str`): Optional. |parse_mode|
        caption_entities (Tuple[:class:`telegram.MessageEntity`]): Optional. |captionentitiesattr|

            .. versionchanged:: 20.0

                * |tupleclassattrs|
                * |alwaystuple|
        duration (:obj:`int`): Optional. Duration of the audio in seconds.
        performer (:obj:`str`): Optional. Performer of the audio as defined by sender or by
            audio tags.
        title (:obj:`str`): Optional. Title of the audio as defined by sender or by audio tags.
        thumbnail (:class:`telegram.InputFile`): Optional. |thumbdocstringbase|

            .. versionadded:: 20.2
    """

    __slots__ = ("duration", "performer", "title", "thumbnail")

    def __init__(
        self,
        media: Union[FileInput, Audio],
        thumb: Optional[FileInput] = None,
        caption: Optional[str] = None,
        parse_mode: ODVInput[str] = DEFAULT_NONE,
        duration: Optional[int] = None,
        performer: Optional[str] = None,
        title: Optional[str] = None,
        caption_entities: Optional[Sequence[MessageEntity]] = None,
        filename: Optional[str] = None,
        thumbnail: Optional[FileInput] = None,
        *,
        api_kwargs: Optional[JSONDict] = None,
    ):
        if isinstance(media, Audio):
            duration = media.duration if duration is None else duration
            performer = media.performer if performer is None else performer
            title = media.title if title is None else title
            media = media.file_id
        else:
            # We use local_mode=True because we don't have access to the actual setting and want
            # things to work in local mode.
            media = parse_file_input(media, filename=filename, attach=True, local_mode=True)

        thumbnail = warn_about_thumb_return_thumbnail(deprecated_arg=thumb, new_arg=thumbnail)
        super().__init__(
            InputMediaType.AUDIO,
            media,
            caption,
            caption_entities,
            parse_mode,
            api_kwargs=api_kwargs,
        )

        with self._unfrozen():
            self.thumbnail: Optional[Union[str, InputFile]] = self._parse_thumb_input(thumbnail)
            self.duration: Optional[int] = duration
            self.title: Optional[str] = title
            self.performer: Optional[str] = performer

    @property
    def thumb(self) -> Optional[Union[str, InputFile]]:
        """:class:`telegram.InputFile`: Optional. |thumbdocstringbase|

        .. deprecated:: 20.2
           |thumbattributedeprecation| :attr:`thumbnail`.
        """
        warn_about_deprecated_attr_in_property(
            deprecated_attr_name="thumb",
            new_attr_name="thumbnail",
            bot_api_version="6.6",
        )
        return self.thumbnail


class InputMediaDocument(InputMedia):
    """Represents a general file to be sent.

    .. seealso:: :wiki:`Working with Files and Media <Working-with-Files-and-Media>`

    Args:
        media (:obj:`str` | :term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | \
            :class:`telegram.Document`): File to send. |fileinputnopath|
            Lastly you can pass an existing :class:`telegram.Document` object to send.

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.
        filename (:obj:`str`, optional): Custom file name for the document, when uploading a
            new file. Convenience parameter, useful e.g. when sending files generated by the
            :obj:`tempfile` module.

            .. versionadded:: 13.1
        caption (:obj:`str`, optional): Caption of the document to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after
            entities parsing.
        parse_mode (:obj:`str`, optional): |parse_mode|
        caption_entities (Sequence[:class:`telegram.MessageEntity`], optional): |caption_entities|

            .. versionchanged:: 20.0
                |sequenceclassargs|

        thumb (:term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | :obj:`str`, \
            optional): |thumbdocstringnopath|

            .. versionchanged:: 13.2
               Accept :obj:`bytes` as input.

            .. deprecated:: 20.2
               |thumbargumentdeprecation| :paramref:`thumbnail`.
        disable_content_type_detection (:obj:`bool`, optional): Disables automatic server-side
            content type detection for files uploaded using multipart/form-data. Always
            :obj:`True`, if the document is sent as part of an album.
        thumbnail (:term:`file object` | :obj:`bytes` | :class:`pathlib.Path` | :obj:`str`, \
            optional): |thumbdocstringnopath|

            .. versionadded:: 20.2

    Attributes:
        type (:obj:`str`): :tg-const:`telegram.constants.InputMediaType.DOCUMENT`.
        media (:obj:`str` | :class:`telegram.InputFile`): File to send.
        caption (:obj:`str`): Optional. Caption of the document to be sent,
            0-:tg-const:`telegram.constants.MessageLimit.CAPTION_LENGTH` characters after
            entities parsing.
        parse_mode (:obj:`str`): Optional. |parse_mode|
        caption_entities (Tuple[:class:`telegram.MessageEntity`]): Optional. |captionentitiesattr|

            .. versionchanged:: 20.0

                * |tupleclassattrs|
                * |alwaystuple|
        disable_content_type_detection (:obj:`bool`): Optional. Disables automatic server-side
            content type detection for files uploaded using multipart/form-data. Always
            :obj:`True`, if the document is sent as part of an album.
        thumbnail (:class:`telegram.InputFile`): Optional. |thumbdocstringbase|

            .. versionadded:: 20.2
    """

    __slots__ = ("disable_content_type_detection", "thumbnail")

    def __init__(
        self,
        media: Union[FileInput, Document],
        thumb: Optional[FileInput] = None,
        caption: Optional[str] = None,
        parse_mode: ODVInput[str] = DEFAULT_NONE,
        disable_content_type_detection: Optional[bool] = None,
        caption_entities: Optional[Sequence[MessageEntity]] = None,
        filename: Optional[str] = None,
        thumbnail: Optional[FileInput] = None,
        *,
        api_kwargs: Optional[JSONDict] = None,
    ):
        # We use local_mode=True because we don't have access to the actual setting and want
        # things to work in local mode.
        media = parse_file_input(media, Document, filename=filename, attach=True, local_mode=True)

        thumbnail = warn_about_thumb_return_thumbnail(deprecated_arg=thumb, new_arg=thumbnail)
        super().__init__(
            InputMediaType.DOCUMENT,
            media,
            caption,
            caption_entities,
            parse_mode,
            api_kwargs=api_kwargs,
        )

        with self._unfrozen():
            self.thumbnail: Optional[Union[str, InputFile]] = self._parse_thumb_input(thumbnail)
            self.disable_content_type_detection: Optional[bool] = disable_content_type_detection

    @property
    def thumb(self) -> Optional[Union[str, InputFile]]:
        """:class:`telegram.InputFile`: Optional. |thumbdocstringbase|

        .. deprecated:: 20.2
           |thumbattributedeprecation| :attr:`thumbnail`.
        """
        warn_about_deprecated_attr_in_property(
            deprecated_attr_name="thumb",
            new_attr_name="thumbnail",
            bot_api_version="6.6",
        )
        return self.thumbnail
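# A minimal usage sketch, assuming python-telegram-bot v20's async Bot API:
# InputMedia* objects are built locally and passed to Bot.send_media_group().
# The bot token, chat id, URL and file path below are hypothetical placeholders.
if __name__ == "__main__":
    import asyncio

    from telegram import Bot

    async def _demo() -> None:
        bot = Bot("123456:HYPOTHETICAL-TOKEN")
        async with bot:
            media = [
                # A plain URL or file_id string is passed through unchanged.
                InputMediaPhoto("https://example.com/photo.jpg", caption="First item"),
                # File objects are wrapped into InputFile by parse_file_input().
                InputMediaVideo(open("clip.mp4", "rb"), supports_streaming=True),
            ]
            await bot.send_media_group(chat_id=123456, media=media)

    asyncio.run(_demo())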
PypiClean
/ixnetwork_restpy-1.1.10.tar.gz/ixnetwork_restpy-1.1.10/ixnetwork_restpy/testplatform/sessions/ixnetwork/topology/cfmsimulatedmp_d0096b7abd3340f4c8bdf5e2ba2578fb.py
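# A minimal usage sketch for the class defined below, assuming the standard
# ixnetwork_restpy SessionAssistant workflow; the server address, credentials
# and the parent object in the find() chain are hypothetical placeholders
# (the real parent chain depends on your NGPF configuration tree):
#
#     from ixnetwork_restpy import SessionAssistant
#
#     session = SessionAssistant(
#         IpAddress="127.0.0.1", UserName="admin", Password="admin"
#     )
#     ixnetwork = session.Ixnetwork
#     # cfmSimulatedMp resources are system-managed, so they are retrieved
#     # with find() rather than created with add(); an empty find() matches all.
#     simulated_mps = some_parent_node.CfmSimulatedMp.find()
#     for mp in simulated_mps:
#         print(mp.Name, mp.DescriptiveName)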
import sys

from ixnetwork_restpy.base import Base
from ixnetwork_restpy.files import Files

if sys.version_info >= (3, 5):
    from typing import List, Any, Union


class CfmSimulatedMp(Base):
    """Simulated Node Information
    The CfmSimulatedMp class encapsulates a list of cfmSimulatedMp resources that are managed by the system.
    A list of resources can be retrieved from the server using the CfmSimulatedMp.find() method.
    """

    __slots__ = ()
    _SDM_NAME = "cfmSimulatedMp"
    _SDM_ATT_MAP = {
        "Active": "active",
        "AisEnableUnicastMac": "aisEnableUnicastMac",
        "AisInterval": "aisInterval",
        "AisMode": "aisMode",
        "AisPriority": "aisPriority",
        "AisUnicastMac": "aisUnicastMac",
        "AutoDmTimeout": "autoDmTimeout",
        "AutoDmTimer": "autoDmTimer",
        "AutoLbIteration": "autoLbIteration",
        "AutoLbTimeoutInSec": "autoLbTimeoutInSec",
        "AutoLbTimerInSec": "autoLbTimerInSec",
        "AutoLmIteration": "autoLmIteration",
        "AutoLmTimeout": "autoLmTimeout",
        "AutoLmTimer": "autoLmTimer",
        "AutoLtIteration": "autoLtIteration",
        "AutoLtTimeoutInSec": "autoLtTimeoutInSec",
        "AutoLtTimerInSec": "autoLtTimerInSec",
        "AutoLtTtl": "autoLtTtl",
        "AutodmIteration": "autodmIteration",
        "CVlanId": "cVlanId",
        "CVlanPriority": "cVlanPriority",
        "CVlanTpid": "cVlanTpid",
        "CciInterval": "cciInterval",
        "CcmLmmTxFcf": "ccmLmmTxFcf",
        "CcmLmmTxFcfStepPer100mSec": "ccmLmmTxFcfStepPer100mSec",
        "CcmPriority": "ccmPriority",
        "CcmRxFcb": "ccmRxFcb",
        "CcmRxFcbStepPer100mSec": "ccmRxFcbStepPer100mSec",
        "ChassisId": "chassisId",
        "ChassisIdLength": "chassisIdLength",
        "ChassisIdSubType": "chassisIdSubType",
        "Count": "count",
        "DataTlvLength": "dataTlvLength",
        "DataTlvValue": "dataTlvValue",
        "DescriptiveName": "descriptiveName",
        "DmAllRemoteMeps": "dmAllRemoteMeps",
        "DmDestinationMacAddress": "dmDestinationMacAddress",
        "DmMethod": "dmMethod",
        "DmPriority": "dmPriority",
        "Enable1slRx": "enable1slRx",
        "EnableAisRx": "enableAisRx",
        "EnableAutoDm": "enableAutoDm",
        "EnableAutoLb": "enableAutoLb",
        "EnableAutoLm": "enableAutoLm",
        "EnableAutoLt": "enableAutoLt",
        "EnableDataTlv": "enableDataTlv",
        "EnableInterfaceStatusTlv": "enableInterfaceStatusTlv",
        "EnableLckRx": "enableLckRx",
        "EnableLmCounterUpdate": "enableLmCounterUpdate",
        "EnableOrganizationSpecificTlv": "enableOrganizationSpecificTlv",
        "EnablePortStatusTlv": "enablePortStatusTlv",
        "EnableSenderIdTlv": "enableSenderIdTlv",
        "EnableSlmRx": "enableSlmRx",
        "EnableTstRx": "enableTstRx",
        "EnableVlan": "enableVlan",
        "InterRemoteMepRxIncrementStep": "interRemoteMepRxIncrementStep",
        "InterRemoteMepTxIncrementStep": "interRemoteMepTxIncrementStep",
        "LbAllRemoteMeps": "lbAllRemoteMeps",
        "LbDestinationMacAddress": "lbDestinationMacAddress",
        "LbmPriority": "lbmPriority",
        "LckEnableUnicastMac": "lckEnableUnicastMac",
        "LckInterval": "lckInterval",
        "LckMode": "lckMode",
        "LckPriority": "lckPriority",
        "LckSupportAisGeneration": "lckSupportAisGeneration",
        "LckUnicastMac": "lckUnicastMac",
        "LmAllRemoteMeps": "lmAllRemoteMeps",
        "LmDestinationMacAddress": "lmDestinationMacAddress",
        "LmMethodType": "lmMethodType",
        "LmmPriority": "lmmPriority",
        "LmrPriority": "lmrPriority",
        "LmrRxFcf": "lmrRxFcf",
        "LmrRxFcfStepPer100mSec": "lmrRxFcfStepPer100mSec",
        "LmrTxFcb": "lmrTxFcb",
        "LmrTxFcbStepPer100mSec": "lmrTxFcbStepPer100mSec",
        "LtAllRemoteMeps": "ltAllRemoteMeps",
        "LtDestinationMacAddress": "ltDestinationMacAddress",
        "LtmPriority": "ltmPriority",
        "ManagementAddress": "managementAddress",
        "ManagementAddressDomain": "managementAddressDomain",
        "ManagementAddressDomainLength": "managementAddressDomainLength",
        "ManagementAddressLength": "managementAddressLength",
"MdMegLevel": "mdMegLevel", "MdName": "mdName", "MdNameFormat": "mdNameFormat", "MegId": "megId", "MegIdFormat": "megIdFormat", "MepId": "mepId", "MpType": "mpType", "Name": "name", "NumberOfCustomTLVs": "numberOfCustomTLVs", "NumberOfSlm": "numberOfSlm", "OrganizationSpecificTlvLength": "organizationSpecificTlvLength", "OrganizationSpecificTlvValue": "organizationSpecificTlvValue", "OverrideVlanPriority": "overrideVlanPriority", "Rdi": "rdi", "SVlanId": "sVlanId", "SVlanPriority": "sVlanPriority", "SVlanTpid": "sVlanTpid", "ShortMaName": "shortMaName", "ShortMaNameFormat": "shortMaNameFormat", "SlmInitialTxfcb": "slmInitialTxfcb", "SlmSimulatedLossInRxPath": "slmSimulatedLossInRxPath", "SlmTxfcbStep": "slmTxfcbStep", "TstEnableUnicastMac": "tstEnableUnicastMac", "TstIncrementPacketLength": "tstIncrementPacketLength", "TstInitialPatternValue": "tstInitialPatternValue", "TstInterval": "tstInterval", "TstMode": "tstMode", "TstOverwriteSequenceNumber": "tstOverwriteSequenceNumber", "TstPacketLength": "tstPacketLength", "TstPacketLengthStep": "tstPacketLengthStep", "TstPatternType": "tstPatternType", "TstPriority": "tstPriority", "TstSequenceNumber": "tstSequenceNumber", "TstTestType": "tstTestType", "TstUnicastMac": "tstUnicastMac", "VlanId": "vlanId", "VlanPriority": "vlanPriority", "VlanStacking": "vlanStacking", "VlanTpid": "vlanTpid", } _SDM_ENUM_MAP = {} def __init__(self, parent, list_op=False): super(CfmSimulatedMp, self).__init__(parent, list_op) @property def CfmCustomTLVList(self): """ Returns ------- - obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.cfmcustomtlvlist_798bcbc04fddcff054434d56d2b00117.CfmCustomTLVList): An instance of the CfmCustomTLVList class Raises ------ - ServerError: The server has encountered an uncategorized error condition """ from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.cfmcustomtlvlist_798bcbc04fddcff054434d56d2b00117 import ( CfmCustomTLVList, ) if len(self._object_properties) > 0: if self._properties.get("CfmCustomTLVList", None) is not None: return self._properties.get("CfmCustomTLVList") return CfmCustomTLVList(self) @property def CfmSlm(self): """ Returns ------- - obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.cfmslm_a552256bb40704de586bc05808a60f7f.CfmSlm): An instance of the CfmSlm class Raises ------ - ServerError: The server has encountered an uncategorized error condition """ from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.cfmslm_a552256bb40704de586bc05808a60f7f import ( CfmSlm, ) if len(self._object_properties) > 0: if self._properties.get("CfmSlm", None) is not None: return self._properties.get("CfmSlm") return CfmSlm(self)._select() @property def StartCcmSimulatedMpParams(self): """ Returns ------- - obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.startccmsimulatedmpparams_4c311ea185eeaa4106e3c4181a4ec347.StartCcmSimulatedMpParams): An instance of the StartCcmSimulatedMpParams class Raises ------ - ServerError: The server has encountered an uncategorized error condition """ from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.startccmsimulatedmpparams_4c311ea185eeaa4106e3c4181a4ec347 import ( StartCcmSimulatedMpParams, ) if len(self._object_properties) > 0: if self._properties.get("StartCcmSimulatedMpParams", None) is not None: return self._properties.get("StartCcmSimulatedMpParams") return StartCcmSimulatedMpParams(self)._select() @property def StopCcmSimulatedMpParams(self): """ Returns ------- - 
obj(ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.stopccmsimulatedmpparams_93b05cff27480ec5b14accd9b8a754a7.StopCcmSimulatedMpParams): An instance of the StopCcmSimulatedMpParams class Raises ------ - ServerError: The server has encountered an uncategorized error condition """ from ixnetwork_restpy.testplatform.sessions.ixnetwork.topology.stopccmsimulatedmpparams_93b05cff27480ec5b14accd9b8a754a7 import ( StopCcmSimulatedMpParams, ) if len(self._object_properties) > 0: if self._properties.get("StopCcmSimulatedMpParams", None) is not None: return self._properties.get("StopCcmSimulatedMpParams") return StopCcmSimulatedMpParams(self)._select() @property def Active(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Activate/Deactivate Configuration """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["Active"])) @property def AisEnableUnicastMac(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable AIS in unicast mode to the specified MAC address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AisEnableUnicastMac"]) ) @property def AisInterval(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Interval between two AIS PDUs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AisInterval"])) @property def AisMode(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Specifies Alarm Indication Signal Mode. Can be trigged from LCK only if set to Auto Mode. Manually Start or Stop otherwise. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AisMode"])) @property def AisPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for AIS PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AisPriority"])) @property def AisUnicastMac(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): The MAC address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AisUnicastMac"])) @property def AutoDmTimeout(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Timeout value (in sec) of waiting for DMR of respective DMM. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AutoDmTimeout"])) @property def AutoDmTimer(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Interval (in sec) between two DMM PDUs to be sent. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AutoDmTimer"])) @property def AutoLbIteration(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Number of LBM PDUs to be sent. 0 for infinite timer. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AutoLbIteration"]) ) @property def AutoLbTimeoutInSec(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Timeout value (in sec) of waiting for LBR of respective LBM. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AutoLbTimeoutInSec"]) ) @property def AutoLbTimerInSec(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Interval (in sec) between two LBM PDUs to be sent. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AutoLbTimerInSec"]) ) @property def AutoLmIteration(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Number of LMM PDUs to be sent. 0 for infinite timer. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AutoLmIteration"]) ) @property def AutoLmTimeout(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Timeout value (in msec) of waiting for LMR of respective LMM. Must be multiple of 100. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AutoLmTimeout"])) @property def AutoLmTimer(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Interval (in msec) between two LMM PDUs to be sent. Must be multiples of 100. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AutoLmTimer"])) @property def AutoLtIteration(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Number of LTM PDUs to be sent. 0 for infinite timer. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AutoLtIteration"]) ) @property def AutoLtTimeoutInSec(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Timeout value (in sec) of waiting for LTR of respective LTM. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AutoLtTimeoutInSec"]) ) @property def AutoLtTimerInSec(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Interval (in sec) between two LTM PDUs to be sent. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AutoLtTimerInSec"]) ) @property def AutoLtTtl(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): TTL for LBM PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["AutoLtTtl"])) @property def AutodmIteration(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Number of DMM PDUs to be sent. 0 for infinite timer. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["AutodmIteration"]) ) @property def CVlanId(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): C-VLAN ID """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["CVlanId"])) @property def CVlanPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): C-VLAN Priority """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["CVlanPriority"])) @property def CVlanTpid(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): C-VLAN TPID """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["CVlanTpid"])) @property def CciInterval(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Interval between two CCM PDUs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["CciInterval"])) @property def CcmLmmTxFcf(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): TxFCf value in CCM (dual mode) or LMM (single mode) PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["CcmLmmTxFcf"])) @property def CcmLmmTxFcfStepPer100mSec(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): TxFCf step value per 100ms in CCM (dual mode) or LMM (single mode) PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["CcmLmmTxFcfStepPer100mSec"]) ) @property def CcmPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for CCM PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["CcmPriority"])) @property def CcmRxFcb(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): RxFCb value in CCM (dual mode) PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["CcmRxFcb"])) @property def CcmRxFcbStepPer100mSec(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): RxFCb step value per 100ms in CCM (dual mode) PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["CcmRxFcbStepPer100mSec"]) ) @property def ChassisId(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Chassis ID for Sender ID TLV. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["ChassisId"])) @property def ChassisIdLength(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Chassis ID Length for Sender ID TLV. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["ChassisIdLength"]) ) @property def ChassisIdSubType(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Chassis ID SubType for Sender ID TLV. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["ChassisIdSubType"]) ) @property def Count(self): # type: () -> int """ Returns ------- - number: Number of elements inside associated multiplier-scaled container object, e.g. number of devices inside a Device Group. """ return self._get_attribute(self._SDM_ATT_MAP["Count"]) @property def DataTlvLength(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Data TLV Length """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["DataTlvLength"])) @property def DataTlvValue(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Data TLV Value """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["DataTlvValue"])) @property def DescriptiveName(self): # type: () -> str """ Returns ------- - str: Longer, more descriptive name for element. It's not guaranteed to be unique like -name-, but may offer more context. """ return self._get_attribute(self._SDM_ATT_MAP["DescriptiveName"]) @property def DmAllRemoteMeps(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enables DMM to be sent for all remote MEPs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["DmAllRemoteMeps"]) ) @property def DmDestinationMacAddress(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MAC address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["DmDestinationMacAddress"]) ) @property def DmMethod(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Specifies One Way or Two Way Delay Measurement Method. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["DmMethod"])) @property def DmPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for DMM PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["DmPriority"])) @property def Enable1slRx(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable 1SL Rx """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["Enable1slRx"])) @property def EnableAisRx(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enables AIS PDUs to be processed in this MEP upon receiving at port. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableAisRx"])) @property def EnableAutoDm(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Periodic Delay Measurement. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableAutoDm"])) @property def EnableAutoLb(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Periodic Loopback. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableAutoLb"])) @property def EnableAutoLm(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Periodic Loss Measurement. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableAutoLm"])) @property def EnableAutoLt(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Periodic Link Trace. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableAutoLt"])) @property def EnableDataTlv(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Data TLV for all applicable PDUs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableDataTlv"])) @property def EnableInterfaceStatusTlv(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Interface Status TLV for all applicable PDUs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["EnableInterfaceStatusTlv"]) ) @property def EnableLckRx(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enables LCK PDUs to be processed in this MEP upon receiving at port. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableLckRx"])) @property def EnableLmCounterUpdate(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable updating the counter value for subsequent PDUs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["EnableLmCounterUpdate"]) ) @property def EnableOrganizationSpecificTlv(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Organization Specific TLV for all applicable PDUs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["EnableOrganizationSpecificTlv"]), ) @property def EnablePortStatusTlv(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Port Status TLV for all applicable PDUs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["EnablePortStatusTlv"]) ) @property def EnableSenderIdTlv(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable Sender ID TLV for all applicable PDUs. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["EnableSenderIdTlv"]) ) @property def EnableSlmRx(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable SLM Rx """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableSlmRx"])) @property def EnableTstRx(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enables TST PDUs to be processed in this MEP upon receiving at port. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableTstRx"])) @property def EnableVlan(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable VLAN for this MP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["EnableVlan"])) @property def InterRemoteMepRxIncrementStep(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Inter Remote MEP Rx Increment Step. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["InterRemoteMepRxIncrementStep"]), ) @property def InterRemoteMepTxIncrementStep(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Inter Remote MEP Tx Increment Step. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["InterRemoteMepTxIncrementStep"]), ) @property def LbAllRemoteMeps(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enables LBM to be sent for all remote MEPs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LbAllRemoteMeps"]) ) @property def LbDestinationMacAddress(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MAC address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LbDestinationMacAddress"]) ) @property def LbmPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for LBM PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LbmPriority"])) @property def LckEnableUnicastMac(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable LCK in unicast mode to the specified MAC address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LckEnableUnicastMac"]) ) @property def LckInterval(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Interval between two LCK PDUs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LckInterval"])) @property def LckMode(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Specifies LCK Mode. Can be trigged from TST only if set to Auto Mode. Manually Start or Stop otherwise. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LckMode"])) @property def LckPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for LCK PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LckPriority"])) @property def LckSupportAisGeneration(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable or disable AIS PDU generation. Needs AIS mode to be set to Auto. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LckSupportAisGeneration"]) ) @property def LckUnicastMac(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MAC Address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LckUnicastMac"])) @property def LmAllRemoteMeps(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enables LMM to be sent for all remote MEPs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LmAllRemoteMeps"]) ) @property def LmDestinationMacAddress(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MAC address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LmDestinationMacAddress"]) ) @property def LmMethodType(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Loss Measurement Method - One Way or Two Way. For Two way, CCM PDUs are used. LMM & LMR otherwise. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LmMethodType"])) @property def LmmPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for LMM PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LmmPriority"])) @property def LmrPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for LMR PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LmrPriority"])) @property def LmrRxFcf(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): RxFCf value of LMR PDU (Single Mode). """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LmrRxFcf"])) @property def LmrRxFcfStepPer100mSec(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): RxFCf step value per 100ms of LMR PDU (Single Mode). """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LmrRxFcfStepPer100mSec"]) ) @property def LmrTxFcb(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): TxFCb value in LMR PDU (Single mode) i.e. TxFCf of CCM or LMM. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LmrTxFcb"])) @property def LmrTxFcbStepPer100mSec(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): TxFCb step value per 100ms in LMR PDU (Single mode). """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LmrTxFcbStepPer100mSec"]) ) @property def LtAllRemoteMeps(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enables LTM to be sent for all remote MEPs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LtAllRemoteMeps"]) ) @property def LtDestinationMacAddress(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MAC address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["LtDestinationMacAddress"]) ) @property def LtmPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for LTM PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["LtmPriority"])) @property def ManagementAddress(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Management Address for Sender ID TLV. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["ManagementAddress"]) ) @property def ManagementAddressDomain(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Management Address Domain for Sender ID TLV. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["ManagementAddressDomain"]) ) @property def ManagementAddressDomainLength(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Management Address Domain Length for Sender ID TLV. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["ManagementAddressDomainLength"]), ) @property def ManagementAddressLength(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Management Address Length for Sender ID TLV. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["ManagementAddressLength"]) ) @property def MdMegLevel(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MD/MEG Level in which this MP belongs. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["MdMegLevel"])) @property def MdName(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MD Name for the selected MD Level. For MD Name Format MAC + Int, Please Use MAC-Int eg. 11:22:33:44:55:66-1. For Others, Use Any String. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["MdName"])) @property def MdNameFormat(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Format for Maintenance Domain Name. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["MdNameFormat"])) @property def MegId(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MEG ID (Y.1731 Mode). """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["MegId"])) @property def MegIdFormat(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Format for MEG ID (Y.1731 Mode). Non-ICC formats are supported only if 'Allow CFM MAID Formats' is enabled. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["MegIdFormat"])) @property def MepId(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): MP Identifier. Must be unique in one MA. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["MepId"])) @property def MpType(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Select MP type MIP or MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["MpType"])) @property def Name(self): # type: () -> str """ Returns ------- - str: Name of NGPF element, guaranteed to be unique in Scenario """ return self._get_attribute(self._SDM_ATT_MAP["Name"]) @Name.setter def Name(self, value): # type: (str) -> None self._set_attribute(self._SDM_ATT_MAP["Name"], value) @property def NumberOfCustomTLVs(self): # type: () -> int """ Returns ------- - number: Number of Custom TLVs for PDUs. """ return self._get_attribute(self._SDM_ATT_MAP["NumberOfCustomTLVs"]) @NumberOfCustomTLVs.setter def NumberOfCustomTLVs(self, value): # type: (int) -> None self._set_attribute(self._SDM_ATT_MAP["NumberOfCustomTLVs"], value) @property def NumberOfSlm(self): # type: () -> int """ Returns ------- - number: Number of SLM tests """ return self._get_attribute(self._SDM_ATT_MAP["NumberOfSlm"]) @NumberOfSlm.setter def NumberOfSlm(self, value): # type: (int) -> None self._set_attribute(self._SDM_ATT_MAP["NumberOfSlm"], value) @property def OrganizationSpecificTlvLength(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Organization Specific TLV Length """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["OrganizationSpecificTlvLength"]), ) @property def OrganizationSpecificTlvValue(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Organization Specific TLV Value """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["OrganizationSpecificTlvValue"]) ) @property def OverrideVlanPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Override VLAN Priority value for PDUs. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["OverrideVlanPriority"]) ) @property def Rdi(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Remote Defect Indication. Auto Update - RDI On if there is a defect in remote MEP. On or Off - Turn on or off RDI intentionally. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["Rdi"])) @property def SVlanId(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): S-VLAN ID """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["SVlanId"])) @property def SVlanPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): S-VLAN Priority """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["SVlanPriority"])) @property def SVlanTpid(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): S-VLAN TPID """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["SVlanTpid"])) @property def ShortMaName(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Short Maintenance Association Name. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["ShortMaName"])) @property def ShortMaNameFormat(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Format for Maintenance Association Name. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["ShortMaNameFormat"]) ) @property def SlmInitialTxfcb(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Initial TxFCb """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["SlmInitialTxfcb"]) ) @property def SlmSimulatedLossInRxPath(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Percentage of Simulated Loss (in Rx Path) """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["SlmSimulatedLossInRxPath"]) ) @property def SlmTxfcbStep(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): TxFCb Step """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["SlmTxfcbStep"])) @property def TstEnableUnicastMac(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Enable TST in unicast mode to the specified MAC address of the remote MEP. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["TstEnableUnicastMac"]) ) @property def TstIncrementPacketLength(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Increment Packet Length for subsequent TST PDUs. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["TstIncrementPacketLength"]) ) @property def TstInitialPatternValue(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Initial Pattern Value of Test. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["TstInitialPatternValue"]) ) @property def TstInterval(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Interval between two TST PDUs (in ms). """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["TstInterval"])) @property def TstMode(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): TST Mode On or Off. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["TstMode"])) @property def TstOverwriteSequenceNumber(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Overwrite Sequence Number using specified value. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["TstOverwriteSequenceNumber"]) ) @property def TstPacketLength(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Packet Length of TST PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["TstPacketLength"]) ) @property def TstPacketLengthStep(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Increment for Packet Length Step. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["TstPacketLengthStep"]) ) @property def TstPatternType(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Pattern Type of Test. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["TstPatternType"]) ) @property def TstPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority for TST PDU. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["TstPriority"])) @property def TstSequenceNumber(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Sequence Number of TST PDU. Effective only if overridden. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue( self, self._get_attribute(self._SDM_ATT_MAP["TstSequenceNumber"]) ) @property def TstTestType(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Test Type In or Out Service. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["TstTestType"])) @property def TstUnicastMac(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): The MAC address of the remote MEP. 
""" from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["TstUnicastMac"])) @property def VlanId(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN ID """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["VlanId"])) @property def VlanPriority(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN Priority """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["VlanPriority"])) @property def VlanStacking(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): Single or Stacked VLAN Selection. """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["VlanStacking"])) @property def VlanTpid(self): # type: () -> 'Multivalue' """ Returns ------- - obj(ixnetwork_restpy.multivalue.Multivalue): VLAN TPID """ from ixnetwork_restpy.multivalue import Multivalue return Multivalue(self, self._get_attribute(self._SDM_ATT_MAP["VlanTpid"])) def update(self, Name=None, NumberOfCustomTLVs=None, NumberOfSlm=None): # type: (str, int, int) -> CfmSimulatedMp """Updates cfmSimulatedMp resource on the server. This method has some named parameters with a type: obj (Multivalue). The Multivalue class has documentation that details the possible values for those named parameters. Args ---- - Name (str): Name of NGPF element, guaranteed to be unique in Scenario - NumberOfCustomTLVs (number): Number of Custom TLVs for PDUs. - NumberOfSlm (number): Number of SLM tests Raises ------ - ServerError: The server has encountered an uncategorized error condition """ return self._update(self._map_locals(self._SDM_ATT_MAP, locals())) def add(self, Name=None, NumberOfCustomTLVs=None, NumberOfSlm=None): # type: (str, int, int) -> CfmSimulatedMp """Adds a new cfmSimulatedMp resource on the json, only valid with batch add utility Args ---- - Name (str): Name of NGPF element, guaranteed to be unique in Scenario - NumberOfCustomTLVs (number): Number of Custom TLVs for PDUs. - NumberOfSlm (number): Number of SLM tests Returns ------- - self: This instance with all currently retrieved cfmSimulatedMp resources using find and the newly added cfmSimulatedMp resources available through an iterator or index Raises ------ - Exception: if this function is not being used with config assistance """ return self._add_xpath(self._map_locals(self._SDM_ATT_MAP, locals())) def find( self, Count=None, DescriptiveName=None, Name=None, NumberOfCustomTLVs=None, NumberOfSlm=None, ): # type: (int, str, str, int, int) -> CfmSimulatedMp """Finds and retrieves cfmSimulatedMp resources from the server. All named parameters are evaluated on the server using regex. The named parameters can be used to selectively retrieve cfmSimulatedMp resources from the server. To retrieve an exact match ensure the parameter value starts with ^ and ends with $ By default the find method takes no parameters and will retrieve all cfmSimulatedMp resources from the server. Args ---- - Count (number): Number of elements inside associated multiplier-scaled container object, e.g. number of devices inside a Device Group. - DescriptiveName (str): Longer, more descriptive name for element. It's not guaranteed to be unique like -name-, but may offer more context. 
- Name (str): Name of NGPF element, guaranteed to be unique in Scenario - NumberOfCustomTLVs (number): Number of Custom TLVs for PDUs. - NumberOfSlm (number): Number of SLM tests Returns ------- - self: This instance with matching cfmSimulatedMp resources retrieved from the server available through an iterator or index Raises ------ - ServerError: The server has encountered an uncategorized error condition """ return self._select(self._map_locals(self._SDM_ATT_MAP, locals())) def read(self, href): """Retrieves a single instance of cfmSimulatedMp data from the server. Args ---- - href (str): An href to the instance to be retrieved Returns ------- - self: This instance with the cfmSimulatedMp resources from the server available through an iterator or index Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ return self._read(href) def Abort(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the abort operation on the server. Abort CPF control plane (equals to demote to kUnconfigured state). abort(async_operation=bool) --------------------------- - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute("abort", payload=payload, response_object=None) def ActivateMpSimulated(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the activateMpSimulated operation on the server. Activate Simulated MP The IxNetwork model allows for multiple method Signatures with the same name while python does not. activateMpSimulated(async_operation=bool) ----------------------------------------- - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. activateMpSimulated(SessionIndices=list, async_operation=bool) -------------------------------------------------------------- - SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. activateMpSimulated(SessionIndices=string, async_operation=bool) ---------------------------------------------------------------- - SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. 
Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute( "activateMpSimulated", payload=payload, response_object=None ) def DeactivateMpSimulated(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the deactivateMpSimulated operation on the server. Deactivate Simulated MP The IxNetwork model allows for multiple method Signatures with the same name while python does not. deactivateMpSimulated(async_operation=bool) ------------------------------------------- - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. deactivateMpSimulated(SessionIndices=list, async_operation=bool) ---------------------------------------------------------------- - SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. deactivateMpSimulated(SessionIndices=string, async_operation=bool) ------------------------------------------------------------------ - SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute( "deactivateMpSimulated", payload=payload, response_object=None ) def Start(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the start operation on the server. Start CPF control plane (equals to promote to negotiated state). start(async_operation=bool) --------------------------- - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute("start", payload=payload, response_object=None) def StartCcmSimulated(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the startCcmSimulated operation on the server. Start CCM The IxNetwork model allows for multiple method Signatures with the same name while python does not. startCcmSimulated(async_operation=bool) --------------------------------------- - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. 
startCcmSimulated(SessionIndices=list, async_operation=bool) ------------------------------------------------------------ - SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. startCcmSimulated(SessionIndices=string, async_operation=bool) -------------------------------------------------------------- - SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute("startCcmSimulated", payload=payload, response_object=None) def StartCcmSimulatedMp(self, *args, **kwargs): # type: (*Any, **Any) -> Union[List[str], None] """Executes the startCcmSimulatedMp operation on the server. Start CCM PDU Transmission DEPRECATED startCcmSimulatedMp(Arg2=list, async_operation=bool)list ------------------------------------------------------------------- - Arg2 (list(number)): List of indices into the network info. An empty list indicates all instances in the node specific data. - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. - Returns list(str): ID to associate each async action invocation Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self.href} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute( "startCcmSimulatedMp", payload=payload, response_object=None ) def StartSlm(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the startSlm operation on the server. Start SLM The IxNetwork model allows for multiple method Signatures with the same name while python does not. startSlm(async_operation=bool) ------------------------------ - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. startSlm(SessionIndices=list, async_operation=bool) --------------------------------------------------- - SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. startSlm(SessionIndices=string, async_operation=bool) ----------------------------------------------------- - SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. 
Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute("startSlm", payload=payload, response_object=None) def Stop(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the stop operation on the server. Stop CPF control plane (equals to demote to PreValidated-DoDDone state). stop(async_operation=bool) -------------------------- - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute("stop", payload=payload, response_object=None) def StopCcmSimulated(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the stopCcmSimulated operation on the server. Stop CCM The IxNetwork model allows for multiple method Signatures with the same name while python does not. stopCcmSimulated(async_operation=bool) -------------------------------------- - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. stopCcmSimulated(SessionIndices=list, async_operation=bool) ----------------------------------------------------------- - SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. stopCcmSimulated(SessionIndices=string, async_operation=bool) ------------------------------------------------------------- - SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute("stopCcmSimulated", payload=payload, response_object=None) def StopCcmSimulatedMp(self, *args, **kwargs): # type: (*Any, **Any) -> Union[List[str], None] """Executes the stopCcmSimulatedMp operation on the server. Stop CCM PDU Transmission DEPRECATED stopCcmSimulatedMp(Arg2=list, async_operation=bool)list ------------------------------------------------------------------ - Arg2 (list(number)): List of indices into the network info. An empty list indicates all instances in the node specific data. - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. 
- Returns list(str): ID to associate each async action invocation Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self.href} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute( "stopCcmSimulatedMp", payload=payload, response_object=None ) def StopSlm(self, *args, **kwargs): # type: (*Any, **Any) -> None """Executes the stopSlm operation on the server. Stop SLM The IxNetwork model allows for multiple method Signatures with the same name while python does not. stopSlm(async_operation=bool) ----------------------------- - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. stopSlm(SessionIndices=list, async_operation=bool) -------------------------------------------------- - SessionIndices (list(number)): This parameter requires an array of session numbers 1 2 3 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. stopSlm(SessionIndices=string, async_operation=bool) ---------------------------------------------------- - SessionIndices (str): This parameter requires a string of session numbers 1-4;6;7-12 - async_operation (bool=False): True to execute the operation asynchronously. Any subsequent rest api calls made through the Connection class will block until the operation is complete. Raises ------ - NotFoundError: The requested resource does not exist on the server - ServerError: The server has encountered an uncategorized error condition """ payload = {"Arg1": self} for i in range(len(args)): payload["Arg%s" % (i + 2)] = args[i] for item in kwargs.items(): payload[item[0]] = item[1] return self._execute("stopSlm", payload=payload, response_object=None) def get_device_ids( self, PortNames=None, Active=None, AisEnableUnicastMac=None, AisInterval=None, AisMode=None, AisPriority=None, AisUnicastMac=None, AutoDmTimeout=None, AutoDmTimer=None, AutoLbIteration=None, AutoLbTimeoutInSec=None, AutoLbTimerInSec=None, AutoLmIteration=None, AutoLmTimeout=None, AutoLmTimer=None, AutoLtIteration=None, AutoLtTimeoutInSec=None, AutoLtTimerInSec=None, AutoLtTtl=None, AutodmIteration=None, CVlanId=None, CVlanPriority=None, CVlanTpid=None, CciInterval=None, CcmLmmTxFcf=None, CcmLmmTxFcfStepPer100mSec=None, CcmPriority=None, CcmRxFcb=None, CcmRxFcbStepPer100mSec=None, ChassisId=None, ChassisIdLength=None, ChassisIdSubType=None, DataTlvLength=None, DataTlvValue=None, DmAllRemoteMeps=None, DmDestinationMacAddress=None, DmMethod=None, DmPriority=None, Enable1slRx=None, EnableAisRx=None, EnableAutoDm=None, EnableAutoLb=None, EnableAutoLm=None, EnableAutoLt=None, EnableDataTlv=None, EnableInterfaceStatusTlv=None, EnableLckRx=None, EnableLmCounterUpdate=None, EnableOrganizationSpecificTlv=None, EnablePortStatusTlv=None, EnableSenderIdTlv=None, EnableSlmRx=None, EnableTstRx=None, EnableVlan=None, InterRemoteMepRxIncrementStep=None, InterRemoteMepTxIncrementStep=None, LbAllRemoteMeps=None, LbDestinationMacAddress=None, LbmPriority=None, LckEnableUnicastMac=None, LckInterval=None, LckMode=None, LckPriority=None, LckSupportAisGeneration=None, LckUnicastMac=None, LmAllRemoteMeps=None, LmDestinationMacAddress=None, LmMethodType=None, 
LmmPriority=None, LmrPriority=None, LmrRxFcf=None, LmrRxFcfStepPer100mSec=None, LmrTxFcb=None, LmrTxFcbStepPer100mSec=None, LtAllRemoteMeps=None, LtDestinationMacAddress=None, LtmPriority=None, ManagementAddress=None, ManagementAddressDomain=None, ManagementAddressDomainLength=None, ManagementAddressLength=None, MdMegLevel=None, MdName=None, MdNameFormat=None, MegId=None, MegIdFormat=None, MepId=None, MpType=None, OrganizationSpecificTlvLength=None, OrganizationSpecificTlvValue=None, OverrideVlanPriority=None, Rdi=None, SVlanId=None, SVlanPriority=None, SVlanTpid=None, ShortMaName=None, ShortMaNameFormat=None, SlmInitialTxfcb=None, SlmSimulatedLossInRxPath=None, SlmTxfcbStep=None, TstEnableUnicastMac=None, TstIncrementPacketLength=None, TstInitialPatternValue=None, TstInterval=None, TstMode=None, TstOverwriteSequenceNumber=None, TstPacketLength=None, TstPacketLengthStep=None, TstPatternType=None, TstPriority=None, TstSequenceNumber=None, TstTestType=None, TstUnicastMac=None, VlanId=None, VlanPriority=None, VlanStacking=None, VlanTpid=None, ): """Base class infrastructure that gets a list of cfmSimulatedMp device ids encapsulated by this object. Use the optional regex parameters in the method to refine the list of device ids encapsulated by this object. Args ---- - PortNames (str): optional regex of port names - Active (str): optional regex of active - AisEnableUnicastMac (str): optional regex of aisEnableUnicastMac - AisInterval (str): optional regex of aisInterval - AisMode (str): optional regex of aisMode - AisPriority (str): optional regex of aisPriority - AisUnicastMac (str): optional regex of aisUnicastMac - AutoDmTimeout (str): optional regex of autoDmTimeout - AutoDmTimer (str): optional regex of autoDmTimer - AutoLbIteration (str): optional regex of autoLbIteration - AutoLbTimeoutInSec (str): optional regex of autoLbTimeoutInSec - AutoLbTimerInSec (str): optional regex of autoLbTimerInSec - AutoLmIteration (str): optional regex of autoLmIteration - AutoLmTimeout (str): optional regex of autoLmTimeout - AutoLmTimer (str): optional regex of autoLmTimer - AutoLtIteration (str): optional regex of autoLtIteration - AutoLtTimeoutInSec (str): optional regex of autoLtTimeoutInSec - AutoLtTimerInSec (str): optional regex of autoLtTimerInSec - AutoLtTtl (str): optional regex of autoLtTtl - AutodmIteration (str): optional regex of autodmIteration - CVlanId (str): optional regex of cVlanId - CVlanPriority (str): optional regex of cVlanPriority - CVlanTpid (str): optional regex of cVlanTpid - CciInterval (str): optional regex of cciInterval - CcmLmmTxFcf (str): optional regex of ccmLmmTxFcf - CcmLmmTxFcfStepPer100mSec (str): optional regex of ccmLmmTxFcfStepPer100mSec - CcmPriority (str): optional regex of ccmPriority - CcmRxFcb (str): optional regex of ccmRxFcb - CcmRxFcbStepPer100mSec (str): optional regex of ccmRxFcbStepPer100mSec - ChassisId (str): optional regex of chassisId - ChassisIdLength (str): optional regex of chassisIdLength - ChassisIdSubType (str): optional regex of chassisIdSubType - DataTlvLength (str): optional regex of dataTlvLength - DataTlvValue (str): optional regex of dataTlvValue - DmAllRemoteMeps (str): optional regex of dmAllRemoteMeps - DmDestinationMacAddress (str): optional regex of dmDestinationMacAddress - DmMethod (str): optional regex of dmMethod - DmPriority (str): optional regex of dmPriority - Enable1slRx (str): optional regex of enable1slRx - EnableAisRx (str): optional regex of enableAisRx - EnableAutoDm (str): optional regex of enableAutoDm - EnableAutoLb 
(str): optional regex of enableAutoLb - EnableAutoLm (str): optional regex of enableAutoLm - EnableAutoLt (str): optional regex of enableAutoLt - EnableDataTlv (str): optional regex of enableDataTlv - EnableInterfaceStatusTlv (str): optional regex of enableInterfaceStatusTlv - EnableLckRx (str): optional regex of enableLckRx - EnableLmCounterUpdate (str): optional regex of enableLmCounterUpdate - EnableOrganizationSpecificTlv (str): optional regex of enableOrganizationSpecificTlv - EnablePortStatusTlv (str): optional regex of enablePortStatusTlv - EnableSenderIdTlv (str): optional regex of enableSenderIdTlv - EnableSlmRx (str): optional regex of enableSlmRx - EnableTstRx (str): optional regex of enableTstRx - EnableVlan (str): optional regex of enableVlan - InterRemoteMepRxIncrementStep (str): optional regex of interRemoteMepRxIncrementStep - InterRemoteMepTxIncrementStep (str): optional regex of interRemoteMepTxIncrementStep - LbAllRemoteMeps (str): optional regex of lbAllRemoteMeps - LbDestinationMacAddress (str): optional regex of lbDestinationMacAddress - LbmPriority (str): optional regex of lbmPriority - LckEnableUnicastMac (str): optional regex of lckEnableUnicastMac - LckInterval (str): optional regex of lckInterval - LckMode (str): optional regex of lckMode - LckPriority (str): optional regex of lckPriority - LckSupportAisGeneration (str): optional regex of lckSupportAisGeneration - LckUnicastMac (str): optional regex of lckUnicastMac - LmAllRemoteMeps (str): optional regex of lmAllRemoteMeps - LmDestinationMacAddress (str): optional regex of lmDestinationMacAddress - LmMethodType (str): optional regex of lmMethodType - LmmPriority (str): optional regex of lmmPriority - LmrPriority (str): optional regex of lmrPriority - LmrRxFcf (str): optional regex of lmrRxFcf - LmrRxFcfStepPer100mSec (str): optional regex of lmrRxFcfStepPer100mSec - LmrTxFcb (str): optional regex of lmrTxFcb - LmrTxFcbStepPer100mSec (str): optional regex of lmrTxFcbStepPer100mSec - LtAllRemoteMeps (str): optional regex of ltAllRemoteMeps - LtDestinationMacAddress (str): optional regex of ltDestinationMacAddress - LtmPriority (str): optional regex of ltmPriority - ManagementAddress (str): optional regex of managementAddress - ManagementAddressDomain (str): optional regex of managementAddressDomain - ManagementAddressDomainLength (str): optional regex of managementAddressDomainLength - ManagementAddressLength (str): optional regex of managementAddressLength - MdMegLevel (str): optional regex of mdMegLevel - MdName (str): optional regex of mdName - MdNameFormat (str): optional regex of mdNameFormat - MegId (str): optional regex of megId - MegIdFormat (str): optional regex of megIdFormat - MepId (str): optional regex of mepId - MpType (str): optional regex of mpType - OrganizationSpecificTlvLength (str): optional regex of organizationSpecificTlvLength - OrganizationSpecificTlvValue (str): optional regex of organizationSpecificTlvValue - OverrideVlanPriority (str): optional regex of overrideVlanPriority - Rdi (str): optional regex of rdi - SVlanId (str): optional regex of sVlanId - SVlanPriority (str): optional regex of sVlanPriority - SVlanTpid (str): optional regex of sVlanTpid - ShortMaName (str): optional regex of shortMaName - ShortMaNameFormat (str): optional regex of shortMaNameFormat - SlmInitialTxfcb (str): optional regex of slmInitialTxfcb - SlmSimulatedLossInRxPath (str): optional regex of slmSimulatedLossInRxPath - SlmTxfcbStep (str): optional regex of slmTxfcbStep - TstEnableUnicastMac (str): optional 
regex of tstEnableUnicastMac - TstIncrementPacketLength (str): optional regex of tstIncrementPacketLength - TstInitialPatternValue (str): optional regex of tstInitialPatternValue - TstInterval (str): optional regex of tstInterval - TstMode (str): optional regex of tstMode - TstOverwriteSequenceNumber (str): optional regex of tstOverwriteSequenceNumber - TstPacketLength (str): optional regex of tstPacketLength - TstPacketLengthStep (str): optional regex of tstPacketLengthStep - TstPatternType (str): optional regex of tstPatternType - TstPriority (str): optional regex of tstPriority - TstSequenceNumber (str): optional regex of tstSequenceNumber - TstTestType (str): optional regex of tstTestType - TstUnicastMac (str): optional regex of tstUnicastMac - VlanId (str): optional regex of vlanId - VlanPriority (str): optional regex of vlanPriority - VlanStacking (str): optional regex of vlanStacking - VlanTpid (str): optional regex of vlanTpid Returns ------- - list(int): A list of device ids that meets the regex criteria provided in the method parameters Raises ------ - ServerError: The server has encountered an uncategorized error condition """ return self._get_ngpf_device_ids(locals())
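
# --- Usage sketch (editor's addition, not part of the generated file) ---------
# A minimal, hedged example of driving CfmSimulatedMp through ixnetwork_restpy.
# Assumptions: a reachable IxNetwork API server, and that the installed
# ixnetwork_restpy exposes SessionAssistant. The navigation down to a
# cfmSimulatedMp node is elided because the exact NGPF path depends on the
# configuration; `exercise` takes an already-retrieved instance.
if __name__ == "__main__":
    from ixnetwork_restpy import SessionAssistant

    session = SessionAssistant(IpAddress="127.0.0.1")  # placeholder API server
    ixnetwork = session.Ixnetwork  # root of the NGPF configuration tree

    def exercise(cfm_mp):
        """Exercise a CfmSimulatedMp node obtained by navigating the NGPF tree."""
        cfm_mp.find(Name="^mp1$")            # server-side regex lookup (exact match)
        cfm_mp.update(NumberOfSlm=2)         # scalar attributes go through update()
        print(cfm_mp.MdMegLevel)             # Multivalue attributes are fetched lazily
        cfm_mp.StartCcmSimulated()           # executes the startCcmSimulated op on the server
        meps = cfm_mp.get_device_ids(MpType="^mep$")  # regex-filtered device ids
        print(meps)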
PypiClean
/DomainThesaurus-1.2.3.tar.gz/DomainThesaurus-1.2.3/DST/word_discrimination/WordDiscrimination.py
import re
import logging

import networkx as nx

from DST.utils.DSUtil import levenshtein_distance

logger = logging.getLogger(__name__)


def __numberInString(term):
    for i in term:
        if i.isdigit():
            return True
    return False


# check if the numbers in two strings are the same
def __isNumberSame(term1, term2):
    # if one term has a number inside, the other one should have the same number
    if __numberInString(term1) or __numberInString(term2):
        # if the number lists differ, the two terms cannot be abbreviation-related
        if re.findall(r"\d+", term1) != re.findall(r"\d+", term2):
            return False
    return True


# check if the letter order of two terms is the same
def __checkLetterOrder(shortTerm, longTerm):
    position = 0  # position of the current letter in the long term
    matchCount = 0  # how many letters of the short term appear in order in the long term
    # "-" and "_" are not taken into consideration
    for letter in shortTerm:
        addPosition = longTerm[position:].find(letter)
        if addPosition == -1:
            break
        else:
            matchCount = matchCount + 1
            position = position + addPosition + 1
    if matchCount == len(shortTerm):
        # if the abbreviation only refers to the first word of the long term, it is not
        # an abbreviation of the whole term, e.g. advance --> advance_mike; the letters
        # of an abbreviation should lie in all components of the long term.
        # note that position is one past the matched letter because of the loop above
        if " " in longTerm and position < longTerm.rfind(" ") + 1 and (shortTerm[-1] != longTerm.split(" ")[-1][0]):
            return False
        else:
            return True
    else:
        return False


# check if one term is an abbreviation of the other
def __isAbrreviation(longTerm, shortTerm):
    try:
        longTerm.encode(encoding="ascii")
        shortTerm.encode(encoding="ascii")
    except UnicodeEncodeError:  # non-ASCII terms are not handled
        return False
    longTerm = longTerm.replace("-", " ").replace("_", " ").strip()
    shortTerm = shortTerm.replace("-", " ").replace("_", " ").strip()
    # the abbreviation must be at least 2 letters shorter than the full form
    if (len(longTerm) - len(shortTerm) < 2) or len(longTerm) < 1 or len(shortTerm) < 1:
        return False
    # the abbreviation should not be too long compared to its potential full name;
    # the 0.68 threshold was determined by observation
    if float(len(shortTerm)) / len(longTerm) > 0.68 and not __numberInString(shortTerm):
        return False
    if len(shortTerm) > 10:  # the short term itself should not be too long
        return False
    if ("++" in shortTerm) != ("++" in longTerm):  # either both contain "++" or neither does
        return False
    if " " in shortTerm or " " in longTerm:
        # if the words of the short term are a proper subset of the long term's words,
        # it is not regarded as an abbreviation
        if set(shortTerm.split(" ")) < set(longTerm.split(" ")):
            return False
    # terms beginning with a dot are usually file extensions (".net" excepted),
    # so they are not treated as abbreviations
    if longTerm[0] == "." and shortTerm[0] == "." and not longTerm.startswith(".net") and not shortTerm.startswith(".net"):
        return False
    if shortTerm[0] != longTerm[0]:  # the two terms should share the same first letter
        return False
    if longTerm.split(" ")[0] + "s" == shortTerm:  # e.g., blog articles --> blogs
        return False
    # if any letter of the short term is not contained in the long term, it is not an abbreviation
    for letter in shortTerm:
        if letter != "-" and letter != "_":
            if letter not in longTerm:
                return False
    # check if the numbers in the two terms are the same
    if __numberInString(longTerm) or __numberInString(shortTerm):
        if not __isNumberSame(longTerm, shortTerm):
            return False
    # check if the letter order is the same
    if __checkLetterOrder(shortTerm, longTerm):
        # if the short term is a consecutive substring of the long term, it is not an abbreviation
        if shortTerm.replace(" ", "") in longTerm.replace(" ", ""):
            return False
        return True
    else:
        return False


# check if two terms are synonyms
def __isSynonym(term1, term2):
    try:
        term1.encode(encoding="ascii")
        term2.encode(encoding="ascii")
    except UnicodeEncodeError:  # non-ASCII terms are not handled
        return False
    if term1 == term2 or term1.replace(" ", "") == term2.replace(" ", ""):
        return True
    # the numbers inside the two terms should be the same
    if not __isNumberSame(term1, term2):
        return False
    if ("++" in term1) != ("++" in term2):  # either both contain "++" or neither does, e.g. c++
        return False
    # some two-letter substitutions flip the meaning, e.g. "encode" vs "decode"
    if (term1.replace(" ", "").replace("en", "de") == term2.replace(" ", "")) or (
            term1.replace(" ", "").replace("de", "en") == term2.replace(" ", "")):
        return False
    # the first letters of the two terms should be the same
    if term1[0] != term2[0]:
        return False
    # terms beginning with a dot are usually file extensions, so they are not treated as synonyms
    if term1[0] == "." and term2[0] == ".":
        return False
    try:
        absoluteDis = levenshtein_distance(term1, term2)  # absolute edit distance
    except Exception as e:
        logger.error("%s: %s, %s", e, term1, term2)
        return False
    # relative edit distance, obtained by dividing by the length of the longer term
    relativeDis = float(absoluteDis) / max((len(term1), len(term2)))
    # the absolute threshold constrains long terms while the relative one constrains short terms
    if absoluteDis < 4 and relativeDis <= 0.3:
        return True
    else:
        return False


def default_classify_func(term1, term2):
    if __isSynonym(term1, term2):
        return "synonym"
    elif __isAbrreviation(term1, term2):
        return "abbreviation"
    else:
        return "other"


def get_default_synonym_types():
    return ["abbreviation", "other", "synonym"]


class WordDiscrimination(object):
    def __init__(self, classify_word_func, semantic_related_types, group_dict=False,
                 group_word_type="synonym", domain_vocab=None):
        """
        :param classify_word_func: function taking (term1, term2) and returning the type of term2 relative to term1
        :param semantic_related_types: list of all semantic-related types
        :param group_dict: bool, whether to merge terms connected via group_word_type into groups
        :param group_word_type: the relation type used for grouping (default "synonym")
        :param domain_vocab: dict mapping term -> frequency, used to pick each group's representative term
        """
        self.classify_word_func = classify_word_func
        self.synonym_types = semantic_related_types
        self.group_dict = group_dict
        self.group_word_type = group_word_type
        self.domain_vocab = domain_vocab

    def __group_dict(self, dst):
        if not isinstance(self.domain_vocab, dict):
            raise TypeError("type of domain_vocab should be dict")
        G = nx.Graph()
        nodes, edges = [], []
        for k, v in dst.items():
            nodes.append(k)
            for i in v[self.group_word_type]:
                edges.append((k, i))
                nodes.append(i)
        G.add_nodes_from(nodes)
        G.add_edges_from(edges)
        groups = []
        # connected_component_subgraphs() was removed in networkx 2.4;
        # iterating over the node sets of the connected components is equivalent here
        for component in nx.connected_components(G):
            groups.append(list(component))
        # build the new, grouped dict; the non-grouping relation types are kept as-is
        newDi = {}
        otherKeys = [t for t in self.synonym_types if t != self.group_word_type]
        for group in groups:
            # the most frequent term of the group becomes its representative key
            key = max(group, key=lambda x: self.domain_vocab[x] if x in self.domain_vocab else 0)
            group.remove(key)
            newDi[key] = {self.group_word_type: group}
            for i in otherKeys:
                newDi[key][i] = []
                for j in newDi[key][self.group_word_type]:
                    if j in dst:
                        newDi[key][i].extend(dst[j][i])
        return newDi

    def discriminate_words(self, vocab):
        """
        Classify the semantically related words of each term.

        :param vocab: dict, key: term, value: list of semantically related words of this term
        :return: dict, key: term, value: dict mapping each synonym type to the list of words of that type
        """
        res = {}
        for k, v in vocab.items():
            res[k] = {}
            for i in self.synonym_types:
                res[k][i] = []
            for i in v:
                res[k][self.classify_word_func(k, i)].append(i)
        if self.group_dict:
            logger.info("grouping thesaurus...")
            return self.__group_dict(res)
        else:
            return res


if __name__ == "__main__":
    pass
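
# --- Usage sketch (editor's addition): a minimal demo under stated assumptions ---
# The tiny vocabulary below is invented for illustration; in the real pipeline the
# candidate lists come from DomainThesaurus' semantic-relation extraction step.
def _demo():
    wd = WordDiscrimination(
        classify_word_func=default_classify_func,
        semantic_related_types=get_default_synonym_types(),
    )
    vocab = {"javascript": ["js", "java script", "decode"]}
    result = wd.discriminate_words(vocab)
    # expected: {"javascript": {"abbreviation": ["js"], "other": ["decode"], "synonym": ["java script"]}}
    print(result)


if __name__ == "__main__":  # supplements the placeholder guard above; illustration only
    _demo()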
PypiClean
/aliyun-python-sdk-ecs-test-2.0.6.tar.gz/aliyun-python-sdk-ecs-test-2.0.6/aliyunsdkecs/request/v20140526/CreateNatGatewayRequest.py
from aliyunsdkcore.request import RpcRequest class CreateNatGatewayRequest(RpcRequest): def __init__(self): RpcRequest.__init__(self, 'Ecs', '2014-05-26', 'CreateNatGateway') def get_ResourceOwnerId(self): return self.get_query_params().get('ResourceOwnerId') def set_ResourceOwnerId(self,ResourceOwnerId): self.add_query_param('ResourceOwnerId',ResourceOwnerId) def get_ClientToken(self): return self.get_query_params().get('ClientToken') def set_ClientToken(self,ClientToken): self.add_query_param('ClientToken',ClientToken) def get_Description(self): return self.get_query_params().get('Description') def set_Description(self,Description): self.add_query_param('Description',Description) def get_BandwidthPackages(self): return self.get_query_params().get('BandwidthPackages') def set_BandwidthPackages(self,BandwidthPackages): for i in range(len(BandwidthPackages)): if BandwidthPackages[i].get('Bandwidth') is not None: self.add_query_param('BandwidthPackage.' + str(i + 1) + '.Bandwidth' , BandwidthPackages[i].get('Bandwidth')) if BandwidthPackages[i].get('Zone') is not None: self.add_query_param('BandwidthPackage.' + str(i + 1) + '.Zone' , BandwidthPackages[i].get('Zone')) if BandwidthPackages[i].get('IpCount') is not None: self.add_query_param('BandwidthPackage.' + str(i + 1) + '.IpCount' , BandwidthPackages[i].get('IpCount')) def get_ResourceOwnerAccount(self): return self.get_query_params().get('ResourceOwnerAccount') def set_ResourceOwnerAccount(self,ResourceOwnerAccount): self.add_query_param('ResourceOwnerAccount',ResourceOwnerAccount) def get_OwnerAccount(self): return self.get_query_params().get('OwnerAccount') def set_OwnerAccount(self,OwnerAccount): self.add_query_param('OwnerAccount',OwnerAccount) def get_OwnerId(self): return self.get_query_params().get('OwnerId') def set_OwnerId(self,OwnerId): self.add_query_param('OwnerId',OwnerId) def get_VpcId(self): return self.get_query_params().get('VpcId') def set_VpcId(self,VpcId): self.add_query_param('VpcId',VpcId) def get_Name(self): return self.get_query_params().get('Name') def set_Name(self,Name): self.add_query_param('Name',Name)
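
# --- Usage sketch (editor's addition) -----------------------------------------
# A minimal, hedged example of sending this request with the core SDK client.
# The credentials, region and VPC id below are placeholders, and error handling
# is omitted for brevity.
if __name__ == "__main__":
    from aliyunsdkcore.client import AcsClient

    client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")
    request = CreateNatGatewayRequest()
    request.set_VpcId("vpc-xxxxxxxx")  # placeholder VPC id
    request.set_Name("demo-nat-gateway")
    # Each bandwidth package is a dict with the keys handled by
    # set_BandwidthPackages above.
    request.set_BandwidthPackages([
        {"Bandwidth": 5, "Zone": "cn-hangzhou-b", "IpCount": 1},
    ])
    response = client.do_action_with_exception(request)
    print(response)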
PypiClean
/twomartens.allrisscraper-0.5.9-py3-none-any.whl/twomartens/allrisscraper/public.py
import argparse import configparser import json import os from selenium import webdriver from selenium.webdriver.firefox.firefox_binary import FirefoxBinary from selenium.webdriver.firefox.options import Options from twomartens.allrisscraper import agenda from twomartens.allrisscraper import config as config_module from twomartens.allrisscraper import custom_json from twomartens.allrisscraper import definitions from twomartens.allrisscraper import meeting from twomartens.allrisscraper import organization from twomartens.allrisscraper import person def main(args: argparse.Namespace): config_file = f"{os.getcwd()}/tm-allris-scraper-config.ini" if not config_module.initialize_config(config_file): return config = configparser.ConfigParser() config.read(config_file) district = config["Default"]["district"] json_path = config["Default"]["jsonLocation"] firefox_binary = config["Default"]["firefoxBinary"] base_url = definitions.PUBLIC_BASE_LINKS[district] options = Options() options.headless = False binary = FirefoxBinary(firefox_binary) driver = webdriver.Firefox(firefox_binary=binary, options=options) driver.implicitly_wait(2) os.makedirs(json_path, exist_ok=True) if args.include_meetings: meetings = meeting.get_meetings(driver, base_url) agenda.process_agendas(driver, meetings) motions = agenda.get_motions(driver, meetings) with open(json_path + "meetings.json", "w") as file: json.dump(meetings, file, cls=custom_json.EnhancedJSONEncoder) with open(json_path + "motions.json", "w") as file: json.dump(motions, file, cls=custom_json.EnhancedJSONEncoder) if args.include_organizations: organizations = organization.get_organizations(driver, base_url) persons = person.get_persons(driver, organizations) with open(json_path + "organizations.json", "w") as file: json.dump(organizations, file, cls=custom_json.EnhancedJSONEncoder) with open(json_path + "persons.json", "w") as file: json.dump(persons, file, cls=custom_json.EnhancedJSONEncoder) driver.close()
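
# --- Usage sketch (editor's addition) ------------------------------------------
# main() expects an argparse.Namespace carrying the two boolean attributes read
# above, plus a tm-allris-scraper-config.ini in the working directory. A hedged
# sketch of the expected config file (the district value is a placeholder and
# must be a key of definitions.PUBLIC_BASE_LINKS):
#
#     [Default]
#     district = <district-name>
#     jsonLocation = /tmp/allris/
#     firefoxBinary = /usr/bin/firefox
#
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Scrape public ALLRIS data")
    parser.add_argument("--include-meetings", dest="include_meetings",
                        action="store_true", help="scrape meetings, agendas and motions")
    parser.add_argument("--include-organizations", dest="include_organizations",
                        action="store_true", help="scrape organizations and persons")
    main(parser.parse_args())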
PypiClean
/gtftools-0.9.0.tar.gz/gtftools-0.9.0/README.md
GTFtools
====

## Description
GTFtools provides a set of functions to compute or extract various features of gene models, as described in the table below. Note that GTFtools can be applied not only to human but also to non-human gene models such as the lab mouse.

| **Options** | **Functions and example use** | **Notes** |
|-------------|--------------------------------------------------------|-----------|
| -h | Help information.<BR/>Example use:<BR/>gtftools -h | |
| -m | For each gene, calculate merged exons by merging exons of all splice isoforms from the same gene. The output is merged exons in bed format.<BR/>Example use:<BR/>gtftools -m merged_exons.bed demo.gtf |Used to calculate the nonoverlapping exonic length of genes with multiple splice isoforms.|
| -d | Calculate independent introns, which are defined as introns (or parts of introns) that do not overlap with any exons of any genes in the genome. They are calculated by subtracting merged exons from genes. The output is in bed format.<BR/>Example use:<BR/>gtftools -d independent_introns.bed demo.gtf |Used in intron retention detection.|
| -l | Calculate gene lengths. Since a gene may have multiple isoforms, there are multiple ways to calculate gene length based on the literature. Three simple ways are taking the mean, median or maximum of the lengths of a gene's isoforms as the length of that gene. A fourth way is to calculate the length of the merged exons of all isoforms (i.e. the non-overlapping exonic length). So, in total, four different types of gene length (the mean, median and max of the isoform lengths of a gene, and the length of the merged exons of a gene's isoforms) are provided.<BR/>Example use:<BR/>gtftools -l gene_length.txt demo.gtf |Needed for e.g. calculating FPKM in RNA-seq data analysis, where gene length is required.|
| -r | Calculate transcript isoform lengths.<BR/>Example use:<BR/>gtftools -r isoform_length.txt demo.gtf||
| -g | Output gene coordinates and ID mappings in bed format.<BR/>Example use:<BR/>gtftools -g genes.bed demo.gtf||
| -p | An input file containing a list of SNPs with at least three columns: the first being the chromosome, the second the coordinate and the third the SNP name, such as the rs ID. With this option, GTFtools will search for and output cis-SNPs for each gene annotated in the provided GTF file.<BR/>Example use:<BR/>gtftools -p snp_list.txt demo.gtf > cisSNP.bed||
| -f | -f specifies the upstream and downstream distances used to calculate the cis-range of a gene. -f is specified in the format 'distup-distdown', where distup represents the upstream distance from the TSS and distdown the downstream distance from the end of the gene. Note that this parameter takes effect only when the '-g' option is used. For example, using 'gtftools -g gene.bed -f 2000-1000 demo.gtf' means that 2000 bases upstream and 1000 bases downstream of the gene will be calculated as the cis-range and the cis-range will be output to the gene.bed file. By default, -f is set to 0-0, indicating that the cis-range will not be calculated when using -g to calculate gene information.<BR/>Example use:<BR/>gtftools -g gene.bed -f 2000-1000 demo.gtf||
| -s | Output isoform coordinates and parent-gene IDs in bed format.<BR/>Example use:<BR/>gtftools -s isoform.bed demo.gtf||
| -q | Output 5' and 3' splice site regions in bed format. The regions are based on MaxEntScan: the 5' donor site is 9 bases long with 3 bases in the exon and 6 bases in the intron, and the 3' acceptor site is 23 bases long with 20 bases in the intron and 3 bases in the exon.<BR/>Example use:<BR/>gtftools -q splice_regions.bed demo.gtf||
| -e | Output exons in bed format.<BR/>Example use:<BR/>gtftools -e exons.bed demo.gtf||
| -i | Output introns in bed format.<BR/>Example use:<BR/>gtftools -i introns.bed demo.gtf||
| -b | Output intergenic regions in bed format.<BR/>Example use:<BR/>gtftools -b intergenic_regions.bed demo.gtf||
| -k | Calculate introns (or parts of introns) that overlap with exons of other isoforms. The output is in bed format.<BR/>Example use:<BR/>gtftools -k introns_ovlp_exons.bed demo.gtf||
| -u | Output UTRs in bed format.<BR/>Example use:<BR/>gtftools -u utr.bed demo.gtf||
| -t | Output transcription start site (TSS)-flanking regions in bed format. A region is calculated as (TSS-wup, TSS+wdown), where wup is a user-specified distance, say 1000bp, upstream of the TSS, and wdown is the distance downstream of the TSS. wup and wdown are defined by the w parameter specified with '-w'.<BR/>Example use:<BR/>gtftools -t tss_regions.bed -w 1000-300 demo.gtf| Used for scanning TF-binding sites. |
| -w | w specifies the upstream and downstream distances from the TSS as described in '-t'. w is specified in the format wup-wdown, where wup and wdown represent the upstream and downstream distances from the TSS. Default w = 1000-300 (that is, 1000 bases upstream of the TSS and 300 bases downstream of the TSS). This range is based on the promoter regions used in the dbSNP database (ref: Genome-wide promoter extraction and analysis in human, mouse, and rat, Genome Biology, 2005).<BR/>Example use:<BR/>gtftools -t tss_regions.bed -w 1000-300 demo.gtf||
| -c | Specify the chromosomes to analyze. Dash (-) and comma (,) are allowed. For example, ‘-c 1-5,X,Y’ indicates 7 chromosomes: 1 to 5 together with X and Y. Default is 1-22, X and Y.<BR/>Example use (output genes which are on chromosomes 1, 2, X and Y):<BR/>gtftools -g gene.bed -c 1-2,X,Y demo.gtf<BR/>Example use (output splice site regions which are on chromosomes 1 and Y):<BR/>gtftools -q splice_regions.bed -c 1,Y demo.gtf||
| -v | Show version.<BR/>Example use:<BR/>gtftools -v||

## Help
In general, you can run 'gtftools -h' to view the help documentation. Examples of how to use GTFtools are shown in the table above.

## Contact
If you have any questions, please do not hesitate to contact me at: Hongdong Li, [email protected]
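
## Worked example
The gene lengths produced by the -l option are commonly used to normalize read counts to FPKM (see the Notes of -l above). A minimal Python sketch follows; the column layout of gene_length.txt is an assumption here, so inspect your own file's header and adjust the column picks accordingly, and note that counts.txt is a hypothetical two-column gene/count table.

```python
import pandas as pd

# gene lengths from: gtftools -l gene_length.txt demo.gtf
lengths = pd.read_csv("gene_length.txt", sep="\t")
# assumed layout: first column = gene ID, last column = merged-exon length
merged = lengths.set_index(lengths.columns[0]).iloc[:, -1]

counts = pd.read_csv("counts.txt", sep="\t", index_col=0).iloc[:, 0]  # hypothetical counts table

# FPKM = 10^9 * C_g / (N * L_g), with C_g the gene's read count,
# N the total number of mapped reads and L_g the gene length in bases
fpkm = counts * 1e9 / (counts.sum() * merged.reindex(counts.index))
fpkm.to_csv("fpkm.txt", sep="\t", header=["FPKM"])
```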
PypiClean
/maya_mock_completion-0.0.1.tar.gz/maya_mock_completion-0.0.1/maya/app/renderSetup/model/modelCmds.py
if False:
    from typing import Dict, List, Tuple, Union, Optional


class _MPxCommand(object):
    """
    Base class for custom commands.
    """

    def __init__(*args, **kwargs):
        """
        x.__init__(...) initializes x; see help(type(x)) for signature
        """
        pass

    def doIt(*args, **kwargs):
        """
        Called by Maya to execute the command.
        """
        pass

    def hasSyntax(*args, **kwargs):
        """
        Called by Maya to determine if the command provides an MSyntax object describing its syntax.
        """
        pass

    def isUndoable(*args, **kwargs):
        """
        Called by Maya to determine if the command supports undo.
        """
        pass

    def redoIt(*args, **kwargs):
        """
        Called by Maya to redo a previously undone command.
        """
        pass

    def syntax(*args, **kwargs):
        """
        Returns the command's MSyntax object, if it has one.
        """
        pass

    def undoIt(*args, **kwargs):
        """
        Called by Maya to undo a previously executed command.
        """
        pass

    @staticmethod
    def appendToResult(*args, **kwargs):
        """
        Append a value to the result to be returned by the command.
        """
        pass

    @staticmethod
    def clearResult(*args, **kwargs):
        """
        Clears the command's result.
        """
        pass

    @staticmethod
    def currentResult(*args, **kwargs):
        """
        Returns the command's current result.
        """
        pass

    @staticmethod
    def currentResultType(*args, **kwargs):
        """
        Returns the type of the current result.
        """
        pass

    @staticmethod
    def displayError(*args, **kwargs):
        """
        Display an error message.
        """
        pass

    @staticmethod
    def displayInfo(*args, **kwargs):
        """
        Display an informational message.
        """
        pass

    @staticmethod
    def displayWarning(*args, **kwargs):
        """
        Display a warning message.
        """
        pass

    @staticmethod
    def isCurrentResultArray(*args, **kwargs):
        """
        Returns true if the command's current result is an array of values.
        """
        pass

    @staticmethod
    def setResult(*args, **kwargs):
        """
        Set the value of the result to be returned by the command.
        """
        pass

    __new__ = None

    commandString = None

    historyOn = None

    kDouble = 1

    kLong = 0

    kNoArg = 3

    kString = 2


class RenderLayerMembersCmd(_MPxCommand):
    """
    Command that filters a list of passed-in DAG nodes and returns the
    filtered results based on the flags you set. This command is query only.

    The flags for this command are:

    -notIn: keep objects that do not belong to the provided render layers
    (default false: keep only the objects in the list that are render layer
    members. When notIn is set to false, passing in a list of DAG nodes is
    optional).

    -renderLayers <renderLayers>: the render layers to check for membership
    (-notIn false) or not check for membership (-notIn true).

    Example:
    // Isolate objects in the provided list ("pSphere1", "pCube1") that are
    // not in any of the provided render layers
    renderLayerMembers "pSphere1" "pCube1" -notIn true -renderLayers "layer1" "layer2"
    """

    def doIt(self, args):
        pass

    def isUndoable(self):
        pass

    @staticmethod
    def creator():
        pass

    __dict__ = None

    __weakref__ = None

    kCmdName = 'renderLayerMembers'

    kNotInFlag = '-ni'

    kNotInFlagLong = '-notIn'

    kNotInFlags = set()

    kRenderLayersFlag = '-rl'

    kRenderLayersFlagLong = '-renderLayers'

    kRenderLayersFlags = set()


class RenderSetupLegacyLayerCmd(_MPxCommand):
    """
    Command used to query the renderLayer associated with a specific renderSetupLayer.
    Usage: "renderSetupLegacyLayer renderSetupLayerName".
    """

    def doIt(self, args):
        pass

    def isUndoable(self):
        pass

    @staticmethod
    def createSyntax():
        pass

    @staticmethod
    def creator():
        pass

    __dict__ = None

    __weakref__ = None

    kCmdName = 'renderSetupLegacyLayer'


class RenderSetupFindCmd(_MPxCommand):
    """
    Command that finds collections, the members of which match any of the
    provided DAG objects.

    This command takes flags:
    -inRenderLayers (mandatory flag): only search for collections under the
    specified render layers.
    -includeLayers: also return the names of layers the objects are members of
    (because they are included by a collection or are implicit members, e.g.
    light shapes).

    Examples:
    // Finds, from the "layer1" and "layer2" render layers, the collections
    // that "pSphere1" and "pCube1" belong to
    renderSetupFind "pSphere1" "pCube1" -inRenderLayers "layer1" "layer2"
    """

    def doIt(self, args):
        pass

    def isUndoable(self):
        pass

    @staticmethod
    def creator():
        pass

    __dict__ = None

    __weakref__ = None

    kCmdName = 'renderSetupFind'

    kInRenderLayersFlag = '-irl'

    kInRenderLayersFlagLong = '-inRenderLayers'

    kInRenderLayersFlags = set()

    kIncludeLayersFlag = '-il'

    kIncludeLayersFlagLong = '-includeLayers'

    kIncludeLayersFlags = set()


class RenderSetupCmd(_MPxCommand):
    """
    Command that will be used for querying and editing the render setup state.
    At present a user can only query the list of render layers with
    "renderSetup -query -renderLayers".
    """

    def doIt(self, args):
        pass

    def isUndoable(self):
        pass

    @staticmethod
    def createSyntax():
        pass

    @staticmethod
    def creator():
        pass

    __dict__ = None

    __weakref__ = None

    kCmdName = 'renderSetup'

    kRenderLayersFlag = '-rl'

    kRenderLayersFlagLong = '-renderLayers'


def getCollections(renderLayers):
    """
    Returns the collection models under the specified render layers.
    """
    pass


def notInRenderLayers(*args, **kwargs):
    pass


def getMembersAsLongNames(renderLayers):
    pass


def inRenderLayers(*args, **kwargs):
    pass


def isCollectionMember(objectNodeName, collections):
    pass


def renderSetupFind(objectNodeNames, renderLayerNames, includeLayers):
    pass


def getLongName(name):
    pass


def longNamesToNamesDict(names):
    pass


def renderLayerMembers(objectNodeNames, renderLayerNames, notInRenderLayers=False):
    pass


kNotInNeedAListToFilter = []

kInvalidNodeName = []
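
# --- Usage sketch (editor's addition) -------------------------------------------
# These classes back scene-level commands, so in a Maya session they are invoked
# through maya.cmds rather than instantiated directly. A hedged sketch, assuming a
# scene that already contains the named objects and render layers; the flag
# spellings follow the long names documented above, and passing multi-use flags
# as Python lists is an assumption about how maya.cmds maps them.
if __name__ == "__main__":
    from maya import cmds

    # objects from the list that are NOT members of the given render layers
    outside = cmds.renderLayerMembers("pSphere1", "pCube1",
                                      notIn=True, renderLayers=["layer1", "layer2"])

    # collections (and, with includeLayers, layers) that contain the given objects
    owners = cmds.renderSetupFind("pSphere1", "pCube1",
                                  inRenderLayers=["layer1"], includeLayers=True)

    # all render layers known to render setup, and a layer's legacy counterpart
    layers = cmds.renderSetup(query=True, renderLayers=True)
    legacy = cmds.renderSetupLegacyLayer("renderSetupLayer1")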
PypiClean
/paper-uploads-0.15.1.tar.gz/paper-uploads-0.15.1/paper_uploads/signals/handlers.py
from django.apps import apps as global_apps
from django.db import DEFAULT_DB_ALIAS, migrations, transaction
from django.db.migrations.operations.base import Operation
from django.db.models.signals import post_delete
from django.dispatch import receiver
from django.utils.timezone import now

from .. import exceptions
from ..models import CollectionItemBase
from ..models.fields.base import ResourceFieldBase
from .classes import ExtendableMigration


@receiver(post_delete, sender=CollectionItemBase)
def on_delete_collection_item(sender, instance, **kwargs):
    """
    Update the collection's `modified_at` field when an item is deleted,
    so that the `get_last_modified()` method returns correct data.
    """
    if instance.collection_id and instance.collection_content_type_id:
        try:
            collection_cls = instance.get_collection_class()
        except exceptions.CollectionModelNotFoundError:
            return

        collection_cls.objects.filter(
            pk=instance.collection_id
        ).update(
            modified_at=now()
        )


def inject_operations(
    plan=None, apps=global_apps, using=DEFAULT_DB_ALIAS, **kwargs
):
    if plan is None:
        return

    for migration, backward in plan:
        if migration.name == "0001_initial":
            continue

        PaperMigration(
            migration=migration,
            backward=backward,
            apps=apps,
            using=using
        ).iterate()


class PaperMigration(ExtendableMigration):
    def process(self, operation: Operation, **kwargs):
        if isinstance(operation, migrations.RenameField):
            self.insert_after(
                RenameOwnerField(
                    self.migration.app_label,
                    operation.model_name,
                    operation.old_name,
                    operation.new_name,
                )
            )
        elif isinstance(operation, migrations.RenameModel):
            self.insert_after(
                RenameOwnerModel(
                    self.migration.app_label,
                    operation.old_name_lower,
                    operation.new_name_lower,
                )
            )


class RenameOwnerField(migrations.RunPython):
    """
    When a field that references a resource is renamed, the `owner_fieldname`
    value must be fixed on all related resource instances.
    """

    def __init__(self, app_label, model_name, old_name, new_name):
        self.app_label = app_label
        self.model_name = model_name
        self.old_name = old_name
        self.new_name = new_name
        super().__init__(self.forward, self.backward)

    def forward(self, apps, schema_editor):
        self._rename(apps, schema_editor, self.old_name, self.new_name)

    def backward(self, apps, schema_editor):
        self._rename(apps, schema_editor, self.new_name, self.old_name)

    def _rename(self, apps, schema_editor, old_name, new_name):
        using = schema_editor.connection.alias
        model = apps.get_model(self.app_label, self.model_name)

        # This migration runs after the rename migration,
        # so the target field already has its new name.
        field = model._meta.get_field(new_name)
        if isinstance(field, ResourceFieldBase):
            with transaction.atomic(using=using):
                field.related_model._base_manager.db_manager(using).filter(
                    owner_app_label=self.app_label,
                    owner_model_name=self.model_name,
                    owner_fieldname=old_name,
                ).update(
                    owner_fieldname=new_name
                )


class RenameOwnerModel(migrations.RunPython):
    """
    When a model that contains fields referencing resources is renamed,
    the `owner_model_name` value must be fixed on all related resource
    instances.
""" def __init__(self, app_label, old_name, new_name): self.app_label = app_label self.old_name = old_name self.new_name = new_name super().__init__(self.forward, self.backward) def forward(self, apps, schema_editor): self._rename(apps, schema_editor, self.old_name, self.new_name) def backward(self, apps, schema_editor): self._rename(apps, schema_editor, self.new_name, self.old_name) def _rename(self, apps, schema_editor, old_name, new_name): using = schema_editor.connection.alias # Текущая миграция выполняется после миграции переименования, # поэтому модель уже имеет новое имя. model = apps.get_model(self.app_label, new_name) with transaction.atomic(using=using): for field in model._meta.fields: if isinstance(field, ResourceFieldBase): field.related_model._base_manager.db_manager(using).filter( owner_app_label=self.app_label, owner_model_name=old_name, owner_fieldname=field.name, ).update( owner_model_name=new_name )
PypiClean
/antenna-intensity-modeler-0.1.1.tar.gz/antenna-intensity-modeler-0.1.1/README.rst
========================= antenna-intensity-modeler ========================= .. image:: https://img.shields.io/pypi/v/antenna_intensity_modeler.svg :target: https://pypi.python.org/pypi/antenna_intensity_modeler .. image:: https://img.shields.io/travis/wboxx1/antenna-intensity-modeler.svg :target: https://travis-ci.com/wboxx1/antenna-intensity-modeler.svg?branch=master :alt: Build status on travis-ci .. image:: https://ci.appveyor.com/api/projects/status/wboxx1/branch/master?svg=true :target: https://ci.appveyor.com/api/projects/status/a9phai3m3pxjwtt5?svg=true :alt: Build status on Appveyor .. image:: https://pyup.io/repos/github/wboxx1/antenna-intensity-modeler/shield.svg :target: https://pyup.io/repos/github/wboxx1/antenna-intensity-modeler/ :alt: Updates Create near-field plots of parabolic dish antennas. * Free software: GNU General Public License v3 * Documentation: https://wboxx1.github.io/antenna-intensity-modeler Installation: ------------- .. code-block:: console $ pip install antenna-intensity-modeler Features -------- * TODO Credits ------- This package was created with Cookiecutter_ and the `wboxx1/cookiecutter-pypackage`_ project template. .. _Cookiecutter: https://github.com/audreyr/cookiecutter .. _`wboxx1/cookiecutter-pypackage`: https://github.com/wboxx1/cookiecutter-pypackage-poetry
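
Example
-------

A minimal usage sketch. The function names and arguments below are
assumptions based on the project documentation linked above, not a verified
API — check the documentation before relying on them:

.. code-block:: python

    from antenna_intensity_modeler import parabolic

    # Assumed parameters: 2.4 m dish radius, 8400 MHz, 400 W transmit power,
    # 0.62 aperture efficiency, 20 dB side-lobe ratio.
    params = parabolic.parameters(2.4, 8400.0, 400.0, 0.62, 20.0)
    table = parabolic.near_field_corrections(params, 1.0)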
PypiClean
/local_visualizer-0.2.0.tar.gz/local_visualizer-0.2.0/README.md
[![Documentation Status](https://readthedocs.org/projects/local-visualizer/badge/?version=latest)](http://local-visualizer.readthedocs.io/en/latest/?badge=latest)
[![Build Status](https://travis-ci.org/psvishnu91/local_visualizer.svg?branch=master)](https://travis-ci.org/psvishnu91/local_visualizer)
[![PyPI version](https://badge.fury.io/py/local-visualizer.svg)](https://badge.fury.io/py/local-visualizer)

## LocalVisualizer

Simple Python API to visualize the plots in a script.

* Free software: MIT license
* Documentation: https://local-visualizer.readthedocs.io
* PyPI: https://pypi.python.org/pypi/local-visualizer/

### Installation

``` bash
pip install local-visualizer
```

### Motivation

* When moving from an IPython notebook to a script, we lose the diagnostics of visualizing pandas as tables and matplotlib plots.
* `LocalViz` starts a local HTTP server and creates an HTML file to which pandas tables and matplotlib plots can be sent over.
* The HTML file is dynamically updated for long running scripts.

### Usage

``` python
import logging, sys, numpy as np, pandas as pd, matplotlib.pyplot as plt
import local_visualizer

plt.style.use('fivethirtyeight')
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

# Create the local visualizer instance
lviz = local_visualizer.LocalViz(html_file='lviz_test.html', port=9112)
# INFO:root:Starting background server at: http://localhost:9112/.
# INFO:local_visualizer:Click: http://carpediem:9112/lviz_test.html or http://localhost:9112/lviz_test.html

# Create plots which will be streamed to the html file.
lviz.h3('Matplotlib :o')
lviz.p(
    'Wrap your plots in the figure context manager which takes '
    'in the kwargs of plt.figure and returns a plt.figure object.',
)
with lviz.figure(figsize=(10, 8)) as fig:
    x = np.linspace(-10, 10, 1000)
    plt.plot(x, np.sin(x))
    plt.title('Sine test')
lviz.hr()

# Visualize pandas dataframes as tables.
lviz.h3('Pandas dataframes')
df = pd.DataFrame({'A': np.linspace(1, 10, 10)})
df = pd.concat(
    [df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],
    axis=1,
)
lviz.write(df)
lviz.close()
```

### Output

This starts an HTTPServer and creates an HTML file which is dynamically updated each time `lviz` is called.

![Output image](https://i.imgur.com/jjwvAX2.png "The output of the above commands")

### Support and Requirements

Python 2.7

### API methods

1. `p`: paragraph
2. `br`: line break
3. `hr`: Horizontal rule with line breaks
4. `h1`, `h2`, ..., `h6`: Headers
5. `write`: Directly write text to the html document (or pass in a `pandas.DataFrame`)
6. `figure`: Context manager which accepts the kwargs of `plt.figure` and returns a `plt.figure` object
7. `start`: Applicable if `LocalViz` was initialized with `lazy=True`. Starts the server and creates the html file (see the sketch at the end of this README)
8. `close`: Completes the html file
9. `del_html`: Deletes the html file

### Credits

This package was created with [Cookiecutter](https://github.com/audreyr/cookiecutter) and the [audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage) project template.
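
### Lazy start

As referenced in item 7 of the API methods above, `LocalViz` can defer all
work until `start()` is called. A minimal sketch of that lazy workflow (the
file name and port are illustrative):

``` python
import local_visualizer

# Nothing is served or written yet because of lazy=True.
lviz = local_visualizer.LocalViz(html_file='lazy.html', port=9113, lazy=True)

# ... run expensive setup code ...

lviz.start()  # now the server starts and the html file is created
lviz.h1('Created on demand')
lviz.close()
```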
PypiClean
/PyBrain-0.3.tar.gz/PyBrain-0.3/pybrain/structure/modules/mdlstm.py
__author__ = 'Tom Schaul, [email protected]'

from scipy import zeros, tanh

from neuronlayer import NeuronLayer
from module import Module
from pybrain.structure.parametercontainer import ParameterContainer
from pybrain.tools.functions import sigmoid, sigmoidPrime, tanhPrime
from pybrain.structure.moduleslice import ModuleSlice


class MDLSTMLayer(NeuronLayer, ParameterContainer):
    """Multi-dimensional long short-term memory cell layer.

    The cell-states are explicitly passed on through a part of
    the input/output buffers (which should be connected correctly with
    IdentityConnections).

    The input consists of 5 parts, in the following order:
    - input gate
    - forget gates (1 per dim)
    - cell input
    - output gate
    - previous states (1 per dim)

    The output consists of two parts:
    - cell output
    - current state

    Attention: this module has to be used with care: its last <size> inputs
    and outputs are reserved for transmitting internal states on flattened
    recursive multi-dim networks, and so its connections always have to be
    sliced!
    """

    peepholes = False
    dimensions = 1
    maxoffset = 0

    # Transfer functions and their derivatives
    def f(self, x):
        return sigmoid(x)

    def fprime(self, x):
        return sigmoidPrime(x)

    def g(self, x):
        return tanh(x)

    def gprime(self, x):
        return tanhPrime(x)

    def h(self, x):
        return tanh(x)

    def hprime(self, x):
        return tanhPrime(x)

    def __init__(self, dim, dimensions=1, peepholes=False, name=None):
        self.setArgs(dim=dim, peepholes=peepholes, dimensions=dimensions)

        # Internal buffers:
        self.bufferlist = [
            ('ingate', dim),
            ('outgate', dim),
            ('forgetgate', dim * dimensions),
            ('ingatex', dim),
            ('outgatex', dim),
            ('forgetgatex', dim * dimensions),
            ('state', dim),
            ('ingateError', dim),
            ('outgateError', dim),
            ('forgetgateError', dim * dimensions),
            ('stateError', dim),
        ]

        Module.__init__(self, (3 + 2 * dimensions) * dim, dim * 2, name)

        if self.peepholes:
            ParameterContainer.__init__(self, dim * (2 + dimensions))
            self._setParameters(self.params)
            self._setDerivatives(self.derivs)

    def _setParameters(self, p, owner=None):
        ParameterContainer._setParameters(self, p, owner)
        size = self.dim
        self.ingatePeepWeights = self.params[:size]
        self.forgetgatePeepWeights = self.params[size:size*(1 + self.dimensions)]
        self.outgatePeepWeights = self.params[size*(1 + self.dimensions):]

    def _setDerivatives(self, d, owner=None):
        ParameterContainer._setDerivatives(self, d, owner)
        size = self.dim
        self.ingatePeepDerivs = self.derivs[:size]
        self.forgetgatePeepDerivs = \
            self.derivs[size:size * (1 + self.dimensions)]
        self.outgatePeepDerivs = \
            self.derivs[size * (1 + self.dimensions):]

    def _forwardImplementation(self, inbuf, outbuf):
        self.maxoffset = max(self.offset + 1, self.maxoffset)
        size = self.dim

        # Slicing the input buffer into its 5 parts.
self.ingatex[self.offset] = inbuf[:size] self.forgetgatex[self.offset] = inbuf[size:size*(1+self.dimensions)] cellx = inbuf[size*(1+self.dimensions):size*(2+self.dimensions)] self.outgatex[self.offset] = inbuf[size*(2+self.dimensions):size*(3+self.dimensions)] laststates = inbuf[size*(3+self.dimensions):] # Peephole treatment if self.peepholes: for i in range(self.dimensions): self.ingatex[self.offset] += self.ingatePeepWeights * laststates[size * i:size * (i + 1)] self.forgetgatex[self.offset] += self.forgetgatePeepWeights * laststates self.ingate[self.offset] = self.f(self.ingatex[self.offset]) self.forgetgate[self.offset] = self.f(self.forgetgatex[self.offset]) self.state[self.offset] = self.ingate[self.offset] * self.g(cellx) for i in range(self.dimensions): self.state[self.offset] += self.forgetgate[self.offset, size*i:size*(i+1)] * laststates[size*i:size*(i+1)] if self.peepholes: self.outgatex[self.offset] += self.outgatePeepWeights * self.state[self.offset] self.outgate[self.offset] = self.f(self.outgatex[self.offset]) outbuf[:size] = self.outgate[self.offset] * self.h(self.state[self.offset]) outbuf[size:] = self.state[self.offset] def _backwardImplementation(self, outerr2, inerr, outbuf, inbuf): size = self.dim cellx = inbuf[size*(1+self.dimensions):size*(2+self.dimensions)] laststates = inbuf[size*(3+self.dimensions):] outerr = outerr2[:size] nextstateerr = outerr2[size:] self.outgateError[self.offset] = self.fprime(self.outgatex[self.offset]) * outerr * self.h(self.state[self.offset]) self.stateError[self.offset] = outerr * self.outgate[self.offset] * self.hprime(self.state[self.offset]) self.stateError[self.offset] += nextstateerr if self.peepholes: self.stateError[self.offset] += self.outgateError[self.offset] * self.outgatePeepWeights cellError = self.ingate[self.offset] * self.gprime(cellx) * self.stateError[self.offset] for i in range(self.dimensions): self.forgetgateError[self.offset, size*i:size*(i+1)] = (self.fprime(self.forgetgatex[self.offset, size*i:size*(i+1)]) * self.stateError[self.offset] * laststates[size*i:size*(i+1)]) self.ingateError[self.offset] = self.fprime(self.ingatex[self.offset]) * self.stateError[self.offset] * self.g(cellx) # compute derivatives if self.peepholes: self.outgatePeepDerivs += self.outgateError[self.offset] * self.state[self.offset] for i in range(self.dimensions): self.ingatePeepDerivs += self.ingateError[self.offset] * laststates[size*i:size*(i+1)] self.forgetgatePeepDerivs[size*i:size*(i+1)] += (self.forgetgateError[self.offset, size*i:size*(i+1)] * laststates[size*i:size*(i+1)]) instateErrors = zeros((size * self.dimensions)) for i in range(self.dimensions): instateErrors[size * i:size * (i + 1)] = (self.stateError[self.offset] * self.forgetgate[self.offset, size*i:size*(i+1)]) if self.peepholes: instateErrors[size * i:size * (i + 1)] += self.ingateError[self.offset] * self.ingatePeepWeights instateErrors[size * i:size * (i + 1)] += self.forgetgateError[self.offset, size*i:size*(i+1)] * \ self.forgetgatePeepWeights[size*i:size*(i+1)] inerr[:size] = self.ingateError[self.offset] inerr[size:size*(1+self.dimensions)] = self.forgetgateError[self.offset] inerr[size*(1+self.dimensions):size*(2+self.dimensions)] = cellError inerr[size*(2+self.dimensions):size*(3+self.dimensions)] = self.outgateError[self.offset] inerr[size * (3 + self.dimensions):] = instateErrors def meatSlice(self): """Return a moduleslice that wraps the meat part of the layer.""" return ModuleSlice(self, inSliceTo=self.dim * (3 + self.dimensions), outSliceTo=self.dim) def 
stateSlice(self): """Return a moduleslice that wraps the state transfer part of the layer. """ return ModuleSlice(self, inSliceFrom=self.dim * (3 + self.dimensions), outSliceFrom=self.dim) def whichNeuron(self, inputIndex=None, outputIndex=None): if inputIndex != None: return inputIndex % self.dim if outputIndex != None: return outputIndex % self.dim
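
# --- Rough, untested sketch (an assumption, not PyBrain's own builder code):
# --- the meatSlice()/stateSlice() helpers above exist so the data path and
# --- the state-transfer path can be wired separately. PyBrain's swiping
# --- networks connect ModuleSlice endpoints in a similar way.
def _exampleMDLSTMNetwork(dim=4):
    from pybrain.structure.networks import RecurrentNetwork
    from pybrain.structure.modules import LinearLayer
    from pybrain.structure.connections import FullConnection, IdentityConnection

    lstm = MDLSTMLayer(dim, dimensions=1)
    meat, state = lstm.meatSlice(), lstm.stateSlice()

    net = RecurrentNetwork()
    inp, out = LinearLayer(2), LinearLayer(1)
    net.addInputModule(inp)
    net.addModule(lstm)
    net.addOutputModule(out)
    net.addConnection(FullConnection(inp, meat))    # data into gates/cell
    net.addConnection(FullConnection(meat, out))    # cell output onwards
    # Cell state at t is fed back into the reserved state inputs at t+1:
    net.addRecurrentConnection(IdentityConnection(state, state))
    net.sortModules()
    return net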
PypiClean
/maghilchiplusplus-0.0.2.tar.gz/maghilchiplusplus-0.0.2/interpreter/tokens_/compiler.py
import abc import hashlib import json import os import pickle from typing import Any, Dict, List, Optional, Tuple, Type from interpreter.internal.token_ import Token # type: ignore from interpreter.tokens_ import tokens TokenData = Tuple[int, str, Any, int, Optional[int], int] class CompilerError(Exception): """CompilerError exception, gets thrown when the compiled file for a corresponding source code file cannot be read or is not current. """ class BytecodeCompiler(abc.ABC): """Bytecode compiler class, stores token data to bypass lexing and parsing""" exception = CompilerError def write_compiled_file(self, tokens_: List[Token], filename: str) -> None: """Traverses all token trees to collect their data, then dumps token data to pickle file, along with a current filehash of the source code file specified. """ saved_tokens: List[TokenData] = [] for token in tokens_: _save_tokens(token, saved_tokens) content = {"tokens": saved_tokens, "hash": _hash_file(filename)} compiled_filename = self.get_compiled_filename(filename) self.write_content_to_file(content, compiled_filename) def read_compiled_file(self, filename: str) -> List[Token]: """Loads tokens from the corresponding compiled file to the filename specified. If a compiled file cannot be found or the file hash of the source code file and the hash in the compiled file do not match, a CompilerError exception is thrown. """ compiled_filename = self.get_compiled_filename(filename) if not os.path.isfile(compiled_filename): raise CompilerError("Compiled file does not exist!") contents = self.read_content_from_file(compiled_filename) if _hash_file(filename) != contents["hash"]: raise CompilerError("Hash in compiled file did not match file hash!") tokens_ = _construct_token_trees(token_data=contents["tokens"]) return tokens_ @abc.abstractmethod def get_compiled_filename(self, filename: str) -> str: """Returns the name of the compiled file corresponding to the specified filename""" @abc.abstractmethod def write_content_to_file(self, content: Any, compiled_filename: str) -> None: """Writes the content to the compiled file""" @abc.abstractmethod def read_content_from_file(self, compiled_filename: str) -> Dict[str, Any]: """Reads the content from the compiled file""" class PickleCompiler(BytecodeCompiler): """Bytecode compiler using pickle to write/read compiled files""" def write_content_to_file(self, content: Any, compiled_filename: str) -> None: """Writes the content to the compiled file""" with open(compiled_filename, "wb") as file: pickle.dump(content, file) def read_content_from_file(self, compiled_filename: str) -> Dict[str, Any]: """Reads the content from the compiled file""" with open(compiled_filename, "rb") as file: contents = pickle.load(file) return contents def get_compiled_filename(self, filename: str) -> str: """Returns the name of the compiled file corresponding to the specified filename""" return filename.replace(".mgpp", ".mgppc") class JsonCompiler(BytecodeCompiler): """Bytecode compiler using json to write/read compiled files""" def write_content_to_file(self, content: Any, compiled_filename: str) -> None: """Writes the content to the compiled file""" with open(compiled_filename, "w", encoding="utf-8") as file: json.dump(content, file) def read_content_from_file(self, compiled_filename: str) -> Dict[str, Any]: """Reads the content from the compiled file""" with open(compiled_filename, "r", encoding="utf-8") as file: contents = json.load(file) return contents def get_compiled_filename(self, filename: str) -> str: """Returns the name 
of the compiled file corresponding to the specified filename""" return filename.replace(".mgpp", ".json") def _hash_file(filename: str) -> str: """Returns the hash of the contents of the specified file.""" hash_ = hashlib.sha256() bytes_ = bytearray(128 * 1024) memory_view = memoryview(bytes_) with open(filename, "rb", buffering=0) as file: for n in iter(lambda: file.readinto(memory_view), 0): # type: ignore hash_.update(memory_view[:n]) return hash_.hexdigest() def _save_tokens( token: Token, list_: List[TokenData], id_: int = 0, parent_id: Optional[int] = None ) -> None: """Recursively traverses token and its subtokens and appends their token data to the list.""" list_.append( ( id_, token.__class__.__name__, token.value, token.run_order, parent_id, token.line, ) ) for token in token.tokens: _save_tokens(token, list_, id_ + 1, parent_id=id_) def _construct_token_trees(token_data: List[TokenData]) -> List[Token]: """Constructs a list of nested token trees from the passed token data.""" parents: Dict[int, Token] = {} tokens_: List[Token] = [] expected_tokens = [] # type: ignore for id_, class_name, value, run_order, parent_id, line in token_data: class_: Type[Token] = tokens.__dict__[class_name] token_ = class_(value, line) token_.run_order = run_order token_.expected_tokens = expected_tokens parents[id_] = token_ if parent_id is None: tokens_.append(token_) else: parents[parent_id].tokens.append(token_) return tokens_
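
# --- Hedged usage sketch: try the cached bytecode first, fall back to a
# --- fresh compile when the cache is missing or stale. `lex_and_parse` is a
# --- hypothetical stand-in for the interpreter's real lexer/parser entry
# --- point.
def _example_round_trip(source_path: str) -> List[Token]:
    compiler = PickleCompiler()
    try:
        return compiler.read_compiled_file(source_path)
    except CompilerError:
        tokens_ = lex_and_parse(source_path)  # hypothetical front-end call
        compiler.write_compiled_file(tokens_, source_path)
        return tokens_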
PypiClean
/iq-0.0.2.tar.gz/iq-0.0.2/README.rst
Pay === .. image:: https://img.shields.io/pypi/v/iq.svg :target: https://pypi.python.org/pypi/iq/ :alt: Latest Version .. image:: https://img.shields.io/pypi/wheel/iq.svg :target: https://pypi.python.org/pypi/iq/ .. image:: https://img.shields.io/pypi/pyversions/iq.svg :target: https://pypi.python.org/pypi/iq/ .. image:: https://img.shields.io/pypi/l/iq.svg :target: https://pypi.python.org/pypi/iq/ IQ SDK. Installing ---------- Install and update using `pip`_: .. code-block:: text pip install -U iq .. _pip: https://pip.pypa.io/en/stable/quickstart/
PypiClean
/tmx-nano-2040-wifi-aio-1.0.tar.gz/tmx-nano-2040-wifi-aio-1.0/tmx_nano2040_wifi_aio/private_constants.py
class PrivateConstants: """ This class contains a set of constants for telemetrix internal use . """ # commands # send a loop back request - for debugging communications LOOP_COMMAND = 0 SET_PIN_MODE = 1 # set a pin to INPUT/OUTPUT/PWM/etc DIGITAL_WRITE = 2 # set a single digital pin value instead of entire port ANALOG_WRITE = 3 MODIFY_REPORTING = 4 GET_FIRMWARE_VERSION = 5 ARE_U_THERE = 6 # Arduino ID query for auto-detect of telemetrix connected boards SERVO_ATTACH = 7 SERVO_WRITE = 8 SERVO_DETACH = 9 I2C_BEGIN = 10 I2C_READ = 11 I2C_WRITE = 12 SONAR_NEW = 13 RGB_WRITE = 14 STOP_ALL_REPORTS = 15 SET_ANALOG_SCANNING_INTERVAL = 16 ENABLE_ALL_REPORTS = 17 RESET = 18 IMU_ENABLE = 19 MICROPHONE_ENABLE = 20 INITIALIZE_NEO_PIXELS = 21 SHOW_NEO_PIXELS = 22 SET_NEO_PIXEL = 23 CLEAR_ALL_NEO_PIXELS = 24 FILL_ALL_NEO_PIXELS = 25 SPI_INIT = 26 SPI_WRITE_BLOCKING = 27 SPI_READ_BLOCKING = 28 SPI_SET_FORMAT = 29 SPI_CS_CONTROL = 30 DHT_NEW = 31 # reports # debug data from Arduino DIGITAL_REPORT = DIGITAL_WRITE ANALOG_REPORT = ANALOG_WRITE FIRMWARE_REPORT = GET_FIRMWARE_VERSION I_AM_HERE_REPORT = ARE_U_THERE SERVO_UNAVAILABLE = SERVO_ATTACH I2C_TOO_FEW_BYTES_RCVD = 8 I2C_TOO_MANY_BYTES_RCVD = 9 I2C_READ_REPORT = 10 SONAR_DISTANCE = 11 IMU_REPORT = 12 MICROPHONE_REPORT = 13 DHT_REPORT = 14 SPI_REPORT = 15 DEBUG_PRINT = 99 TELEMETRIX_VERSION = "1.0" # reporting control REPORTING_DISABLE_ALL = 0 REPORTING_ANALOG_ENABLE = 1 REPORTING_DIGITAL_ENABLE = 2 REPORTING_ANALOG_DISABLE = 3 REPORTING_DIGITAL_DISABLE = 4 # Pin mode definitions AT_INPUT = 0 AT_OUTPUT = 1 AT_INPUT_PULLUP = 2 AT_ANALOG = 3 AT_SERVO = 4 AT_SONAR = 5 AT_RGB = 6 AT_MICROPHONE_MONO = 7 AT_MICROPHONE_STEREO = 8 AT_IMU = 9 AT_NEO_PIXEL = 10 AT_MODE_NOT_SET = 255 # microphone channels MICROPHONE_MONO = 1 MICROPHONE_STEREO = 2 # maximum number of digital pins supported NUMBER_OF_DIGITAL_PINS = 20 # maximum number of analog pins supported NUMBER_OF_ANALOG_PINS = 8 # maximum number of sonars allowed MAX_SONARS = 6 # RGB Pin Numbers LED_R = 27 LED_G = 25 LED_B = 26 # maximum number of DHT devices allowed MAX_DHTS = 2 # DHT Report sub-types DHT_DATA = 0 DHT_ERROR = 1
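
# --- Hedged sketch: how these constants typically end up on the wire.
# --- Telemetrix-style clients length-prefix each command list; the actual
# --- transport (serial/WiFi) write is illustrative and omitted here.
def _example_set_pin_mode_frame(pin=13):
    command = [PrivateConstants.SET_PIN_MODE, pin, PrivateConstants.AT_OUTPUT]
    # Length-prefixed frame; for pin 13 this yields b'\x03\x01\x0d\x01'.
    return bytes([len(command)] + command)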
PypiClean
/azure-mgmt-resource-23.1.0b1.zip/azure-mgmt-resource-23.1.0b1/azure/mgmt/resource/subscriptions/v2022_12_01/models/_models_py3.py
from typing import Any, Dict, List, Optional, TYPE_CHECKING, Union from ... import _serialization if TYPE_CHECKING: # pylint: disable=unused-import,ungrouped-imports from .. import models as _models class AvailabilityZoneMappings(_serialization.Model): """Availability zone mappings for the region. Variables are only populated by the server, and will be ignored when sending a request. :ivar logical_zone: The logical zone id for the availability zone. :vartype logical_zone: str :ivar physical_zone: The fully qualified physical zone id of availability zone to which logical zone id is mapped to. :vartype physical_zone: str """ _validation = { "logical_zone": {"readonly": True}, "physical_zone": {"readonly": True}, } _attribute_map = { "logical_zone": {"key": "logicalZone", "type": "str"}, "physical_zone": {"key": "physicalZone", "type": "str"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.logical_zone = None self.physical_zone = None class AvailabilityZonePeers(_serialization.Model): """List of availability zones shared by the subscriptions. Variables are only populated by the server, and will be ignored when sending a request. :ivar availability_zone: The availabilityZone. :vartype availability_zone: str :ivar peers: Details of shared availability zone. :vartype peers: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.Peers] """ _validation = { "availability_zone": {"readonly": True}, } _attribute_map = { "availability_zone": {"key": "availabilityZone", "type": "str"}, "peers": {"key": "peers", "type": "[Peers]"}, } def __init__(self, *, peers: Optional[List["_models.Peers"]] = None, **kwargs: Any) -> None: """ :keyword peers: Details of shared availability zone. :paramtype peers: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.Peers] """ super().__init__(**kwargs) self.availability_zone = None self.peers = peers class CheckResourceNameResult(_serialization.Model): """Resource Name valid if not a reserved word, does not contain a reserved word and does not start with a reserved word. :ivar name: Name of Resource. :vartype name: str :ivar type: Type of Resource. :vartype type: str :ivar status: Is the resource name Allowed or Reserved. Known values are: "Allowed" and "Reserved". :vartype status: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.ResourceNameStatus """ _attribute_map = { "name": {"key": "name", "type": "str"}, "type": {"key": "type", "type": "str"}, "status": {"key": "status", "type": "str"}, } def __init__( self, *, name: Optional[str] = None, type: Optional[str] = None, status: Optional[Union[str, "_models.ResourceNameStatus"]] = None, **kwargs: Any ) -> None: """ :keyword name: Name of Resource. :paramtype name: str :keyword type: Type of Resource. :paramtype type: str :keyword status: Is the resource name Allowed or Reserved. Known values are: "Allowed" and "Reserved". :paramtype status: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.ResourceNameStatus """ super().__init__(**kwargs) self.name = name self.type = type self.status = status class CheckZonePeersRequest(_serialization.Model): """Check zone peers request parameters. :ivar location: The Microsoft location. :vartype location: str :ivar subscription_ids: The peer Microsoft Azure subscription ID. 
:vartype subscription_ids: list[str] """ _attribute_map = { "location": {"key": "location", "type": "str"}, "subscription_ids": {"key": "subscriptionIds", "type": "[str]"}, } def __init__( self, *, location: Optional[str] = None, subscription_ids: Optional[List[str]] = None, **kwargs: Any ) -> None: """ :keyword location: The Microsoft location. :paramtype location: str :keyword subscription_ids: The peer Microsoft Azure subscription ID. :paramtype subscription_ids: list[str] """ super().__init__(**kwargs) self.location = location self.subscription_ids = subscription_ids class CheckZonePeersResult(_serialization.Model): """Result of the Check zone peers operation. Variables are only populated by the server, and will be ignored when sending a request. :ivar subscription_id: The subscription ID. :vartype subscription_id: str :ivar location: the location of the subscription. :vartype location: str :ivar availability_zone_peers: The Availability Zones shared by the subscriptions. :vartype availability_zone_peers: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.AvailabilityZonePeers] """ _validation = { "subscription_id": {"readonly": True}, } _attribute_map = { "subscription_id": {"key": "subscriptionId", "type": "str"}, "location": {"key": "location", "type": "str"}, "availability_zone_peers": {"key": "availabilityZonePeers", "type": "[AvailabilityZonePeers]"}, } def __init__( self, *, location: Optional[str] = None, availability_zone_peers: Optional[List["_models.AvailabilityZonePeers"]] = None, **kwargs: Any ) -> None: """ :keyword location: the location of the subscription. :paramtype location: str :keyword availability_zone_peers: The Availability Zones shared by the subscriptions. :paramtype availability_zone_peers: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.AvailabilityZonePeers] """ super().__init__(**kwargs) self.subscription_id = None self.location = location self.availability_zone_peers = availability_zone_peers class ErrorAdditionalInfo(_serialization.Model): """The resource management error additional info. Variables are only populated by the server, and will be ignored when sending a request. :ivar type: The additional info type. :vartype type: str :ivar info: The additional info. :vartype info: JSON """ _validation = { "type": {"readonly": True}, "info": {"readonly": True}, } _attribute_map = { "type": {"key": "type", "type": "str"}, "info": {"key": "info", "type": "object"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.type = None self.info = None class ErrorDetail(_serialization.Model): """The error detail. Variables are only populated by the server, and will be ignored when sending a request. :ivar code: The error code. :vartype code: str :ivar message: The error message. :vartype message: str :ivar target: The error target. :vartype target: str :ivar details: The error details. :vartype details: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.ErrorDetail] :ivar additional_info: The error additional info. 
:vartype additional_info: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.ErrorAdditionalInfo] """ _validation = { "code": {"readonly": True}, "message": {"readonly": True}, "target": {"readonly": True}, "details": {"readonly": True}, "additional_info": {"readonly": True}, } _attribute_map = { "code": {"key": "code", "type": "str"}, "message": {"key": "message", "type": "str"}, "target": {"key": "target", "type": "str"}, "details": {"key": "details", "type": "[ErrorDetail]"}, "additional_info": {"key": "additionalInfo", "type": "[ErrorAdditionalInfo]"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.code = None self.message = None self.target = None self.details = None self.additional_info = None class ErrorResponse(_serialization.Model): """Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This also follows the OData error response format.). Variables are only populated by the server, and will be ignored when sending a request. :ivar code: The error code. :vartype code: str :ivar message: The error message. :vartype message: str :ivar target: The error target. :vartype target: str :ivar details: The error details. :vartype details: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.ErrorResponse] :ivar additional_info: The error additional info. :vartype additional_info: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.ErrorAdditionalInfo] """ _validation = { "code": {"readonly": True}, "message": {"readonly": True}, "target": {"readonly": True}, "details": {"readonly": True}, "additional_info": {"readonly": True}, } _attribute_map = { "code": {"key": "code", "type": "str"}, "message": {"key": "message", "type": "str"}, "target": {"key": "target", "type": "str"}, "details": {"key": "details", "type": "[ErrorResponse]"}, "additional_info": {"key": "additionalInfo", "type": "[ErrorAdditionalInfo]"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.code = None self.message = None self.target = None self.details = None self.additional_info = None class ErrorResponseAutoGenerated(_serialization.Model): """Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This also follows the OData error response format.). :ivar error: The error object. :vartype error: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.ErrorDetail """ _attribute_map = { "error": {"key": "error", "type": "ErrorDetail"}, } def __init__(self, *, error: Optional["_models.ErrorDetail"] = None, **kwargs: Any) -> None: """ :keyword error: The error object. :paramtype error: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.ErrorDetail """ super().__init__(**kwargs) self.error = error class Location(_serialization.Model): """Location information. Variables are only populated by the server, and will be ignored when sending a request. :ivar id: The fully qualified ID of the location. For example, /subscriptions/8d65815f-a5b6-402f-9298-045155da7d74/locations/westus. :vartype id: str :ivar subscription_id: The subscription ID. :vartype subscription_id: str :ivar name: The location name. :vartype name: str :ivar type: The location type. Known values are: "Region" and "EdgeZone". :vartype type: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.LocationType :ivar display_name: The display name of the location. :vartype display_name: str :ivar regional_display_name: The display name of the location and its region. 
:vartype regional_display_name: str :ivar metadata: Metadata of the location, such as lat/long, paired region, and others. :vartype metadata: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.LocationMetadata :ivar availability_zone_mappings: The availability zone mappings for this region. :vartype availability_zone_mappings: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.AvailabilityZoneMappings] """ _validation = { "id": {"readonly": True}, "subscription_id": {"readonly": True}, "name": {"readonly": True}, "type": {"readonly": True}, "display_name": {"readonly": True}, "regional_display_name": {"readonly": True}, } _attribute_map = { "id": {"key": "id", "type": "str"}, "subscription_id": {"key": "subscriptionId", "type": "str"}, "name": {"key": "name", "type": "str"}, "type": {"key": "type", "type": "str"}, "display_name": {"key": "displayName", "type": "str"}, "regional_display_name": {"key": "regionalDisplayName", "type": "str"}, "metadata": {"key": "metadata", "type": "LocationMetadata"}, "availability_zone_mappings": {"key": "availabilityZoneMappings", "type": "[AvailabilityZoneMappings]"}, } def __init__( self, *, metadata: Optional["_models.LocationMetadata"] = None, availability_zone_mappings: Optional[List["_models.AvailabilityZoneMappings"]] = None, **kwargs: Any ) -> None: """ :keyword metadata: Metadata of the location, such as lat/long, paired region, and others. :paramtype metadata: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.LocationMetadata :keyword availability_zone_mappings: The availability zone mappings for this region. :paramtype availability_zone_mappings: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.AvailabilityZoneMappings] """ super().__init__(**kwargs) self.id = None self.subscription_id = None self.name = None self.type = None self.display_name = None self.regional_display_name = None self.metadata = metadata self.availability_zone_mappings = availability_zone_mappings class LocationListResult(_serialization.Model): """Location list operation response. :ivar value: An array of locations. :vartype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.Location] """ _attribute_map = { "value": {"key": "value", "type": "[Location]"}, } def __init__(self, *, value: Optional[List["_models.Location"]] = None, **kwargs: Any) -> None: """ :keyword value: An array of locations. :paramtype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.Location] """ super().__init__(**kwargs) self.value = value class LocationMetadata(_serialization.Model): """Location metadata information. Variables are only populated by the server, and will be ignored when sending a request. :ivar region_type: The type of the region. Known values are: "Physical" and "Logical". :vartype region_type: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.RegionType :ivar region_category: The category of the region. Known values are: "Recommended", "Extended", and "Other". :vartype region_category: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.RegionCategory :ivar geography: The geography of the location. :vartype geography: str :ivar geography_group: The geography group of the location. :vartype geography_group: str :ivar longitude: The longitude of the location. :vartype longitude: str :ivar latitude: The latitude of the location. :vartype latitude: str :ivar physical_location: The physical location of the Azure location. :vartype physical_location: str :ivar paired_region: The regions paired to this region. 
:vartype paired_region: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.PairedRegion] :ivar home_location: The home location of an edge zone. :vartype home_location: str """ _validation = { "region_type": {"readonly": True}, "region_category": {"readonly": True}, "geography": {"readonly": True}, "geography_group": {"readonly": True}, "longitude": {"readonly": True}, "latitude": {"readonly": True}, "physical_location": {"readonly": True}, "home_location": {"readonly": True}, } _attribute_map = { "region_type": {"key": "regionType", "type": "str"}, "region_category": {"key": "regionCategory", "type": "str"}, "geography": {"key": "geography", "type": "str"}, "geography_group": {"key": "geographyGroup", "type": "str"}, "longitude": {"key": "longitude", "type": "str"}, "latitude": {"key": "latitude", "type": "str"}, "physical_location": {"key": "physicalLocation", "type": "str"}, "paired_region": {"key": "pairedRegion", "type": "[PairedRegion]"}, "home_location": {"key": "homeLocation", "type": "str"}, } def __init__(self, *, paired_region: Optional[List["_models.PairedRegion"]] = None, **kwargs: Any) -> None: """ :keyword paired_region: The regions paired to this region. :paramtype paired_region: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.PairedRegion] """ super().__init__(**kwargs) self.region_type = None self.region_category = None self.geography = None self.geography_group = None self.longitude = None self.latitude = None self.physical_location = None self.paired_region = paired_region self.home_location = None class ManagedByTenant(_serialization.Model): """Information about a tenant managing the subscription. Variables are only populated by the server, and will be ignored when sending a request. :ivar tenant_id: The tenant ID of the managing tenant. This is a GUID. :vartype tenant_id: str """ _validation = { "tenant_id": {"readonly": True}, } _attribute_map = { "tenant_id": {"key": "tenantId", "type": "str"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.tenant_id = None class Operation(_serialization.Model): """Details of a REST API operation, returned from the Resource Provider Operations API. Variables are only populated by the server, and will be ignored when sending a request. :ivar name: The name of the operation, as per Resource-Based Access Control (RBAC). Examples: "Microsoft.Compute/virtualMachines/write", "Microsoft.Compute/virtualMachines/capture/action". :vartype name: str :ivar is_data_action: Whether the operation applies to data-plane. This is "true" for data-plane operations and "false" for ARM/control-plane operations. :vartype is_data_action: bool :ivar display: Localized display information for this particular operation. :vartype display: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.OperationDisplay :ivar origin: The intended executor of the operation; as in Resource Based Access Control (RBAC) and audit logs UX. Default value is "user,system". Known values are: "user", "system", and "user,system". :vartype origin: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.Origin :ivar action_type: Enum. Indicates the action type. "Internal" refers to actions that are for internal only APIs. 
"Internal" :vartype action_type: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.ActionType """ _validation = { "name": {"readonly": True}, "is_data_action": {"readonly": True}, "origin": {"readonly": True}, "action_type": {"readonly": True}, } _attribute_map = { "name": {"key": "name", "type": "str"}, "is_data_action": {"key": "isDataAction", "type": "bool"}, "display": {"key": "display", "type": "OperationDisplay"}, "origin": {"key": "origin", "type": "str"}, "action_type": {"key": "actionType", "type": "str"}, } def __init__(self, *, display: Optional["_models.OperationDisplay"] = None, **kwargs: Any) -> None: """ :keyword display: Localized display information for this particular operation. :paramtype display: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.OperationDisplay """ super().__init__(**kwargs) self.name = None self.is_data_action = None self.display = display self.origin = None self.action_type = None class OperationAutoGenerated(_serialization.Model): """Details of a REST API operation, returned from the Resource Provider Operations API. Variables are only populated by the server, and will be ignored when sending a request. :ivar name: Operation name: {provider}/{resource}/{operation}. :vartype name: str :ivar is_data_action: Whether the operation applies to data-plane. This is "true" for data-plane operations and "false" for ARM/control-plane operations. :vartype is_data_action: bool :ivar display: Localized display information for this particular operation. :vartype display: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.OperationDisplayAutoGenerated :ivar origin: The intended executor of the operation; as in Resource Based Access Control (RBAC) and audit logs UX. Default value is "user,system". Known values are: "user", "system", and "user,system". :vartype origin: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.Origin :ivar action_type: Enum. Indicates the action type. "Internal" refers to actions that are for internal only APIs. "Internal" :vartype action_type: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.ActionType """ _validation = { "is_data_action": {"readonly": True}, "origin": {"readonly": True}, "action_type": {"readonly": True}, } _attribute_map = { "name": {"key": "name", "type": "str"}, "is_data_action": {"key": "isDataAction", "type": "bool"}, "display": {"key": "display", "type": "OperationDisplayAutoGenerated"}, "origin": {"key": "origin", "type": "str"}, "action_type": {"key": "actionType", "type": "str"}, } def __init__( self, *, name: Optional[str] = None, display: Optional["_models.OperationDisplayAutoGenerated"] = None, **kwargs: Any ) -> None: """ :keyword name: Operation name: {provider}/{resource}/{operation}. :paramtype name: str :keyword display: Localized display information for this particular operation. :paramtype display: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.OperationDisplayAutoGenerated """ super().__init__(**kwargs) self.name = name self.is_data_action = None self.display = display self.origin = None self.action_type = None class OperationDisplay(_serialization.Model): """Localized display information for this particular operation. Variables are only populated by the server, and will be ignored when sending a request. :ivar provider: The localized friendly form of the resource provider name, e.g. "Microsoft Monitoring Insights" or "Microsoft Compute". :vartype provider: str :ivar resource: The localized friendly name of the resource type related to this operation. E.g. 
"Virtual Machines" or "Job Schedule Collections". :vartype resource: str :ivar operation: The concise, localized friendly name for the operation; suitable for dropdowns. E.g. "Create or Update Virtual Machine", "Restart Virtual Machine". :vartype operation: str :ivar description: The short, localized friendly description of the operation; suitable for tool tips and detailed views. :vartype description: str """ _validation = { "provider": {"readonly": True}, "resource": {"readonly": True}, "operation": {"readonly": True}, "description": {"readonly": True}, } _attribute_map = { "provider": {"key": "provider", "type": "str"}, "resource": {"key": "resource", "type": "str"}, "operation": {"key": "operation", "type": "str"}, "description": {"key": "description", "type": "str"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.provider = None self.resource = None self.operation = None self.description = None class OperationDisplayAutoGenerated(_serialization.Model): """Localized display information for this particular operation. :ivar provider: Service provider: Microsoft.Resources. :vartype provider: str :ivar resource: Resource on which the operation is performed: Profile, endpoint, etc. :vartype resource: str :ivar operation: Operation type: Read, write, delete, etc. :vartype operation: str :ivar description: Description of the operation. :vartype description: str """ _attribute_map = { "provider": {"key": "provider", "type": "str"}, "resource": {"key": "resource", "type": "str"}, "operation": {"key": "operation", "type": "str"}, "description": {"key": "description", "type": "str"}, } def __init__( self, *, provider: Optional[str] = None, resource: Optional[str] = None, operation: Optional[str] = None, description: Optional[str] = None, **kwargs: Any ) -> None: """ :keyword provider: Service provider: Microsoft.Resources. :paramtype provider: str :keyword resource: Resource on which the operation is performed: Profile, endpoint, etc. :paramtype resource: str :keyword operation: Operation type: Read, write, delete, etc. :paramtype operation: str :keyword description: Description of the operation. :paramtype description: str """ super().__init__(**kwargs) self.provider = provider self.resource = resource self.operation = operation self.description = description class OperationListResult(_serialization.Model): """A list of REST API operations supported by an Azure Resource Provider. It contains an URL link to get the next set of results. Variables are only populated by the server, and will be ignored when sending a request. :ivar value: List of operations supported by the resource provider. :vartype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.Operation] :ivar next_link: URL to get the next set of operation list results (if there are any). :vartype next_link: str """ _validation = { "value": {"readonly": True}, "next_link": {"readonly": True}, } _attribute_map = { "value": {"key": "value", "type": "[Operation]"}, "next_link": {"key": "nextLink", "type": "str"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.value = None self.next_link = None class OperationListResultAutoGenerated(_serialization.Model): """A list of REST API operations supported by an Azure Resource Provider. It contains an URL link to get the next set of results. :ivar value: List of operations supported by the resource provider. 
:vartype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.OperationAutoGenerated] :ivar next_link: URL to get the next set of operation list results (if there are any). :vartype next_link: str """ _attribute_map = { "value": {"key": "value", "type": "[OperationAutoGenerated]"}, "next_link": {"key": "nextLink", "type": "str"}, } def __init__( self, *, value: Optional[List["_models.OperationAutoGenerated"]] = None, next_link: Optional[str] = None, **kwargs: Any ) -> None: """ :keyword value: List of operations supported by the resource provider. :paramtype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.OperationAutoGenerated] :keyword next_link: URL to get the next set of operation list results (if there are any). :paramtype next_link: str """ super().__init__(**kwargs) self.value = value self.next_link = next_link class PairedRegion(_serialization.Model): """Information regarding paired region. Variables are only populated by the server, and will be ignored when sending a request. :ivar name: The name of the paired region. :vartype name: str :ivar id: The fully qualified ID of the location. For example, /subscriptions/8d65815f-a5b6-402f-9298-045155da7d74/locations/westus. :vartype id: str :ivar subscription_id: The subscription ID. :vartype subscription_id: str """ _validation = { "name": {"readonly": True}, "id": {"readonly": True}, "subscription_id": {"readonly": True}, } _attribute_map = { "name": {"key": "name", "type": "str"}, "id": {"key": "id", "type": "str"}, "subscription_id": {"key": "subscriptionId", "type": "str"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.name = None self.id = None self.subscription_id = None class Peers(_serialization.Model): """Information about shared availability zone. Variables are only populated by the server, and will be ignored when sending a request. :ivar subscription_id: The subscription ID. :vartype subscription_id: str :ivar availability_zone: The availabilityZone. :vartype availability_zone: str """ _validation = { "subscription_id": {"readonly": True}, "availability_zone": {"readonly": True}, } _attribute_map = { "subscription_id": {"key": "subscriptionId", "type": "str"}, "availability_zone": {"key": "availabilityZone", "type": "str"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.subscription_id = None self.availability_zone = None class ResourceName(_serialization.Model): """Name and Type of the Resource. All required parameters must be populated in order to send to Azure. :ivar name: Name of the resource. Required. :vartype name: str :ivar type: The type of the resource. Required. :vartype type: str """ _validation = { "name": {"required": True}, "type": {"required": True}, } _attribute_map = { "name": {"key": "name", "type": "str"}, "type": {"key": "type", "type": "str"}, } def __init__(self, *, name: str, type: str, **kwargs: Any) -> None: """ :keyword name: Name of the resource. Required. :paramtype name: str :keyword type: The type of the resource. Required. :paramtype type: str """ super().__init__(**kwargs) self.name = name self.type = type class Subscription(_serialization.Model): """Subscription information. Variables are only populated by the server, and will be ignored when sending a request. :ivar id: The fully qualified ID for the subscription. For example, /subscriptions/8d65815f-a5b6-402f-9298-045155da7d74. :vartype id: str :ivar subscription_id: The subscription ID. 
:vartype subscription_id: str :ivar display_name: The subscription display name. :vartype display_name: str :ivar tenant_id: The subscription tenant ID. :vartype tenant_id: str :ivar state: The subscription state. Possible values are Enabled, Warned, PastDue, Disabled, and Deleted. Known values are: "Enabled", "Warned", "PastDue", "Disabled", and "Deleted". :vartype state: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.SubscriptionState :ivar subscription_policies: The subscription policies. :vartype subscription_policies: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.SubscriptionPolicies :ivar authorization_source: The authorization source of the request. Valid values are one or more combinations of Legacy, RoleBased, Bypassed, Direct and Management. For example, 'Legacy, RoleBased'. :vartype authorization_source: str :ivar managed_by_tenants: An array containing the tenants managing the subscription. :vartype managed_by_tenants: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.ManagedByTenant] :ivar tags: The tags attached to the subscription. :vartype tags: dict[str, str] """ _validation = { "id": {"readonly": True}, "subscription_id": {"readonly": True}, "display_name": {"readonly": True}, "tenant_id": {"readonly": True}, "state": {"readonly": True}, } _attribute_map = { "id": {"key": "id", "type": "str"}, "subscription_id": {"key": "subscriptionId", "type": "str"}, "display_name": {"key": "displayName", "type": "str"}, "tenant_id": {"key": "tenantId", "type": "str"}, "state": {"key": "state", "type": "str"}, "subscription_policies": {"key": "subscriptionPolicies", "type": "SubscriptionPolicies"}, "authorization_source": {"key": "authorizationSource", "type": "str"}, "managed_by_tenants": {"key": "managedByTenants", "type": "[ManagedByTenant]"}, "tags": {"key": "tags", "type": "{str}"}, } def __init__( self, *, subscription_policies: Optional["_models.SubscriptionPolicies"] = None, authorization_source: Optional[str] = None, managed_by_tenants: Optional[List["_models.ManagedByTenant"]] = None, tags: Optional[Dict[str, str]] = None, **kwargs: Any ) -> None: """ :keyword subscription_policies: The subscription policies. :paramtype subscription_policies: ~azure.mgmt.resource.subscriptions.v2022_12_01.models.SubscriptionPolicies :keyword authorization_source: The authorization source of the request. Valid values are one or more combinations of Legacy, RoleBased, Bypassed, Direct and Management. For example, 'Legacy, RoleBased'. :paramtype authorization_source: str :keyword managed_by_tenants: An array containing the tenants managing the subscription. :paramtype managed_by_tenants: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.ManagedByTenant] :keyword tags: The tags attached to the subscription. :paramtype tags: dict[str, str] """ super().__init__(**kwargs) self.id = None self.subscription_id = None self.display_name = None self.tenant_id = None self.state = None self.subscription_policies = subscription_policies self.authorization_source = authorization_source self.managed_by_tenants = managed_by_tenants self.tags = tags class SubscriptionListResult(_serialization.Model): """Subscription list operation response. All required parameters must be populated in order to send to Azure. :ivar value: An array of subscriptions. :vartype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.Subscription] :ivar next_link: The URL to get the next set of results. Required. 
:vartype next_link: str """ _validation = { "next_link": {"required": True}, } _attribute_map = { "value": {"key": "value", "type": "[Subscription]"}, "next_link": {"key": "nextLink", "type": "str"}, } def __init__(self, *, next_link: str, value: Optional[List["_models.Subscription"]] = None, **kwargs: Any) -> None: """ :keyword value: An array of subscriptions. :paramtype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.Subscription] :keyword next_link: The URL to get the next set of results. Required. :paramtype next_link: str """ super().__init__(**kwargs) self.value = value self.next_link = next_link class SubscriptionPolicies(_serialization.Model): """Subscription policies. Variables are only populated by the server, and will be ignored when sending a request. :ivar location_placement_id: The subscription location placement ID. The ID indicates which regions are visible for a subscription. For example, a subscription with a location placement Id of Public_2014-09-01 has access to Azure public regions. :vartype location_placement_id: str :ivar quota_id: The subscription quota ID. :vartype quota_id: str :ivar spending_limit: The subscription spending limit. Known values are: "On", "Off", and "CurrentPeriodOff". :vartype spending_limit: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.SpendingLimit """ _validation = { "location_placement_id": {"readonly": True}, "quota_id": {"readonly": True}, "spending_limit": {"readonly": True}, } _attribute_map = { "location_placement_id": {"key": "locationPlacementId", "type": "str"}, "quota_id": {"key": "quotaId", "type": "str"}, "spending_limit": {"key": "spendingLimit", "type": "str"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.location_placement_id = None self.quota_id = None self.spending_limit = None class TenantIdDescription(_serialization.Model): """Tenant Id information. Variables are only populated by the server, and will be ignored when sending a request. :ivar id: The fully qualified ID of the tenant. For example, /tenants/8d65815f-a5b6-402f-9298-045155da7d74. :vartype id: str :ivar tenant_id: The tenant ID. For example, 8d65815f-a5b6-402f-9298-045155da7d74. :vartype tenant_id: str :ivar tenant_category: Category of the tenant. Known values are: "Home", "ProjectedBy", and "ManagedBy". :vartype tenant_category: str or ~azure.mgmt.resource.subscriptions.v2022_12_01.models.TenantCategory :ivar country: Country/region name of the address for the tenant. :vartype country: str :ivar country_code: Country/region abbreviation for the tenant. :vartype country_code: str :ivar display_name: The display name of the tenant. :vartype display_name: str :ivar domains: The list of domains for the tenant. :vartype domains: list[str] :ivar default_domain: The default domain for the tenant. :vartype default_domain: str :ivar tenant_type: The tenant type. Only available for 'Home' tenant category. :vartype tenant_type: str :ivar tenant_branding_logo_url: The tenant's branding logo URL. Only available for 'Home' tenant category. 
:vartype tenant_branding_logo_url: str """ _validation = { "id": {"readonly": True}, "tenant_id": {"readonly": True}, "tenant_category": {"readonly": True}, "country": {"readonly": True}, "country_code": {"readonly": True}, "display_name": {"readonly": True}, "domains": {"readonly": True}, "default_domain": {"readonly": True}, "tenant_type": {"readonly": True}, "tenant_branding_logo_url": {"readonly": True}, } _attribute_map = { "id": {"key": "id", "type": "str"}, "tenant_id": {"key": "tenantId", "type": "str"}, "tenant_category": {"key": "tenantCategory", "type": "str"}, "country": {"key": "country", "type": "str"}, "country_code": {"key": "countryCode", "type": "str"}, "display_name": {"key": "displayName", "type": "str"}, "domains": {"key": "domains", "type": "[str]"}, "default_domain": {"key": "defaultDomain", "type": "str"}, "tenant_type": {"key": "tenantType", "type": "str"}, "tenant_branding_logo_url": {"key": "tenantBrandingLogoUrl", "type": "str"}, } def __init__(self, **kwargs: Any) -> None: """ """ super().__init__(**kwargs) self.id = None self.tenant_id = None self.tenant_category = None self.country = None self.country_code = None self.display_name = None self.domains = None self.default_domain = None self.tenant_type = None self.tenant_branding_logo_url = None class TenantListResult(_serialization.Model): """Tenant Ids information. All required parameters must be populated in order to send to Azure. :ivar value: An array of tenants. :vartype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.TenantIdDescription] :ivar next_link: The URL to use for getting the next set of results. Required. :vartype next_link: str """ _validation = { "next_link": {"required": True}, } _attribute_map = { "value": {"key": "value", "type": "[TenantIdDescription]"}, "next_link": {"key": "nextLink", "type": "str"}, } def __init__( self, *, next_link: str, value: Optional[List["_models.TenantIdDescription"]] = None, **kwargs: Any ) -> None: """ :keyword value: An array of tenants. :paramtype value: list[~azure.mgmt.resource.subscriptions.v2022_12_01.models.TenantIdDescription] :keyword next_link: The URL to use for getting the next set of results. Required. :paramtype next_link: str """ super().__init__(**kwargs) self.value = value self.next_link = next_link
PypiClean
/classiq_interface-0.8.1-py3-none-any.whl/classiq_interface/generator/model/model.py
from typing import Any, Dict, List, Optional, _Final # type: ignore[attr-defined] import pydantic import classiq_interface.generator.validations.flow_graph as flow_graph from classiq_interface._version import VERSION as _VERSION from classiq_interface.generator.function_call import FunctionCall from classiq_interface.generator.functions import FunctionLibraryData, FunctionType from classiq_interface.generator.functions.function_data import FunctionData from classiq_interface.generator.model.constraints import Constraints from classiq_interface.generator.model.preferences.preferences import Preferences from classiq_interface.generator.user_defined_function_params import CustomFunction LOGIC_FLOW_DUPLICATION_ERROR_MSG = ( "The same function call was included several times in the logic flow" ) class BackwardsCompatibleBaseModel(pydantic.BaseModel): def __init__(__pydantic_self__, **data: Any) -> None: data_for_this_object = { key: value for key, value in data.items() if key in __pydantic_self__.__annotations__ } data_for_child_objects = { key: value for key, value in data.items() if key not in __pydantic_self__.__annotations__ } # First, initialize this object super().__init__(**data_for_this_object) # Then, populate all the rest of the data __pydantic_self__._set_extra_params(**data_for_child_objects) def _set_extra_params(self, **kwargs) -> None: """ populate the children of this class with the values from kwargs """ # Iterate every item that we wish to populate for key, value in kwargs.items(): # Iterate every child this class has for obj_name, obj_cls in self.__annotations__.items(): obj_cls_properties = self._get_properties_of_class(obj_cls) # Check if the item we wish to populate is a child of this obj if key in obj_cls_properties: self._set_properties_for_child_class( child_obj_name=obj_name, key=key, value=value ) break # If no child was found to contain this key else: # else, in for-else, is entered when no `break` was called raise ValueError( f'"{self.__class__.__name__}" object has no field "{key}"' ) # Note: when sending multiple items in `kwargs`, # and, in the case where the 2nd item in `kwarg` will raise an error, # Then the first key will be set, and this (mutable) object will change, # And only then will the for-loop reach the 2nd key and raise an error # I'm okay with that @staticmethod def _get_properties_of_class(obj_cls) -> dict: # If the child is coming from typing (e.g. Union, List, etc.) # Specifically, `Optional`, which is `Union[something, None]` is what's expected. 
        if isinstance(obj_cls, _Final):
            # Get the parameters that were sent to the union
            obj_cls_sub_classes = obj_cls.__args__
            # get the children of each sub-class
            obj_cls_sub_properties: List[Dict[str, Any]] = [
                getattr(cls, "__annotations__", dict()) for cls in obj_cls_sub_classes
            ]
            # combine all the dictionaries
            obj_cls_properties: Dict[str, Any] = dict()
            for d in obj_cls_sub_properties:
                obj_cls_properties.update(d)
        # If the child is a pydantic object
        else:
            # Get the children of the child
            obj_cls_properties = getattr(obj_cls, "__annotations__", {})
        return obj_cls_properties

    def _set_properties_for_child_class(
        self, child_obj_name: str, key: str, value: Any
    ) -> None:
        child_obj = getattr(self, child_obj_name)

        # First, set the attribute
        setattr(child_obj, key, value)
        # Then, manually validate the attribute
        child_obj.__init__(**child_obj.__dict__)

        # Next, patch `__fields_set` in order to support calls to
        # self.dict(exclude_unset=True)
        object.__setattr__(
            self,
            "__fields_set__",
            set.union(self.__fields_set__, {child_obj_name}),
        )
        # Additionally, update child_obj's __fields_set__
        object.__setattr__(
            child_obj, "__fields_set__", set.union(child_obj.__fields_set__, {key})
        )

    def __getattr__(self, key):
        """
        Allow access to the grand-children of this object.
        """
        # Not supporting private attributes
        # Additionally, this prevents an infinite loop of accessing `__getattribute__`
        if key[0] == "_":
            return super().__getattribute__(key)

        # Next, iterate every child object
        for obj_name in self.__annotations__.keys():
            # And access its child, if it exists
            if key in getattr(self, obj_name).__dir__():
                # Yes, we can use `getattr(self, obj_name).key`
                # But I prefer calling the overwritten function explicitly
                return getattr(self, obj_name).__getattribute__(key)

        raise AttributeError(
            f"'{self.__class__.__name__}' object has no attribute '{key}'"
        )


class Model(BackwardsCompatibleBaseModel):
    """
    All the relevant data for generating a quantum circuit in one place.
""" version: str = _VERSION # Must be validated before logic_flow function_library: Optional[FunctionLibraryData] = pydantic.Field( default=None, description="The user-defined custom function library.", ) logic_flow: List[FunctionCall] = pydantic.Field( default_factory=list, description="List of function calls to be applied in the circuit", ) constraints: Constraints = pydantic.Field(default_factory=Constraints) preferences: Preferences = pydantic.Field(default_factory=Preferences) class Config: extra = "forbid" @pydantic.validator("logic_flow") def validate_logic_flow( cls, logic_flow: List[FunctionCall], values: Dict[str, Any] ) -> List[FunctionCall]: if not logic_flow: return logic_flow function_call_names = set(call.name for call in logic_flow) if len(function_call_names) != len(logic_flow): raise ValueError(LOGIC_FLOW_DUPLICATION_ERROR_MSG) functions_to_validate = logic_flow.copy() library = values.get("function_library") while functions_to_validate: function_call = functions_to_validate.pop() params = function_call.function_params if not isinstance(params, CustomFunction): continue FunctionLibraryData.validate_function_in_library( library=library, function_params=params ) assert isinstance(library, FunctionLibraryData) function_data = library.function_dict[params.name] params.generate_io_names( input_set=function_data.input_set, output_set=function_data.output_set, ) function_call.validate_custom_function_io() if function_data.function_type == FunctionType.CompositeFunction: functions_to_validate.extend(function_data.logic_flow) flow_graph.validate_flow_graph(logic_flow) return logic_flow # TODO: Delete this method after importing tools from SDK to classiq_interface and use ModelSynthesizer instead def insert_function_data_to_logic_flow( logic_flow: List[FunctionCall], function_library: FunctionLibraryData, function_data: FunctionData, outputs: Dict[str, str], inputs: Dict[str, str], ): function_library.function_dict[function_data.name] = function_data function_library.functions += (function_data,) function_params = CustomFunction(name=function_data.name) function_params.generate_io_names( input_set=function_data.input_set, output_set=function_data.output_set ) function_call = FunctionCall( function_params=function_params, outputs=outputs, inputs=inputs ) logic_flow.append(function_call)
PypiClean
/kolla-ansible-16.1.0.tar.gz/kolla-ansible-16.1.0/tools/generate_passwords.py
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import hmac
import os
import random
import stat
import string
import sys

from cryptography import fernet
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization
from hashlib import md5

from oslo_utils import uuidutils
import yaml

# NOTE(SamYaple): Update the search path to prefer PROJECT_ROOT as the source
#                 of packages to import if we are using local tools instead of
#                 pip installed kolla tools
PROJECT_ROOT = os.path.abspath(os.path.join(
    os.path.dirname(os.path.realpath(__file__)), '../..'))
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)


def generate_RSA(bits=4096):
    new_key = rsa.generate_private_key(
        public_exponent=65537,
        key_size=bits,
        backend=default_backend()
    )
    private_key = new_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption()
    ).decode()
    public_key = new_key.public_key().public_bytes(
        encoding=serialization.Encoding.OpenSSH,
        format=serialization.PublicFormat.OpenSSH
    ).decode()
    return private_key, public_key


def genpwd(passwords_file, length, uuid_keys, ssh_keys, blank_keys,
           fernet_keys, hmac_md5_keys):
    try:
        with open(passwords_file, 'r') as f:
            passwords = yaml.safe_load(f.read())
    except FileNotFoundError:
        print(f"ERROR: Passwords file \"{passwords_file}\" is missing")
        sys.exit(1)

    if os.stat(passwords_file).st_mode & stat.S_IROTH:
        print(f"WARNING: Passwords file \"{passwords_file}\" is"
              " world-readable. The permissions will be changed.")

    if os.stat(passwords_file).st_mode & stat.S_IWOTH:
        print(f"WARNING: Passwords file \"{passwords_file}\" is"
              " world-writeable. The permissions will be changed.")

    if not isinstance(passwords, dict):
        print("ERROR: Passwords file not in expected key/value format")
        sys.exit(1)

    for k, v in passwords.items():
        if (k in ssh_keys and
                (v is None or
                 v.get('public_key') is None and
                 v.get('private_key') is None)):
            private_key, public_key = generate_RSA()
            passwords[k] = {
                'private_key': private_key,
                'public_key': public_key
            }
            continue
        if v is None:
            if k in blank_keys and v is None:
                continue
            if k in uuid_keys:
                passwords[k] = uuidutils.generate_uuid()
            elif k in hmac_md5_keys:
                passwords[k] = (hmac.new(
                    uuidutils.generate_uuid().encode(), ''.encode(), md5)
                    .hexdigest())
            elif k in fernet_keys:
                passwords[k] = fernet.Fernet.generate_key().decode()
            else:
                passwords[k] = ''.join([
                    random.SystemRandom().choice(
                        string.ascii_letters + string.digits)
                    for n in range(length)
                ])

    try:
        os.remove(passwords_file)
    except OSError:
        pass

    flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    mode = 0o640

    with os.fdopen(os.open(passwords_file, flags, mode=mode), 'w') as f:
        f.write(yaml.safe_dump(passwords, default_flow_style=False))


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '-p', '--passwords', type=str,
        default=os.path.abspath('/etc/kolla/passwords.yml'),
        help=('Path to the passwords.yml file'))

    args = parser.parse_args()
    passwords_file = os.path.expanduser(args.passwords)

    # These keys should be random uuids
    uuid_keys = ['rbd_secret_uuid',
                 'cinder_rbd_secret_uuid',
                 'gnocchi_project_id',
                 'gnocchi_resource_id',
                 'gnocchi_user_id',
                 'designate_pool_id']

    # SSH key pair
    ssh_keys = ['kolla_ssh_key', 'nova_ssh_key', 'keystone_ssh_key',
                'bifrost_ssh_key', 'octavia_amp_ssh_key', 'neutron_ssh_key']

    # If these keys are None, leave them as None
    blank_keys = ['docker_registry_password']

    # HMAC-MD5 keys
    hmac_md5_keys = ['designate_rndc_key',
                     'osprofiler_secret']

    # Fernet keys
    fernet_keys = ['barbican_crypto_key']

    # length of password
    length = 40

    genpwd(passwords_file, length, uuid_keys, ssh_keys, blank_keys,
           fernet_keys, hmac_md5_keys)


if __name__ == '__main__':
    main()
PypiClean
/sas-frontend-20211225.2.tar.gz/sas-frontend-20211225.2/sas_frontend/frontend_latest/e39068f3.js
"use strict";(self.webpackChunksas_frontend=self.webpackChunksas_frontend||[]).push([[87520],{4558:(e,t,a)=>{Object.defineProperty(t,"__esModule",{value:!0}),t.InitializeRelativeTimeFormat=void 0;var r=a(71160),n=a(17595),i=/^[a-z0-9]{3,8}(-[a-z0-9]{3,8})*$/i;t.InitializeRelativeTimeFormat=function(e,t,a,l){var o=l.getInternalSlots,u=l.availableLocales,s=l.relevantExtensionKeys,c=l.localeData,f=l.getDefaultLocale,v=o(e);v.initializedRelativeTimeFormat=!0;var d=r.CanonicalizeLocaleList(t),m=Object.create(null),p=r.CoerceOptionsToObject(a),y=r.GetOption(p,"localeMatcher","string",["best fit","lookup"],"best fit");m.localeMatcher=y;var b=r.GetOption(p,"numberingSystem","string",void 0,void 0);if(void 0!==b&&!i.test(b))throw new RangeError("Invalid numbering system "+b);m.nu=b;var h=n.ResolveLocale(u,d,m,s,c,f),g=h.locale,w=h.nu;v.locale=g,v.style=r.GetOption(p,"style","string",["long","narrow","short"],"long"),v.numeric=r.GetOption(p,"numeric","string",["always","auto"],"always");var T=c[h.dataLocale];return r.invariant(!!T,"Missing locale data for "+h.dataLocale),v.fields=T,v.numberFormat=new Intl.NumberFormat(t),v.pluralRules=new Intl.PluralRules(t),v.numberingSystem=w,e}},43606:(e,t,a)=>{Object.defineProperty(t,"__esModule",{value:!0}),t.MakePartsList=void 0;var r=a(71160);t.MakePartsList=function(e,t,a){for(var n=[],i=0,l=r.PartitionPattern(e);i<l.length;i++){var o=l[i];if("literal"===o.type)n.push({type:"literal",value:o.value});else{r.invariant("0"===o.type,"Malformed pattern "+e);for(var u=0,s=a;u<s.length;u++){var c=s[u];n.push({type:c.type,value:c.value,unit:t})}}}return n}},41979:(e,t,a)=>{Object.defineProperty(t,"__esModule",{value:!0}),t.PartitionRelativeTimePattern=void 0;var r=a(71160),n=a(8511),i=a(43606);t.PartitionRelativeTimePattern=function(e,t,a,l){var o=l.getInternalSlots;if(r.invariant("Number"===r.Type(t),"value must be number, instead got "+typeof t,TypeError),r.invariant("String"===r.Type(a),"unit must be number, instead got "+typeof t,TypeError),isNaN(t)||!isFinite(t))throw new RangeError("Invalid value "+t);var u=n.SingularRelativeTimeUnit(a),s=o(e),c=s.fields,f=s.style,v=s.numeric,d=s.pluralRules,m=s.numberFormat,p=u;"short"===f?p=u+"-short":"narrow"===f&&(p=u+"-narrow"),p in c||(p=u);var y=c[p];if("auto"===v&&r.ToString(t)in y)return[{type:"literal",value:y[r.ToString(t)]}];var b="future";(r.SameValue(t,-0)||t<0)&&(b="past");var h=y[b],g="function"==typeof m.formatToParts?m.formatToParts(Math.abs(t)):[{type:"literal",value:m.format(Math.abs(t)),unit:a}],w=h[d.select(t)];return i.MakePartsList(w,u,g)}},8511:(e,t,a)=>{Object.defineProperty(t,"__esModule",{value:!0}),t.SingularRelativeTimeUnit=void 0;var r=a(71160);t.SingularRelativeTimeUnit=function(e){if(r.invariant("String"===r.Type(e),"unit must be a string"),"seconds"===e)return"second";if("minutes"===e)return"minute";if("hours"===e)return"hour";if("days"===e)return"day";if("weeks"===e)return"week";if("months"===e)return"month";if("quarters"===e)return"quarter";if("years"===e)return"year";if("second"!==e&&"minute"!==e&&"hour"!==e&&"day"!==e&&"week"!==e&&"month"!==e&&"quarter"!==e&&"year"!==e)throw new RangeError("invalid unit");return e}},98584:(e,t)=>{Object.defineProperty(t,"__esModule",{value:!0});var a=new WeakMap;t.default=function(e){var t=a.get(e);return t||(t=Object.create(null),a.set(e,t)),t}},22114:(e,t,a)=>{Object.defineProperty(t,"__esModule",{value:!0});var r=a(87480),n=a(71160),i=a(4558),l=a(41979),o=r.__importDefault(a(98584)),u=function(){function e(t,a){if(!(this&&this instanceof 
e?this.constructor:void 0))throw new TypeError("Intl.RelativeTimeFormat must be called with 'new'");return i.InitializeRelativeTimeFormat(this,t,a,{getInternalSlots:o.default,availableLocales:e.availableLocales,relevantExtensionKeys:e.relevantExtensionKeys,localeData:e.localeData,getDefaultLocale:e.getDefaultLocale})}return e.prototype.format=function(e,t){if("object"!=typeof this)throw new TypeError("format was called on a non-object");if(!o.default(this).initializedRelativeTimeFormat)throw new TypeError("format was called on a invalid context");return l.PartitionRelativeTimePattern(this,Number(e),n.ToString(t),{getInternalSlots:o.default}).map((function(e){return e.value})).join("")},e.prototype.formatToParts=function(e,t){if("object"!=typeof this)throw new TypeError("formatToParts was called on a non-object");if(!o.default(this).initializedRelativeTimeFormat)throw new TypeError("formatToParts was called on a invalid context");return l.PartitionRelativeTimePattern(this,Number(e),n.ToString(t),{getInternalSlots:o.default})},e.prototype.resolvedOptions=function(){if("object"!=typeof this)throw new TypeError("resolvedOptions was called on a non-object");var e=o.default(this);if(!e.initializedRelativeTimeFormat)throw new TypeError("resolvedOptions was called on a invalid context");return{locale:e.locale,style:e.style,numeric:e.numeric,numberingSystem:e.numberingSystem}},e.supportedLocalesOf=function(t,a){return n.SupportedLocales(e.availableLocales,n.CanonicalizeLocaleList(t),a)},e.__addLocaleData=function(){for(var t=[],a=0;a<arguments.length;a++)t[a]=arguments[a];for(var r=0,n=t;r<n.length;r++){var i=n[r],l=i.data,o=i.locale,u=new Intl.Locale(o).minimize().toString();e.localeData[o]=e.localeData[u]=l,e.availableLocales.add(u),e.availableLocales.add(o),e.__defaultLocale||(e.__defaultLocale=u)}},e.getDefaultLocale=function(){return e.__defaultLocale},e.localeData={},e.availableLocales=new Set,e.__defaultLocale="",e.relevantExtensionKeys=["nu"],e.polyfilled=!0,e}();t.default=u;try{"undefined"!=typeof Symbol&&Object.defineProperty(u.prototype,Symbol.toStringTag,{value:"Intl.RelativeTimeFormat",writable:!1,enumerable:!1,configurable:!0}),Object.defineProperty(u.prototype.constructor,"length",{value:0,writable:!1,enumerable:!1,configurable:!0}),Object.defineProperty(u.supportedLocalesOf,"length",{value:1,writable:!1,enumerable:!1,configurable:!0})}catch(e){}},87520:(e,t,a)=>{Object.defineProperty(t,"__esModule",{value:!0});var r=a(87480).__importDefault(a(22114));a(64532).shouldPolyfill()&&Object.defineProperty(Intl,"RelativeTimeFormat",{value:r.default,writable:!0,enumerable:!1,configurable:!0})},64532:(e,t)=>{Object.defineProperty(t,"__esModule",{value:!0}),t.shouldPolyfill=void 0,t.shouldPolyfill=function(e){return!("RelativeTimeFormat"in Intl)||!function(e){if(!e)return!0;var t=Array.isArray(e)?e:[e];return Intl.RelativeTimeFormat.supportedLocalesOf(t).length===t.length}(e)||!function(e){try{return"numberingSystem"in new Intl.RelativeTimeFormat(e||"en",{numeric:"auto"}).resolvedOptions()}catch(e){return!1}}(e)}}}]);
PypiClean
/django-classic-user-accounts-1.0.39.tar.gz/django-classic-user-accounts-1.0.39/ClassicUserAccounts/static/matrix-admin/js/bootstrap-colorpicker.js
!function( $ ) { // Color object var Color = function(val) { this.value = { h: 1, s: 1, b: 1, a: 1 }; this.setColor(val); }; Color.prototype = { constructor: Color, //parse a string to HSB setColor: function(val){ val = val.toLowerCase(); var that = this; $.each( CPGlobal.stringParsers, function( i, parser ) { var match = parser.re.exec( val ), values = match && parser.parse( match ), space = parser.space||'rgba'; if ( values ) { if (space === 'hsla') { that.value = CPGlobal.RGBtoHSB.apply(null, CPGlobal.HSLtoRGB.apply(null, values)); } else { that.value = CPGlobal.RGBtoHSB.apply(null, values); } return false; } }); }, setHue: function(h) { this.value.h = 1- h; }, setSaturation: function(s) { this.value.s = s; }, setLightness: function(b) { this.value.b = 1- b; }, setAlpha: function(a) { this.value.a = parseInt((1 - a)*100, 10)/100; }, // HSBtoRGB from RaphaelJS // https://github.com/DmitryBaranovskiy/raphael/ toRGB: function(h, s, b, a) { if (!h) { h = this.value.h; s = this.value.s; b = this.value.b; } h *= 360; var R, G, B, X, C; h = (h % 360) / 60; C = b * s; X = C * (1 - Math.abs(h % 2 - 1)); R = G = B = b - C; h = ~~h; R += [C, X, 0, 0, X, C][h]; G += [X, C, C, X, 0, 0][h]; B += [0, 0, X, C, C, X][h]; return { r: Math.round(R*255), g: Math.round(G*255), b: Math.round(B*255), a: a||this.value.a }; }, toHex: function(h, s, b, a){ var rgb = this.toRGB(h, s, b, a); return '#'+((1 << 24) | (parseInt(rgb.r) << 16) | (parseInt(rgb.g) << 8) | parseInt(rgb.b)).toString(16).substr(1); }, toHSL: function(h, s, b, a){ if (!h) { h = this.value.h; s = this.value.s; b = this.value.b; } var H = h, L = (2 - s) * b, S = s * b; if (L > 0 && L <= 1) { S /= L; } else { S /= 2 - L; } L /= 2; if (S > 1) { S = 1; } return { h: H, s: S, l: L, a: a||this.value.a }; } }; // Picker object var Colorpicker = function(element, options){ this.element = $(element); var format = options.format||this.element.data('color-format')||'hex'; this.format = CPGlobal.translateFormats[format]; this.isInput = this.element.is('input'); this.component = this.element.is('.color') ? this.element.find('.add-on') : false; this.picker = $(CPGlobal.template) .appendTo('body') .on('mousedown', $.proxy(this.mousedown, this)); if (this.isInput) { this.element.on({ 'focus': $.proxy(this.show, this), 'keyup': $.proxy(this.update, this) }); } else if (this.component){ this.component.on({ 'click': $.proxy(this.show, this) }); } else { this.element.on({ 'click': $.proxy(this.show, this) }); } if (format === 'rgba' || format === 'hsla') { this.picker.addClass('alpha'); this.alpha = this.picker.find('.colorpicker-alpha')[0].style; } if (this.component){ this.picker.find('.colorpicker-color').hide(); this.preview = this.element.find('i')[0].style; } else { this.preview = this.picker.find('div:last')[0].style; } this.base = this.picker.find('div:first')[0].style; this.update(); }; Colorpicker.prototype = { constructor: Colorpicker, show: function(e) { this.picker.show(); this.height = this.component ? this.component.outerHeight() : this.element.outerHeight(); this.place(); $(window).on('resize', $.proxy(this.place, this)); if (!this.isInput) { if (e) { e.stopPropagation(); e.preventDefault(); } } $(document).on({ 'mousedown': $.proxy(this.hide, this) }); this.element.trigger({ type: 'show', color: this.color }); }, update: function(){ this.color = new Color(this.isInput ? 
this.element.prop('value') : this.element.data('color')); this.picker.find('i') .eq(0).css({left: this.color.value.s*100, top: 100 - this.color.value.b*100}).end() .eq(1).css('top', 100 * (1 - this.color.value.h)).end() .eq(2).css('top', 100 * (1 - this.color.value.a)); this.previewColor(); }, setValue: function(newColor) { this.color = new Color(newColor); this.picker.find('i') .eq(0).css({left: this.color.value.s*100, top: 100 - this.color.value.b*100}).end() .eq(1).css('top', 100 * (1 - this.color.value.h)).end() .eq(2).css('top', 100 * (1 - this.color.value.a)); this.previewColor(); this.element.trigger({ type: 'changeColor', color: this.color }); }, hide: function(){ this.picker.hide(); $(window).off('resize', this.place); if (!this.isInput) { $(document).off({ 'mousedown': this.hide }); if (this.component){ this.element.find('input').prop('value', this.format.call(this)); } this.element.data('color', this.format.call(this)); } else { this.element.prop('value', this.format.call(this)); } this.element.trigger({ type: 'hide', color: this.color }); }, place: function(){ var offset = this.component ? this.component.offset() : this.element.offset(); this.picker.css({ top: offset.top + this.height, left: offset.left }); }, //preview color change previewColor: function(){ try { this.preview.backgroundColor = this.format.call(this); } catch(e) { this.preview.backgroundColor = this.color.toHex(); } //set the color for brightness/saturation slider this.base.backgroundColor = this.color.toHex(this.color.value.h, 1, 1, 1); //set te color for alpha slider if (this.alpha) { this.alpha.backgroundColor = this.color.toHex(); } }, pointer: null, slider: null, mousedown: function(e){ e.stopPropagation(); e.preventDefault(); var target = $(e.target); //detect the slider and set the limits and callbacks var zone = target.closest('div'); if (!zone.is('.colorpicker')) { if (zone.is('.colorpicker-saturation')) { this.slider = $.extend({}, CPGlobal.sliders.saturation); } else if (zone.is('.colorpicker-hue')) { this.slider = $.extend({}, CPGlobal.sliders.hue); } else if (zone.is('.colorpicker-alpha')) { this.slider = $.extend({}, CPGlobal.sliders.alpha); } else { return false; } var offset = zone.offset(); //reference to knob's style this.slider.knob = zone.find('i')[0].style; this.slider.left = e.pageX - offset.left; this.slider.top = e.pageY - offset.top; this.pointer = { left: e.pageX, top: e.pageY }; //trigger mousemove to move the knob to the current position $(document).on({ mousemove: $.proxy(this.mousemove, this), mouseup: $.proxy(this.mouseup, this) }).trigger('mousemove'); } return false; }, mousemove: function(e){ e.stopPropagation(); e.preventDefault(); var left = Math.max( 0, Math.min( this.slider.maxLeft, this.slider.left + ((e.pageX||this.pointer.left) - this.pointer.left) ) ); var top = Math.max( 0, Math.min( this.slider.maxTop, this.slider.top + ((e.pageY||this.pointer.top) - this.pointer.top) ) ); this.slider.knob.left = left + 'px'; this.slider.knob.top = top + 'px'; if (this.slider.callLeft) { this.color[this.slider.callLeft].call(this.color, left/100); } if (this.slider.callTop) { this.color[this.slider.callTop].call(this.color, top/100); } this.previewColor(); this.element.trigger({ type: 'changeColor', color: this.color }); return false; }, mouseup: function(e){ e.stopPropagation(); e.preventDefault(); $(document).off({ mousemove: this.mousemove, mouseup: this.mouseup }); return false; } } $.fn.colorpicker = function ( option ) { return this.each(function () { var $this = $(this), data = 
$this.data('colorpicker'), options = typeof option === 'object' && option; if (!data) { $this.data('colorpicker', (data = new Colorpicker(this, $.extend({}, $.fn.colorpicker.defaults,options)))); } if (typeof option === 'string') data[option](); }); }; $.fn.colorpicker.defaults = { }; $.fn.colorpicker.Constructor = Colorpicker; var CPGlobal = { // translate a format from Color object to a string translateFormats: { 'rgb': function(){ var rgb = this.color.toRGB(); return 'rgb('+rgb.r+','+rgb.g+','+rgb.b+')'; }, 'rgba': function(){ var rgb = this.color.toRGB(); return 'rgba('+rgb.r+','+rgb.g+','+rgb.b+','+rgb.a+')'; }, 'hsl': function(){ var hsl = this.color.toHSL(); return 'hsl('+Math.round(hsl.h*360)+','+Math.round(hsl.s*100)+'%,'+Math.round(hsl.l*100)+'%)'; }, 'hsla': function(){ var hsl = this.color.toHSL(); return 'hsla('+Math.round(hsl.h*360)+','+Math.round(hsl.s*100)+'%,'+Math.round(hsl.l*100)+'%,'+hsl.a+')'; }, 'hex': function(){ return this.color.toHex(); } }, sliders: { saturation: { maxLeft: 100, maxTop: 100, callLeft: 'setSaturation', callTop: 'setLightness' }, hue: { maxLeft: 0, maxTop: 100, callLeft: false, callTop: 'setHue' }, alpha: { maxLeft: 0, maxTop: 100, callLeft: false, callTop: 'setAlpha' } }, // HSBtoRGB from RaphaelJS // https://github.com/DmitryBaranovskiy/raphael/ RGBtoHSB: function (r, g, b, a){ r /= 255; g /= 255; b /= 255; var H, S, V, C; V = Math.max(r, g, b); C = V - Math.min(r, g, b); H = (C === 0 ? null : V == r ? (g - b) / C : V == g ? (b - r) / C + 2 : (r - g) / C + 4 ); H = ((H + 360) % 6) * 60 / 360; S = C === 0 ? 0 : C / V; return {h: H||1, s: S, b: V, a: a||1}; }, HueToRGB: function (p, q, h) { if (h < 0) h += 1; else if (h > 1) h -= 1; if ((h * 6) < 1) return p + (q - p) * h * 6; else if ((h * 2) < 1) return q; else if ((h * 3) < 2) return p + (q - p) * ((2 / 3) - h) * 6; else return p; }, HSLtoRGB: function (h, s, l, a) { if (s < 0) { s = 0; } var q; if (l <= 0.5) { q = l * (1 + s); } else { q = l + s - (l * s); } var p = 2 * l - q; var tr = h + (1 / 3); var tg = h; var tb = h - (1 / 3); var r = Math.round(CPGlobal.HueToRGB(p, q, tr) * 255); var g = Math.round(CPGlobal.HueToRGB(p, q, tg) * 255); var b = Math.round(CPGlobal.HueToRGB(p, q, tb) * 255); return [r, g, b, a||1]; }, // a set of RE's that can match strings and generate color tuples. 
// from John Resig color plugin // https://github.com/jquery/jquery-color/ stringParsers: [ { re: /rgba?\(\s*(\d{1,3})\s*,\s*(\d{1,3})\s*,\s*(\d{1,3})\s*(?:,\s*(\d+(?:\.\d+)?)\s*)?\)/, parse: function( execResult ) { return [ execResult[ 1 ], execResult[ 2 ], execResult[ 3 ], execResult[ 4 ] ]; } }, { re: /rgba?\(\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*(?:,\s*(\d+(?:\.\d+)?)\s*)?\)/, parse: function( execResult ) { return [ 2.55 * execResult[1], 2.55 * execResult[2], 2.55 * execResult[3], execResult[ 4 ] ]; } }, { re: /#([a-fA-F0-9]{2})([a-fA-F0-9]{2})([a-fA-F0-9]{2})/, parse: function( execResult ) { return [ parseInt( execResult[ 1 ], 16 ), parseInt( execResult[ 2 ], 16 ), parseInt( execResult[ 3 ], 16 ) ]; } }, { re: /#([a-fA-F0-9])([a-fA-F0-9])([a-fA-F0-9])/, parse: function( execResult ) { return [ parseInt( execResult[ 1 ] + execResult[ 1 ], 16 ), parseInt( execResult[ 2 ] + execResult[ 2 ], 16 ), parseInt( execResult[ 3 ] + execResult[ 3 ], 16 ) ]; } }, { re: /hsla?\(\s*(\d+(?:\.\d+)?)\s*,\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*(?:,\s*(\d+(?:\.\d+)?)\s*)?\)/, space: 'hsla', parse: function( execResult ) { return [ execResult[1]/360, execResult[2] / 100, execResult[3] / 100, execResult[4] ]; } } ], template: '<div class="colorpicker dropdown-menu">'+ '<div class="colorpicker-saturation"><i><b></b></i></div>'+ '<div class="colorpicker-hue"><i></i></div>'+ '<div class="colorpicker-alpha"><i></i></div>'+ '<div class="colorpicker-color"><div /></div>'+ '</div>' }; }( window.jQuery )
PypiClean
/gooddata-api-client-1.5.0.tar.gz/gooddata-api-client-1.5.0/gooddata_api_client/paths/api_v1_entities_workspaces_workspace_id_workspace_data_filters_object_id/patch.py
from dataclasses import dataclass
import typing_extensions
import urllib3
from urllib3._collections import HTTPHeaderDict

from gooddata_api_client import api_client, exceptions
from datetime import date, datetime  # noqa: F401
import decimal  # noqa: F401
import functools  # noqa: F401
import io  # noqa: F401
import re  # noqa: F401
import typing  # noqa: F401
import typing_extensions  # noqa: F401
import uuid  # noqa: F401

import frozendict  # noqa: F401

from gooddata_api_client import schemas  # noqa: F401

from gooddata_api_client.model.json_api_workspace_data_filter_out_document import JsonApiWorkspaceDataFilterOutDocument
from gooddata_api_client.model.json_api_workspace_data_filter_patch_document import JsonApiWorkspaceDataFilterPatchDocument

from . import path

# Query params
FilterSchema = schemas.StrSchema


class IncludeSchema(
    schemas.ListSchema
):

    class MetaOapg:

        class items(
            schemas.EnumBase,
            schemas.StrSchema
        ):

            class MetaOapg:
                enum_value_to_name = {
                    "workspaceDataFilterSettings": "WORKSPACE_DATA_FILTER_SETTINGS",
                    "filterSettings": "FILTER_SETTINGS",
                    "ALL": "ALL",
                }

            @schemas.classproperty
            def WORKSPACE_DATA_FILTER_SETTINGS(cls):
                return cls("workspaceDataFilterSettings")

            @schemas.classproperty
            def FILTER_SETTINGS(cls):
                return cls("filterSettings")

            @schemas.classproperty
            def ALL(cls):
                return cls("ALL")

    def __new__(
        cls,
        _arg: typing.Union[typing.Tuple[typing.Union[MetaOapg.items, str, ]], typing.List[typing.Union[MetaOapg.items, str, ]]],
        _configuration: typing.Optional[schemas.Configuration] = None,
    ) -> 'IncludeSchema':
        return super().__new__(
            cls,
            _arg,
            _configuration=_configuration,
        )

    def __getitem__(self, i: int) -> MetaOapg.items:
        return super().__getitem__(i)


RequestRequiredQueryParams = typing_extensions.TypedDict(
    'RequestRequiredQueryParams',
    {
    }
)
RequestOptionalQueryParams = typing_extensions.TypedDict(
    'RequestOptionalQueryParams',
    {
        'filter': typing.Union[FilterSchema, str, ],
        'include': typing.Union[IncludeSchema, list, tuple, ],
    },
    total=False
)


class RequestQueryParams(RequestRequiredQueryParams, RequestOptionalQueryParams):
    pass


request_query_filter = api_client.QueryParameter(
    name="filter",
    style=api_client.ParameterStyle.FORM,
    schema=FilterSchema,
    explode=True,
)
request_query_include = api_client.QueryParameter(
    name="include",
    style=api_client.ParameterStyle.FORM,
    schema=IncludeSchema,
)
# Path params
WorkspaceIdSchema = schemas.StrSchema
ObjectIdSchema = schemas.StrSchema
RequestRequiredPathParams = typing_extensions.TypedDict(
    'RequestRequiredPathParams',
    {
        'workspaceId': typing.Union[WorkspaceIdSchema, str, ],
        'objectId': typing.Union[ObjectIdSchema, str, ],
    }
)
RequestOptionalPathParams = typing_extensions.TypedDict(
    'RequestOptionalPathParams',
    {
    },
    total=False
)


class RequestPathParams(RequestRequiredPathParams, RequestOptionalPathParams):
    pass


request_path_workspace_id = api_client.PathParameter(
    name="workspaceId",
    style=api_client.ParameterStyle.SIMPLE,
    schema=WorkspaceIdSchema,
    required=True,
)
request_path_object_id = api_client.PathParameter(
    name="objectId",
    style=api_client.ParameterStyle.SIMPLE,
    schema=ObjectIdSchema,
    required=True,
)
# body param
SchemaForRequestBodyApplicationVndGooddataApijson = JsonApiWorkspaceDataFilterPatchDocument


request_body_json_api_workspace_data_filter_patch_document = api_client.RequestBody(
    content={
        'application/vnd.gooddata.api+json': api_client.MediaType(
            schema=SchemaForRequestBodyApplicationVndGooddataApijson),
    },
    required=True,
)
SchemaFor200ResponseBodyApplicationVndGooddataApijson = JsonApiWorkspaceDataFilterOutDocument


@dataclass
class ApiResponseFor200(api_client.ApiResponse):
    response: urllib3.HTTPResponse
    body: typing.Union[
        SchemaFor200ResponseBodyApplicationVndGooddataApijson,
    ]
    headers: schemas.Unset = schemas.unset


_response_for_200 = api_client.OpenApiResponse(
    response_cls=ApiResponseFor200,
    content={
        'application/vnd.gooddata.api+json': api_client.MediaType(
            schema=SchemaFor200ResponseBodyApplicationVndGooddataApijson),
    },
)
_status_code_to_response = {
    '200': _response_for_200,
}
_all_accept_content_types = (
    'application/vnd.gooddata.api+json',
)


class BaseApi(api_client.Api):

    @typing.overload
    def _patch_entity_workspace_data_filters_oapg(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        content_type: typing_extensions.Literal["application/vnd.gooddata.api+json"] = ...,
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
        skip_deserialization: typing_extensions.Literal[False] = ...,
    ) -> typing.Union[
        ApiResponseFor200,
    ]: ...

    @typing.overload
    def _patch_entity_workspace_data_filters_oapg(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        content_type: str = ...,
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
        skip_deserialization: typing_extensions.Literal[False] = ...,
    ) -> typing.Union[
        ApiResponseFor200,
    ]: ...

    @typing.overload
    def _patch_entity_workspace_data_filters_oapg(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        skip_deserialization: typing_extensions.Literal[True],
        content_type: str = ...,
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
    ) -> api_client.ApiResponseWithoutDeserialization: ...

    @typing.overload
    def _patch_entity_workspace_data_filters_oapg(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        content_type: str = ...,
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
        skip_deserialization: bool = ...,
    ) -> typing.Union[
        ApiResponseFor200,
        api_client.ApiResponseWithoutDeserialization,
    ]: ...
    def _patch_entity_workspace_data_filters_oapg(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        content_type: str = 'application/vnd.gooddata.api+json',
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
        skip_deserialization: bool = False,
    ):
        """
        Patch a Workspace Data Filter
        :param skip_deserialization: If true then api_response.response will be set but
            api_response.body and api_response.headers will not be deserialized into schema
            class instances
        """
        self._verify_typed_dict_inputs_oapg(RequestQueryParams, query_params)
        self._verify_typed_dict_inputs_oapg(RequestPathParams, path_params)
        used_path = path.value

        _path_params = {}
        for parameter in (
            request_path_workspace_id,
            request_path_object_id,
        ):
            parameter_data = path_params.get(parameter.name, schemas.unset)
            if parameter_data is schemas.unset:
                continue
            serialized_data = parameter.serialize(parameter_data)
            _path_params.update(serialized_data)

        for k, v in _path_params.items():
            used_path = used_path.replace('{%s}' % k, v)

        prefix_separator_iterator = None
        for parameter in (
            request_query_filter,
            request_query_include,
        ):
            parameter_data = query_params.get(parameter.name, schemas.unset)
            if parameter_data is schemas.unset:
                continue
            if prefix_separator_iterator is None:
                prefix_separator_iterator = parameter.get_prefix_separator_iterator()
            serialized_data = parameter.serialize(parameter_data, prefix_separator_iterator)
            for serialized_value in serialized_data.values():
                used_path += serialized_value

        _headers = HTTPHeaderDict()
        # TODO add cookie handling
        if accept_content_types:
            for accept_content_type in accept_content_types:
                _headers.add('Accept', accept_content_type)

        if body is schemas.unset:
            raise exceptions.ApiValueError(
                'The required body parameter has an invalid value of: unset. Set a valid value instead')
        _fields = None
        _body = None
        serialized_data = request_body_json_api_workspace_data_filter_patch_document.serialize(body, content_type)
        _headers.add('Content-Type', content_type)
        if 'fields' in serialized_data:
            _fields = serialized_data['fields']
        elif 'body' in serialized_data:
            _body = serialized_data['body']
        response = self.api_client.call_api(
            resource_path=used_path,
            method='patch'.upper(),
            headers=_headers,
            fields=_fields,
            body=_body,
            stream=stream,
            timeout=timeout,
        )

        if skip_deserialization:
            api_response = api_client.ApiResponseWithoutDeserialization(response=response)
        else:
            response_for_status = _status_code_to_response.get(str(response.status))
            if response_for_status:
                api_response = response_for_status.deserialize(response, self.api_client.configuration)
            else:
                api_response = api_client.ApiResponseWithoutDeserialization(response=response)

        if not 200 <= response.status <= 299:
            raise exceptions.ApiException(
                status=response.status,
                reason=response.reason,
                api_response=api_response
            )

        return api_response


class PatchEntityWorkspaceDataFilters(BaseApi):
    # this class is used by api classes that refer to endpoints with operationId fn names

    @typing.overload
    def patch_entity_workspace_data_filters(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        content_type: typing_extensions.Literal["application/vnd.gooddata.api+json"] = ...,
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
        skip_deserialization: typing_extensions.Literal[False] = ...,
    ) -> typing.Union[
        ApiResponseFor200,
    ]: ...

    @typing.overload
    def patch_entity_workspace_data_filters(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        content_type: str = ...,
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
        skip_deserialization: typing_extensions.Literal[False] = ...,
    ) -> typing.Union[
        ApiResponseFor200,
    ]: ...

    @typing.overload
    def patch_entity_workspace_data_filters(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        skip_deserialization: typing_extensions.Literal[True],
        content_type: str = ...,
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
    ) -> api_client.ApiResponseWithoutDeserialization: ...

    @typing.overload
    def patch_entity_workspace_data_filters(
        self,
        body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,],
        content_type: str = ...,
        query_params: RequestQueryParams = frozendict.frozendict(),
        path_params: RequestPathParams = frozendict.frozendict(),
        accept_content_types: typing.Tuple[str] = _all_accept_content_types,
        stream: bool = False,
        timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None,
        skip_deserialization: bool = ...,
    ) -> typing.Union[
        ApiResponseFor200,
        api_client.ApiResponseWithoutDeserialization,
    ]: ...
def patch_entity_workspace_data_filters( self, body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,], content_type: str = 'application/vnd.gooddata.api+json', query_params: RequestQueryParams = frozendict.frozendict(), path_params: RequestPathParams = frozendict.frozendict(), accept_content_types: typing.Tuple[str] = _all_accept_content_types, stream: bool = False, timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None, skip_deserialization: bool = False, ): return self._patch_entity_workspace_data_filters_oapg( body=body, query_params=query_params, path_params=path_params, content_type=content_type, accept_content_types=accept_content_types, stream=stream, timeout=timeout, skip_deserialization=skip_deserialization ) class ApiForpatch(BaseApi): # this class is used by api classes that refer to endpoints by path and http method names @typing.overload def patch( self, body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,], content_type: typing_extensions.Literal["application/vnd.gooddata.api+json"] = ..., query_params: RequestQueryParams = frozendict.frozendict(), path_params: RequestPathParams = frozendict.frozendict(), accept_content_types: typing.Tuple[str] = _all_accept_content_types, stream: bool = False, timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None, skip_deserialization: typing_extensions.Literal[False] = ..., ) -> typing.Union[ ApiResponseFor200, ]: ... @typing.overload def patch( self, body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,], content_type: str = ..., query_params: RequestQueryParams = frozendict.frozendict(), path_params: RequestPathParams = frozendict.frozendict(), accept_content_types: typing.Tuple[str] = _all_accept_content_types, stream: bool = False, timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None, skip_deserialization: typing_extensions.Literal[False] = ..., ) -> typing.Union[ ApiResponseFor200, ]: ... @typing.overload def patch( self, body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,], skip_deserialization: typing_extensions.Literal[True], content_type: str = ..., query_params: RequestQueryParams = frozendict.frozendict(), path_params: RequestPathParams = frozendict.frozendict(), accept_content_types: typing.Tuple[str] = _all_accept_content_types, stream: bool = False, timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None, ) -> api_client.ApiResponseWithoutDeserialization: ... @typing.overload def patch( self, body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,], content_type: str = ..., query_params: RequestQueryParams = frozendict.frozendict(), path_params: RequestPathParams = frozendict.frozendict(), accept_content_types: typing.Tuple[str] = _all_accept_content_types, stream: bool = False, timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None, skip_deserialization: bool = ..., ) -> typing.Union[ ApiResponseFor200, api_client.ApiResponseWithoutDeserialization, ]: ... 
def patch( self, body: typing.Union[SchemaForRequestBodyApplicationVndGooddataApijson,], content_type: str = 'application/vnd.gooddata.api+json', query_params: RequestQueryParams = frozendict.frozendict(), path_params: RequestPathParams = frozendict.frozendict(), accept_content_types: typing.Tuple[str] = _all_accept_content_types, stream: bool = False, timeout: typing.Optional[typing.Union[int, typing.Tuple]] = None, skip_deserialization: bool = False, ): return self._patch_entity_workspace_data_filters_oapg( body=body, query_params=query_params, path_params=path_params, content_type=content_type, accept_content_types=accept_content_types, stream=stream, timeout=timeout, skip_deserialization=skip_deserialization )
PypiClean
/inqbus.ocf.generic-0.1.tar.gz/inqbus.ocf.generic-0.1/README.txt
==================================================
inqbus.ocf.generic : OCF resource agents framework
==================================================

:Version: 0.1
:Download: http://pypi.python.org/pypi/inqbus.ocf.generic
:Keywords: python, OCF, resource agents, framework, pacemaker

.. contents::
    :local:

Overview
========

Inqbus.ocf.generic is a framework that helps you write OCF compatible
resource agents, e.g. for the Pacemaker failover management system.
The inqbus.ocf.generic framework keeps the gory details of writing an
OCF compatible resource agent away from you.

Powerful base classes bring you:

* support for the complete set of OCF exit codes and their respective business logic
* OCF Parameter classes for integer, string, etc. values
* predefined generic OCF handlers (meta-data, validate)
* automatic generation of the XML meta data
* easy addition of handlers for e.g. start/stop/status
* inheritance of resource agents: encapsulate agent business logic and share it among similar resource agents

A sketch of the raw OCF contract that these base classes wrap is shown
in the Example section below.

Installation
============

Please refer to the installation of the `inqbus.ocf.agents package
<http://pypi.python.org/pypi/inqbus.ocf.agents>`_ .

Documentation
=============

The documentation of the inqbus.ocf.generic API is still in progress.
In the meantime, please refer to inqbus.ocf.agents, which is a good
example of using inqbus.ocf.generic.

Credits
=======

I have stolen lots of ideas from Michael Samuel's `ocfra framework
<https://code.launchpad.net/~therealmik/+recipe/python-ocfra-daily>`_ .

License
=======

This software is licensed under the New BSD License. See the LICENSE.txt
file in the top distribution directory for the full license text.
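Example
=======

Since the API documentation is still in progress, here is a minimal sketch
of the raw OCF contract that resource agents implement, and that the base
classes described above take over for you. Everything below is plain Python
against the standardized OCF interface: the action name arrives as the first
command line argument, parameters arrive as ``OCF_RESKEY_*`` environment
variables, and the result is one of the standardized OCF exit codes. It is
not the inqbus.ocf.generic API itself, and the ``pidfile`` parameter is only
an illustrative assumption.

.. code-block:: python

    import os
    import sys

    # Standardized OCF exit codes, as defined by the OCF resource agent API.
    OCF_SUCCESS = 0
    OCF_ERR_GENERIC = 1
    OCF_ERR_UNIMPLEMENTED = 3
    OCF_NOT_RUNNING = 7

    def monitor():
        # Agents read their parameters from OCF_RESKEY_* variables.
        pidfile = os.environ.get("OCF_RESKEY_pidfile", "/var/run/myservice.pid")
        return OCF_SUCCESS if os.path.exists(pidfile) else OCF_NOT_RUNNING

    def main():
        # Pacemaker invokes the agent with the action name as first argument.
        action = sys.argv[1] if len(sys.argv) > 1 else ""
        handlers = {"monitor": monitor, "status": monitor}
        handler = handlers.get(action)
        if handler is None:
            # start/stop/meta-data/validate-all would be dispatched here;
            # the framework's base classes provide this plumbing for you.
            return OCF_ERR_UNIMPLEMENTED
        return handler()

    if __name__ == "__main__":
        sys.exit(main())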
PypiClean
/docker-emperor-0.2.2.tar.gz/docker-emperor-0.2.2/docker_emperor/nodes/machine.py
import os
import six
import collections
from docker_emperor.commands import Command
from docker_emperor.nodes.environment import Environment
from docker_emperor.nodes.service import Services
from docker_emperor.nodes.command import Commands
from docker_emperor.utils import setdefaultdict, OrderedDict
import docker_emperor.logger as logger


__all__ = ['Machines', 'Machine']

# DRIVERS
# Amazon Web Services
# Microsoft Azure
# Digital Ocean
# Exoscale
# Google Compute Engine
# Generic
# Microsoft Hyper-V
# OpenStack
# Rackspace
# IBM Softlayer
# Oracle VirtualBox
# VMware vCloud Air
# VMware Fusion
# VMware vSphere
# VMware Workstation (unofficial plugin, not supported by Docker)
# Grid 5000 (unofficial plugin, not supported by Docker)


class Machines(dict):

    DEFAULT = {
        'localhost': {}
    }

    def __new__(cls, *args, **kwargs):
        return dict.__new__(cls, *args, **kwargs)

    def __init__(self, data):
        super(self.__class__, self).__init__(setdefaultdict(data))
        if self:
            for key, val in self.items():
                self[key] = Machine(key, val)
        else:
            self['localhost'] = Machine('localhost')

    def __iter__(self):
        for key, val in self.items():
            yield val

    def __repr__(self):
        return ", ".join(str(m) for m in self)

    def __getitem__(self, i):
        if isinstance(i, int):
            return [c for c in self][i]
        else:
            return self.get(i, None)


class Machine(dict):

    COMMANDS = [
        'ssh'
    ]

    LOCAL_MACHINE_WARNING = 'You are already on a local machine'

    class Drivers(object):
        LOCALHOST = 'localhost'
        GENERIC_LOCALHOST = 'generic --generic-ip-address localhost'

    def __new__(cls, *args, **kwargs):
        return dict.__new__(cls, *args, **kwargs)

    def __init__(self, name, data={}, bin="docker-machine"):
        self.key = name
        self.name = name
        self.bin = bin
        super(Machine, self).__init__(setdefaultdict(data))
        for default_name, default_class in [
            ('environment', Environment),
            ('services', Services),
            ('commands', Commands),
        ]:
            self[default_name] = default_class(self[default_name])
        if not isinstance(self['driver'], six.string_types):
            self['driver'] = Machine.Drivers.LOCALHOST
        if not isinstance(self['hosts'], list):
            self['hosts'] = []
        if not isinstance(self['files'], list):
            self['files'] = []
        if not isinstance(self['workdir'], six.string_types):
            self['workdir'] = '/home/docker/'

    def __repr__(self):
        return '<{}: {}>'.format(self.__class__.__name__, self.name)

    def __getitem__(self, key):
        return self.get(key)

    def bash(self, *args, **kwargs):
        cmd = Command(self.bin, *args, **kwargs)
        cmd.run()
        return cmd

    @property
    def is_localhost(self):
        return self['driver'] == Machine.Drivers.LOCALHOST

    @property
    def is_generic(self):
        return self['driver'].startswith('generic')

    @property
    def exists(self):
        cmd = self.bash("ls", "--filter", "NAME=" + self.name,
                        "--format", "{{.Name}}", machine=self, tty=False)
        for line in cmd.lines:
            if line == self.name:
                return True
        return False

    @property
    def docker_env(self):
        n = '__docker_env'
        if not hasattr(self, n):
            if self.is_localhost:
                env = []
            else:
                cmd = self.bash('env', self.name)
                starts = 'export '
                # slice off the prefix (str.lstrip would treat it as a
                # character set, not a literal prefix)
                env = [line[len(starts):] for line in cmd.lines
                       if line.startswith(starts)]
            setattr(self, n, env)
        return getattr(self, n)

    def start(self):
        if self.is_localhost:
            if not self.is_running:
                self.bash('start', self.name, sys=True).run().log()
        return self.is_running

    @property
    def is_running(self):
        return self.status == 'Running'

    @property
    def is_startable(self):
        return not self.is_localhost and not self.is_generic

    @property
    def status(self):
        if self.is_localhost:
            return 'Running'
        else:
            return self.bash('status', self.name).out

    @property
    def ip(self):
        if self.is_localhost:
            return '0.0.0.0'
        else:
            return self.bash('ip', self.name, log=False, machine=self).out.strip()

    @property
    def pwd(self):
        return self.bash('ssh', self.name, 'pwd', log=False, machine=self).out.strip()

    @property
    def inspect(self):
        return self.bash('inspect', self.name, machine=self, tty=False).out

    @property
    def active(self):
        return self.bash('active', machine=self, tty=False).out

    def remove(self):
        return self.bash('rm', self.name, machine=self)

    # active
    # config
    # create
    # env
    # help
    # inspect
    # ip
    # kill
    # ls
    # mount
    # provision
    # regenerate-certs
    # restart
    # rm
    # scp
    # ssh
    # start
    # stop
    # upgrade
    # url


Command.Machine = Machine
PypiClean
/cubicweb-inlinedit-2.0.0.tar.gz/cubicweb-inlinedit-2.0.0/cubicweb_inlinedit/data/cubes.inlinedit.js
cw.inlinedit = new Namespace('cw.inlinedit'); jQuery.extend(cw.inlinedit, { /* Unhides the part of inlinedit div containing the form * hides other parts */ showInlineEditionForm: function (divid) { jQuery('#' + divid).hide(); jQuery('#' + divid + '-value').hide(); jQuery('#' + divid + '-form').show(); }, /* Hides and removes edition parts, incl. messages * show initial widget state */ cleanupAfterCancel: function (divid, cbname) { jQuery('#appMsg').hide(); jQuery('div.errorMessage').remove(); var params = cw.inlinedit._formParams(divid + '-form', {fname: 'cancel_reledit'}); var d = jQuery('#' + params.divid + '-reledit').loadxhtml(AJAX_BASE_URL, params, 'post'); d.addCallback(function () { jQuery(cw).trigger('reledit.cancel-reloaded', params);}); }, _formidToFname: function (formid) { if (formid.startswith('none')) return 'edit_related_form'; else return 'reledit_form'; }, /* Extract specific reledit parameter values * from the form. Takes a dict, fills it and returns it. */ _formParams: function(formid, paramobj) { jQuery('#' + formid + ' input:hidden').each(function (elt) { var name = jQuery(this).attr('name'); if (name && name.startswith('__reledit|')) { paramobj[name.split('|')[1]] = this.value; } }); return paramobj; }, /* callback used on form validation success * refreshes the whole page or just the edited reledit zone * @param results: [status, ...] * @param formid: the dom id of the reledit form * @param cbargs: ... */ onSuccess: function (results, formid, cbargs) { var fname = cw.inlinedit._formidToFname(formid); var params = cw.inlinedit._formParams(formid, {fname: fname}); var reload = cw.evalJSON(params.reload); if (reload || (params.formid == 'deleteconf')) { if (typeof reload == 'string') { /* Sometimes we want to reload but the reledit thing * updated a key attribute which was a component of the * url */ document.location.href = reload; return; } else { document.location.reload(); return; } } var reledit_div = params.divid + '-reledit'; // on deletion we want to refresh the whole widget if (params.action == 'delete-related') { reledit_div = params.topleveldiv + '-reledit'; } var d = jQuery('#' + reledit_div).loadxhtml(AJAX_BASE_URL, params, 'post'); d.addCallback(function () { jQuery(cw).trigger('reledit.success-reloaded', params);}); d.addErrback(function (err, req) { if (err.startsWith('--GONE--:')){ cw.log('expected entity is gone, reloading the page'); cw.log(err); document.location.reload(); } }); }, loadInlineForm: function(args) { args['pageid'] = pageid; var divid = args['divid']; var d = jQuery('#' + divid + '-reledit').loadxhtml(AJAX_BASE_URL, args, 'post'); d.addCallback(function () { jQuery(cw).trigger('reledit.inlineform-loaded'); cw.inlinedit.showInlineEditionForm(divid); }); } }); var oldRemoteCallFailed = remoteCallFailed; /* disable default callback for our gone exception WE SHOULD USE 410 status instead */ remoteCallFailed = function(err, req) { cw.log(err); // if (!err.startsWith('--GONE--:')){ // oldRemoteCallFailed(err, req); // } };
PypiClean
/pipfile-0.0.2.tar.gz/pipfile-0.0.2/docs/development/submitting-patches.rst
Submitting patches ================== * Always make a new branch for your work. * Patches should be small to facilitate easier review. `Studies have shown`_ that review quality falls off as patch size grows. Sometimes this will result in many small PRs to land a single large feature. * Larger changes should be discussed in a ticket before submission. * New features and significant bug fixes should be documented in the :doc:`/changelog`. * You must have legal permission to distribute any code you contribute and it must be available under both the BSD and Apache Software License Version 2.0 licenses. If you believe you've identified a security issue in packaging, please follow the directions on the :doc:`security page </security>`. Code ---- When in doubt, refer to :pep:`8` for Python code. You can check if your code meets our automated requirements by running ``flake8`` against it. If you've installed the development requirements this will automatically use our configuration. You can also run the ``tox`` job with ``tox -e pep8``. `Write comments as complete sentences.`_ Every code file must start with the boilerplate licensing notice: .. code-block:: python # This file is dual licensed under the terms of the Apache License, Version # 2.0, and the BSD License. See the LICENSE file in the root of this repository # for complete details. Additionally, every Python code file must contain .. code-block:: python from __future__ import absolute_import, division, print_function Tests ----- All code changes must be accompanied by unit tests with 100% code coverage (as measured by the combined metrics across our build matrix). Documentation ------------- All features should be documented with prose in the ``docs`` section. When referring to a hypothetical individual (such as "a person receiving an encrypted message") use gender neutral pronouns (they/them/their). Docstrings are typically only used when writing abstract classes, but should be written like this if required: .. code-block:: python def some_function(some_arg): """ Does some things. :param some_arg: Some argument. """ So, specifically: * Always use three double quotes. * Put the three double quotes on their own line. * No blank line at the end. * Use Sphinx parameter/attribute documentation `syntax`_. .. _`Write comments as complete sentences.`: http://nedbatchelder.com/blog/201401/comments_should_be_sentences.html .. _`syntax`: http://sphinx-doc.org/domains.html#info-field-lists .. _`Studies have shown`: http://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
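Putting the file-level requirements together, the top of a brand new Python
module would look like the sketch below. The function is a made-up
placeholder, shown only to illustrate the docstring conventions above:

.. code-block:: python

    # This file is dual licensed under the terms of the Apache License, Version
    # 2.0, and the BSD License. See the LICENSE file in the root of this repository
    # for complete details.

    from __future__ import absolute_import, division, print_function


    def add_requirement(requirement):
        """
        Adds a requirement to the collection.

        :param requirement: The requirement to add.
        """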
PypiClean
/sinophone-0.0.2.tar.gz/sinophone-0.0.2/examples/example.wuu_hant.py
from sinophone import options
from sinophone.phonetics import *
from sinophone.phonology import *

# Set the display language to Wu Chinese, so the reprs below print in
# Traditional Chinese.
options.repr_lang = "wuu-Hant"

# Syllable 音節
# Let's build a syllable.
kaq = Syllable(
    Initial("k"),
    Final(
        nucleus=Nucleus("ɐ"),
        coda=Coda("ʔ"),
    ),
    Tone("˥˥"),
)
kaq
"""
# In a terminal, the actual output is colored.
<音節 [<聲母 'k'> <韻母 [<介音 ''> <韻腹 'ɐ'> <韻尾 'ʔ'>]> <聲調 '˥˥'>]>
"""

lon = Syllable(Initial("l"), Final(nucleus=Nucleus("o"), coda=Coda("ŋ")), Tone("˨˧"))
bo = Syllable(Initial("b"), Final(nucleus=Nucleus("o")), Tone("˨˧"))

# PhonologicalRule 音韻規則
# Let's create a phonological rule saying that /o/ becomes [ʊ̃] before a
# nasal.
pr = PhonologicalRule(
    Nucleus("o"),
    IPAString("ʊ̃"),
    SyllableFeatures({"Final": {IPAFeatureGroup("+nasal")}}),
)
pr
"""
<音韻規則 "o -> ʊ̃ / {'Final': {'+nasal'}}">
"""

# PhonotacticConstraint 音位排列制約
# Let's create a phonotactic constraint saying that any voiced sound that
# is neither a nasal nor a lateral approximant cannot combine with the
# extra-high tone (˥).
pc = PhonotacticConstraint(
    SyllableFeatures(
        {
            "Initial": {
                IPAFeatureGroup("-nasal -lateral-approximant +voiced"),
            },
            "Tone": {IPAFeatureGroup("+extra-high-level")},
        }
    ),
    PhonotacticAcceptability(False, False),
)
pc
"""
<音位排列制約 {'Initial': {'-nasal +voiced -lateral-approximant'},
'Tone': {'+extra-high-level'}}: {'existent': False, 'grammatical': False}>
"""

# Phonology 音系
# Let's build a phonology from the objects above.
phonology = Phonology(
    syllables={kaq, bo, lon},
    phonotactics={pc},
    phonological_rules=[pr],
)

# Derive the syllable components from the syllables.
sorted(phonology.initials)
"""
[<聲母 'b'>, <聲母 'k'>, <聲母 'l'>]
"""

# Automatically combine all syllable components, without regard to the
# phonotactic constraints.
spc = sorted(phonology.collocations)

# Color the list above and print it.
for syllable in spc:
    phonology.pretty_print_syllable(syllable)
"""
.. output omitted for brevity
Syllables that fully conflict with the phonotactic constraints are marked
red, and syllables that fully satisfy them are marked green.
For the meaning of the other colors, see
`sinophone.options.RAINBOW_COLOR_SCHEME`.
"""

# List the syllables that fully conflict with the phonotactic constraints.
[
    syllable.phonetic_ipa_str
    for syllable in spc
    if phonology.render_syllable(syllable).acceptability
    != PhonotacticAcceptability(True, True)
]
"""
[<IPA 字符串 'bʊ̃ŋ˥˥'>, <IPA 字符串 'bo˥˥'>, <IPA 字符串 'bɐʔ˥˥'>]
"""
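
# Illustrative counterpart to the listing above (not part of the original
# example): the same comprehension with `!=` flipped to `==` lists the
# syllables that fully satisfy the phonotactic constraints instead.
[
    syllable.phonetic_ipa_str
    for syllable in spc
    if phonology.render_syllable(syllable).acceptability
    == PhonotacticAcceptability(True, True)
]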
PypiClean
/jupyter_server_mathjax-0.2.6-py3-none-any.whl/jupyter_server_mathjax/static/localization/zh-hant/FontWarnings.js
MathJax.Localization.addTranslation("zh-hant","FontWarnings",{version:"2.7.9",isLoaded:true,strings:{webFont:"MathJax\u662F\u4F7F\u7528\u57FA\u65BC\u7DB2\u9801\u7684\u5B57\u578B\u4F86\u986F\u793A\u9801\u9762\u4E0A\u7684\u6578\u5B78\u76F8\u95DC\u5167\u5BB9\u3002\u56E0\u6B64\u6703\u82B1\u8CBB\u4E00\u4E9B\u6642\u9593\u4E0B\u8F09\u5B57\u578B\uFF0C\u82E5\u60F3\u8B93\u9801\u9762\u80FD\u8F03\u5FEB\u5448\u73FE\u51FA\u5167\u5BB9\uFF0C\u60A8\u53EF\u4EE5\u5C07\u6240\u4F7F\u7528\u5230\u7684\u6578\u5B78\u5B57\u578B\u4E0B\u8F09\u5B89\u88DD\u5230\u60A8\u96FB\u8166\u672C\u6A5F\u88E1\u7684\u7CFB\u7D71\u5B57\u578B\u8CC7\u6599\u593E\u3002",imageFonts:"MathJax\u4F7F\u7528\u5716\u50CF\u5B57\u578B\u800C\u975E\u672C\u5730\u7AEF\u6216\u57FA\u65BC\u7DB2\u9801\u7684\u5B57\u578B\u3002\u8207\u4E00\u822C\u60C5\u6CC1\u76F8\u6BD4\u6703\u8F03\u6162\u5448\u73FE\u51FA\u5167\u5BB9\uFF0C\u800C\u6240\u5448\u73FE\u6578\u5B78\u516C\u5F0F\u5728\u60A8\u7684\u5370\u8868\u6A5F\u4E0A\u53EF\u80FD\u6C92\u8FA6\u6CD5\u5B8C\u6574\u8FA8\u8B58\u5730\u5370\u51FA\u3002",noFonts:"\u56E0MathJax\u7121\u6CD5\u8A2D\u7F6E\u8981\u7528\u65BC\u986F\u793A\u6578\u5B78\u516C\u5F0F\u7684\u5B57\u578B\uFF0C\u800C\u4E14\u5716\u50CF\u5B57\u578B\u7121\u6CD5\u4F7F\u7528\uFF0C\u6240\u4EE5\u6539\u63A1\u901A\u7528\u842C\u570B\u78BC\u5B57\u5143\u4EE5\u5118\u91CF\u80FD\u8B93\u60A8\u7684\u700F\u89BD\u5668\u6B63\u78BA\u986F\u793A\u5167\u5BB9\u3002\u67D0\u4E9B\u3001\u6216\u751A\u81F3\u5168\u90E8\u5B57\u5143\u6709\u53EF\u80FD\u6703\u7121\u6CD5\u6B63\u78BA\u5730\u986F\u793A\u51FA\u3002",webFonts:"\u73FE\u4ECA\u591A\u6578\u7684\u700F\u89BD\u5668\u5141\u8A31\u7D93\u7531\u7DB2\u969B\u7DB2\u8DEF\u4E0B\u8F09\u5B57\u578B\u3002\u5C07\u60A8\u7684\u700F\u89BD\u5668\u66F4\u65B0\u81F3\u6700\u65B0\u7248\u672C\uFF08\u6216\u662F\u66F4\u63DB\u700F\u89BD\u5668\uFF09\u4F86\u52A0\u5F37\u9801\u9762\u4E0A\u6578\u5B78\u516C\u5F0F\u7684\u986F\u793A\u8CEA\u611F\u3002",fonts:"MathJax\u53EF\u4F7F\u7528[STIX fonts](%1)\u6216[MathJax TeX fonts](%2)\u3002\u4E0B\u8F09\u4F86\u5B89\u88DD\u9019\u4E9B\u5B57\u578B\u80FD\u6539\u5584\u60A8\u5728MathJax\u7684\u4F7F\u7528\u9AD4\u9A57\u3002",STIXPage:"\u6B64\u9801\u9762\u662F\u8A2D\u8A08\u4F86\u4F7F\u7528[STIX fonts](%1)\u3002\u4E0B\u8F09\u5B89\u88DD\u6B64\u5B57\u578B\u4EE5\u6539\u5584\u60A8\u5728MathJax\u7684\u4F7F\u7528\u9AD4\u9A57\u3002",TeXPage:"\u6B64\u9801\u9762\u662F\u8A2D\u8A08\u4F86\u4F7F\u7528[MathJax TeX fonts](%1)\u3002\u4E0B\u8F09\u5B89\u88DD\u6B64\u5B57\u578B\u4EE5\u6539\u5584\u60A8\u5728MathJax\u7684\u4F7F\u7528\u9AD4\u9A57\u3002"}});MathJax.Ajax.loadComplete("[MathJax]/localization/zh-hant/FontWarnings.js");
PypiClean
/django-bootstrap-datetimepicker-1.5.tar.gz/django-bootstrap-datetimepicker-1.5/django_bootstrap_datetimepicker/widgets.py
import json

from django import forms
from django.conf import settings
from django.utils import translation
from django.utils.safestring import mark_safe

try:
    from django.utils.encoding import force_unicode as force_text
except ImportError:  # python3
    from django.utils.encoding import force_text

DATETIME_INPUT_FORMATS = getattr(settings, 'DATETIME_INPUT_FORMATS', None)
if DATETIME_INPUT_FORMATS:
    DATETIME_INPUT_FORMATS = DATETIME_INPUT_FORMATS[0]


class BootstrapDateTimeInput(forms.DateTimeInput):
    class Media:
        js = (
            settings.STATIC_URL + 'datepicker/js/bootstrap-datetimepicker.min.js',
        )
        lang = translation.get_language()
        lang = "%s-%s" % (lang.split('-')[0].lower(), lang.split('-')[1].upper()) if '-' in lang else lang
        if lang != 'en-US':
            js = js + (
                settings.STATIC_URL + 'datepicker/js/locales/bootstrap-datetimepicker.%s.js' % lang,
            )
        css = {
            'all': (
                settings.STATIC_URL + 'datepicker/css/bootstrap-datetimepicker.min.css',
            )
        }

    format_map = (
        ('dd', r'%d'),
        ('HH', r'%H'),
        ('hh', r'%I'),
        ('MM', r'%m'),
        ('mm', r'%M'),
        ('ss', r'%S'),
        ('yy', r'%y'),
        ('yyyy', r'%Y'),
    )

    def __init__(self, attrs=None, format=None, options=None):
        super(BootstrapDateTimeInput, self).__init__(attrs, format)
        if options is False:
            self.options = False
        else:
            # Default the language and format options only when an options
            # dict is in play; probing them on the literal False would raise
            # a TypeError.
            self.options = options and options.copy() or {}
            if 'language' not in self.options:
                lang = translation.get_language()
                self.options['language'] = "%s-%s" % (lang.split('-')[0].lower(), lang.split('-')[1].upper()) if '-' in lang else lang
            if format and not self.options.get('format'):
                self.options['format'] = self.conv_datetime_format_py2js(format)
            elif not format and DATETIME_INPUT_FORMATS and not self.options.get('format'):
                self.options['format'] = self.conv_datetime_format_py2js(DATETIME_INPUT_FORMATS)

    def conv_datetime_format_py2js(self, input_format):
        for js, py in self.format_map:
            input_format = input_format.replace(py, js)
        return input_format

    def render(self, name, value, attrs=None):
        if value:
            if DATETIME_INPUT_FORMATS:
                value = value.strftime(DATETIME_INPUT_FORMATS)
            else:
                value = value.strftime('%d/%m/%Y %H:%M:%S')
        else:
            value = ''
        html = '''
        <div id="id_%(name)s" class="input-append date" data-bootstrap-widget="datetimepicker">
            <input value="%(value)s" name="%(name)s" type="text"></input>
            <span class="add-on">
                <i data-time-icon="icon-time" data-date-icon="icon-calendar"></i>
            </span>
        </div>''' % {'name': name, 'value': value}
        js = '''<script type="text/javascript">
            (function(window) {
                var callback = function() {
                    $(function(){$("#id_%(name)s:has(input:not([readonly],[disabled]))").datetimepicker(%(options)s);});
                };
                if(window.addEventListener){window.addEventListener("load", callback, false);}
                else if (window.attachEvent){window.attachEvent("onload", callback);}
                else{window.onload = callback;}
            })(window);
        </script>''' % {'name': name, 'value': value, 'options': json.dumps(self.options or {})}
        return mark_safe(force_text(html + js))
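
# Illustrative usage sketch (not part of the original module). The form and
# field names are assumptions; the format string uses the tokens defined in
# `format_map` above.
#
# from django import forms
# from django_bootstrap_datetimepicker.widgets import BootstrapDateTimeInput
#
# class EventForm(forms.Form):
#     starts_at = forms.DateTimeField(
#         widget=BootstrapDateTimeInput(options={'format': 'dd/MM/yyyy HH:mm:ss'})
#     )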
PypiClean
/alarmdecoder-1.13.11.tar.gz/alarmdecoder-1.13.11/docs/build/html/searchindex.js
Search.setIndex({envversion:42,terms:{represent:2,all:[0,2],code:[3,2],sleep:3,on_boot:2,stage_don:2,backlight:2,zone:2,readabl:2,send:2,program:2,x03:2,x02:2,x01:2,sent:2,x04:2,sourc:[0,2,3],string:2,clear_zon:2,fals:2,on_messag:[3,2],perimeter_onli:2,lrr:2,on_alarm_restor:2,level:2,list:2,upload:2,dsc:2,"try":[3,2],emul:2,expandermessag:2,pleas:3,second:2,port:2,supervis:2,ad2seri:[3,2],current:2,version:2,"new":0,method:2,ser2sock:2,perimet:2,timeouterror:2,gener:2,usbdevic:[3,2],entry_delay_off:2,here:3,on_config_receiv:2,address:2,path:2,valu:2,fire_alarm:2,search:[3,2],sender:[0,3],checksum:2,prior:2,def:[3,2],invalidmessageerror:2,via:2,vid:2,appli:2,filenam:2,api:3,famili:[3,2],key_pan:2,from:[3,2],usb:[3,2],commun:2,is_reader_al:2,handler:[0,3],call:[0,2],type:2,more:3,relat:2,stage_boot:2,pkei:2,flag:2,templat:2,relai:2,actual:2,cach:2,serialdevic:2,must:0,none:[0,2],retriev:[3,2],key_f2:2,on_restor:2,restor:2,dev:2,itself:0,can:0,aliv:2,backlight_on:2,process:2,indic:[],high:2,cursor_loc:2,serial:2,occur:2,delai:2,progress_callback:2,secur:3,anoth:2,simulate_wire_problem:2,write:2,uploadchecksumerror:2,purg:2,low:2,instead:0,panic:2,panel_typ:2,updat:2,product:2,recogn:2,x509:2,ftdi:2,befor:2,attent:2,mai:3,data:2,classmethod:2,ssl_ca:2,issu:2,callback:2,"switch":2,ttimeout:2,socketdevic:2,disarm:2,jpath:2,through:2,paramet:2,bypass:2,on_read:2,main:[3,2],"return":2,python:3,timestamp:2,on_bypass:2,detach:2,name:2,revert:2,version_flag:2,authent:2,stage_wait:2,mode:2,timeout:2,debug:2,found:[3,2],nodeviceerror:2,"static":2,connect:2,our:2,read_lin:2,event:[],ad2pi:[3,2],reboot:2,content:3,reader:2,print:3,factori:2,written:2,standard:2,on_clos:2,base:[0,2],dictionari:2,"byte":2,armed_hom:2,on_detach:2,key_f4:2,product_id:2,thread:2,key_f3:2,emulate_relai:2,openssl:2,readthread:2,get_config:2,on_rfx_messag:2,find_al:2,ad2usb:[3,2],first:[3,2],oper:0,rang:2,number:2,done:2,on_writ:2,configbit:2,open:[3,2],on_power_chang:2,differ:2,unknown:2,interact:3,system:2,wrapper:2,attach:2,start_detect:2,on_open:2,termin:3,battery_low:2,specifi:2,rfmessag:2,on_fir:2,provid:[3,2],remov:[0,2],charact:2,project:3,save_config:2,bitfield:2,raw:[3,2],dedupl:2,expir:2,"__main__":3,programming_mod:2,also:[0,2],exampl:3,which:2,event_data:2,channel:2,thi:[3,2],index:3,buffer:2,object:[0,2],most:2,detect:2,basemessag:2,"class":[0,2],armed_awai:2,doc:0,clear:2,request:2,emulate_lrr:2,doe:2,on_low_batteri:2,error:2,text:2,default_product_id:2,ssl_kei:2,radio:2,find:[3,2],locat:2,configur:2,solut:2,fault_zon:2,should:2,key_f1:2,dict:2,get_vers:2,serial_numb:2,stop:2,ssl:2,progress:2,report:2,requir:[3,2],fileno:2,enabl:2,earg:0,whether:2,common:2,partit:2,contain:2,alarm_event_occur:2,certif:2,set:[3,2],keypad:2,ac_pow:2,on_alarm:2,see:3,arg:0,fail:2,close:2,arm:2,stop_read:2,pyseri:2,statu:2,wire:2,pattern:2,keypress:2,state:2,between:2,"import":3,awai:2,kei:2,numer:2,baudrat:2,alarmdecoder_object:2,last:2,fault:2,internal_address_mask:2,batteri:2,identif:2,detectthread:2,due:2,been:2,beep:2,trigger:2,basic:3,no_reader_thread:2,fire:[0,2],commerror:2,chime_on:2,convert:2,func:0,present:2,sound:2,check_zon:2,on_fault:2,cursor:2,defin:0,"while":[3,2],match:2,version_numb:2,loop:2,readi:2,kwarg:[0,2],ftdi_vendor_id:2,vendor:2,alarm_sound:2,panel_data:2,author:2,receiv:2,belong:2,handl:[3,2],status:2,finish:2,expans:2,rais:2,user:2,expand:2,lower:2,entri:2,client:2,zone_bypass:2,usual:2,boot:2,human:2,stage_error:2,expos:2,field:2,"_on_open":2,except:[3,2],on_attach:2,add:0,board:2,get_config_str:2,uploaderr
or:2,stage_upload:2,applic:3,on_zone_fault:2,around:2,format:2,read:2,numeric_cod:2,lcd:2,bit:2,associ:2,ad2:[3,2],like:0,deprec:2,singl:2,page:3,default_vendor_id:2,on_pan:2,intern:2,sampl:3,system_fault:2,fire_timeout:2,home:2,librari:3,definit:2,pyftdi:2,localhost:2,run:2,power:2,event_typ:2,stage_load:2,ssl_certif:2,"__name__":3,describ:2,expander_to_zon:2,simul:2,stage_start:2,address_mask:2,"float":2,automat:2,chime:2,crypto:2,support:[3,2],on_relay_chang:2,"long":2,start:2,interfac:2,includ:3,on_expander_messag:2,stop_detect:2,"function":[0,2],tupl:2,eventhandl:0,line:2,"true":3,emulate_zon:2,"default":2,displai:2,purge_buff:2,below:3,stage_debug:2,alarm:[],"int":2,descript:2,x05:2,pid:2,repres:2,on_zone_restor:2,exist:[0,2],ademco:2,read_timeout:2,ftdi_product_id:2,check:2,battery_timeout:2,handle_messag:3,when:2,invalid:2,on_disarm:2,bool:2,you:0,intend:2,firmwar:2,track:2,on_arm:2,on_sending_receiv:2,directori:3,mask:2,lrrmessag:2,on_lrr_messag:2,obj:0,time:3},objtypes:{"0":"py:module","1":"py:attribute","2":"py:class","3":"py:method","4":"py:exception","5":"py:classmethod","6":"py:staticmethod"},objnames:{"0":["py","module","Python module"],"1":["py","attribute","Python attribute"],"2":["py","class","Python class"],"3":["py","method","Python method"],"4":["py","exception","Python exception"],"5":["py","classmethod","Python class method"],"6":["py","staticmethod","Python static method"]},filenames:["alarmdecoder.event","modules","alarmdecoder","index"],titles:["event Package","alarmdecoder","alarmdecoder Package","Welcome to Alarm Decoder&#8217;s documentation!"],objects:{"alarmdecoder.messages.LRRMessage":{partition:[2,1,1,""],dict:[2,3,1,""],event_data:[2,1,1,""],event_type:[2,1,1,""]},"alarmdecoder.messages.BaseMessage":{raw:[2,1,1,""],dict:[2,3,1,""],timestamp:[2,1,1,""]},"alarmdecoder.messages.ExpanderMessage":{ZONE:[2,1,1,""],RELAY:[2,1,1,""],value:[2,1,1,""],dict:[2,3,1,""],address:[2,1,1,""],type:[2,1,1,""],channel:[2,1,1,""]},"alarmdecoder.event.event":{EventHandler:[0,2,1,""],Event:[0,2,1,""]},"alarmdecoder.zonetracking.Zone":{status:[2,1,1,""],STATUS:[2,1,1,""],name:[2,1,1,""],zone:[2,1,1,""],timestamp:[2,1,1,""],CLEAR:[2,1,1,""],expander:[2,1,1,""],FAULT:[2,1,1,""],CHECK:[2,1,1,""]},"alarmdecoder.devices.SerialDevice":{write:[2,3,1,""],BAUDRATE:[2,1,1,""],fileno:[2,3,1,""],read:[2,3,1,""],read_line:[2,3,1,""],purge:[2,3,1,""],find_all:[2,6,1,""],"interface":[2,1,1,""],close:[2,3,1,""],open:[2,3,1,""]},"alarmdecoder.zonetracking":{Zonetracker:[2,2,1,""],Zone:[2,2,1,""]},"alarmdecoder.zonetracking.Zonetracker":{faulted:[2,1,1,""],on_restore:[2,1,1,""],update:[2,3,1,""],zones:[2,1,1,""],on_fault:[2,1,1,""],EXPIRE:[2,1,1,""],expander_to_zone:[2,3,1,""]},"alarmdecoder.devices.Device.ReadThread":{READ_TIMEOUT:[2,1,1,""],stop:[2,3,1,""],run:[2,3,1,""]},"alarmdecoder.event":{event:[0,0,0,"-"]},"alarmdecoder.messages":{Message:[2,2,1,""],LRRMessage:[2,2,1,""],RFMessage:[2,2,1,""],ExpanderMessage:[2,2,1,""],BaseMessage:[2,2,1,""]},"alarmdecoder.devices":{Device:[2,2,1,""],SocketDevice:[2,2,1,""],USBDevice:[2,2,1,""],SerialDevice:[2,2,1,""]},"alarmdecoder.devices.USBDevice.DetectThread":{stop:[2,3,1,""],run:[2,3,1,""],on_attached:[2,1,1,""],on_detached:[2,1,1,""]},alarmdecoder:{zonetracking:[2,0,0,"-"],messages:[2,0,0,"-"],devices:[2,0,0,"-"],util:[2,0,0,"-"],decoder:[2,0,0,"-"],panels:[2,0,0,"-"],event:[0,0,0,"-"]},"alarmdecoder.decoder.AlarmDecoder":{configbits:[2,1,1,""],on_rfx_message:[2,1,1,""],fault_zone:[2,3,1,""],on_expander_message:[2,1,1,""],on_open:[2,1,1,""],save_conf
ig:[2,3,1,""],serial_number:[2,1,1,""],on_alarm:[2,1,1,""],on_arm:[2,1,1,""],internal_address_mask:[2,1,1,""],on_sending_received:[2,1,1,""],KEY_PANIC:[2,1,1,""],fire_timeout:[2,1,1,""],close:[2,3,1,""],open:[2,3,1,""],id:[2,1,1,""],on_power_changed:[2,1,1,""],BATTERY_TIMEOUT:[2,1,1,""],KEY_F1:[2,1,1,""],KEY_F2:[2,1,1,""],KEY_F3:[2,1,1,""],on_message:[2,1,1,""],get_version:[2,3,1,""],reboot:[2,3,1,""],send:[2,3,1,""],version_flags:[2,1,1,""],on_zone_restore:[2,1,1,""],on_disarm:[2,1,1,""],on_fire:[2,1,1,""],on_write:[2,1,1,""],on_read:[2,1,1,""],on_lrr_message:[2,1,1,""],KEY_F4:[2,1,1,""],clear_zone:[2,3,1,""],on_zone_fault:[2,1,1,""],on_config_received:[2,1,1,""],on_alarm_restored:[2,1,1,""],get_config_string:[2,3,1,""],emulate_relay:[2,1,1,""],on_close:[2,1,1,""],on_bypass:[2,1,1,""],address:[2,1,1,""],battery_timeout:[2,1,1,""],on_panic:[2,1,1,""],on_relay_changed:[2,1,1,""],version_number:[2,1,1,""],on_low_battery:[2,1,1,""],emulate_lrr:[2,1,1,""],deduplicate:[2,1,1,""],emulate_zone:[2,1,1,""],get_config:[2,3,1,""],mode:[2,1,1,""],address_mask:[2,1,1,""],FIRE_TIMEOUT:[2,1,1,""],on_boot:[2,1,1,""]},"alarmdecoder.devices.SocketDevice":{ssl_certificate:[2,1,1,""],ssl_key:[2,1,1,""],ssl:[2,1,1,""],fileno:[2,3,1,""],read:[2,3,1,""],ssl_ca:[2,1,1,""],read_line:[2,3,1,""],purge:[2,3,1,""],write:[2,3,1,""],"interface":[2,1,1,""],close:[2,3,1,""],open:[2,3,1,""]},"alarmdecoder.devices.USBDevice":{stop_detection:[2,5,1,""],start_detection:[2,5,1,""],close:[2,3,1,""],open:[2,3,1,""],find:[2,5,1,""],DEFAULT_VENDOR_ID:[2,1,1,""],write:[2,3,1,""],PRODUCT_IDS:[2,1,1,""],serial_number:[2,1,1,""],BAUDRATE:[2,1,1,""],description:[2,1,1,""],read:[2,3,1,""],DEFAULT_PRODUCT_ID:[2,1,1,""],read_line:[2,3,1,""],find_all:[2,5,1,""],FTDI_VENDOR_ID:[2,1,1,""],"interface":[2,1,1,""],fileno:[2,3,1,""],DetectThread:[2,2,1,""],devices:[2,5,1,""],purge:[2,3,1,""],FTDI_PRODUCT_ID:[2,1,1,""]},"alarmdecoder.messages.Message":{backlight_on:[2,1,1,""],alarm_event_occurred:[2,1,1,""],programming_mode:[2,1,1,""],text:[2,1,1,""],bitfield:[2,1,1,""],armed_home:[2,1,1,""],alarm_sounding:[2,1,1,""],ready:[2,1,1,""],zone_bypassed:[2,1,1,""],panel_data:[2,1,1,""],check_zone:[2,1,1,""],numeric_code:[2,1,1,""],dict:[2,3,1,""],battery_low:[2,1,1,""],chime_on:[2,1,1,""],entry_delay_off:[2,1,1,""],perimeter_only:[2,1,1,""],fire_alarm:[2,1,1,""],ac_power:[2,1,1,""],beeps:[2,1,1,""],mask:[2,1,1,""],system_fault:[2,1,1,""],armed_away:[2,1,1,""],panel_type:[2,1,1,""],cursor_location:[2,1,1,""]},"alarmdecoder.devices.Device":{stop_reader:[2,3,1,""],on_open:[2,1,1,""],on_write:[2,1,1,""],ReadThread:[2,2,1,""],on_close:[2,1,1,""],on_read:[2,1,1,""],close:[2,3,1,""],is_reader_alive:[2,3,1,""],id:[2,1,1,""]},"alarmdecoder.messages.RFMessage":{battery:[2,1,1,""],value:[2,1,1,""],dict:[2,3,1,""],supervision:[2,1,1,""],serial_number:[2,1,1,""],loop:[2,1,1,""]},"alarmdecoder.decoder":{AlarmDecoder:[2,2,1,""]},"alarmdecoder.event.event.EventHandler":{fire:[0,3,1,""],add:[0,3,1,""],remove:[0,3,1,""]},"alarmdecoder.util.Firmware":{STAGE_ERROR:[2,1,1,""],STAGE_LOAD:[2,1,1,""],upload:[2,6,1,""],STAGE_BOOT:[2,1,1,""],STAGE_START:[2,1,1,""],STAGE_UPLOADING:[2,1,1,""],STAGE_DEBUG:[2,1,1,""],STAGE_WAITING:[2,1,1,""],STAGE_DONE:[2,1,1,""]},"alarmdecoder.util":{Firmware:[2,2,1,""],TimeoutError:[2,4,1,""],NoDeviceError:[2,4,1,""],CommError:[2,4,1,""],UploadChecksumError:[2,4,1,""],UploadError:[2,4,1,""],InvalidMessageError:[2,4,1,""]}},titleterms:{alarmdecod:[2,1],welcom:3,alarm:3,devic:2,messag:2,event:0,util:2,packag:[0,2],decod:[3,2],zonetrack:2,indic:3,tab
l:3,document:3,modul:[0,2],panel:2}})
PypiClean
/lxd_client-1.0.0-py3-none-any.whl/lxd_client/api/instances/post_instances_by_name_backups.py
from typing import Any, Dict, Optional, Union import httpx from ...client import Client from ...models.background_operation_response import BackgroundOperationResponse from ...models.create_instances_by_name_backups_request import CreateInstancesByNameBackupsRequest from ...models.error_response import ErrorResponse from ...types import Response def _get_kwargs( name: str, *, client: Client, json_body: CreateInstancesByNameBackupsRequest, ) -> Dict[str, Any]: url = "{}/1.0/instances/{name}/backups".format(client.base_url, name=name) headers: Dict[str, str] = client.get_headers() cookies: Dict[str, Any] = client.get_cookies() json_json_body = json_body.to_dict() return { "method": "post", "url": url, "headers": headers, "cookies": cookies, "timeout": client.get_timeout(), "json": json_json_body, } def _parse_response(*, response: httpx.Response) -> Optional[Union[BackgroundOperationResponse, ErrorResponse]]: if response.status_code == 202: response_202 = BackgroundOperationResponse.from_dict(response.json()) return response_202 if response.status_code == 400: response_400 = ErrorResponse.from_dict(response.json()) return response_400 if response.status_code == 401: response_401 = ErrorResponse.from_dict(response.json()) return response_401 return None def _build_response(*, response: httpx.Response) -> Response[Union[BackgroundOperationResponse, ErrorResponse]]: return Response( status_code=response.status_code, content=response.content, headers=response.headers, parsed=_parse_response(response=response), ) def sync_detailed( name: str, *, client: Client, json_body: CreateInstancesByNameBackupsRequest, ) -> Response[Union[BackgroundOperationResponse, ErrorResponse]]: """Create a new backup Args: name (str): json_body (CreateInstancesByNameBackupsRequest): Example: {'instance_only': True, 'optimized_storage': True, 'name': 'backupName', 'expiry': 3600}. Returns: Response[Union[BackgroundOperationResponse, ErrorResponse]] """ kwargs = _get_kwargs( name=name, client=client, json_body=json_body, ) response = httpx.request( verify=client.verify_ssl, **kwargs, ) return _build_response(response=response) def sync( name: str, *, client: Client, json_body: CreateInstancesByNameBackupsRequest, ) -> Optional[Union[BackgroundOperationResponse, ErrorResponse]]: """Create a new backup Args: name (str): json_body (CreateInstancesByNameBackupsRequest): Example: {'instance_only': True, 'optimized_storage': True, 'name': 'backupName', 'expiry': 3600}. Returns: Response[Union[BackgroundOperationResponse, ErrorResponse]] """ return sync_detailed( name=name, client=client, json_body=json_body, ).parsed async def asyncio_detailed( name: str, *, client: Client, json_body: CreateInstancesByNameBackupsRequest, ) -> Response[Union[BackgroundOperationResponse, ErrorResponse]]: """Create a new backup Args: name (str): json_body (CreateInstancesByNameBackupsRequest): Example: {'instance_only': True, 'optimized_storage': True, 'name': 'backupName', 'expiry': 3600}. 
Returns: Response[Union[BackgroundOperationResponse, ErrorResponse]] """ kwargs = _get_kwargs( name=name, client=client, json_body=json_body, ) async with httpx.AsyncClient(verify=client.verify_ssl) as _client: response = await _client.request(**kwargs) return _build_response(response=response) async def asyncio( name: str, *, client: Client, json_body: CreateInstancesByNameBackupsRequest, ) -> Optional[Union[BackgroundOperationResponse, ErrorResponse]]: """Create a new backup Args: name (str): json_body (CreateInstancesByNameBackupsRequest): Example: {'instance_only': True, 'optimized_storage': True, 'name': 'backupName', 'expiry': 3600}. Returns: Response[Union[BackgroundOperationResponse, ErrorResponse]] """ return ( await asyncio_detailed( name=name, client=client, json_body=json_body, ) ).parsed
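
# Illustrative usage sketch (not part of the generated module). The base URL,
# instance name, and backup payload are assumptions for the example; the
# import paths mirror the relative imports at the top of this file.
#
# from lxd_client.client import Client
# from lxd_client.models.create_instances_by_name_backups_request import (
#     CreateInstancesByNameBackupsRequest,
# )
#
# client = Client(base_url="https://localhost:8443")
# body = CreateInstancesByNameBackupsRequest.from_dict(
#     {"name": "backupName", "expiry": 3600,
#      "instance_only": True, "optimized_storage": True}
# )
# response = sync_detailed(name="my-instance", client=client, json_body=body)
# print(response.status_code)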
PypiClean
/stargaze_protobuf-0.1.0.tar.gz/stargaze_protobuf-0.1.0/src/stargaze_protobuf/cosmos/evidence/v1beta1/query_pb2_grpc.py
import grpc from ....cosmos.evidence.v1beta1 import query_pb2 as cosmos_dot_evidence_dot_v1beta1_dot_query__pb2 class QueryStub(object): """Query defines the gRPC querier service. """ def __init__(self, channel): """Constructor. Args: channel: A grpc.Channel. """ self.Evidence = channel.unary_unary('/cosmos.evidence.v1beta1.Query/Evidence', request_serializer=cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryEvidenceRequest.SerializeToString, response_deserializer=cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryEvidenceResponse.FromString) self.AllEvidence = channel.unary_unary('/cosmos.evidence.v1beta1.Query/AllEvidence', request_serializer=cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryAllEvidenceRequest.SerializeToString, response_deserializer=cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryAllEvidenceResponse.FromString) class QueryServicer(object): """Query defines the gRPC querier service. """ def Evidence(self, request, context): """Evidence queries evidence based on evidence hash. """ context.set_code(grpc.StatusCode.UNIMPLEMENTED) context.set_details('Method not implemented!') raise NotImplementedError('Method not implemented!') def AllEvidence(self, request, context): """AllEvidence queries all evidence. """ context.set_code(grpc.StatusCode.UNIMPLEMENTED) context.set_details('Method not implemented!') raise NotImplementedError('Method not implemented!') def add_QueryServicer_to_server(servicer, server): rpc_method_handlers = {'Evidence': grpc.unary_unary_rpc_method_handler(servicer.Evidence, request_deserializer=cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryEvidenceRequest.FromString, response_serializer=cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryEvidenceResponse.SerializeToString), 'AllEvidence': grpc.unary_unary_rpc_method_handler(servicer.AllEvidence, request_deserializer=cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryAllEvidenceRequest.FromString, response_serializer=cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryAllEvidenceResponse.SerializeToString)} generic_handler = grpc.method_handlers_generic_handler('cosmos.evidence.v1beta1.Query', rpc_method_handlers) server.add_generic_rpc_handlers((generic_handler,)) class Query(object): """Query defines the gRPC querier service. """ @staticmethod def Evidence(request, target, options=(), channel_credentials=None, call_credentials=None, insecure=False, compression=None, wait_for_ready=None, timeout=None, metadata=None): return grpc.experimental.unary_unary(request, target, '/cosmos.evidence.v1beta1.Query/Evidence', cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryEvidenceRequest.SerializeToString, cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryEvidenceResponse.FromString, options, channel_credentials, insecure, call_credentials, compression, wait_for_ready, timeout, metadata) @staticmethod def AllEvidence(request, target, options=(), channel_credentials=None, call_credentials=None, insecure=False, compression=None, wait_for_ready=None, timeout=None, metadata=None): return grpc.experimental.unary_unary(request, target, '/cosmos.evidence.v1beta1.Query/AllEvidence', cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryAllEvidenceRequest.SerializeToString, cosmos_dot_evidence_dot_v1beta1_dot_query__pb2.QueryAllEvidenceResponse.FromString, options, channel_credentials, insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
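
# Illustrative usage sketch (not part of the generated module). The gRPC
# endpoint address is an assumption for the example.
#
# import grpc
# from stargaze_protobuf.cosmos.evidence.v1beta1 import query_pb2, query_pb2_grpc
#
# with grpc.insecure_channel("localhost:9090") as channel:
#     stub = query_pb2_grpc.QueryStub(channel)
#     response = stub.AllEvidence(query_pb2.QueryAllEvidenceRequest())
#     print(response)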
PypiClean
/python-heatclient-3.3.0.tar.gz/python-heatclient-3.3.0/doc/source/man/heat.rst
====
heat
====

.. program:: heat

SYNOPSIS
========

`heat` [options] <command> [command-options]

`heat help`

`heat help` <command>


DESCRIPTION
===========

`heat` is a command line client for controlling OpenStack Heat.

Before the `heat` command is issued, ensure the environment contains
the necessary variables so that the CLI can pass user credentials
to the server.
See the `Getting Credentials for a CLI` section of the `OpenStack CLI Guide`
for more information.


OPTIONS
=======

To get a list of available commands and options run::

    heat help

To get usage and options of a command run::

    heat help <command>


EXAMPLES
========

Get information about the stack-create command::

    heat help stack-create

List available stacks::

    heat stack-list

List available resources in a stack::

    heat resource-list <stack name>

Create a stack::

    heat stack-create mystack -f some-template.yaml -P "KeyName=mine"

View stack information::

    heat stack-show mystack

List stack outputs::

    heat output-list <stack name>

Show the value of a single output::

    heat output-show <stack name> <output key>

List events::

    heat event-list mystack

Delete a stack::

    heat stack-delete mystack

Abandon a stack::

    heat stack-abandon mystack

Adopt a stack::

    heat stack-adopt -a <adopt_file> mystack

List the running status of heat engines::

    heat service-list

Note: the stack-adopt and stack-abandon commands are not available by default.
Please ask your OpenStack operator to enable this feature.

BUGS
====

Bugs for the Heat client are tracked in StoryBoard, where you can view current
bugs: https://storyboard.openstack.org/#!/project/openstack/python-heatclient.
PypiClean
/pytreaty-2019.1b4.tar.gz/pytreaty-2019.1b4/contracts/main.py
from __future__ import unicode_literals

import sys
import types
from collections import defaultdict
from textwrap import dedent
from typing import List

import six

from .backported import getcallargs, getfullargspec
from .docstring_parsing import Arg, DocStringInfo
from .enabling import all_disabled
from .inspection import (
    can_accept_at_least_one_argument, can_accept_self,
    can_be_used_as_a_type)
from .interface import (
    CannotDecorateClassmethods, Contract, ContractDefinitionError,
    ContractException, ContractNotRespected, ContractSyntaxError,
    MissingContract, Where, describe_value)


def check_contracts(contracts: List, values: List, context_variables=None):
    """
    Checks that the values respect the contracts.
    Not a public function -- no friendly messages.

    :param contracts: List of contracts.
    :type contracts: ``list[N](str),N>0``

    :param values: Values that should match the contracts.
    :type values: ``list[N]``

    :param context_variables: Initial context
    :type context_variables: ``dict(str[1]: *)``

    :return: a Context variable
    :rtype: type(Context)

    :raise: ContractSyntaxError
    :raise: ContractNotRespected
    :raise: ValueError
    """
    assert isinstance(contracts, list)
    assert isinstance(values, list)
    assert len(contracts) == len(values)

    if context_variables is None:
        context_variables = {}

    for var in context_variables:
        if not (isinstance(var, six.string_types) and len(var) == 1):  # XXX: isalpha
            msg = ('Invalid name %r for a variable. '
                   'I expect a string of length 1.' % var)
            raise ValueError(msg)

    C = []
    for x in contracts:
        assert isinstance(x, six.string_types)
        C.append(parse_contract_string(x))

    context = context_variables.copy()
    for i in range(len(contracts)):
        C[i]._check_contract(context, values[i], silent=False)

    return context


class Storage:
    # Cache storage
    string2contract = {}


def _cacheable(string, c):
    """ Returns whether the contract c defined by the string ``string``
        is cacheable. """
    # XXX need a more general way of indicating
    # whether a contract is safely cacheable
    return '$' not in string


def is_param_string(x):
    return isinstance(x, six.string_types)


def check_param_is_string(x):
    if not is_param_string(x):
        msg = 'Expected a string, obtained %s' % type(x)
        raise ValueError(msg)


# TODO: add decorator-specific exception
def contract_decorator(
    *args,
    _evaluate_docstring=True,
    _evaluate_annotations=True,
    **kwargs
):
    """
        Decorator for adding contracts to functions.

        It is smart enough to support functions with a variable number of
        arguments and keyword arguments.

        There are three ways to specify the contracts. All contracts will be
        evaluated, unless directed to behave otherwise via
        ``_evaluate_docstring`` / ``_evaluate_annotations``:

        - As arguments to this decorator. For example: ::

            @contract(a='int,>0',b='list[N],N>0',returns='list[N]')
            def my_function(a, b):
                # ...
                pass

        - As annotations: ::

            @contract
            def my_function(a:'int,>0', b:'list[N],N>0') -> 'list[N]':
                # ...
                pass

        - Using ``:type:`` and ``:rtype:`` tags in the function's docstring: ::

            @contract
            def my_function(a, b):
                ''' Function description.
                    :type a: int,>0
                    :type b: list[N],N>0
                    :rtype: list[N]
                '''
                # ...
                pass

    """
    # OK, this is black magic. You are not expected to understand this.
if args: if isinstance(args[0], types.FunctionType): # We were called without parameters function = args[0] if all_disabled(): return function try: return contracts_decorate( function, _evaluate_docstring=_evaluate_docstring, _evaluate_annotations=_evaluate_annotations, **kwargs ) except ContractSyntaxError as es: # Erase the stack raise ContractSyntaxError(es.error, es.where) else: msg = ('I expect that contracts() is called with ' 'only keyword arguments (passed: %r)' % args) raise ContractException(msg) else: # !!! Do not change "tmp_wrap" name; we need it for the definition # of scoped variable # We were called *with* parameters. if all_disabled(): def tmp_wrap(f): # do not change name (see above) return f else: def tmp_wrap(f): # do not change name (see above) try: return contracts_decorate( f, _evaluate_docstring=_evaluate_docstring, _evaluate_annotations=_evaluate_annotations, **kwargs ) except ContractSyntaxError as e: msg = u"Cannot decorate function %s:" % f.__name__ from .utils import indent import traceback msg += u'\n\n' + indent(traceback.format_exc(), u' ') raise ContractSyntaxError(msg, e.where) # erase the stack except ContractDefinitionError as e: raise e.copy() # raise return tmp_wrap def contracts_decorate( function_, modify_docstring=True, _evaluate_docstring=True, _evaluate_annotations=True, **kwargs ): """ An explicit way to decorate a given function. The decorator :py:func:`decorate` calls this function internally. """ if isinstance(function_, classmethod): msg = dedent(""" The function is a classmethod; PyContracts cannot decorate a classmethod. You can, however, first decorate a function and then turn it into a classmethod. For example, instead of doing this: class A(): @contract(a='>0') @classmethod def f(cls, a): pass you can achieve the same goal by inverting the two decorators: class A(): @classmethod @contract(a='>0') def f(cls, a): pass """) raise CannotDecorateClassmethods(msg) all_args = get_all_arg_names(function_) accepts_dict = defaultdict(list) returns = [] returns_parsed = [] # Decorator args section # if kwargs: value = kwargs.pop('returns', None) if value: # When called via ContractsMeta, value can sometimes be a list of checks, so we # should extend the list instead of append. # TODO: Investigate why this happens and refactor accordingly if isinstance(value, (tuple, list)): returns.extend(value) else: returns.append(value) for kw in kwargs: if not kw in all_args: msg = 'Unknown parameter %r; I know %r.' % (kw, all_args) raise ContractException(msg) for k, v in dict(**kwargs).items(): if isinstance(v, list): # This gets hit when a contract is used in an ABC or # other metaclass accepts_dict[k].extend(v) else: accepts_dict[k].append(v) # Type Annotations section annotations = get_annotations(function_) if _evaluate_annotations and annotations: if 'return' in annotations: returns.append(annotations['return']) del annotations['return'] for k, v in annotations.items(): accepts_dict[k].append(v) # Docstring section # no_returns = not returns no_args = not len(accepts_dict) no_annotations = not len(annotations) # If we don't have a docstring, or we've been asked not to evaluate it, set this to False no_docstring = function_.__doc__ is None or not _evaluate_docstring if no_returns and no_args and no_annotations and no_docstring: raise ContractException( 'You did not specify a contract, nor I can ' 'find a docstring for %r.' 
% function_) update_accepts, update_returns = parse_contracts_from_docstring(function_) if _evaluate_docstring else (dict(), None) for k, v in update_accepts.items(): accepts_dict[k].append(v) if update_returns is not None: returns.append(update_returns) if not accepts_dict and not returns: raise ContractException('No contracts specified in docstring or via type annotations.') if returns is not None: returns_parsed = [parse_flexible_spec(x) for x in returns] accepts_parsed = dict([(k, [parse_flexible_spec(y) for y in v]) for k, v in accepts_dict.items()]) is_bound_method = 'self' in all_args def contracts_checker(unused, *args, **kwargs): do_checks = not all_disabled() if not do_checks: return function_(*args, **kwargs) def get_nice_function_display(): nice_function_display = '%s()' % function_.__name__ if is_bound_method: klass = type(args[0]).__name__ nice_function_display = klass + ':' + nice_function_display return nice_function_display bound = getcallargs(function_, *args, **kwargs) context = {} # add self if we are a bound method if is_bound_method: context['self'] = args[0] for arg in all_args: if arg in accepts_parsed: for check_item in accepts_parsed[arg]: try: check_item._check_contract( context, bound[arg], silent=False ) except ContractNotRespected as e: msg = ('Breach for argument %r to %s.\n' % (arg, get_nice_function_display())) e.error = msg + e.error raise e result = function_(*args, **kwargs) if returns_parsed: for item in returns_parsed: try: item._check_contract(context, result, silent=False) except ContractNotRespected as e: msg = ('Breach for return value of %s.\n' % (get_nice_function_display())) e.error = msg + e.error raise e return result # TODO: add rtype statements if missing if _evaluate_docstring and modify_docstring: def write_contract_as_rst(c): return '``%s``' % c if function_.__doc__ is not None: docs = DocStringInfo.parse(function_.__doc__) else: docs = DocStringInfo("") for param in accepts_parsed: if not param in docs.params: # default = '*not documented*' default = '' docs.params[param] = Arg(default, None) docs.params[param].type = \ write_contract_as_rst(accepts_parsed[param]) if returns_parsed is not None: if not docs.returns: docs.returns.append(Arg(None, None)) docs.returns[0].type = write_contract_as_rst(returns_parsed) new_docs = docs.__str__() else: new_docs = function_.__doc__ # XXX: why doesn't this work? name = ('checker-for-%s' % function_.__name__) if six.PY2: name = name.encode('utf-8') contracts_checker.__name__ = name contracts_checker.__module__ = function_.__module__ # TODO: is using functools.wraps better? from decorator import decorator # @UnresolvedImport wrapper = decorator(contracts_checker, function_) wrapper.__doc__ = new_docs wrapper.__name__ = function_.__name__ wrapper.__module__ = function_.__module__ wrapper.__contracts__ = dict(returns=returns_parsed, **accepts_parsed) return wrapper def parse_flexible_spec(spec): """ spec can be either a Contract, a type, or a contract string. In the latter case, the usual parsing takes place""" if isinstance(spec, Contract): return spec elif is_param_string(spec): return parse_contract_string(spec) elif can_be_used_as_a_type(spec): from .library import CheckType return CheckType(spec) else: msg = 'I want either a string or a type, not %s.' 
% describe_value(spec) raise ContractException(msg) def parse_contracts_from_docstring(function): annotations = DocStringInfo.parse(function.__doc__) if len(annotations.returns) > 1: raise ContractException('More than one return type specified.') def remove_quotes(x): """ Removes the double back-tick quotes if present. """ if x is None: return None if x.startswith('``') and x.endswith('``') and len(x) > 3: return x[2:-2] elif x.startswith('``') or x.endswith('``'): msg = 'Malformed quoting in string %r.' % x raise ContractException(msg) else: return x if len(annotations.returns) == 0: returns = None else: returns = remove_quotes(annotations.returns[0].type) # These are the annotations params = annotations.params name2type = dict([(name, remove_quotes(params[name].type)) for name in params]) # Check the ones that do not have contracts specified nullparams = [name for name in params if params[name].type is None] if nullparams: msg = ('The parameter(s) %r in this docstring have no type statement.' % (",".join(nullparams))) msg += """ Note: you can use the asterisk if you do not care about assigning a contract to a certain parameter: :param x: :type x: * """ raise MissingContract(msg) # Let's look at the parameters: all_args = get_all_arg_names(function) # Check we don't have extra: for name in name2type: if not name in all_args: msg = ('A contract was specified for argument %r which I cannot' ' find in my list of arguments (%r)' % (name, all_args)) raise ContractException(msg) if len(name2type) != len(all_args): # pragma: no cover pass # TODO: warn? return name2type, returns inPy3k = sys.version_info[0] == 3 def get_annotations(function): return getfullargspec(function).annotations def get_all_arg_names(function): spec = getfullargspec(function) possible = spec.args + [spec.varargs, spec.varkw] + spec.kwonlyargs all_args = [x for x in possible if x] return all_args def check(contract, object, desc=None, **context): # @ReservedAssignment """ Checks that ``object`` satisfies the contract described by ``contract``. :param contract: The contract string. :type contract: str :param object: Any object. :type object: ``*`` :param desc: An optional description of the error. If given, it is included in the error message. :type desc: ``None|str`` """ if all_disabled(): return {} if not is_param_string(contract): # XXX: make it more liberal? raise ValueError('I expect a string (contract spec) as the first ' 'argument, not a %s.' % describe_value(contract)) try: return check_contracts([contract], [object], context) except ContractNotRespected as e: if desc is not None: e.error = '%s\n%s' % (desc, e.error) raise e def fail(contract, value, **initial_context): """ Checks that the value **does not** respect this contract. Raises an exception if it does. :raise: ValueError """ try: parsed_contract = parse_contract_string(contract) context = check_contracts([contract], [value], initial_context) except ContractNotRespected: pass else: msg = 'I did not expect that this value would satisfy this contract.\n' msg += '- value: %s\n' % describe_value(value) msg += '- contract: %s\n' % parsed_contract msg += '- context: %r' % context raise ValueError(msg) def check_multiple(couples, desc=None): """ Checks multiple couples of (contract, value) in the same context. This means that the variables in each contract are shared with the others. :param couples: A list of tuple (contract, value) to check. :type couples: ``list[>0](tuple(str, *))`` :param desc: An optional description of the error. 
If given, it is included in the error message. :type desc: ``None|str`` """ check('list[>0](tuple(str, *))', couples, 'I expect a non-empty list of (object, string) tuples.') contracts = [x[0] for x in couples] values = [x[1] for x in couples] try: return check_contracts(contracts, values) except ContractNotRespected as e: if desc is not None: e.error = '%s\n%s' % (desc, e.error) raise e def new_contract(*args): """ Defines a new contract type. Used both as a decorator and as a function. **1) Use as a function.** The first parameter must be a string. The second parameter can be either a string or a callable function. :: new_contract('new_contract_name', 'list[N]') new_contract('new_contract_name', lambda x: isinstance(x, list) ) - If it is a string, it is interpreted as contract expression; the given identifier will become an alias for that expression. - If it is a callable, it must accept one parameter, and either: * return True or None, to signify it accepts. * return False or raise ValueError or AssertionError, to signify it doesn't. If ValueError is raised, its message is used in the error. **2) Use as a decorator.** Or, it can be used as a decorator (without arguments). The function name is used as the identifier. :: @new_contract def new_contract_name(x): return isinstance(x, list) This function returns a :py:class:`Contract` object. It might be useful to check right away if the declaration is what you meant, using :py:func:`Contract.check` and :py:func:`Contract.fail`. :param identifier: The identifier must be a string not already in use (you cannot redefine ``list``, ``tuple``, etc.). :type identifier: str :param condition: Definition of the new contract. :type condition: ``type|callable|str`` :return: The equivalent contract -- might be useful for debugging. :rtype: Contract """ if args and len(args) == 1 and isinstance(args[0], types.FunctionType): # TODO: add here for class decorator # We were called without parameters function = args[0] if all_disabled(): return function identifier = function.__name__ new_contract_impl(identifier, function) return function else: if all_disabled(): return None # XXX: not really sure about this return new_contract_impl(*args) def new_contract_impl(identifier, condition): from .syntax import ParseException from .library.extensions import CheckCallableWithSelf from .library import (CheckCallable, Extension, SeparateContext, identifier_expression) # Be friendly if not isinstance(identifier, six.string_types): msg = 'I expect the identifier to be a string; received %s.' % describe_value(identifier) raise ValueError(msg) # Make sure it is not already an expression that we know. # (exception: allow redundant definitions. To this purpose, # skip this test if the identifier is already known, and catch # later if the condition changed.) if identifier in Extension.registrar: # already known as identifier; check later if the condition # remained the same. pass else: # check it does not redefine list, tuple, etc. try: c = parse_contract_string(identifier) msg = ('Invalid identifier %r; it overwrites an already known ' 'expression. In fact, I can parse it as %s (%r).' 
% (identifier, c, c)) raise ValueError(msg) except ContractSyntaxError: pass # Make sure it corresponds to our idea of identifier try: c = identifier_expression.parseString(identifier, parseAll=True) except ParseException as e: loc = e.loc if loc >= len(identifier): loc -= 1 where = Where(identifier, character=loc) #line=e.lineno, column=e.col) # msg = 'Error in parsing string: %s' % e msg = ('The given identifier %r does not correspond to my idea ' 'of what an identifier should look like;\n%s\n%s' % (identifier, e, where)) raise ValueError(msg) # Now let's check the condition if isinstance(condition, six.string_types): # We assume it is a condition that should parse cleanly try: # could call parse_flexible_spec as well here bare_contract = parse_contract_string(condition) except ContractSyntaxError as e: msg = ('The given condition %r does not parse cleanly: %s' % (condition, e)) raise ValueError(msg) # Important: types are callable, so check this first. elif can_be_used_as_a_type(condition): # parse_flexible_spec can take care of types bare_contract = parse_flexible_spec(condition) # Lastly, it should be a callable elif hasattr(condition, '__call__'): # Check that the signature is right if can_accept_self(condition): bare_contract = CheckCallableWithSelf(condition) elif can_accept_at_least_one_argument(condition): bare_contract = CheckCallable(condition) else: raise ValueError("The given callable %r should be able to accept " "at least one argument" % condition) else: raise ValueError('I need either a string or a callable for the ' 'condition; found %s.' % describe_value(condition)) # Separate the context if needed if isinstance(bare_contract, (CheckCallable, CheckCallableWithSelf)): contract = bare_contract else: contract = SeparateContext(bare_contract) # It's okay if we define the same thing twice if identifier in Extension.registrar: old = Extension.registrar[identifier] if not (contract == old): msg = ('Tried to redefine %r with a definition that looks ' 'different to me.\n' % identifier) msg += ' - old: %r\n' % old msg += ' - new: %r\n' % contract raise ValueError(msg) else: Extension.registrar[identifier] = contract return contract def parse_contract_string(string): from .main_actual import parse_contract_string_actual return parse_contract_string_actual(string)
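
# Illustrative usage sketch (not part of the original module), exercising the
# public helpers defined above. The package-level import path is assumed from
# the package layout.
#
# from contracts import contract, check, fail, new_contract
#
# new_contract('short_list', 'list[N],N<=3')
# check('short_list', [1, 2, 3])      # satisfied: no exception
# fail('short_list', [1, 2, 3, 4])    # must *not* satisfy the contract
#
# @contract(a='int,>0', returns='int,>=0')
# def decrement(a):
#     return a - 1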
PypiClean
/alibabacloud_ddosbgp20171120-1.0.1.tar.gz/alibabacloud_ddosbgp20171120-1.0.1/README.md
English | [简体中文](README-CN.md)

![](https://aliyunsdk-pages.alicdn.com/icons/AlibabaCloud.svg)

## Alibaba Cloud ddosbgp SDK for Python

## Requirements

- Python >= 3.6

## Installation

- **Install with pip**

  Python SDK uses a common package management tool named `pip`. If pip is not installed, see the [pip user guide](https://pip.pypa.io/en/stable/installing/ "pip User Guide") to install pip.

  ```bash
  # Install the alibabacloud_ddosbgp20171120
  pip install alibabacloud_ddosbgp20171120
  ```

## Issues

[Open an Issue](https://github.com/aliyun/alibabacloud-sdk/issues/new). Issues not conforming to the guidelines may be closed immediately.

## Usage

[Quick Examples](https://github.com/aliyun/alibabacloud-python-sdk/blob/master/docs/0-Usage-EN.md#quick-examples)

An illustrative client-construction sketch is also included at the end of this README.

## Changelog

Detailed changes for each release are documented in the [release notes](./ChangeLog.md).

## References

- [Latest Release](https://github.com/aliyun/alibabacloud-sdk/tree/master/python)

## License

[Apache-2.0](http://www.apache.org/licenses/LICENSE-2.0)

Copyright (c) 2009-present, Alibaba Cloud All rights reserved.
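## Illustrative Quick Start

A minimal client-construction sketch. The class and parameter names follow the common Alibaba Cloud Python SDK pattern and are assumptions here; see the Quick Examples link above for authoritative usage.

```python
from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_ddosbgp20171120.client import Client

# Credentials and endpoint are placeholders; configure them for your account.
config = open_api_models.Config(
    access_key_id='<your-access-key-id>',
    access_key_secret='<your-access-key-secret>',
    endpoint='ddosbgp.aliyuncs.com',
)
client = Client(config)
```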
PypiClean
/groceries-tobiasli-1.1.7.tar.gz/groceries-tobiasli-1.1.7/groceries/configs/unit_definition/metric_imperial.py
from groceries.configs.config_types import UnitDefinition from groceries.configs.unit_definition.unit_constants import unit_constants as c _units = { 'length': { 'm': { 'variants': ['meter', 'meters'], 'prefixes': c.metric_prefixes }, # Imperial 'inch': { 'plural': 'inches', 'variants': ['tomme', 'tommer', 'inch', 'inches', 'in'], 'scale': 0.0254, }, 'foot': { 'variants': ['fot', 'ft'], 'plural': 'feet', 'scale': 0.3048 } }, 'mass': { 'g': { 'variants': ['gram', 'grams'], 'prefixes': c.metric_prefixes, }, 'tonn': { 'scale': 1000000 }, # Imperial 'oz': { 'variants': ['ounce', 'ounces'], 'scale': 28.349523125, }, 'lb': { 'variants': ['lbs', 'pound', 'pounds'], 'scale': 453.59237, } }, 'volume': { 'l': { 'variants': ['liter', 'litre', 'liters', 'litres'], 'prefixes': c.metric_prefixes, }, # Imperial 'floz': { 'variants': ['fluid ounce', 'fluid ounces'], 'scale': 0.02957 }, 'cup': { 'plural': 'cups', 'scale': 0.2366 }, 'pint': { 'plural': 'pints', 'variants': ['pt'], 'scale': 0.4732 }, # Norwegian 'ss': { 'variants': ['spiseskje', 'spiseskjeer', 'tablespoon', 'tbsp', 'tbs', 'tbl'], 'scale': 0.015, }, 'ts': { 'variants': ['teskje', 'teskjeer', 'teaspoon', 'tsp'], 'scale': 0.005, }, 'kryddermål': { 'variants': ['krm'], 'scale': 0.001, }, 'dram': { 'scale': 0.003697, } }, 'hvitløk': { 'hel': { 'plural': 'hele', 'scale': 8 }, 'fedd': {} }, 'other': { # Norwegian 'pakke': { 'plural': 'pakker' }, 'boks': { 'plural': 'bokser' }, 'tube': { 'plural': 'tuber' }, 'eske': { 'plural': 'esker' }, 'glass': {}, 'pose': { 'plural': 'poser' }, 'porsjon': { 'plural': 'porsjoner' }, } } _formatting = { 'length': [ {'unit': 'cm', 'checks': [c.EqualTo(0)]}, {'unit': 'inch', 'checks': [c.LessThan(0.5), c.FractionOf(_units['length']['inch']['scale'])]}, {'unit': 'mm', 'checks': [c.LessThan(0.01)]}, {'unit': 'cm', 'checks': [c.GreaterThanOrEqualTo(0.01), c.LessThan(1)]}, {'unit': 'm', 'checks': [c.AlwaysTrue()]}, # Last check is always true, so the unit defaults to 'm'. ], 'mass': [ {'unit': 'g', 'checks': [c.EqualTo(0)]}, {'unit': 'lb', 'checks': [c.LessThan(2000), c.FractionOf(_units['mass']['lb']['scale'])]}, {'unit': 'oz', 'checks': [c.LessThan(1000), c.FractionOf(_units['mass']['oz']['scale'])]}, {'unit': 'kg', 'checks': [c.GreaterThanOrEqualTo(300), c.FractionOf(1000)]}, {'unit': 'kg', 'checks': [c.GreaterThanOrEqualTo(1000)]}, {'unit': 'g', 'checks': [c.LessThan(1000), c.GreaterThanOrEqualTo(0.5), c.FractionOf(1)]}, {'unit': 'g', 'checks': [c.LessThan(1000), c.GreaterThanOrEqualTo(0.5)]}, {'unit': 'mg', 'checks': [c.LessThan(1)]}, ], 'volume': [ {'unit': 'l', 'checks': [c.EqualTo(0)]}, {'unit': 'cup', 'checks': [c.FractionOf(_units['volume']['cup']['scale'])]}, {'unit': 'l', 'checks': [c.GreaterThanOrEqualTo(1)]}, {'unit': 'l', 'checks': [c.GreaterThanOrEqualTo(0.5), c.FractionOf(1)]}, {'unit': 'dl', 'checks': [c.GreaterThanOrEqualTo(0.01), c.LessThan(1), c.FractionOf(0.1)]}, {'unit': 'dl', 'checks': [c.GreaterThanOrEqualTo(0.01), c.LessThan(1)]}, {'unit': 'ts', 'checks': [c.LessThan(0.015), c.GreaterThanOrEqualTo(0.005 / 4), c.FractionOf(_units['volume']['ts']['scale'])]}, {'unit': 'ss', 'checks': [c.LessThan(0.01), c.GreaterThanOrEqualTo(0.015 / 4), c.FractionOf(_units['volume']['ss']['scale'])]}, {'unit': 'ml', 'checks': [c.LessThan(0.1)]}, ], 'hvitløk': [ {'unit': 'hel', 'checks': [c.GreaterThanOrEqualTo(8)]} ] } unit_definition = UnitDefinition( units=_units, formatting=_formatting, constants=c )
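
# Descriptive note (not part of the original config): each formatting list
# above is presumably scanned top to bottom, and the first entry whose checks
# all pass selects the display unit. For 'mass', for example, 750 g fails
# GreaterThanOrEqualTo(1000) and falls through to the plain-gram rules, while
# 1500 g passes it and renders as 1.5 kg. The AlwaysTrue() entry at the end
# of 'length' makes 'm' the fallback unit.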
PypiClean
/pyfundamentus-0.0.7a0.tar.gz/pyfundamentus-0.0.7a0/README.md
# Python Fundamentus

[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
[![codecov](https://codecov.io/github/alexcamargos/pyFundamentus/branch/main/graph/badge.svg?token=44RJNBZZFQ)](https://codecov.io/github/alexcamargos/pyFundamentus)

`Python Fundamentus` is a Python API that allows you to quickly access the main fundamental indicators of stocks traded on the Brazilian market.

## Installation

`git clone https://github.com/alexcamargos/pyFundamentus.git`

`pip install -r requirements.txt`

## API usage

`python run_rich.py VALE3`

## Examples

`python run_rich.py mglu3`

![](screenshot/mglu3.png)

`python run_rich.py wege3`

![](screenshot/wege3.png)

## Author

Made with :heart: by [Alexsander Lopes Camargos](https://github.com/alexcamargos) :wave: Get in touch!

[![GitHub](https://img.shields.io/badge/-AlexCamargos-1ca0f1?style=flat-square&labelColor=1ca0f1&logo=github&logoColor=white&link=https://github.com/alexcamargos)](https://github.com/alexcamargos)
[![Twitter Badge](https://img.shields.io/badge/-@alcamargos-1ca0f1?style=flat-square&labelColor=1ca0f1&logo=twitter&logoColor=white&link=https://twitter.com/alcamargos)](https://twitter.com/alcamargos)
[![Linkedin Badge](https://img.shields.io/badge/-alexcamargos-1ca0f1?style=flat-square&logo=Linkedin&logoColor=white&link=https://www.linkedin.com/in/alexcamargos/)](https://www.linkedin.com/in/alexcamargos/)
[![Gmail Badge](https://img.shields.io/badge/[email protected]?style=flat-square&labelColor=1ca0f1&logo=Gmail&logoColor=white&link=mailto:[email protected])](mailto:[email protected])

## Copyright

Copyright 2022 by Alexsander Lopes Camargos.

## License

[MIT License](LICENSE)
PypiClean
/tsml-eval-0.1.0.tar.gz/tsml-eval-0.1.0/tsml_eval/publications/2023/tser_archive_expansion/set_tser_exp_regressor.py
__author__ = ["TonyBagnall", "MatthewMiddlehurst", "dguijo"] import numpy as np from tsml_eval.utils.functions import str_in_nested_list expansion_regressors = [ ["1nn-dtw", "KNeighborsTimeSeriesRegressor"], "1nn-ed", "5nn-dtw", "5nn-ed", ["fcnn", "fcn", "fcnnregressor", "FCNRegressor"], ["FPCARegressor", "fpcregressor", "fpcr"], ["fpcar-b-spline", "fpcr-b-spline", "fpcr-bs"], ["grid-svr", "grid-supportvectorregressor"], [ "inception", "singleinception", "individualinception", "IndividualInceptionTimeRegressor", ], ["inceptione", "inception-e", "inceptiontime", "InceptionTimeRegressor"], ["rf", "randf", "randomforest", "RandomForestRegressor"], ["resnet", "ResNetRegressor"], ["rocket", "RocketRegressor"], ["multirocket", "multirocketregressor"], ["xgb", "xgboost", "xgboostregressor", "XGBRegressor"], ["cnn", "CNNRegressor"], ["RidgeCV", "ridge"], ["RotationForestRegressor", "rotf", "rotationforest"], ["tsf", "timeseriesforestregressor"], ["DrCIF", "drcifregressor"], ["fresh-prince", "freshprince", "FreshPRINCERegressor"], ] def _set_tser_exp_regressor( regressor_name, random_state=None, n_jobs=1, ): r = regressor_name.lower() if not str_in_nested_list(expansion_regressors, r): raise Exception("UNKNOWN REGRESSOR ", r, " in set_expansion_regressor") if r == "1nn-dtw" or r == "kneighborstimeseriesregressor": from aeon.regression.distance_based import KNeighborsTimeSeriesRegressor return KNeighborsTimeSeriesRegressor( n_neighbors=1, distance="dtw", distance_params={"window": 0.1}, ) elif r == "1nn-ed": from aeon.regression.distance_based import KNeighborsTimeSeriesRegressor return KNeighborsTimeSeriesRegressor( distance="euclidean", n_neighbors=1, ) elif r == "5nn-dtw": from aeon.regression.distance_based import KNeighborsTimeSeriesRegressor return KNeighborsTimeSeriesRegressor( n_neighbors=5, distance="dtw", distance_params={"window": 0.1}, ) elif r == "5nn-ed": from aeon.regression.distance_based import KNeighborsTimeSeriesRegressor return KNeighborsTimeSeriesRegressor( distance="euclidean", n_neighbors=5, ) elif r == "fcnn" or r == "fcn" or r == "fcnnregressor" or r == "fcnregressor": from tsml_eval.estimators.regression.deep_learning import FCNRegressor return FCNRegressor(random_state=random_state) elif r == "fpcaregressor" or r == "fpcregressor" or r == "fpcr": from tsml.feature_based import FPCARegressor return FPCARegressor(n_jobs=n_jobs) elif r == "fpcar-b-spline" or r == "fpcr-b-spline" or r == "fpcr-bs": from tsml.feature_based import FPCARegressor return FPCARegressor(n_jobs=n_jobs, bspline=True, order=4, n_basis=10) elif r == "grid-svr" or r == "grid-supportvectorregressor": from sklearn.model_selection import GridSearchCV from sklearn.svm import SVR param_grid = [ { "kernel": ["rbf", "sigmoid"], "C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1], } ] return GridSearchCV( SVR(), param_grid, scoring="neg_mean_squared_error", cv=3, n_jobs=n_jobs ) elif ( r == "inception" or r == "singleinception" or r == "individualinception" or r == "individualinceptiontimeregressor" ): from tsml_eval.estimators.regression.deep_learning import ( IndividualInceptionTimeRegressor, ) return IndividualInceptionTimeRegressor(random_state=random_state) elif ( r == "inceptione" or r == "inception-e" or r == "inceptiontime" or r == "inceptiontimeregressor" ): from tsml_eval.estimators.regression.deep_learning import InceptionTimeRegressor return InceptionTimeRegressor(random_state=random_state) elif ( r == "rf" or r == "randf" or r == "randomforest" or r == "randomforestregressor" ): from 
sklearn.ensemble import RandomForestRegressor return RandomForestRegressor( n_estimators=500, n_jobs=n_jobs, random_state=random_state, ) elif r == "resnet" or r == "resnetregressor": from tsml_eval.estimators.regression.deep_learning import ResNetRegressor return ResNetRegressor(random_state=random_state) elif r == "rocket" or r == "rocketregressor": from aeon.regression.convolution_based import RocketRegressor return RocketRegressor( random_state=random_state, n_jobs=n_jobs, ) elif r == "multirocket" or r == "multirocketregressor": from aeon.regression.convolution_based import RocketRegressor return RocketRegressor( rocket_transform="multirocket", random_state=random_state, n_jobs=n_jobs, ) elif r == "xgb" or r == "xgboost" or r == "xgboostregressor" or r == "xgbregressor": from xgboost import XGBRegressor return XGBRegressor( n_estimators=500, n_jobs=n_jobs, learning_rate=0.1, random_state=random_state, ) elif r == "cnn" or r == "cnnregressor": from tsml_eval.estimators.regression.deep_learning import CNNRegressor return CNNRegressor(random_state=random_state) elif r == "ridgecv" or r == "ridge": from sklearn.linear_model import RidgeCV return RidgeCV( fit_intercept=True, alphas=np.logspace(-3, 3, 10), ) elif r == "rotationforestregressor" or r == "rotf" or r == "rotationforest": from aeon.regression.sklearn import RotationForestRegressor return RotationForestRegressor( random_state=random_state, n_jobs=n_jobs, ) elif r == "tsf" or r == "timeseriesforestregressor": from aeon.regression.interval_based import TimeSeriesForestRegressor from tsml_eval.estimators.regression.column_ensemble import ( ColumnEnsembleRegressor, ) estimators = [ ( "tsf", TimeSeriesForestRegressor( random_state=random_state, n_estimators=500, n_jobs=n_jobs ), None, ) ] return ColumnEnsembleRegressor(estimators) elif r == "drcif" or r == "drcifregressor": from tsml_eval.estimators.regression.interval_based import DrCIF return DrCIF( n_estimators=500, random_state=random_state, n_jobs=n_jobs, ) elif r == "fresh-prince" or r == "freshprince" or r == "freshprinceregressor": from aeon.regression.feature_based import FreshPRINCERegressor return FreshPRINCERegressor( n_estimators=500, random_state=random_state, n_jobs=n_jobs, )
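
# Illustrative usage sketch (not part of the original module). X_train,
# y_train, and X_test stand for time series data in the shape the chosen
# estimator expects.
#
# regressor = _set_tser_exp_regressor("rocket", random_state=0, n_jobs=1)
# regressor.fit(X_train, y_train)
# y_pred = regressor.predict(X_test)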
PypiClean
/django-twemoir-0.2.1.tar.gz/django-twemoir-0.2.1/example/face/static/admin/js/jquery.min.js
(function(A,w){function ma(){if(!c.isReady){try{s.documentElement.doScroll("left")}catch(a){setTimeout(ma,1);return}c.ready()}}function Qa(a,b){b.src?c.ajax({url:b.src,async:false,dataType:"script"}):c.globalEval(b.text||b.textContent||b.innerHTML||"");b.parentNode&&b.parentNode.removeChild(b)}function X(a,b,d,f,e,j){var i=a.length;if(typeof b==="object"){for(var o in b)X(a,o,b[o],f,e,d);return a}if(d!==w){f=!j&&f&&c.isFunction(d);for(o=0;o<i;o++)e(a[o],b,f?d.call(a[o],o,e(a[o],b)):d,j);return a}return i? e(a[0],b):w}function J(){return(new Date).getTime()}function Y(){return false}function Z(){return true}function na(a,b,d){d[0].type=a;return c.event.handle.apply(b,d)}function oa(a){var b,d=[],f=[],e=arguments,j,i,o,k,n,r;i=c.data(this,"events");if(!(a.liveFired===this||!i||!i.live||a.button&&a.type==="click")){a.liveFired=this;var u=i.live.slice(0);for(k=0;k<u.length;k++){i=u[k];i.origType.replace(O,"")===a.type?f.push(i.selector):u.splice(k--,1)}j=c(a.target).closest(f,a.currentTarget);n=0;for(r= j.length;n<r;n++)for(k=0;k<u.length;k++){i=u[k];if(j[n].selector===i.selector){o=j[n].elem;f=null;if(i.preType==="mouseenter"||i.preType==="mouseleave")f=c(a.relatedTarget).closest(i.selector)[0];if(!f||f!==o)d.push({elem:o,handleObj:i})}}n=0;for(r=d.length;n<r;n++){j=d[n];a.currentTarget=j.elem;a.data=j.handleObj.data;a.handleObj=j.handleObj;if(j.handleObj.origHandler.apply(j.elem,e)===false){b=false;break}}return b}}function pa(a,b){return"live."+(a&&a!=="*"?a+".":"")+b.replace(/\./g,"`").replace(/ /g, "&")}function qa(a){return!a||!a.parentNode||a.parentNode.nodeType===11}function ra(a,b){var d=0;b.each(function(){if(this.nodeName===(a[d]&&a[d].nodeName)){var f=c.data(a[d++]),e=c.data(this,f);if(f=f&&f.events){delete e.handle;e.events={};for(var j in f)for(var i in f[j])c.event.add(this,j,f[j][i],f[j][i].data)}}})}function sa(a,b,d){var f,e,j;b=b&&b[0]?b[0].ownerDocument||b[0]:s;if(a.length===1&&typeof a[0]==="string"&&a[0].length<512&&b===s&&!ta.test(a[0])&&(c.support.checkClone||!ua.test(a[0]))){e= true;if(j=c.fragments[a[0]])if(j!==1)f=j}if(!f){f=b.createDocumentFragment();c.clean(a,b,f,d)}if(e)c.fragments[a[0]]=j?f:1;return{fragment:f,cacheable:e}}function K(a,b){var d={};c.each(va.concat.apply([],va.slice(0,b)),function(){d[this]=a});return d}function wa(a){return"scrollTo"in a&&a.document?a:a.nodeType===9?a.defaultView||a.parentWindow:false}var c=function(a,b){return new c.fn.init(a,b)},Ra=A.jQuery,Sa=A.$,s=A.document,T,Ta=/^[^<]*(<[\w\W]+>)[^>]*$|^#([\w-]+)$/,Ua=/^.[^:#\[\.,]*$/,Va=/\S/, Wa=/^(\s|\u00A0)+|(\s|\u00A0)+$/g,Xa=/^<(\w+)\s*\/?>(?:<\/\1>)?$/,P=navigator.userAgent,xa=false,Q=[],L,$=Object.prototype.toString,aa=Object.prototype.hasOwnProperty,ba=Array.prototype.push,R=Array.prototype.slice,ya=Array.prototype.indexOf;c.fn=c.prototype={init:function(a,b){var d,f;if(!a)return this;if(a.nodeType){this.context=this[0]=a;this.length=1;return this}if(a==="body"&&!b){this.context=s;this[0]=s.body;this.selector="body";this.length=1;return this}if(typeof a==="string")if((d=Ta.exec(a))&& (d[1]||!b))if(d[1]){f=b?b.ownerDocument||b:s;if(a=Xa.exec(a))if(c.isPlainObject(b)){a=[s.createElement(a[1])];c.fn.attr.call(a,b,true)}else a=[f.createElement(a[1])];else{a=sa([d[1]],[f]);a=(a.cacheable?a.fragment.cloneNode(true):a.fragment).childNodes}return c.merge(this,a)}else{if(b=s.getElementById(d[2])){if(b.id!==d[2])return T.find(a);this.length=1;this[0]=b}this.context=s;this.selector=a;return this}else if(!b&&/^\w+$/.test(a)){this.selector=a;this.context=s;a=s.getElementsByTagName(a);return 
c.merge(this, a)}else return!b||b.jquery?(b||T).find(a):c(b).find(a);else if(c.isFunction(a))return T.ready(a);if(a.selector!==w){this.selector=a.selector;this.context=a.context}return c.makeArray(a,this)},selector:"",jquery:"1.4.2",length:0,size:function(){return this.length},toArray:function(){return R.call(this,0)},get:function(a){return a==null?this.toArray():a<0?this.slice(a)[0]:this[a]},pushStack:function(a,b,d){var f=c();c.isArray(a)?ba.apply(f,a):c.merge(f,a);f.prevObject=this;f.context=this.context;if(b=== "find")f.selector=this.selector+(this.selector?" ":"")+d;else if(b)f.selector=this.selector+"."+b+"("+d+")";return f},each:function(a,b){return c.each(this,a,b)},ready:function(a){c.bindReady();if(c.isReady)a.call(s,c);else Q&&Q.push(a);return this},eq:function(a){return a===-1?this.slice(a):this.slice(a,+a+1)},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},slice:function(){return this.pushStack(R.apply(this,arguments),"slice",R.call(arguments).join(","))},map:function(a){return this.pushStack(c.map(this, function(b,d){return a.call(b,d,b)}))},end:function(){return this.prevObject||c(null)},push:ba,sort:[].sort,splice:[].splice};c.fn.init.prototype=c.fn;c.extend=c.fn.extend=function(){var a=arguments[0]||{},b=1,d=arguments.length,f=false,e,j,i,o;if(typeof a==="boolean"){f=a;a=arguments[1]||{};b=2}if(typeof a!=="object"&&!c.isFunction(a))a={};if(d===b){a=this;--b}for(;b<d;b++)if((e=arguments[b])!=null)for(j in e){i=a[j];o=e[j];if(a!==o)if(f&&o&&(c.isPlainObject(o)||c.isArray(o))){i=i&&(c.isPlainObject(i)|| c.isArray(i))?i:c.isArray(o)?[]:{};a[j]=c.extend(f,i,o)}else if(o!==w)a[j]=o}return a};c.extend({noConflict:function(a){A.$=Sa;if(a)A.jQuery=Ra;return c},isReady:false,ready:function(){if(!c.isReady){if(!s.body)return setTimeout(c.ready,13);c.isReady=true;if(Q){for(var a,b=0;a=Q[b++];)a.call(s,c);Q=null}c.fn.triggerHandler&&c(s).triggerHandler("ready")}},bindReady:function(){if(!xa){xa=true;if(s.readyState==="complete")return c.ready();if(s.addEventListener){s.addEventListener("DOMContentLoaded", L,false);A.addEventListener("load",c.ready,false)}else if(s.attachEvent){s.attachEvent("onreadystatechange",L);A.attachEvent("onload",c.ready);var a=false;try{a=A.frameElement==null}catch(b){}s.documentElement.doScroll&&a&&ma()}}},isFunction:function(a){return $.call(a)==="[object Function]"},isArray:function(a){return $.call(a)==="[object Array]"},isPlainObject:function(a){if(!a||$.call(a)!=="[object Object]"||a.nodeType||a.setInterval)return false;if(a.constructor&&!aa.call(a,"constructor")&&!aa.call(a.constructor.prototype, "isPrototypeOf"))return false;var b;for(b in a);return b===w||aa.call(a,b)},isEmptyObject:function(a){for(var b in a)return false;return true},error:function(a){throw a;},parseJSON:function(a){if(typeof a!=="string"||!a)return null;a=c.trim(a);if(/^[\],:{}\s]*$/.test(a.replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g,"@").replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g,"]").replace(/(?:^|:|,)(?:\s*\[)+/g,"")))return A.JSON&&A.JSON.parse?A.JSON.parse(a):(new Function("return "+ a))();else c.error("Invalid JSON: "+a)},noop:function(){},globalEval:function(a){if(a&&Va.test(a)){var b=s.getElementsByTagName("head")[0]||s.documentElement,d=s.createElement("script");d.type="text/javascript";if(c.support.scriptEval)d.appendChild(s.createTextNode(a));else d.text=a;b.insertBefore(d,b.firstChild);b.removeChild(d)}},nodeName:function(a,b){return a.nodeName&&a.nodeName.toUpperCase()===b.toUpperCase()},each:function(a,b,d){var 
f,e=0,j=a.length,i=j===w||c.isFunction(a);if(d)if(i)for(f in a){if(b.apply(a[f], d)===false)break}else for(;e<j;){if(b.apply(a[e++],d)===false)break}else if(i)for(f in a){if(b.call(a[f],f,a[f])===false)break}else for(d=a[0];e<j&&b.call(d,e,d)!==false;d=a[++e]);return a},trim:function(a){return(a||"").replace(Wa,"")},makeArray:function(a,b){b=b||[];if(a!=null)a.length==null||typeof a==="string"||c.isFunction(a)||typeof a!=="function"&&a.setInterval?ba.call(b,a):c.merge(b,a);return b},inArray:function(a,b){if(b.indexOf)return b.indexOf(a);for(var d=0,f=b.length;d<f;d++)if(b[d]=== a)return d;return-1},merge:function(a,b){var d=a.length,f=0;if(typeof b.length==="number")for(var e=b.length;f<e;f++)a[d++]=b[f];else for(;b[f]!==w;)a[d++]=b[f++];a.length=d;return a},grep:function(a,b,d){for(var f=[],e=0,j=a.length;e<j;e++)!d!==!b(a[e],e)&&f.push(a[e]);return f},map:function(a,b,d){for(var f=[],e,j=0,i=a.length;j<i;j++){e=b(a[j],j,d);if(e!=null)f[f.length]=e}return f.concat.apply([],f)},guid:1,proxy:function(a,b,d){if(arguments.length===2)if(typeof b==="string"){d=a;a=d[b];b=w}else if(b&& !c.isFunction(b)){d=b;b=w}if(!b&&a)b=function(){return a.apply(d||this,arguments)};if(a)b.guid=a.guid=a.guid||b.guid||c.guid++;return b},uaMatch:function(a){a=a.toLowerCase();a=/(webkit)[ \/]([\w.]+)/.exec(a)||/(opera)(?:.*version)?[ \/]([\w.]+)/.exec(a)||/(msie) ([\w.]+)/.exec(a)||!/compatible/.test(a)&&/(mozilla)(?:.*? rv:([\w.]+))?/.exec(a)||[];return{browser:a[1]||"",version:a[2]||"0"}},browser:{}});P=c.uaMatch(P);if(P.browser){c.browser[P.browser]=true;c.browser.version=P.version}if(c.browser.webkit)c.browser.safari= true;if(ya)c.inArray=function(a,b){return ya.call(b,a)};T=c(s);if(s.addEventListener)L=function(){s.removeEventListener("DOMContentLoaded",L,false);c.ready()};else if(s.attachEvent)L=function(){if(s.readyState==="complete"){s.detachEvent("onreadystatechange",L);c.ready()}};(function(){c.support={};var a=s.documentElement,b=s.createElement("script"),d=s.createElement("div"),f="script"+J();d.style.display="none";d.innerHTML=" <link/><table></table><a href='/a' style='color:red;float:left;opacity:.55;'>a</a><input type='checkbox'/>"; var e=d.getElementsByTagName("*"),j=d.getElementsByTagName("a")[0];if(!(!e||!e.length||!j)){c.support={leadingWhitespace:d.firstChild.nodeType===3,tbody:!d.getElementsByTagName("tbody").length,htmlSerialize:!!d.getElementsByTagName("link").length,style:/red/.test(j.getAttribute("style")),hrefNormalized:j.getAttribute("href")==="/a",opacity:/^0.55$/.test(j.style.opacity),cssFloat:!!j.style.cssFloat,checkOn:d.getElementsByTagName("input")[0].value==="on",optSelected:s.createElement("select").appendChild(s.createElement("option")).selected, parentNode:d.removeChild(d.appendChild(s.createElement("div"))).parentNode===null,deleteExpando:true,checkClone:false,scriptEval:false,noCloneEvent:true,boxModel:null};b.type="text/javascript";try{b.appendChild(s.createTextNode("window."+f+"=1;"))}catch(i){}a.insertBefore(b,a.firstChild);if(A[f]){c.support.scriptEval=true;delete A[f]}try{delete b.test}catch(o){c.support.deleteExpando=false}a.removeChild(b);if(d.attachEvent&&d.fireEvent){d.attachEvent("onclick",function k(){c.support.noCloneEvent= false;d.detachEvent("onclick",k)});d.cloneNode(true).fireEvent("onclick")}d=s.createElement("div");d.innerHTML="<input type='radio' name='radiotest' checked='checked'/>";a=s.createDocumentFragment();a.appendChild(d.firstChild);c.support.checkClone=a.cloneNode(true).cloneNode(true).lastChild.checked;c(function(){var 
k=s.createElement("div");k.style.width=k.style.paddingLeft="1px";s.body.appendChild(k);c.boxModel=c.support.boxModel=k.offsetWidth===2;s.body.removeChild(k).style.display="none"});a=function(k){var n= s.createElement("div");k="on"+k;var r=k in n;if(!r){n.setAttribute(k,"return;");r=typeof n[k]==="function"}return r};c.support.submitBubbles=a("submit");c.support.changeBubbles=a("change");a=b=d=e=j=null}})();c.props={"for":"htmlFor","class":"className",readonly:"readOnly",maxlength:"maxLength",cellspacing:"cellSpacing",rowspan:"rowSpan",colspan:"colSpan",tabindex:"tabIndex",usemap:"useMap",frameborder:"frameBorder"};var G="jQuery"+J(),Ya=0,za={};c.extend({cache:{},expando:G,noData:{embed:true,object:true, applet:true},data:function(a,b,d){if(!(a.nodeName&&c.noData[a.nodeName.toLowerCase()])){a=a==A?za:a;var f=a[G],e=c.cache;if(!f&&typeof b==="string"&&d===w)return null;f||(f=++Ya);if(typeof b==="object"){a[G]=f;e[f]=c.extend(true,{},b)}else if(!e[f]){a[G]=f;e[f]={}}a=e[f];if(d!==w)a[b]=d;return typeof b==="string"?a[b]:a}},removeData:function(a,b){if(!(a.nodeName&&c.noData[a.nodeName.toLowerCase()])){a=a==A?za:a;var d=a[G],f=c.cache,e=f[d];if(b){if(e){delete e[b];c.isEmptyObject(e)&&c.removeData(a)}}else{if(c.support.deleteExpando)delete a[c.expando]; else a.removeAttribute&&a.removeAttribute(c.expando);delete f[d]}}}});c.fn.extend({data:function(a,b){if(typeof a==="undefined"&&this.length)return c.data(this[0]);else if(typeof a==="object")return this.each(function(){c.data(this,a)});var d=a.split(".");d[1]=d[1]?"."+d[1]:"";if(b===w){var f=this.triggerHandler("getData"+d[1]+"!",[d[0]]);if(f===w&&this.length)f=c.data(this[0],a);return f===w&&d[1]?this.data(d[0]):f}else return this.trigger("setData"+d[1]+"!",[d[0],b]).each(function(){c.data(this, a,b)})},removeData:function(a){return this.each(function(){c.removeData(this,a)})}});c.extend({queue:function(a,b,d){if(a){b=(b||"fx")+"queue";var f=c.data(a,b);if(!d)return f||[];if(!f||c.isArray(d))f=c.data(a,b,c.makeArray(d));else f.push(d);return f}},dequeue:function(a,b){b=b||"fx";var d=c.queue(a,b),f=d.shift();if(f==="inprogress")f=d.shift();if(f){b==="fx"&&d.unshift("inprogress");f.call(a,function(){c.dequeue(a,b)})}}});c.fn.extend({queue:function(a,b){if(typeof a!=="string"){b=a;a="fx"}if(b=== w)return c.queue(this[0],a);return this.each(function(){var d=c.queue(this,a,b);a==="fx"&&d[0]!=="inprogress"&&c.dequeue(this,a)})},dequeue:function(a){return this.each(function(){c.dequeue(this,a)})},delay:function(a,b){a=c.fx?c.fx.speeds[a]||a:a;b=b||"fx";return this.queue(b,function(){var d=this;setTimeout(function(){c.dequeue(d,b)},a)})},clearQueue:function(a){return this.queue(a||"fx",[])}});var Aa=/[\n\t]/g,ca=/\s+/,Za=/\r/g,$a=/href|src|style/,ab=/(button|input)/i,bb=/(button|input|object|select|textarea)/i, cb=/^(a|area)$/i,Ba=/radio|checkbox/;c.fn.extend({attr:function(a,b){return X(this,a,b,true,c.attr)},removeAttr:function(a){return this.each(function(){c.attr(this,a,"");this.nodeType===1&&this.removeAttribute(a)})},addClass:function(a){if(c.isFunction(a))return this.each(function(n){var r=c(this);r.addClass(a.call(this,n,r.attr("class")))});if(a&&typeof a==="string")for(var b=(a||"").split(ca),d=0,f=this.length;d<f;d++){var e=this[d];if(e.nodeType===1)if(e.className){for(var j=" "+e.className+" ", i=e.className,o=0,k=b.length;o<k;o++)if(j.indexOf(" "+b[o]+" ")<0)i+=" "+b[o];e.className=c.trim(i)}else e.className=a}return this},removeClass:function(a){if(c.isFunction(a))return this.each(function(k){var 
n=c(this);n.removeClass(a.call(this,k,n.attr("class")))});if(a&&typeof a==="string"||a===w)for(var b=(a||"").split(ca),d=0,f=this.length;d<f;d++){var e=this[d];if(e.nodeType===1&&e.className)if(a){for(var j=(" "+e.className+" ").replace(Aa," "),i=0,o=b.length;i<o;i++)j=j.replace(" "+b[i]+" ", " ");e.className=c.trim(j)}else e.className=""}return this},toggleClass:function(a,b){var d=typeof a,f=typeof b==="boolean";if(c.isFunction(a))return this.each(function(e){var j=c(this);j.toggleClass(a.call(this,e,j.attr("class"),b),b)});return this.each(function(){if(d==="string")for(var e,j=0,i=c(this),o=b,k=a.split(ca);e=k[j++];){o=f?o:!i.hasClass(e);i[o?"addClass":"removeClass"](e)}else if(d==="undefined"||d==="boolean"){this.className&&c.data(this,"__className__",this.className);this.className= this.className||a===false?"":c.data(this,"__className__")||""}})},hasClass:function(a){a=" "+a+" ";for(var b=0,d=this.length;b<d;b++)if((" "+this[b].className+" ").replace(Aa," ").indexOf(a)>-1)return true;return false},val:function(a){if(a===w){var b=this[0];if(b){if(c.nodeName(b,"option"))return(b.attributes.value||{}).specified?b.value:b.text;if(c.nodeName(b,"select")){var d=b.selectedIndex,f=[],e=b.options;b=b.type==="select-one";if(d<0)return null;var j=b?d:0;for(d=b?d+1:e.length;j<d;j++){var i= e[j];if(i.selected){a=c(i).val();if(b)return a;f.push(a)}}return f}if(Ba.test(b.type)&&!c.support.checkOn)return b.getAttribute("value")===null?"on":b.value;return(b.value||"").replace(Za,"")}return w}var o=c.isFunction(a);return this.each(function(k){var n=c(this),r=a;if(this.nodeType===1){if(o)r=a.call(this,k,n.val());if(typeof r==="number")r+="";if(c.isArray(r)&&Ba.test(this.type))this.checked=c.inArray(n.val(),r)>=0;else if(c.nodeName(this,"select")){var u=c.makeArray(r);c("option",this).each(function(){this.selected= c.inArray(c(this).val(),u)>=0});if(!u.length)this.selectedIndex=-1}else this.value=r}})}});c.extend({attrFn:{val:true,css:true,html:true,text:true,data:true,width:true,height:true,offset:true},attr:function(a,b,d,f){if(!a||a.nodeType===3||a.nodeType===8)return w;if(f&&b in c.attrFn)return c(a)[b](d);f=a.nodeType!==1||!c.isXMLDoc(a);var e=d!==w;b=f&&c.props[b]||b;if(a.nodeType===1){var j=$a.test(b);if(b in a&&f&&!j){if(e){b==="type"&&ab.test(a.nodeName)&&a.parentNode&&c.error("type property can't be changed"); a[b]=d}if(c.nodeName(a,"form")&&a.getAttributeNode(b))return a.getAttributeNode(b).nodeValue;if(b==="tabIndex")return(b=a.getAttributeNode("tabIndex"))&&b.specified?b.value:bb.test(a.nodeName)||cb.test(a.nodeName)&&a.href?0:w;return a[b]}if(!c.support.style&&f&&b==="style"){if(e)a.style.cssText=""+d;return a.style.cssText}e&&a.setAttribute(b,""+d);a=!c.support.hrefNormalized&&f&&j?a.getAttribute(b,2):a.getAttribute(b);return a===null?w:a}return c.style(a,b,d)}});var O=/\.(.*)$/,db=function(a){return a.replace(/[^\w\s\.\|`]/g, function(b){return"\\"+b})};c.event={add:function(a,b,d,f){if(!(a.nodeType===3||a.nodeType===8)){if(a.setInterval&&a!==A&&!a.frameElement)a=A;var e,j;if(d.handler){e=d;d=e.handler}if(!d.guid)d.guid=c.guid++;if(j=c.data(a)){var i=j.events=j.events||{},o=j.handle;if(!o)j.handle=o=function(){return typeof c!=="undefined"&&!c.event.triggered?c.event.handle.apply(o.elem,arguments):w};o.elem=a;b=b.split(" ");for(var k,n=0,r;k=b[n++];){j=e?c.extend({},e):{handler:d,data:f};if(k.indexOf(".")>-1){r=k.split("."); k=r.shift();j.namespace=r.slice(0).sort().join(".")}else{r=[];j.namespace=""}j.type=k;j.guid=d.guid;var 
u=i[k],z=c.event.special[k]||{};if(!u){u=i[k]=[];if(!z.setup||z.setup.call(a,f,r,o)===false)if(a.addEventListener)a.addEventListener(k,o,false);else a.attachEvent&&a.attachEvent("on"+k,o)}if(z.add){z.add.call(a,j);if(!j.handler.guid)j.handler.guid=d.guid}u.push(j);c.event.global[k]=true}a=null}}},global:{},remove:function(a,b,d,f){if(!(a.nodeType===3||a.nodeType===8)){var e,j=0,i,o,k,n,r,u,z=c.data(a), C=z&&z.events;if(z&&C){if(b&&b.type){d=b.handler;b=b.type}if(!b||typeof b==="string"&&b.charAt(0)==="."){b=b||"";for(e in C)c.event.remove(a,e+b)}else{for(b=b.split(" ");e=b[j++];){n=e;i=e.indexOf(".")<0;o=[];if(!i){o=e.split(".");e=o.shift();k=new RegExp("(^|\\.)"+c.map(o.slice(0).sort(),db).join("\\.(?:.*\\.)?")+"(\\.|$)")}if(r=C[e])if(d){n=c.event.special[e]||{};for(B=f||0;B<r.length;B++){u=r[B];if(d.guid===u.guid){if(i||k.test(u.namespace)){f==null&&r.splice(B--,1);n.remove&&n.remove.call(a,u)}if(f!= null)break}}if(r.length===0||f!=null&&r.length===1){if(!n.teardown||n.teardown.call(a,o)===false)Ca(a,e,z.handle);delete C[e]}}else for(var B=0;B<r.length;B++){u=r[B];if(i||k.test(u.namespace)){c.event.remove(a,n,u.handler,B);r.splice(B--,1)}}}if(c.isEmptyObject(C)){if(b=z.handle)b.elem=null;delete z.events;delete z.handle;c.isEmptyObject(z)&&c.removeData(a)}}}}},trigger:function(a,b,d,f){var e=a.type||a;if(!f){a=typeof a==="object"?a[G]?a:c.extend(c.Event(e),a):c.Event(e);if(e.indexOf("!")>=0){a.type= e=e.slice(0,-1);a.exclusive=true}if(!d){a.stopPropagation();c.event.global[e]&&c.each(c.cache,function(){this.events&&this.events[e]&&c.event.trigger(a,b,this.handle.elem)})}if(!d||d.nodeType===3||d.nodeType===8)return w;a.result=w;a.target=d;b=c.makeArray(b);b.unshift(a)}a.currentTarget=d;(f=c.data(d,"handle"))&&f.apply(d,b);f=d.parentNode||d.ownerDocument;try{if(!(d&&d.nodeName&&c.noData[d.nodeName.toLowerCase()]))if(d["on"+e]&&d["on"+e].apply(d,b)===false)a.result=false}catch(j){}if(!a.isPropagationStopped()&& f)c.event.trigger(a,b,f,true);else if(!a.isDefaultPrevented()){f=a.target;var i,o=c.nodeName(f,"a")&&e==="click",k=c.event.special[e]||{};if((!k._default||k._default.call(d,a)===false)&&!o&&!(f&&f.nodeName&&c.noData[f.nodeName.toLowerCase()])){try{if(f[e]){if(i=f["on"+e])f["on"+e]=null;c.event.triggered=true;f[e]()}}catch(n){}if(i)f["on"+e]=i;c.event.triggered=false}}},handle:function(a){var b,d,f,e;a=arguments[0]=c.event.fix(a||A.event);a.currentTarget=this;b=a.type.indexOf(".")<0&&!a.exclusive; if(!b){d=a.type.split(".");a.type=d.shift();f=new RegExp("(^|\\.)"+d.slice(0).sort().join("\\.(?:.*\\.)?")+"(\\.|$)")}e=c.data(this,"events");d=e[a.type];if(e&&d){d=d.slice(0);e=0;for(var j=d.length;e<j;e++){var i=d[e];if(b||f.test(i.namespace)){a.handler=i.handler;a.data=i.data;a.handleObj=i;i=i.handler.apply(this,arguments);if(i!==w){a.result=i;if(i===false){a.preventDefault();a.stopPropagation()}}if(a.isImmediatePropagationStopped())break}}}return a.result},props:"altKey attrChange attrName bubbles button cancelable charCode clientX clientY ctrlKey currentTarget data detail eventPhase fromElement handler keyCode layerX layerY metaKey newValue offsetX offsetY originalTarget pageX pageY prevValue relatedNode relatedTarget screenX screenY shiftKey srcElement target toElement view wheelDelta which".split(" "), fix:function(a){if(a[G])return a;var b=a;a=c.Event(b);for(var 
d=this.props.length,f;d;){f=this.props[--d];a[f]=b[f]}if(!a.target)a.target=a.srcElement||s;if(a.target.nodeType===3)a.target=a.target.parentNode;if(!a.relatedTarget&&a.fromElement)a.relatedTarget=a.fromElement===a.target?a.toElement:a.fromElement;if(a.pageX==null&&a.clientX!=null){b=s.documentElement;d=s.body;a.pageX=a.clientX+(b&&b.scrollLeft||d&&d.scrollLeft||0)-(b&&b.clientLeft||d&&d.clientLeft||0);a.pageY=a.clientY+(b&&b.scrollTop|| d&&d.scrollTop||0)-(b&&b.clientTop||d&&d.clientTop||0)}if(!a.which&&(a.charCode||a.charCode===0?a.charCode:a.keyCode))a.which=a.charCode||a.keyCode;if(!a.metaKey&&a.ctrlKey)a.metaKey=a.ctrlKey;if(!a.which&&a.button!==w)a.which=a.button&1?1:a.button&2?3:a.button&4?2:0;return a},guid:1E8,proxy:c.proxy,special:{ready:{setup:c.bindReady,teardown:c.noop},live:{add:function(a){c.event.add(this,a.origType,c.extend({},a,{handler:oa}))},remove:function(a){var b=true,d=a.origType.replace(O,"");c.each(c.data(this, "events").live||[],function(){if(d===this.origType.replace(O,""))return b=false});b&&c.event.remove(this,a.origType,oa)}},beforeunload:{setup:function(a,b,d){if(this.setInterval)this.onbeforeunload=d;return false},teardown:function(a,b){if(this.onbeforeunload===b)this.onbeforeunload=null}}}};var Ca=s.removeEventListener?function(a,b,d){a.removeEventListener(b,d,false)}:function(a,b,d){a.detachEvent("on"+b,d)};c.Event=function(a){if(!this.preventDefault)return new c.Event(a);if(a&&a.type){this.originalEvent= a;this.type=a.type}else this.type=a;this.timeStamp=J();this[G]=true};c.Event.prototype={preventDefault:function(){this.isDefaultPrevented=Z;var a=this.originalEvent;if(a){a.preventDefault&&a.preventDefault();a.returnValue=false}},stopPropagation:function(){this.isPropagationStopped=Z;var a=this.originalEvent;if(a){a.stopPropagation&&a.stopPropagation();a.cancelBubble=true}},stopImmediatePropagation:function(){this.isImmediatePropagationStopped=Z;this.stopPropagation()},isDefaultPrevented:Y,isPropagationStopped:Y, isImmediatePropagationStopped:Y};var Da=function(a){var b=a.relatedTarget;try{for(;b&&b!==this;)b=b.parentNode;if(b!==this){a.type=a.data;c.event.handle.apply(this,arguments)}}catch(d){}},Ea=function(a){a.type=a.data;c.event.handle.apply(this,arguments)};c.each({mouseenter:"mouseover",mouseleave:"mouseout"},function(a,b){c.event.special[a]={setup:function(d){c.event.add(this,b,d&&d.selector?Ea:Da,a)},teardown:function(d){c.event.remove(this,b,d&&d.selector?Ea:Da)}}});if(!c.support.submitBubbles)c.event.special.submit= {setup:function(){if(this.nodeName.toLowerCase()!=="form"){c.event.add(this,"click.specialSubmit",function(a){var b=a.target,d=b.type;if((d==="submit"||d==="image")&&c(b).closest("form").length)return na("submit",this,arguments)});c.event.add(this,"keypress.specialSubmit",function(a){var b=a.target,d=b.type;if((d==="text"||d==="password")&&c(b).closest("form").length&&a.keyCode===13)return na("submit",this,arguments)})}else return false},teardown:function(){c.event.remove(this,".specialSubmit")}}; if(!c.support.changeBubbles){var da=/textarea|input|select/i,ea,Fa=function(a){var b=a.type,d=a.value;if(b==="radio"||b==="checkbox")d=a.checked;else if(b==="select-multiple")d=a.selectedIndex>-1?c.map(a.options,function(f){return f.selected}).join("-"):"";else if(a.nodeName.toLowerCase()==="select")d=a.selectedIndex;return d},fa=function(a,b){var d=a.target,f,e;if(!(!da.test(d.nodeName)||d.readOnly)){f=c.data(d,"_change_data");e=Fa(d);if(a.type!=="focusout"||d.type!=="radio")c.data(d,"_change_data", 
e);if(!(f===w||e===f))if(f!=null||e){a.type="change";return c.event.trigger(a,b,d)}}};c.event.special.change={filters:{focusout:fa,click:function(a){var b=a.target,d=b.type;if(d==="radio"||d==="checkbox"||b.nodeName.toLowerCase()==="select")return fa.call(this,a)},keydown:function(a){var b=a.target,d=b.type;if(a.keyCode===13&&b.nodeName.toLowerCase()!=="textarea"||a.keyCode===32&&(d==="checkbox"||d==="radio")||d==="select-multiple")return fa.call(this,a)},beforeactivate:function(a){a=a.target;c.data(a, "_change_data",Fa(a))}},setup:function(){if(this.type==="file")return false;for(var a in ea)c.event.add(this,a+".specialChange",ea[a]);return da.test(this.nodeName)},teardown:function(){c.event.remove(this,".specialChange");return da.test(this.nodeName)}};ea=c.event.special.change.filters}s.addEventListener&&c.each({focus:"focusin",blur:"focusout"},function(a,b){function d(f){f=c.event.fix(f);f.type=b;return c.event.handle.call(this,f)}c.event.special[b]={setup:function(){this.addEventListener(a, d,true)},teardown:function(){this.removeEventListener(a,d,true)}}});c.each(["bind","one"],function(a,b){c.fn[b]=function(d,f,e){if(typeof d==="object"){for(var j in d)this[b](j,f,d[j],e);return this}if(c.isFunction(f)){e=f;f=w}var i=b==="one"?c.proxy(e,function(k){c(this).unbind(k,i);return e.apply(this,arguments)}):e;if(d==="unload"&&b!=="one")this.one(d,f,e);else{j=0;for(var o=this.length;j<o;j++)c.event.add(this[j],d,i,f)}return this}});c.fn.extend({unbind:function(a,b){if(typeof a==="object"&& !a.preventDefault)for(var d in a)this.unbind(d,a[d]);else{d=0;for(var f=this.length;d<f;d++)c.event.remove(this[d],a,b)}return this},delegate:function(a,b,d,f){return this.live(b,d,f,a)},undelegate:function(a,b,d){return arguments.length===0?this.unbind("live"):this.die(b,null,d,a)},trigger:function(a,b){return this.each(function(){c.event.trigger(a,b,this)})},triggerHandler:function(a,b){if(this[0]){a=c.Event(a);a.preventDefault();a.stopPropagation();c.event.trigger(a,b,this[0]);return a.result}}, toggle:function(a){for(var b=arguments,d=1;d<b.length;)c.proxy(a,b[d++]);return this.click(c.proxy(a,function(f){var e=(c.data(this,"lastToggle"+a.guid)||0)%d;c.data(this,"lastToggle"+a.guid,e+1);f.preventDefault();return b[e].apply(this,arguments)||false}))},hover:function(a,b){return this.mouseenter(a).mouseleave(b||a)}});var Ga={focus:"focusin",blur:"focusout",mouseenter:"mouseover",mouseleave:"mouseout"};c.each(["live","die"],function(a,b){c.fn[b]=function(d,f,e,j){var i,o=0,k,n,r=j||this.selector, u=j?this:c(this.context);if(c.isFunction(f)){e=f;f=w}for(d=(d||"").split(" ");(i=d[o++])!=null;){j=O.exec(i);k="";if(j){k=j[0];i=i.replace(O,"")}if(i==="hover")d.push("mouseenter"+k,"mouseleave"+k);else{n=i;if(i==="focus"||i==="blur"){d.push(Ga[i]+k);i+=k}else i=(Ga[i]||i)+k;b==="live"?u.each(function(){c.event.add(this,pa(i,r),{data:f,selector:r,handler:e,origType:i,origHandler:e,preType:n})}):u.unbind(pa(i,r),e)}}return this}});c.each("blur focus focusin focusout load resize scroll unload click dblclick mousedown mouseup mousemove mouseover mouseout mouseenter mouseleave change select submit keydown keypress keyup error".split(" "), function(a,b){c.fn[b]=function(d){return d?this.bind(b,d):this.trigger(b)};if(c.attrFn)c.attrFn[b]=true});A.attachEvent&&!A.addEventListener&&A.attachEvent("onunload",function(){for(var a in c.cache)if(c.cache[a].handle)try{c.event.remove(c.cache[a].handle.elem)}catch(b){}});(function(){function a(g){for(var 
h="",l,m=0;g[m];m++){l=g[m];if(l.nodeType===3||l.nodeType===4)h+=l.nodeValue;else if(l.nodeType!==8)h+=a(l.childNodes)}return h}function b(g,h,l,m,q,p){q=0;for(var v=m.length;q<v;q++){var t=m[q]; if(t){t=t[g];for(var y=false;t;){if(t.sizcache===l){y=m[t.sizset];break}if(t.nodeType===1&&!p){t.sizcache=l;t.sizset=q}if(t.nodeName.toLowerCase()===h){y=t;break}t=t[g]}m[q]=y}}}function d(g,h,l,m,q,p){q=0;for(var v=m.length;q<v;q++){var t=m[q];if(t){t=t[g];for(var y=false;t;){if(t.sizcache===l){y=m[t.sizset];break}if(t.nodeType===1){if(!p){t.sizcache=l;t.sizset=q}if(typeof h!=="string"){if(t===h){y=true;break}}else if(k.filter(h,[t]).length>0){y=t;break}}t=t[g]}m[q]=y}}}var f=/((?:\((?:\([^()]+\)|[^()]+)+\)|\[(?:\[[^[\]]*\]|['"][^'"]*['"]|[^[\]'"]+)+\]|\\.|[^ >+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g, e=0,j=Object.prototype.toString,i=false,o=true;[0,0].sort(function(){o=false;return 0});var k=function(g,h,l,m){l=l||[];var q=h=h||s;if(h.nodeType!==1&&h.nodeType!==9)return[];if(!g||typeof g!=="string")return l;for(var p=[],v,t,y,S,H=true,M=x(h),I=g;(f.exec(""),v=f.exec(I))!==null;){I=v[3];p.push(v[1]);if(v[2]){S=v[3];break}}if(p.length>1&&r.exec(g))if(p.length===2&&n.relative[p[0]])t=ga(p[0]+p[1],h);else for(t=n.relative[p[0]]?[h]:k(p.shift(),h);p.length;){g=p.shift();if(n.relative[g])g+=p.shift(); t=ga(g,t)}else{if(!m&&p.length>1&&h.nodeType===9&&!M&&n.match.ID.test(p[0])&&!n.match.ID.test(p[p.length-1])){v=k.find(p.shift(),h,M);h=v.expr?k.filter(v.expr,v.set)[0]:v.set[0]}if(h){v=m?{expr:p.pop(),set:z(m)}:k.find(p.pop(),p.length===1&&(p[0]==="~"||p[0]==="+")&&h.parentNode?h.parentNode:h,M);t=v.expr?k.filter(v.expr,v.set):v.set;if(p.length>0)y=z(t);else H=false;for(;p.length;){var D=p.pop();v=D;if(n.relative[D])v=p.pop();else D="";if(v==null)v=h;n.relative[D](y,v,M)}}else y=[]}y||(y=t);y||k.error(D|| g);if(j.call(y)==="[object Array]")if(H)if(h&&h.nodeType===1)for(g=0;y[g]!=null;g++){if(y[g]&&(y[g]===true||y[g].nodeType===1&&E(h,y[g])))l.push(t[g])}else for(g=0;y[g]!=null;g++)y[g]&&y[g].nodeType===1&&l.push(t[g]);else l.push.apply(l,y);else z(y,l);if(S){k(S,q,l,m);k.uniqueSort(l)}return l};k.uniqueSort=function(g){if(B){i=o;g.sort(B);if(i)for(var h=1;h<g.length;h++)g[h]===g[h-1]&&g.splice(h--,1)}return g};k.matches=function(g,h){return k(g,null,null,h)};k.find=function(g,h,l){var m,q;if(!g)return[]; for(var p=0,v=n.order.length;p<v;p++){var t=n.order[p];if(q=n.leftMatch[t].exec(g)){var y=q[1];q.splice(1,1);if(y.substr(y.length-1)!=="\\"){q[1]=(q[1]||"").replace(/\\/g,"");m=n.find[t](q,h,l);if(m!=null){g=g.replace(n.match[t],"");break}}}}m||(m=h.getElementsByTagName("*"));return{set:m,expr:g}};k.filter=function(g,h,l,m){for(var q=g,p=[],v=h,t,y,S=h&&h[0]&&x(h[0]);g&&h.length;){for(var H in n.filter)if((t=n.leftMatch[H].exec(g))!=null&&t[2]){var M=n.filter[H],I,D;D=t[1];y=false;t.splice(1,1);if(D.substr(D.length- 1)!=="\\"){if(v===p)p=[];if(n.preFilter[H])if(t=n.preFilter[H](t,v,l,p,m,S)){if(t===true)continue}else y=I=true;if(t)for(var U=0;(D=v[U])!=null;U++)if(D){I=M(D,t,U,v);var Ha=m^!!I;if(l&&I!=null)if(Ha)y=true;else v[U]=false;else if(Ha){p.push(D);y=true}}if(I!==w){l||(v=p);g=g.replace(n.match[H],"");if(!y)return[];break}}}if(g===q)if(y==null)k.error(g);else break;q=g}return v};k.error=function(g){throw"Syntax error, unrecognized expression: "+g;};var n=k.selectors={order:["ID","NAME","TAG"],match:{ID:/#((?:[\w\u00c0-\uFFFF-]|\\.)+)/, 
CLASS:/\.((?:[\w\u00c0-\uFFFF-]|\\.)+)/,NAME:/\[name=['"]*((?:[\w\u00c0-\uFFFF-]|\\.)+)['"]*\]/,ATTR:/\[\s*((?:[\w\u00c0-\uFFFF-]|\\.)+)\s*(?:(\S?=)\s*(['"]*)(.*?)\3|)\s*\]/,TAG:/^((?:[\w\u00c0-\uFFFF\*-]|\\.)+)/,CHILD:/:(only|nth|last|first)-child(?:\((even|odd|[\dn+-]*)\))?/,POS:/:(nth|eq|gt|lt|first|last|even|odd)(?:\((\d*)\))?(?=[^-]|$)/,PSEUDO:/:((?:[\w\u00c0-\uFFFF-]|\\.)+)(?:\((['"]?)((?:\([^\)]+\)|[^\(\)]*)+)\2\))?/},leftMatch:{},attrMap:{"class":"className","for":"htmlFor"},attrHandle:{href:function(g){return g.getAttribute("href")}}, relative:{"+":function(g,h){var l=typeof h==="string",m=l&&!/\W/.test(h);l=l&&!m;if(m)h=h.toLowerCase();m=0;for(var q=g.length,p;m<q;m++)if(p=g[m]){for(;(p=p.previousSibling)&&p.nodeType!==1;);g[m]=l||p&&p.nodeName.toLowerCase()===h?p||false:p===h}l&&k.filter(h,g,true)},">":function(g,h){var l=typeof h==="string";if(l&&!/\W/.test(h)){h=h.toLowerCase();for(var m=0,q=g.length;m<q;m++){var p=g[m];if(p){l=p.parentNode;g[m]=l.nodeName.toLowerCase()===h?l:false}}}else{m=0;for(q=g.length;m<q;m++)if(p=g[m])g[m]= l?p.parentNode:p.parentNode===h;l&&k.filter(h,g,true)}},"":function(g,h,l){var m=e++,q=d;if(typeof h==="string"&&!/\W/.test(h)){var p=h=h.toLowerCase();q=b}q("parentNode",h,m,g,p,l)},"~":function(g,h,l){var m=e++,q=d;if(typeof h==="string"&&!/\W/.test(h)){var p=h=h.toLowerCase();q=b}q("previousSibling",h,m,g,p,l)}},find:{ID:function(g,h,l){if(typeof h.getElementById!=="undefined"&&!l)return(g=h.getElementById(g[1]))?[g]:[]},NAME:function(g,h){if(typeof h.getElementsByName!=="undefined"){var l=[]; h=h.getElementsByName(g[1]);for(var m=0,q=h.length;m<q;m++)h[m].getAttribute("name")===g[1]&&l.push(h[m]);return l.length===0?null:l}},TAG:function(g,h){return h.getElementsByTagName(g[1])}},preFilter:{CLASS:function(g,h,l,m,q,p){g=" "+g[1].replace(/\\/g,"")+" ";if(p)return g;p=0;for(var v;(v=h[p])!=null;p++)if(v)if(q^(v.className&&(" "+v.className+" ").replace(/[\t\n]/g," ").indexOf(g)>=0))l||m.push(v);else if(l)h[p]=false;return false},ID:function(g){return g[1].replace(/\\/g,"")},TAG:function(g){return g[1].toLowerCase()}, CHILD:function(g){if(g[1]==="nth"){var h=/(-?)(\d*)n((?:\+|-)?\d*)/.exec(g[2]==="even"&&"2n"||g[2]==="odd"&&"2n+1"||!/\D/.test(g[2])&&"0n+"+g[2]||g[2]);g[2]=h[1]+(h[2]||1)-0;g[3]=h[3]-0}g[0]=e++;return g},ATTR:function(g,h,l,m,q,p){h=g[1].replace(/\\/g,"");if(!p&&n.attrMap[h])g[1]=n.attrMap[h];if(g[2]==="~=")g[4]=" "+g[4]+" ";return g},PSEUDO:function(g,h,l,m,q){if(g[1]==="not")if((f.exec(g[3])||"").length>1||/^\w/.test(g[3]))g[3]=k(g[3],null,null,h);else{g=k.filter(g[3],h,l,true^q);l||m.push.apply(m, g);return false}else if(n.match.POS.test(g[0])||n.match.CHILD.test(g[0]))return true;return g},POS:function(g){g.unshift(true);return g}},filters:{enabled:function(g){return g.disabled===false&&g.type!=="hidden"},disabled:function(g){return g.disabled===true},checked:function(g){return g.checked===true},selected:function(g){return g.selected===true},parent:function(g){return!!g.firstChild},empty:function(g){return!g.firstChild},has:function(g,h,l){return!!k(l[3],g).length},header:function(g){return/h\d/i.test(g.nodeName)}, 
text:function(g){return"text"===g.type},radio:function(g){return"radio"===g.type},checkbox:function(g){return"checkbox"===g.type},file:function(g){return"file"===g.type},password:function(g){return"password"===g.type},submit:function(g){return"submit"===g.type},image:function(g){return"image"===g.type},reset:function(g){return"reset"===g.type},button:function(g){return"button"===g.type||g.nodeName.toLowerCase()==="button"},input:function(g){return/input|select|textarea|button/i.test(g.nodeName)}}, setFilters:{first:function(g,h){return h===0},last:function(g,h,l,m){return h===m.length-1},even:function(g,h){return h%2===0},odd:function(g,h){return h%2===1},lt:function(g,h,l){return h<l[3]-0},gt:function(g,h,l){return h>l[3]-0},nth:function(g,h,l){return l[3]-0===h},eq:function(g,h,l){return l[3]-0===h}},filter:{PSEUDO:function(g,h,l,m){var q=h[1],p=n.filters[q];if(p)return p(g,l,h,m);else if(q==="contains")return(g.textContent||g.innerText||a([g])||"").indexOf(h[3])>=0;else if(q==="not"){h= h[3];l=0;for(m=h.length;l<m;l++)if(h[l]===g)return false;return true}else k.error("Syntax error, unrecognized expression: "+q)},CHILD:function(g,h){var l=h[1],m=g;switch(l){case "only":case "first":for(;m=m.previousSibling;)if(m.nodeType===1)return false;if(l==="first")return true;m=g;case "last":for(;m=m.nextSibling;)if(m.nodeType===1)return false;return true;case "nth":l=h[2];var q=h[3];if(l===1&&q===0)return true;h=h[0];var p=g.parentNode;if(p&&(p.sizcache!==h||!g.nodeIndex)){var v=0;for(m=p.firstChild;m;m= m.nextSibling)if(m.nodeType===1)m.nodeIndex=++v;p.sizcache=h}g=g.nodeIndex-q;return l===0?g===0:g%l===0&&g/l>=0}},ID:function(g,h){return g.nodeType===1&&g.getAttribute("id")===h},TAG:function(g,h){return h==="*"&&g.nodeType===1||g.nodeName.toLowerCase()===h},CLASS:function(g,h){return(" "+(g.className||g.getAttribute("class"))+" ").indexOf(h)>-1},ATTR:function(g,h){var l=h[1];g=n.attrHandle[l]?n.attrHandle[l](g):g[l]!=null?g[l]:g.getAttribute(l);l=g+"";var m=h[2];h=h[4];return g==null?m==="!=":m=== "="?l===h:m==="*="?l.indexOf(h)>=0:m==="~="?(" "+l+" ").indexOf(h)>=0:!h?l&&g!==false:m==="!="?l!==h:m==="^="?l.indexOf(h)===0:m==="$="?l.substr(l.length-h.length)===h:m==="|="?l===h||l.substr(0,h.length+1)===h+"-":false},POS:function(g,h,l,m){var q=n.setFilters[h[2]];if(q)return q(g,l,h,m)}}},r=n.match.POS;for(var u in n.match){n.match[u]=new RegExp(n.match[u].source+/(?![^\[]*\])(?![^\(]*\))/.source);n.leftMatch[u]=new RegExp(/(^(?:.|\r|\n)*?)/.source+n.match[u].source.replace(/\\(\d+)/g,function(g, h){return"\\"+(h-0+1)}))}var z=function(g,h){g=Array.prototype.slice.call(g,0);if(h){h.push.apply(h,g);return h}return g};try{Array.prototype.slice.call(s.documentElement.childNodes,0)}catch(C){z=function(g,h){h=h||[];if(j.call(g)==="[object Array]")Array.prototype.push.apply(h,g);else if(typeof g.length==="number")for(var l=0,m=g.length;l<m;l++)h.push(g[l]);else for(l=0;g[l];l++)h.push(g[l]);return h}}var B;if(s.documentElement.compareDocumentPosition)B=function(g,h){if(!g.compareDocumentPosition|| !h.compareDocumentPosition){if(g==h)i=true;return g.compareDocumentPosition?-1:1}g=g.compareDocumentPosition(h)&4?-1:g===h?0:1;if(g===0)i=true;return g};else if("sourceIndex"in s.documentElement)B=function(g,h){if(!g.sourceIndex||!h.sourceIndex){if(g==h)i=true;return g.sourceIndex?-1:1}g=g.sourceIndex-h.sourceIndex;if(g===0)i=true;return g};else if(s.createRange)B=function(g,h){if(!g.ownerDocument||!h.ownerDocument){if(g==h)i=true;return g.ownerDocument?-1:1}var l=g.ownerDocument.createRange(),m= 
h.ownerDocument.createRange();l.setStart(g,0);l.setEnd(g,0);m.setStart(h,0);m.setEnd(h,0);g=l.compareBoundaryPoints(Range.START_TO_END,m);if(g===0)i=true;return g};(function(){var g=s.createElement("div"),h="script"+(new Date).getTime();g.innerHTML="<a name='"+h+"'/>";var l=s.documentElement;l.insertBefore(g,l.firstChild);if(s.getElementById(h)){n.find.ID=function(m,q,p){if(typeof q.getElementById!=="undefined"&&!p)return(q=q.getElementById(m[1]))?q.id===m[1]||typeof q.getAttributeNode!=="undefined"&& q.getAttributeNode("id").nodeValue===m[1]?[q]:w:[]};n.filter.ID=function(m,q){var p=typeof m.getAttributeNode!=="undefined"&&m.getAttributeNode("id");return m.nodeType===1&&p&&p.nodeValue===q}}l.removeChild(g);l=g=null})();(function(){var g=s.createElement("div");g.appendChild(s.createComment(""));if(g.getElementsByTagName("*").length>0)n.find.TAG=function(h,l){l=l.getElementsByTagName(h[1]);if(h[1]==="*"){h=[];for(var m=0;l[m];m++)l[m].nodeType===1&&h.push(l[m]);l=h}return l};g.innerHTML="<a href='#'></a>"; if(g.firstChild&&typeof g.firstChild.getAttribute!=="undefined"&&g.firstChild.getAttribute("href")!=="#")n.attrHandle.href=function(h){return h.getAttribute("href",2)};g=null})();s.querySelectorAll&&function(){var g=k,h=s.createElement("div");h.innerHTML="<p class='TEST'></p>";if(!(h.querySelectorAll&&h.querySelectorAll(".TEST").length===0)){k=function(m,q,p,v){q=q||s;if(!v&&q.nodeType===9&&!x(q))try{return z(q.querySelectorAll(m),p)}catch(t){}return g(m,q,p,v)};for(var l in g)k[l]=g[l];h=null}}(); (function(){var g=s.createElement("div");g.innerHTML="<div class='test e'></div><div class='test'></div>";if(!(!g.getElementsByClassName||g.getElementsByClassName("e").length===0)){g.lastChild.className="e";if(g.getElementsByClassName("e").length!==1){n.order.splice(1,0,"CLASS");n.find.CLASS=function(h,l,m){if(typeof l.getElementsByClassName!=="undefined"&&!m)return l.getElementsByClassName(h[1])};g=null}}})();var E=s.compareDocumentPosition?function(g,h){return!!(g.compareDocumentPosition(h)&16)}: function(g,h){return g!==h&&(g.contains?g.contains(h):true)},x=function(g){return(g=(g?g.ownerDocument||g:0).documentElement)?g.nodeName!=="HTML":false},ga=function(g,h){var l=[],m="",q;for(h=h.nodeType?[h]:h;q=n.match.PSEUDO.exec(g);){m+=q[0];g=g.replace(n.match.PSEUDO,"")}g=n.relative[g]?g+"*":g;q=0;for(var p=h.length;q<p;q++)k(g,h[q],l);return k.filter(m,l)};c.find=k;c.expr=k.selectors;c.expr[":"]=c.expr.filters;c.unique=k.uniqueSort;c.text=a;c.isXMLDoc=x;c.contains=E})();var eb=/Until$/,fb=/^(?:parents|prevUntil|prevAll)/, gb=/,/;R=Array.prototype.slice;var Ia=function(a,b,d){if(c.isFunction(b))return c.grep(a,function(e,j){return!!b.call(e,j,e)===d});else if(b.nodeType)return c.grep(a,function(e){return e===b===d});else if(typeof b==="string"){var f=c.grep(a,function(e){return e.nodeType===1});if(Ua.test(b))return c.filter(b,f,!d);else b=c.filter(b,f)}return c.grep(a,function(e){return c.inArray(e,b)>=0===d})};c.fn.extend({find:function(a){for(var b=this.pushStack("","find",a),d=0,f=0,e=this.length;f<e;f++){d=b.length; c.find(a,this[f],b);if(f>0)for(var j=d;j<b.length;j++)for(var i=0;i<d;i++)if(b[i]===b[j]){b.splice(j--,1);break}}return b},has:function(a){var b=c(a);return this.filter(function(){for(var d=0,f=b.length;d<f;d++)if(c.contains(this,b[d]))return true})},not:function(a){return this.pushStack(Ia(this,a,false),"not",a)},filter:function(a){return this.pushStack(Ia(this,a,true),"filter",a)},is:function(a){return!!a&&c.filter(a,this).length>0},closest:function(a,b){if(c.isArray(a)){var 
d=[],f=this[0],e,j= {},i;if(f&&a.length){e=0;for(var o=a.length;e<o;e++){i=a[e];j[i]||(j[i]=c.expr.match.POS.test(i)?c(i,b||this.context):i)}for(;f&&f.ownerDocument&&f!==b;){for(i in j){e=j[i];if(e.jquery?e.index(f)>-1:c(f).is(e)){d.push({selector:i,elem:f});delete j[i]}}f=f.parentNode}}return d}var k=c.expr.match.POS.test(a)?c(a,b||this.context):null;return this.map(function(n,r){for(;r&&r.ownerDocument&&r!==b;){if(k?k.index(r)>-1:c(r).is(a))return r;r=r.parentNode}return null})},index:function(a){if(!a||typeof a=== "string")return c.inArray(this[0],a?c(a):this.parent().children());return c.inArray(a.jquery?a[0]:a,this)},add:function(a,b){a=typeof a==="string"?c(a,b||this.context):c.makeArray(a);b=c.merge(this.get(),a);return this.pushStack(qa(a[0])||qa(b[0])?b:c.unique(b))},andSelf:function(){return this.add(this.prevObject)}});c.each({parent:function(a){return(a=a.parentNode)&&a.nodeType!==11?a:null},parents:function(a){return c.dir(a,"parentNode")},parentsUntil:function(a,b,d){return c.dir(a,"parentNode", d)},next:function(a){return c.nth(a,2,"nextSibling")},prev:function(a){return c.nth(a,2,"previousSibling")},nextAll:function(a){return c.dir(a,"nextSibling")},prevAll:function(a){return c.dir(a,"previousSibling")},nextUntil:function(a,b,d){return c.dir(a,"nextSibling",d)},prevUntil:function(a,b,d){return c.dir(a,"previousSibling",d)},siblings:function(a){return c.sibling(a.parentNode.firstChild,a)},children:function(a){return c.sibling(a.firstChild)},contents:function(a){return c.nodeName(a,"iframe")? a.contentDocument||a.contentWindow.document:c.makeArray(a.childNodes)}},function(a,b){c.fn[a]=function(d,f){var e=c.map(this,b,d);eb.test(a)||(f=d);if(f&&typeof f==="string")e=c.filter(f,e);e=this.length>1?c.unique(e):e;if((this.length>1||gb.test(f))&&fb.test(a))e=e.reverse();return this.pushStack(e,a,R.call(arguments).join(","))}});c.extend({filter:function(a,b,d){if(d)a=":not("+a+")";return c.find.matches(a,b)},dir:function(a,b,d){var f=[];for(a=a[b];a&&a.nodeType!==9&&(d===w||a.nodeType!==1||!c(a).is(d));){a.nodeType=== 1&&f.push(a);a=a[b]}return f},nth:function(a,b,d){b=b||1;for(var f=0;a;a=a[d])if(a.nodeType===1&&++f===b)break;return a},sibling:function(a,b){for(var d=[];a;a=a.nextSibling)a.nodeType===1&&a!==b&&d.push(a);return d}});var Ja=/ jQuery\d+="(?:\d+|null)"/g,V=/^\s+/,Ka=/(<([\w:]+)[^>]*?)\/>/g,hb=/^(?:area|br|col|embed|hr|img|input|link|meta|param)$/i,La=/<([\w:]+)/,ib=/<tbody/i,jb=/<|&#?\w+;/,ta=/<script|<object|<embed|<option|<style/i,ua=/checked\s*(?:[^=]|=\s*.checked.)/i,Ma=function(a,b,d){return hb.test(d)? 
a:b+"></"+d+">"},F={option:[1,"<select multiple='multiple'>","</select>"],legend:[1,"<fieldset>","</fieldset>"],thead:[1,"<table>","</table>"],tr:[2,"<table><tbody>","</tbody></table>"],td:[3,"<table><tbody><tr>","</tr></tbody></table>"],col:[2,"<table><tbody></tbody><colgroup>","</colgroup></table>"],area:[1,"<map>","</map>"],_default:[0,"",""]};F.optgroup=F.option;F.tbody=F.tfoot=F.colgroup=F.caption=F.thead;F.th=F.td;if(!c.support.htmlSerialize)F._default=[1,"div<div>","</div>"];c.fn.extend({text:function(a){if(c.isFunction(a))return this.each(function(b){var d= c(this);d.text(a.call(this,b,d.text()))});if(typeof a!=="object"&&a!==w)return this.empty().append((this[0]&&this[0].ownerDocument||s).createTextNode(a));return c.text(this)},wrapAll:function(a){if(c.isFunction(a))return this.each(function(d){c(this).wrapAll(a.call(this,d))});if(this[0]){var b=c(a,this[0].ownerDocument).eq(0).clone(true);this[0].parentNode&&b.insertBefore(this[0]);b.map(function(){for(var d=this;d.firstChild&&d.firstChild.nodeType===1;)d=d.firstChild;return d}).append(this)}return this}, wrapInner:function(a){if(c.isFunction(a))return this.each(function(b){c(this).wrapInner(a.call(this,b))});return this.each(function(){var b=c(this),d=b.contents();d.length?d.wrapAll(a):b.append(a)})},wrap:function(a){return this.each(function(){c(this).wrapAll(a)})},unwrap:function(){return this.parent().each(function(){c.nodeName(this,"body")||c(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.appendChild(a)})}, prepend:function(){return this.domManip(arguments,true,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b,this)});else if(arguments.length){var a=c(arguments[0]);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,false,function(b){this.parentNode.insertBefore(b, this.nextSibling)});else if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,c(arguments[0]).toArray());return a}},remove:function(a,b){for(var d=0,f;(f=this[d])!=null;d++)if(!a||c.filter(a,[f]).length){if(!b&&f.nodeType===1){c.cleanData(f.getElementsByTagName("*"));c.cleanData([f])}f.parentNode&&f.parentNode.removeChild(f)}return this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++)for(b.nodeType===1&&c.cleanData(b.getElementsByTagName("*"));b.firstChild;)b.removeChild(b.firstChild); return this},clone:function(a){var b=this.map(function(){if(!c.support.noCloneEvent&&!c.isXMLDoc(this)){var d=this.outerHTML,f=this.ownerDocument;if(!d){d=f.createElement("div");d.appendChild(this.cloneNode(true));d=d.innerHTML}return c.clean([d.replace(Ja,"").replace(/=([^="'>\s]+\/)>/g,'="$1">').replace(V,"")],f)[0]}else return this.cloneNode(true)});if(a===true){ra(this,b);ra(this.find("*"),b.find("*"))}return b},html:function(a){if(a===w)return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(Ja, ""):null;else if(typeof a==="string"&&!ta.test(a)&&(c.support.leadingWhitespace||!V.test(a))&&!F[(La.exec(a)||["",""])[1].toLowerCase()]){a=a.replace(Ka,Ma);try{for(var b=0,d=this.length;b<d;b++)if(this[b].nodeType===1){c.cleanData(this[b].getElementsByTagName("*"));this[b].innerHTML=a}}catch(f){this.empty().append(a)}}else c.isFunction(a)?this.each(function(e){var 
j=c(this),i=j.html();j.empty().append(function(){return a.call(this,e,i)})}):this.empty().append(a);return this},replaceWith:function(a){if(this[0]&& this[0].parentNode){if(c.isFunction(a))return this.each(function(b){var d=c(this),f=d.html();d.replaceWith(a.call(this,b,f))});if(typeof a!=="string")a=c(a).detach();return this.each(function(){var b=this.nextSibling,d=this.parentNode;c(this).remove();b?c(b).before(a):c(d).append(a)})}else return this.pushStack(c(c.isFunction(a)?a():a),"replaceWith",a)},detach:function(a){return this.remove(a,true)},domManip:function(a,b,d){function f(u){return c.nodeName(u,"table")?u.getElementsByTagName("tbody")[0]|| u.appendChild(u.ownerDocument.createElement("tbody")):u}var e,j,i=a[0],o=[],k;if(!c.support.checkClone&&arguments.length===3&&typeof i==="string"&&ua.test(i))return this.each(function(){c(this).domManip(a,b,d,true)});if(c.isFunction(i))return this.each(function(u){var z=c(this);a[0]=i.call(this,u,b?z.html():w);z.domManip(a,b,d)});if(this[0]){e=i&&i.parentNode;e=c.support.parentNode&&e&&e.nodeType===11&&e.childNodes.length===this.length?{fragment:e}:sa(a,this,o);k=e.fragment;if(j=k.childNodes.length=== 1?(k=k.firstChild):k.firstChild){b=b&&c.nodeName(j,"tr");for(var n=0,r=this.length;n<r;n++)d.call(b?f(this[n],j):this[n],n>0||e.cacheable||this.length>1?k.cloneNode(true):k)}o.length&&c.each(o,Qa)}return this}});c.fragments={};c.each({appendTo:"append",prependTo:"prepend",insertBefore:"before",insertAfter:"after",replaceAll:"replaceWith"},function(a,b){c.fn[a]=function(d){var f=[];d=c(d);var e=this.length===1&&this[0].parentNode;if(e&&e.nodeType===11&&e.childNodes.length===1&&d.length===1){d[b](this[0]); return this}else{e=0;for(var j=d.length;e<j;e++){var i=(e>0?this.clone(true):this).get();c.fn[b].apply(c(d[e]),i);f=f.concat(i)}return this.pushStack(f,a,d.selector)}}});c.extend({clean:function(a,b,d,f){b=b||s;if(typeof b.createElement==="undefined")b=b.ownerDocument||b[0]&&b[0].ownerDocument||s;for(var e=[],j=0,i;(i=a[j])!=null;j++){if(typeof i==="number")i+="";if(i){if(typeof i==="string"&&!jb.test(i))i=b.createTextNode(i);else if(typeof i==="string"){i=i.replace(Ka,Ma);var o=(La.exec(i)||["", ""])[1].toLowerCase(),k=F[o]||F._default,n=k[0],r=b.createElement("div");for(r.innerHTML=k[1]+i+k[2];n--;)r=r.lastChild;if(!c.support.tbody){n=ib.test(i);o=o==="table"&&!n?r.firstChild&&r.firstChild.childNodes:k[1]==="<table>"&&!n?r.childNodes:[];for(k=o.length-1;k>=0;--k)c.nodeName(o[k],"tbody")&&!o[k].childNodes.length&&o[k].parentNode.removeChild(o[k])}!c.support.leadingWhitespace&&V.test(i)&&r.insertBefore(b.createTextNode(V.exec(i)[0]),r.firstChild);i=r.childNodes}if(i.nodeType)e.push(i);else e= c.merge(e,i)}}if(d)for(j=0;e[j];j++)if(f&&c.nodeName(e[j],"script")&&(!e[j].type||e[j].type.toLowerCase()==="text/javascript"))f.push(e[j].parentNode?e[j].parentNode.removeChild(e[j]):e[j]);else{e[j].nodeType===1&&e.splice.apply(e,[j+1,0].concat(c.makeArray(e[j].getElementsByTagName("script"))));d.appendChild(e[j])}return e},cleanData:function(a){for(var b,d,f=c.cache,e=c.event.special,j=c.support.deleteExpando,i=0,o;(o=a[i])!=null;i++)if(d=o[c.expando]){b=f[d];if(b.events)for(var k in b.events)e[k]? 
c.event.remove(o,k):Ca(o,k,b.handle);if(j)delete o[c.expando];else o.removeAttribute&&o.removeAttribute(c.expando);delete f[d]}}});var kb=/z-?index|font-?weight|opacity|zoom|line-?height/i,Na=/alpha\([^)]*\)/,Oa=/opacity=([^)]*)/,ha=/float/i,ia=/-([a-z])/ig,lb=/([A-Z])/g,mb=/^-?\d+(?:px)?$/i,nb=/^-?\d/,ob={position:"absolute",visibility:"hidden",display:"block"},pb=["Left","Right"],qb=["Top","Bottom"],rb=s.defaultView&&s.defaultView.getComputedStyle,Pa=c.support.cssFloat?"cssFloat":"styleFloat",ja= function(a,b){return b.toUpperCase()};c.fn.css=function(a,b){return X(this,a,b,true,function(d,f,e){if(e===w)return c.curCSS(d,f);if(typeof e==="number"&&!kb.test(f))e+="px";c.style(d,f,e)})};c.extend({style:function(a,b,d){if(!a||a.nodeType===3||a.nodeType===8)return w;if((b==="width"||b==="height")&&parseFloat(d)<0)d=w;var f=a.style||a,e=d!==w;if(!c.support.opacity&&b==="opacity"){if(e){f.zoom=1;b=parseInt(d,10)+""==="NaN"?"":"alpha(opacity="+d*100+")";a=f.filter||c.curCSS(a,"filter")||"";f.filter= Na.test(a)?a.replace(Na,b):b}return f.filter&&f.filter.indexOf("opacity=")>=0?parseFloat(Oa.exec(f.filter)[1])/100+"":""}if(ha.test(b))b=Pa;b=b.replace(ia,ja);if(e)f[b]=d;return f[b]},css:function(a,b,d,f){if(b==="width"||b==="height"){var e,j=b==="width"?pb:qb;function i(){e=b==="width"?a.offsetWidth:a.offsetHeight;f!=="border"&&c.each(j,function(){f||(e-=parseFloat(c.curCSS(a,"padding"+this,true))||0);if(f==="margin")e+=parseFloat(c.curCSS(a,"margin"+this,true))||0;else e-=parseFloat(c.curCSS(a, "border"+this+"Width",true))||0})}a.offsetWidth!==0?i():c.swap(a,ob,i);return Math.max(0,Math.round(e))}return c.curCSS(a,b,d)},curCSS:function(a,b,d){var f,e=a.style;if(!c.support.opacity&&b==="opacity"&&a.currentStyle){f=Oa.test(a.currentStyle.filter||"")?parseFloat(RegExp.$1)/100+"":"";return f===""?"1":f}if(ha.test(b))b=Pa;if(!d&&e&&e[b])f=e[b];else if(rb){if(ha.test(b))b="float";b=b.replace(lb,"-$1").toLowerCase();e=a.ownerDocument.defaultView;if(!e)return null;if(a=e.getComputedStyle(a,null))f= a.getPropertyValue(b);if(b==="opacity"&&f==="")f="1"}else if(a.currentStyle){d=b.replace(ia,ja);f=a.currentStyle[b]||a.currentStyle[d];if(!mb.test(f)&&nb.test(f)){b=e.left;var j=a.runtimeStyle.left;a.runtimeStyle.left=a.currentStyle.left;e.left=d==="fontSize"?"1em":f||0;f=e.pixelLeft+"px";e.left=b;a.runtimeStyle.left=j}}return f},swap:function(a,b,d){var f={};for(var e in b){f[e]=a.style[e];a.style[e]=b[e]}d.call(a);for(e in b)a.style[e]=f[e]}});if(c.expr&&c.expr.filters){c.expr.filters.hidden=function(a){var b= a.offsetWidth,d=a.offsetHeight,f=a.nodeName.toLowerCase()==="tr";return b===0&&d===0&&!f?true:b>0&&d>0&&!f?false:c.curCSS(a,"display")==="none"};c.expr.filters.visible=function(a){return!c.expr.filters.hidden(a)}}var sb=J(),tb=/<script(.|\s)*?\/script>/gi,ub=/select|textarea/i,vb=/color|date|datetime|email|hidden|month|number|password|range|search|tel|text|time|url|week/i,N=/=\?(&|$)/,ka=/\?/,wb=/(\?|&)_=.*?(&|$)/,xb=/^(\w+:)?\/\/([^\/?#]+)/,yb=/%20/g,zb=c.fn.load;c.fn.extend({load:function(a,b,d){if(typeof a!== "string")return zb.call(this,a);else if(!this.length)return this;var f=a.indexOf(" ");if(f>=0){var e=a.slice(f,a.length);a=a.slice(0,f)}f="GET";if(b)if(c.isFunction(b)){d=b;b=null}else if(typeof b==="object"){b=c.param(b,c.ajaxSettings.traditional);f="POST"}var j=this;c.ajax({url:a,type:f,dataType:"html",data:b,complete:function(i,o){if(o==="success"||o==="notmodified")j.html(e?c("<div />").append(i.responseText.replace(tb,"")).find(e):i.responseText);d&&j.each(d,[i.responseText,o,i])}});return 
this}, serialize:function(){return c.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?c.makeArray(this.elements):this}).filter(function(){return this.name&&!this.disabled&&(this.checked||ub.test(this.nodeName)||vb.test(this.type))}).map(function(a,b){a=c(this).val();return a==null?null:c.isArray(a)?c.map(a,function(d){return{name:b.name,value:d}}):{name:b.name,value:a}}).get()}});c.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "), function(a,b){c.fn[b]=function(d){return this.bind(b,d)}});c.extend({get:function(a,b,d,f){if(c.isFunction(b)){f=f||d;d=b;b=null}return c.ajax({type:"GET",url:a,data:b,success:d,dataType:f})},getScript:function(a,b){return c.get(a,null,b,"script")},getJSON:function(a,b,d){return c.get(a,b,d,"json")},post:function(a,b,d,f){if(c.isFunction(b)){f=f||d;d=b;b={}}return c.ajax({type:"POST",url:a,data:b,success:d,dataType:f})},ajaxSetup:function(a){c.extend(c.ajaxSettings,a)},ajaxSettings:{url:location.href, global:true,type:"GET",contentType:"application/x-www-form-urlencoded",processData:true,async:true,xhr:A.XMLHttpRequest&&(A.location.protocol!=="file:"||!A.ActiveXObject)?function(){return new A.XMLHttpRequest}:function(){try{return new A.ActiveXObject("Microsoft.XMLHTTP")}catch(a){}},accepts:{xml:"application/xml, text/xml",html:"text/html",script:"text/javascript, application/javascript",json:"application/json, text/javascript",text:"text/plain",_default:"*/*"}},lastModified:{},etag:{},ajax:function(a){function b(){e.success&& e.success.call(k,o,i,x);e.global&&f("ajaxSuccess",[x,e])}function d(){e.complete&&e.complete.call(k,x,i);e.global&&f("ajaxComplete",[x,e]);e.global&&!--c.active&&c.event.trigger("ajaxStop")}function f(q,p){(e.context?c(e.context):c.event).trigger(q,p)}var e=c.extend(true,{},c.ajaxSettings,a),j,i,o,k=a&&a.context||e,n=e.type.toUpperCase();if(e.data&&e.processData&&typeof e.data!=="string")e.data=c.param(e.data,e.traditional);if(e.dataType==="jsonp"){if(n==="GET")N.test(e.url)||(e.url+=(ka.test(e.url)? 
"&":"?")+(e.jsonp||"callback")+"=?");else if(!e.data||!N.test(e.data))e.data=(e.data?e.data+"&":"")+(e.jsonp||"callback")+"=?";e.dataType="json"}if(e.dataType==="json"&&(e.data&&N.test(e.data)||N.test(e.url))){j=e.jsonpCallback||"jsonp"+sb++;if(e.data)e.data=(e.data+"").replace(N,"="+j+"$1");e.url=e.url.replace(N,"="+j+"$1");e.dataType="script";A[j]=A[j]||function(q){o=q;b();d();A[j]=w;try{delete A[j]}catch(p){}z&&z.removeChild(C)}}if(e.dataType==="script"&&e.cache===null)e.cache=false;if(e.cache=== false&&n==="GET"){var r=J(),u=e.url.replace(wb,"$1_="+r+"$2");e.url=u+(u===e.url?(ka.test(e.url)?"&":"?")+"_="+r:"")}if(e.data&&n==="GET")e.url+=(ka.test(e.url)?"&":"?")+e.data;e.global&&!c.active++&&c.event.trigger("ajaxStart");r=(r=xb.exec(e.url))&&(r[1]&&r[1]!==location.protocol||r[2]!==location.host);if(e.dataType==="script"&&n==="GET"&&r){var z=s.getElementsByTagName("head")[0]||s.documentElement,C=s.createElement("script");C.src=e.url;if(e.scriptCharset)C.charset=e.scriptCharset;if(!j){var B= false;C.onload=C.onreadystatechange=function(){if(!B&&(!this.readyState||this.readyState==="loaded"||this.readyState==="complete")){B=true;b();d();C.onload=C.onreadystatechange=null;z&&C.parentNode&&z.removeChild(C)}}}z.insertBefore(C,z.firstChild);return w}var E=false,x=e.xhr();if(x){e.username?x.open(n,e.url,e.async,e.username,e.password):x.open(n,e.url,e.async);try{if(e.data||a&&a.contentType)x.setRequestHeader("Content-Type",e.contentType);if(e.ifModified){c.lastModified[e.url]&&x.setRequestHeader("If-Modified-Since", c.lastModified[e.url]);c.etag[e.url]&&x.setRequestHeader("If-None-Match",c.etag[e.url])}r||x.setRequestHeader("X-Requested-With","XMLHttpRequest");x.setRequestHeader("Accept",e.dataType&&e.accepts[e.dataType]?e.accepts[e.dataType]+", */*":e.accepts._default)}catch(ga){}if(e.beforeSend&&e.beforeSend.call(k,x,e)===false){e.global&&!--c.active&&c.event.trigger("ajaxStop");x.abort();return false}e.global&&f("ajaxSend",[x,e]);var g=x.onreadystatechange=function(q){if(!x||x.readyState===0||q==="abort"){E|| d();E=true;if(x)x.onreadystatechange=c.noop}else if(!E&&x&&(x.readyState===4||q==="timeout")){E=true;x.onreadystatechange=c.noop;i=q==="timeout"?"timeout":!c.httpSuccess(x)?"error":e.ifModified&&c.httpNotModified(x,e.url)?"notmodified":"success";var p;if(i==="success")try{o=c.httpData(x,e.dataType,e)}catch(v){i="parsererror";p=v}if(i==="success"||i==="notmodified")j||b();else c.handleError(e,x,i,p);d();q==="timeout"&&x.abort();if(e.async)x=null}};try{var h=x.abort;x.abort=function(){x&&h.call(x); g("abort")}}catch(l){}e.async&&e.timeout>0&&setTimeout(function(){x&&!E&&g("timeout")},e.timeout);try{x.send(n==="POST"||n==="PUT"||n==="DELETE"?e.data:null)}catch(m){c.handleError(e,x,null,m);d()}e.async||g();return x}},handleError:function(a,b,d,f){if(a.error)a.error.call(a.context||a,b,d,f);if(a.global)(a.context?c(a.context):c.event).trigger("ajaxError",[b,a,f])},active:0,httpSuccess:function(a){try{return!a.status&&location.protocol==="file:"||a.status>=200&&a.status<300||a.status===304||a.status=== 1223||a.status===0}catch(b){}return false},httpNotModified:function(a,b){var d=a.getResponseHeader("Last-Modified"),f=a.getResponseHeader("Etag");if(d)c.lastModified[b]=d;if(f)c.etag[b]=f;return a.status===304||a.status===0},httpData:function(a,b,d){var f=a.getResponseHeader("content-type")||"",e=b==="xml"||!b&&f.indexOf("xml")>=0;a=e?a.responseXML:a.responseText;e&&a.documentElement.nodeName==="parsererror"&&c.error("parsererror");if(d&&d.dataFilter)a=d.dataFilter(a,b);if(typeof 
a==="string")if(b=== "json"||!b&&f.indexOf("json")>=0)a=c.parseJSON(a);else if(b==="script"||!b&&f.indexOf("javascript")>=0)c.globalEval(a);return a},param:function(a,b){function d(i,o){if(c.isArray(o))c.each(o,function(k,n){b||/\[\]$/.test(i)?f(i,n):d(i+"["+(typeof n==="object"||c.isArray(n)?k:"")+"]",n)});else!b&&o!=null&&typeof o==="object"?c.each(o,function(k,n){d(i+"["+k+"]",n)}):f(i,o)}function f(i,o){o=c.isFunction(o)?o():o;e[e.length]=encodeURIComponent(i)+"="+encodeURIComponent(o)}var e=[];if(b===w)b=c.ajaxSettings.traditional; if(c.isArray(a)||a.jquery)c.each(a,function(){f(this.name,this.value)});else for(var j in a)d(j,a[j]);return e.join("&").replace(yb,"+")}});var la={},Ab=/toggle|show|hide/,Bb=/^([+-]=)?([\d+-.]+)(.*)$/,W,va=[["height","marginTop","marginBottom","paddingTop","paddingBottom"],["width","marginLeft","marginRight","paddingLeft","paddingRight"],["opacity"]];c.fn.extend({show:function(a,b){if(a||a===0)return this.animate(K("show",3),a,b);else{a=0;for(b=this.length;a<b;a++){var d=c.data(this[a],"olddisplay"); this[a].style.display=d||"";if(c.css(this[a],"display")==="none"){d=this[a].nodeName;var f;if(la[d])f=la[d];else{var e=c("<"+d+" />").appendTo("body");f=e.css("display");if(f==="none")f="block";e.remove();la[d]=f}c.data(this[a],"olddisplay",f)}}a=0;for(b=this.length;a<b;a++)this[a].style.display=c.data(this[a],"olddisplay")||"";return this}},hide:function(a,b){if(a||a===0)return this.animate(K("hide",3),a,b);else{a=0;for(b=this.length;a<b;a++){var d=c.data(this[a],"olddisplay");!d&&d!=="none"&&c.data(this[a], "olddisplay",c.css(this[a],"display"))}a=0;for(b=this.length;a<b;a++)this[a].style.display="none";return this}},_toggle:c.fn.toggle,toggle:function(a,b){var d=typeof a==="boolean";if(c.isFunction(a)&&c.isFunction(b))this._toggle.apply(this,arguments);else a==null||d?this.each(function(){var f=d?a:c(this).is(":hidden");c(this)[f?"show":"hide"]()}):this.animate(K("toggle",3),a,b);return this},fadeTo:function(a,b,d){return this.filter(":hidden").css("opacity",0).show().end().animate({opacity:b},a,d)}, animate:function(a,b,d,f){var e=c.speed(b,d,f);if(c.isEmptyObject(a))return this.each(e.complete);return this[e.queue===false?"each":"queue"](function(){var j=c.extend({},e),i,o=this.nodeType===1&&c(this).is(":hidden"),k=this;for(i in a){var n=i.replace(ia,ja);if(i!==n){a[n]=a[i];delete a[i];i=n}if(a[i]==="hide"&&o||a[i]==="show"&&!o)return j.complete.call(this);if((i==="height"||i==="width")&&this.style){j.display=c.css(this,"display");j.overflow=this.style.overflow}if(c.isArray(a[i])){(j.specialEasing= j.specialEasing||{})[i]=a[i][1];a[i]=a[i][0]}}if(j.overflow!=null)this.style.overflow="hidden";j.curAnim=c.extend({},a);c.each(a,function(r,u){var z=new c.fx(k,j,r);if(Ab.test(u))z[u==="toggle"?o?"show":"hide":u](a);else{var C=Bb.exec(u),B=z.cur(true)||0;if(C){u=parseFloat(C[2]);var E=C[3]||"px";if(E!=="px"){k.style[r]=(u||1)+E;B=(u||1)/z.cur(true)*B;k.style[r]=B+E}if(C[1])u=(C[1]==="-="?-1:1)*u+B;z.custom(B,u,E)}else z.custom(B,u,"")}});return true})},stop:function(a,b){var d=c.timers;a&&this.queue([]); this.each(function(){for(var f=d.length-1;f>=0;f--)if(d[f].elem===this){b&&d[f](true);d.splice(f,1)}});b||this.dequeue();return this}});c.each({slideDown:K("show",1),slideUp:K("hide",1),slideToggle:K("toggle",1),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"}},function(a,b){c.fn[a]=function(d,f){return this.animate(b,d,f)}});c.extend({speed:function(a,b,d){var f=a&&typeof 
a==="object"?a:{complete:d||!d&&b||c.isFunction(a)&&a,duration:a,easing:d&&b||b&&!c.isFunction(b)&&b};f.duration=c.fx.off?0:typeof f.duration=== "number"?f.duration:c.fx.speeds[f.duration]||c.fx.speeds._default;f.old=f.complete;f.complete=function(){f.queue!==false&&c(this).dequeue();c.isFunction(f.old)&&f.old.call(this)};return f},easing:{linear:function(a,b,d,f){return d+f*a},swing:function(a,b,d,f){return(-Math.cos(a*Math.PI)/2+0.5)*f+d}},timers:[],fx:function(a,b,d){this.options=b;this.elem=a;this.prop=d;if(!b.orig)b.orig={}}});c.fx.prototype={update:function(){this.options.step&&this.options.step.call(this.elem,this.now,this);(c.fx.step[this.prop]|| c.fx.step._default)(this);if((this.prop==="height"||this.prop==="width")&&this.elem.style)this.elem.style.display="block"},cur:function(a){if(this.elem[this.prop]!=null&&(!this.elem.style||this.elem.style[this.prop]==null))return this.elem[this.prop];return(a=parseFloat(c.css(this.elem,this.prop,a)))&&a>-10000?a:parseFloat(c.curCSS(this.elem,this.prop))||0},custom:function(a,b,d){function f(j){return e.step(j)}this.startTime=J();this.start=a;this.end=b;this.unit=d||this.unit||"px";this.now=this.start; this.pos=this.state=0;var e=this;f.elem=this.elem;if(f()&&c.timers.push(f)&&!W)W=setInterval(c.fx.tick,13)},show:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.show=true;this.custom(this.prop==="width"||this.prop==="height"?1:0,this.cur());c(this.elem).show()},hide:function(){this.options.orig[this.prop]=c.style(this.elem,this.prop);this.options.hide=true;this.custom(this.cur(),0)},step:function(a){var b=J(),d=true;if(a||b>=this.options.duration+this.startTime){this.now= this.end;this.pos=this.state=1;this.update();this.options.curAnim[this.prop]=true;for(var f in this.options.curAnim)if(this.options.curAnim[f]!==true)d=false;if(d){if(this.options.display!=null){this.elem.style.overflow=this.options.overflow;a=c.data(this.elem,"olddisplay");this.elem.style.display=a?a:this.options.display;if(c.css(this.elem,"display")==="none")this.elem.style.display="block"}this.options.hide&&c(this.elem).hide();if(this.options.hide||this.options.show)for(var e in this.options.curAnim)c.style(this.elem, e,this.options.orig[e]);this.options.complete.call(this.elem)}return false}else{e=b-this.startTime;this.state=e/this.options.duration;a=this.options.easing||(c.easing.swing?"swing":"linear");this.pos=c.easing[this.options.specialEasing&&this.options.specialEasing[this.prop]||a](this.state,e,0,1,this.options.duration);this.now=this.start+(this.end-this.start)*this.pos;this.update()}return true}};c.extend(c.fx,{tick:function(){for(var a=c.timers,b=0;b<a.length;b++)a[b]()||a.splice(b--,1);a.length|| c.fx.stop()},stop:function(){clearInterval(W);W=null},speeds:{slow:600,fast:200,_default:400},step:{opacity:function(a){c.style(a.elem,"opacity",a.now)},_default:function(a){if(a.elem.style&&a.elem.style[a.prop]!=null)a.elem.style[a.prop]=(a.prop==="width"||a.prop==="height"?Math.max(0,a.now):a.now)+a.unit;else a.elem[a.prop]=a.now}}});if(c.expr&&c.expr.filters)c.expr.filters.animated=function(a){return c.grep(c.timers,function(b){return a===b.elem}).length};c.fn.offset="getBoundingClientRect"in s.documentElement? 
function(a){var b=this[0];if(a)return this.each(function(e){c.offset.setOffset(this,a,e)});if(!b||!b.ownerDocument)return null;if(b===b.ownerDocument.body)return c.offset.bodyOffset(b);var d=b.getBoundingClientRect(),f=b.ownerDocument;b=f.body;f=f.documentElement;return{top:d.top+(self.pageYOffset||c.support.boxModel&&f.scrollTop||b.scrollTop)-(f.clientTop||b.clientTop||0),left:d.left+(self.pageXOffset||c.support.boxModel&&f.scrollLeft||b.scrollLeft)-(f.clientLeft||b.clientLeft||0)}}:function(a){var b= this[0];if(a)return this.each(function(r){c.offset.setOffset(this,a,r)});if(!b||!b.ownerDocument)return null;if(b===b.ownerDocument.body)return c.offset.bodyOffset(b);c.offset.initialize();var d=b.offsetParent,f=b,e=b.ownerDocument,j,i=e.documentElement,o=e.body;f=(e=e.defaultView)?e.getComputedStyle(b,null):b.currentStyle;for(var k=b.offsetTop,n=b.offsetLeft;(b=b.parentNode)&&b!==o&&b!==i;){if(c.offset.supportsFixedPosition&&f.position==="fixed")break;j=e?e.getComputedStyle(b,null):b.currentStyle; k-=b.scrollTop;n-=b.scrollLeft;if(b===d){k+=b.offsetTop;n+=b.offsetLeft;if(c.offset.doesNotAddBorder&&!(c.offset.doesAddBorderForTableAndCells&&/^t(able|d|h)$/i.test(b.nodeName))){k+=parseFloat(j.borderTopWidth)||0;n+=parseFloat(j.borderLeftWidth)||0}f=d;d=b.offsetParent}if(c.offset.subtractsBorderForOverflowNotVisible&&j.overflow!=="visible"){k+=parseFloat(j.borderTopWidth)||0;n+=parseFloat(j.borderLeftWidth)||0}f=j}if(f.position==="relative"||f.position==="static"){k+=o.offsetTop;n+=o.offsetLeft}if(c.offset.supportsFixedPosition&& f.position==="fixed"){k+=Math.max(i.scrollTop,o.scrollTop);n+=Math.max(i.scrollLeft,o.scrollLeft)}return{top:k,left:n}};c.offset={initialize:function(){var a=s.body,b=s.createElement("div"),d,f,e,j=parseFloat(c.curCSS(a,"marginTop",true))||0;c.extend(b.style,{position:"absolute",top:0,left:0,margin:0,border:0,width:"1px",height:"1px",visibility:"hidden"});b.innerHTML="<div style='position:absolute;top:0;left:0;margin:0;border:5px solid #000;padding:0;width:1px;height:1px;'><div></div></div><table style='position:absolute;top:0;left:0;margin:0;border:5px solid #000;padding:0;width:1px;height:1px;' cellpadding='0' cellspacing='0'><tr><td></td></tr></table>"; a.insertBefore(b,a.firstChild);d=b.firstChild;f=d.firstChild;e=d.nextSibling.firstChild.firstChild;this.doesNotAddBorder=f.offsetTop!==5;this.doesAddBorderForTableAndCells=e.offsetTop===5;f.style.position="fixed";f.style.top="20px";this.supportsFixedPosition=f.offsetTop===20||f.offsetTop===15;f.style.position=f.style.top="";d.style.overflow="hidden";d.style.position="relative";this.subtractsBorderForOverflowNotVisible=f.offsetTop===-5;this.doesNotIncludeMarginInBodyOffset=a.offsetTop!==j;a.removeChild(b); c.offset.initialize=c.noop},bodyOffset:function(a){var b=a.offsetTop,d=a.offsetLeft;c.offset.initialize();if(c.offset.doesNotIncludeMarginInBodyOffset){b+=parseFloat(c.curCSS(a,"marginTop",true))||0;d+=parseFloat(c.curCSS(a,"marginLeft",true))||0}return{top:b,left:d}},setOffset:function(a,b,d){if(/static/.test(c.curCSS(a,"position")))a.style.position="relative";var f=c(a),e=f.offset(),j=parseInt(c.curCSS(a,"top",true),10)||0,i=parseInt(c.curCSS(a,"left",true),10)||0;if(c.isFunction(b))b=b.call(a, d,e);d={top:b.top-e.top+j,left:b.left-e.left+i};"using"in b?b.using.call(a,d):f.css(d)}};c.fn.extend({position:function(){if(!this[0])return null;var 
a=this[0],b=this.offsetParent(),d=this.offset(),f=/^body|html$/i.test(b[0].nodeName)?{top:0,left:0}:b.offset();d.top-=parseFloat(c.curCSS(a,"marginTop",true))||0;d.left-=parseFloat(c.curCSS(a,"marginLeft",true))||0;f.top+=parseFloat(c.curCSS(b[0],"borderTopWidth",true))||0;f.left+=parseFloat(c.curCSS(b[0],"borderLeftWidth",true))||0;return{top:d.top- f.top,left:d.left-f.left}},offsetParent:function(){return this.map(function(){for(var a=this.offsetParent||s.body;a&&!/^body|html$/i.test(a.nodeName)&&c.css(a,"position")==="static";)a=a.offsetParent;return a})}});c.each(["Left","Top"],function(a,b){var d="scroll"+b;c.fn[d]=function(f){var e=this[0],j;if(!e)return null;if(f!==w)return this.each(function(){if(j=wa(this))j.scrollTo(!a?f:c(j).scrollLeft(),a?f:c(j).scrollTop());else this[d]=f});else return(j=wa(e))?"pageXOffset"in j?j[a?"pageYOffset": "pageXOffset"]:c.support.boxModel&&j.document.documentElement[d]||j.document.body[d]:e[d]}});c.each(["Height","Width"],function(a,b){var d=b.toLowerCase();c.fn["inner"+b]=function(){return this[0]?c.css(this[0],d,false,"padding"):null};c.fn["outer"+b]=function(f){return this[0]?c.css(this[0],d,false,f?"margin":"border"):null};c.fn[d]=function(f){var e=this[0];if(!e)return f==null?null:this;if(c.isFunction(f))return this.each(function(j){var i=c(this);i[d](f.call(this,j,i[d]()))});return"scrollTo"in e&&e.document?e.document.compatMode==="CSS1Compat"&&e.document.documentElement["client"+b]||e.document.body["client"+b]:e.nodeType===9?Math.max(e.documentElement["client"+b],e.body["scroll"+b],e.documentElement["scroll"+b],e.body["offset"+b],e.documentElement["offset"+b]):f===w?c.css(e,d):this.css(d,typeof f==="string"?f:f+"px")}});A.jQuery=A.$=c})(window);
/ibm_watson_machine_learning-1.0.320-py3-none-any.whl/ibm_watson_machine_learning/libs/repo/mlrepositoryartifact/hybrid_artifact_loader.py
import os
import shutil
import zipfile

from ibm_watson_machine_learning.utils.autoai.utils import try_import_joblib
from ibm_watson_machine_learning.libs.repo.util.unique_id_gen import uid_generate


class HybridArtifactLoader(object):
    def load(self, artifact_queryparam):
        if artifact_queryparam is None:
            return self.extract_content(artifact_queryparam, HybridArtifactLoader._load_content)
        if artifact_queryparam is not None and artifact_queryparam == "full":
            return self.extract_content(artifact_queryparam, HybridArtifactLoader._load_content)
        if artifact_queryparam is not None and artifact_queryparam == "pipeline_model":
            return self.extract_content_json()

    def extract_content(self, queryparam_val, callback):
        directory_name = 'artifact'
        try:
            shutil.rmtree(directory_name)
        except:
            pass
        try:
            id_length = 20
            dir_id = uid_generate(id_length)
            model_dir_name = directory_name + dir_id
            gz_file_name = '{}/artifact_content.tar.gz'.format(model_dir_name)
            input_stream = None
            os.makedirs(model_dir_name)
            if queryparam_val is None:
                input_stream = self.reader().read()
            if queryparam_val is not None and queryparam_val == 'full':
                input_stream = self._content_reader_gzip()
            file_content = input_stream.read()
            gz_f = open(gz_file_name, 'wb+')
            gz_f.write(file_content)
            gz_f.close()
            if queryparam_val is None:
                self.reader().close()
            with zipfile.ZipFile(gz_file_name) as zip_ref:
                zip_ref.extractall(model_dir_name)
            artifact_instance = callback(model_dir_name)
            shutil.rmtree(model_dir_name)
            return artifact_instance
        except Exception as ex:
            shutil.rmtree(model_dir_name)
            raise ex

    def extract_content_json(self):
        directory_name = 'artifact'
        try:
            shutil.rmtree(directory_name)
        except:
            pass
        try:
            id_length = 20
            dir_id = uid_generate(id_length)
            model_dir_name = directory_name + dir_id
            output_file_name = '{}/pipeline_model.json'.format(model_dir_name)
            os.makedirs(model_dir_name)
            input_stream = self._content_reader_json()
            file_content = input_stream.read()
            gz_f = open(output_file_name, 'wb+')
            gz_f.write(file_content)
            gz_f.close()
            return output_file_name
        except Exception as ex:
            shutil.rmtree(model_dir_name)
            raise ex

    def _content_reader_gzip(self):
        if self._download_href is not None:
            if self._download_href.__contains__("models"):
                if self.meta.prop('space_id'):
                    download_url = self._download_href + f"&space_id={self.meta.prop('space_id')}"\
                                                         "&content_format=pipeline-node&pipeline_node_id=automl"
                elif self.meta.prop('project_id'):
                    download_url = self._download_href + f"&project_id={self.meta.prop('project_id')}" \
                                                         "&content_format=pipeline-node&pipeline_node_id=automl"
                return self.client.repository_api.download_artifact_content_v4_cloud(download_url, 'true')

    def _content_reader_json(self):
        if self._download_href is not None:
            if self._download_href.__contains__("models"):
                download_url = self._download_href + f"&space_id={self.meta.prop('space_id')}&content_format=native"
                return self.client.repository_api.download_artifact_content_v4_cloud(download_url, 'true')

    @staticmethod
    def _load_content(content_dir):
        """ Load AutoAI pipeline into a local runtime. """
        joblib = try_import_joblib()
        extracted_file = [file for file in os.listdir(content_dir) if file.endswith(".pickle")][0]
        return joblib.load(os.path.join(content_dir, extracted_file))
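# ---------------------------------------------------------------------------
# Editor's illustration (not part of the original module): the loader above is
# a download -> write-to-temp-dir -> unzip -> joblib.load pipeline. The minimal,
# self-contained sketch below reproduces that pattern with plain stand-ins;
# `fetch_archive_bytes` is a hypothetical callable returning zip-archive bytes
# and exists only for this example.
# ---------------------------------------------------------------------------
def _example_load_pickled_model(fetch_archive_bytes, work_dir='artifact_demo'):
    import joblib  # assumes joblib is installed, as try_import_joblib checks above

    os.makedirs(work_dir, exist_ok=True)
    archive_path = os.path.join(work_dir, 'content.zip')
    try:
        # download step: the real loader streams from repository_api instead
        with open(archive_path, 'wb') as f:
            f.write(fetch_archive_bytes())
        # unzip step, mirroring extract_content above
        with zipfile.ZipFile(archive_path) as zip_ref:
            zip_ref.extractall(work_dir)
        # load step, mirroring _load_content above
        pickle_name = [n for n in os.listdir(work_dir) if n.endswith('.pickle')][0]
        return joblib.load(os.path.join(work_dir, pickle_name))
    finally:
        # cleanup, as the loader does on both success and failure paths
        shutil.rmtree(work_dir)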
/azure_mgmt_connectedvmware-1.0.0b3-py3-none-any.whl/azure/mgmt/connectedvmware/operations/_virtual_networks_operations.py
import sys
from typing import Any, Callable, Dict, IO, Iterable, Optional, TypeVar, Union, cast, overload
import urllib.parse

from azure.core.exceptions import (
    ClientAuthenticationError,
    HttpResponseError,
    ResourceExistsError,
    ResourceNotFoundError,
    ResourceNotModifiedError,
    map_error,
)
from azure.core.paging import ItemPaged
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.polling import LROPoller, NoPolling, PollingMethod
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from azure.core.utils import case_insensitive_dict
from azure.mgmt.core.exceptions import ARMErrorFormat
from azure.mgmt.core.polling.arm_polling import ARMPolling

from .. import models as _models
from .._serialization import Serializer
from .._vendor import AzureArcVMwareManagementServiceAPIMixinABC, _convert_request, _format_url_section

if sys.version_info >= (3, 8):
    from typing import Literal  # pylint: disable=no-name-in-module, ungrouped-imports
else:
    from typing_extensions import Literal  # type: ignore  # pylint: disable=ungrouped-imports

T = TypeVar("T")
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]

_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False


def build_create_request(
    resource_group_name: str, virtual_network_name: str, subscription_id: str, **kwargs: Any
) -> HttpRequest:
    _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
    _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

    api_version: Literal["2022-07-15-preview"] = kwargs.pop(
        "api_version", _params.pop("api-version", "2022-07-15-preview")
    )
    content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
    accept = _headers.pop("Accept", "application/json")

    # Construct URL
    _url = kwargs.pop(
        "template_url",
        "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}",
    )  # pylint: disable=line-too-long
    path_format_arguments = {
        "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
        "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
        "virtualNetworkName": _SERIALIZER.url("virtual_network_name", virtual_network_name, "str"),
    }

    _url: str = _format_url_section(_url, **path_format_arguments)  # type: ignore

    # Construct parameters
    _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")

    # Construct headers
    if content_type is not None:
        _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
    _headers["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="PUT", url=_url, params=_params, headers=_headers, **kwargs)


def build_get_request(
    resource_group_name: str, virtual_network_name: str, subscription_id: str, **kwargs: Any
) -> HttpRequest:
    _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
    _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

    api_version: Literal["2022-07-15-preview"] = kwargs.pop(
        "api_version", _params.pop("api-version", "2022-07-15-preview")
    )
    accept = _headers.pop("Accept", "application/json")

    # Construct URL
    _url = kwargs.pop(
        "template_url",
        "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}",
    )  # pylint: disable=line-too-long
    path_format_arguments = {
        "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
        "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
        "virtualNetworkName": _SERIALIZER.url("virtual_network_name", virtual_network_name, "str"),
    }

    _url: str = _format_url_section(_url, **path_format_arguments)  # type: ignore

    # Construct parameters
    _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")

    # Construct headers
    _headers["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs)


def build_update_request(
    resource_group_name: str, virtual_network_name: str, subscription_id: str, **kwargs: Any
) -> HttpRequest:
    _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
    _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

    api_version: Literal["2022-07-15-preview"] = kwargs.pop(
        "api_version", _params.pop("api-version", "2022-07-15-preview")
    )
    content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
    accept = _headers.pop("Accept", "application/json")

    # Construct URL
    _url = kwargs.pop(
        "template_url",
        "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}",
    )  # pylint: disable=line-too-long
    path_format_arguments = {
        "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
        "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
        "virtualNetworkName": _SERIALIZER.url("virtual_network_name", virtual_network_name, "str"),
    }

    _url: str = _format_url_section(_url, **path_format_arguments)  # type: ignore

    # Construct parameters
    _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")

    # Construct headers
    if content_type is not None:
        _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
    _headers["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="PATCH", url=_url, params=_params, headers=_headers, **kwargs)


def build_delete_request(
    resource_group_name: str,
    virtual_network_name: str,
    subscription_id: str,
    *,
    force: Optional[bool] = None,
    **kwargs: Any
) -> HttpRequest:
    _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
    _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

    api_version: Literal["2022-07-15-preview"] = kwargs.pop(
        "api_version", _params.pop("api-version", "2022-07-15-preview")
    )
    accept = _headers.pop("Accept", "application/json")

    # Construct URL
    _url = kwargs.pop(
        "template_url",
        "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}",
    )  # pylint: disable=line-too-long
    path_format_arguments = {
        "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
        "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
        "virtualNetworkName": _SERIALIZER.url("virtual_network_name", virtual_network_name, "str"),
    }

    _url: str = _format_url_section(_url, **path_format_arguments)  # type: ignore

    # Construct parameters
    _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
    if force is not None:
        _params["force"] = _SERIALIZER.query("force", force, "bool")

    # Construct headers
    _headers["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="DELETE", url=_url, params=_params, headers=_headers, **kwargs)


def build_list_request(subscription_id: str, **kwargs: Any) -> HttpRequest:
    _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
    _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

    api_version: Literal["2022-07-15-preview"] = kwargs.pop(
        "api_version", _params.pop("api-version", "2022-07-15-preview")
    )
    accept = _headers.pop("Accept", "application/json")

    # Construct URL
    _url = kwargs.pop(
        "template_url", "/subscriptions/{subscriptionId}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks"
    )  # pylint: disable=line-too-long
    path_format_arguments = {
        "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
    }

    _url: str = _format_url_section(_url, **path_format_arguments)  # type: ignore

    # Construct parameters
    _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")

    # Construct headers
    _headers["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs)


def build_list_by_resource_group_request(resource_group_name: str, subscription_id: str, **kwargs: Any) -> HttpRequest:
    _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
    _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

    api_version: Literal["2022-07-15-preview"] = kwargs.pop(
        "api_version", _params.pop("api-version", "2022-07-15-preview")
    )
    accept = _headers.pop("Accept", "application/json")

    # Construct URL
    _url = kwargs.pop(
        "template_url",
        "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks",
    )  # pylint: disable=line-too-long
    path_format_arguments = {
        "subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, "str"),
        "resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, "str"),
    }

    _url: str = _format_url_section(_url, **path_format_arguments)  # type: ignore

    # Construct parameters
    _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")

    # Construct headers
    _headers["Accept"] = _SERIALIZER.header("accept", accept, "str")

    return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs)


class VirtualNetworksOperations:
    """
    .. warning::
        **DO NOT** instantiate this class directly.

        Instead, you should access the following operations through
        :class:`~azure.mgmt.connectedvmware.AzureArcVMwareManagementServiceAPI`'s
        :attr:`virtual_networks` attribute.
    """

    models = _models

    def __init__(self, *args, **kwargs):
        input_args = list(args)
        self._client = input_args.pop(0) if input_args else kwargs.pop("client")
        self._config = input_args.pop(0) if input_args else kwargs.pop("config")
        self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer")
        self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer")

    def _create_initial(
        self,
        resource_group_name: str,
        virtual_network_name: str,
        body: Optional[Union[_models.VirtualNetwork, IO]] = None,
        **kwargs: Any
    ) -> _models.VirtualNetwork:
        error_map = {
            401: ClientAuthenticationError,
            404: ResourceNotFoundError,
            409: ResourceExistsError,
            304: ResourceNotModifiedError,
        }
        error_map.update(kwargs.pop("error_map", {}) or {})

        _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
        _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

        api_version: Literal["2022-07-15-preview"] = kwargs.pop(
            "api_version", _params.pop("api-version", self._config.api_version)
        )
        content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
        cls: ClsType[_models.VirtualNetwork] = kwargs.pop("cls", None)

        content_type = content_type or "application/json"
        _json = None
        _content = None
        if isinstance(body, (IO, bytes)):
            _content = body
        else:
            if body is not None:
                _json = self._serialize.body(body, "VirtualNetwork")
            else:
                _json = None

        request = build_create_request(
            resource_group_name=resource_group_name,
            virtual_network_name=virtual_network_name,
            subscription_id=self._config.subscription_id,
            api_version=api_version,
            content_type=content_type,
            json=_json,
            content=_content,
            template_url=self._create_initial.metadata["url"],
            headers=_headers,
            params=_params,
        )
        request = _convert_request(request)
        request.url = self._client.format_url(request.url)

        pipeline_response: PipelineResponse = self._client._pipeline.run(  # pylint: disable=protected-access
            request, stream=False, **kwargs
        )

        response = pipeline_response.http_response

        if response.status_code not in [200, 201]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        if response.status_code == 200:
            deserialized = self._deserialize("VirtualNetwork", pipeline_response)

        if response.status_code == 201:
            deserialized = self._deserialize("VirtualNetwork", pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})  # type: ignore

        return deserialized  # type: ignore

    _create_initial.metadata = {
        "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}"
    }

    @overload
    def begin_create(
        self,
        resource_group_name: str,
        virtual_network_name: str,
        body: Optional[_models.VirtualNetwork] = None,
        *,
        content_type: str = "application/json",
        **kwargs: Any
    ) -> LROPoller[_models.VirtualNetwork]:
        """Implements virtual network PUT method.

        Create Or Update virtual network.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :param virtual_network_name: Name of the virtual network resource. Required.
        :type virtual_network_name: str
        :param body: Request payload. Default value is None.
        :type body: ~azure.mgmt.connectedvmware.models.VirtualNetwork
        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
         Default value is "application/json".
        :paramtype content_type: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :keyword str continuation_token: A continuation token to restart a poller from a saved state.
        :keyword polling: By default, your polling method will be ARMPolling. Pass in False for this
         operation to not poll, or pass in your own initialized polling object for a personal polling
         strategy.
        :paramtype polling: bool or ~azure.core.polling.PollingMethod
        :keyword int polling_interval: Default waiting time between two polls for LRO operations if no
         Retry-After header is present.
        :return: An instance of LROPoller that returns either VirtualNetwork or the result of
         cls(response)
        :rtype: ~azure.core.polling.LROPoller[~azure.mgmt.connectedvmware.models.VirtualNetwork]
        :raises ~azure.core.exceptions.HttpResponseError:
        """

    @overload
    def begin_create(
        self,
        resource_group_name: str,
        virtual_network_name: str,
        body: Optional[IO] = None,
        *,
        content_type: str = "application/json",
        **kwargs: Any
    ) -> LROPoller[_models.VirtualNetwork]:
        """Implements virtual network PUT method.

        Create Or Update virtual network.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :param virtual_network_name: Name of the virtual network resource. Required.
        :type virtual_network_name: str
        :param body: Request payload. Default value is None.
        :type body: IO
        :keyword content_type: Body Parameter content-type. Content type parameter for binary body.
         Default value is "application/json".
        :paramtype content_type: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :keyword str continuation_token: A continuation token to restart a poller from a saved state.
        :keyword polling: By default, your polling method will be ARMPolling. Pass in False for this
         operation to not poll, or pass in your own initialized polling object for a personal polling
         strategy.
        :paramtype polling: bool or ~azure.core.polling.PollingMethod
        :keyword int polling_interval: Default waiting time between two polls for LRO operations if no
         Retry-After header is present.
        :return: An instance of LROPoller that returns either VirtualNetwork or the result of
         cls(response)
        :rtype: ~azure.core.polling.LROPoller[~azure.mgmt.connectedvmware.models.VirtualNetwork]
        :raises ~azure.core.exceptions.HttpResponseError:
        """

    @distributed_trace
    def begin_create(
        self,
        resource_group_name: str,
        virtual_network_name: str,
        body: Optional[Union[_models.VirtualNetwork, IO]] = None,
        **kwargs: Any
    ) -> LROPoller[_models.VirtualNetwork]:
        """Implements virtual network PUT method.

        Create Or Update virtual network.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :param virtual_network_name: Name of the virtual network resource. Required.
        :type virtual_network_name: str
        :param body: Request payload. Is either a model type or an IO type. Default value is None.
        :type body: ~azure.mgmt.connectedvmware.models.VirtualNetwork or IO
        :keyword content_type: Body Parameter content-type. Known values are: 'application/json'.
         Default value is None.
        :paramtype content_type: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :keyword str continuation_token: A continuation token to restart a poller from a saved state.
        :keyword polling: By default, your polling method will be ARMPolling. Pass in False for this
         operation to not poll, or pass in your own initialized polling object for a personal polling
         strategy.
        :paramtype polling: bool or ~azure.core.polling.PollingMethod
        :keyword int polling_interval: Default waiting time between two polls for LRO operations if no
         Retry-After header is present.
        :return: An instance of LROPoller that returns either VirtualNetwork or the result of
         cls(response)
        :rtype: ~azure.core.polling.LROPoller[~azure.mgmt.connectedvmware.models.VirtualNetwork]
        :raises ~azure.core.exceptions.HttpResponseError:
        """
        _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
        _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

        api_version: Literal["2022-07-15-preview"] = kwargs.pop(
            "api_version", _params.pop("api-version", self._config.api_version)
        )
        content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
        cls: ClsType[_models.VirtualNetwork] = kwargs.pop("cls", None)
        polling: Union[bool, PollingMethod] = kwargs.pop("polling", True)
        lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
        cont_token: Optional[str] = kwargs.pop("continuation_token", None)
        if cont_token is None:
            raw_result = self._create_initial(
                resource_group_name=resource_group_name,
                virtual_network_name=virtual_network_name,
                body=body,
                api_version=api_version,
                content_type=content_type,
                cls=lambda x, y, z: x,
                headers=_headers,
                params=_params,
                **kwargs
            )
        kwargs.pop("error_map", None)

        def get_long_running_output(pipeline_response):
            deserialized = self._deserialize("VirtualNetwork", pipeline_response)
            if cls:
                return cls(pipeline_response, deserialized, {})
            return deserialized

        if polling is True:
            polling_method: PollingMethod = cast(
                PollingMethod, ARMPolling(lro_delay, lro_options={"final-state-via": "azure-async-operation"}, **kwargs)
            )
        elif polling is False:
            polling_method = cast(PollingMethod, NoPolling())
        else:
            polling_method = polling
        if cont_token:
            return LROPoller.from_continuation_token(
                polling_method=polling_method,
                continuation_token=cont_token,
                client=self._client,
                deserialization_callback=get_long_running_output,
            )
        return LROPoller(self._client, raw_result, get_long_running_output, polling_method)  # type: ignore

    begin_create.metadata = {
        "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}"
    }

    @distributed_trace
    def get(self, resource_group_name: str, virtual_network_name: str, **kwargs: Any) -> _models.VirtualNetwork:
        """Gets a virtual network.

        Implements virtual network GET method.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :param virtual_network_name: Name of the virtual network resource. Required.
        :type virtual_network_name: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: VirtualNetwork or the result of cls(response)
        :rtype: ~azure.mgmt.connectedvmware.models.VirtualNetwork
        :raises ~azure.core.exceptions.HttpResponseError:
        """
        error_map = {
            401: ClientAuthenticationError,
            404: ResourceNotFoundError,
            409: ResourceExistsError,
            304: ResourceNotModifiedError,
        }
        error_map.update(kwargs.pop("error_map", {}) or {})

        _headers = kwargs.pop("headers", {}) or {}
        _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

        api_version: Literal["2022-07-15-preview"] = kwargs.pop(
            "api_version", _params.pop("api-version", self._config.api_version)
        )
        cls: ClsType[_models.VirtualNetwork] = kwargs.pop("cls", None)

        request = build_get_request(
            resource_group_name=resource_group_name,
            virtual_network_name=virtual_network_name,
            subscription_id=self._config.subscription_id,
            api_version=api_version,
            template_url=self.get.metadata["url"],
            headers=_headers,
            params=_params,
        )
        request = _convert_request(request)
        request.url = self._client.format_url(request.url)

        pipeline_response: PipelineResponse = self._client._pipeline.run(  # pylint: disable=protected-access
            request, stream=False, **kwargs
        )

        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        deserialized = self._deserialize("VirtualNetwork", pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized

    get.metadata = {
        "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}"
    }

    @overload
    def update(
        self,
        resource_group_name: str,
        virtual_network_name: str,
        body: Optional[_models.ResourcePatch] = None,
        *,
        content_type: str = "application/json",
        **kwargs: Any
    ) -> _models.VirtualNetwork:
        """Updates a virtual network.

        API to update certain properties of the virtual network resource.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :param virtual_network_name: Name of the virtual network resource. Required.
        :type virtual_network_name: str
        :param body: Resource properties to update. Default value is None.
        :type body: ~azure.mgmt.connectedvmware.models.ResourcePatch
        :keyword content_type: Body Parameter content-type. Content type parameter for JSON body.
         Default value is "application/json".
        :paramtype content_type: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: VirtualNetwork or the result of cls(response)
        :rtype: ~azure.mgmt.connectedvmware.models.VirtualNetwork
        :raises ~azure.core.exceptions.HttpResponseError:
        """

    @overload
    def update(
        self,
        resource_group_name: str,
        virtual_network_name: str,
        body: Optional[IO] = None,
        *,
        content_type: str = "application/json",
        **kwargs: Any
    ) -> _models.VirtualNetwork:
        """Updates a virtual network.

        API to update certain properties of the virtual network resource.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :param virtual_network_name: Name of the virtual network resource. Required.
        :type virtual_network_name: str
        :param body: Resource properties to update. Default value is None.
        :type body: IO
        :keyword content_type: Body Parameter content-type. Content type parameter for binary body.
         Default value is "application/json".
        :paramtype content_type: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: VirtualNetwork or the result of cls(response)
        :rtype: ~azure.mgmt.connectedvmware.models.VirtualNetwork
        :raises ~azure.core.exceptions.HttpResponseError:
        """

    @distributed_trace
    def update(
        self,
        resource_group_name: str,
        virtual_network_name: str,
        body: Optional[Union[_models.ResourcePatch, IO]] = None,
        **kwargs: Any
    ) -> _models.VirtualNetwork:
        """Updates a virtual network.

        API to update certain properties of the virtual network resource.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :param virtual_network_name: Name of the virtual network resource. Required.
        :type virtual_network_name: str
        :param body: Resource properties to update. Is either a model type or an IO type. Default
         value is None.
        :type body: ~azure.mgmt.connectedvmware.models.ResourcePatch or IO
        :keyword content_type: Body Parameter content-type. Known values are: 'application/json'.
         Default value is None.
        :paramtype content_type: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: VirtualNetwork or the result of cls(response)
        :rtype: ~azure.mgmt.connectedvmware.models.VirtualNetwork
        :raises ~azure.core.exceptions.HttpResponseError:
        """
        error_map = {
            401: ClientAuthenticationError,
            404: ResourceNotFoundError,
            409: ResourceExistsError,
            304: ResourceNotModifiedError,
        }
        error_map.update(kwargs.pop("error_map", {}) or {})

        _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
        _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

        api_version: Literal["2022-07-15-preview"] = kwargs.pop(
            "api_version", _params.pop("api-version", self._config.api_version)
        )
        content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
        cls: ClsType[_models.VirtualNetwork] = kwargs.pop("cls", None)

        content_type = content_type or "application/json"
        _json = None
        _content = None
        if isinstance(body, (IO, bytes)):
            _content = body
        else:
            if body is not None:
                _json = self._serialize.body(body, "ResourcePatch")
            else:
                _json = None

        request = build_update_request(
            resource_group_name=resource_group_name,
            virtual_network_name=virtual_network_name,
            subscription_id=self._config.subscription_id,
            api_version=api_version,
            content_type=content_type,
            json=_json,
            content=_content,
            template_url=self.update.metadata["url"],
            headers=_headers,
            params=_params,
        )
        request = _convert_request(request)
        request.url = self._client.format_url(request.url)

        pipeline_response: PipelineResponse = self._client._pipeline.run(  # pylint: disable=protected-access
            request, stream=False, **kwargs
        )

        response = pipeline_response.http_response

        if response.status_code not in [200]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        deserialized = self._deserialize("VirtualNetwork", pipeline_response)

        if cls:
            return cls(pipeline_response, deserialized, {})

        return deserialized

    update.metadata = {
        "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}"
    }

    def _delete_initial(  # pylint: disable=inconsistent-return-statements
        self, resource_group_name: str, virtual_network_name: str, force: Optional[bool] = None, **kwargs: Any
    ) -> None:
        error_map = {
            401: ClientAuthenticationError,
            404: ResourceNotFoundError,
            409: ResourceExistsError,
            304: ResourceNotModifiedError,
        }
        error_map.update(kwargs.pop("error_map", {}) or {})

        _headers = kwargs.pop("headers", {}) or {}
        _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

        api_version: Literal["2022-07-15-preview"] = kwargs.pop(
            "api_version", _params.pop("api-version", self._config.api_version)
        )
        cls: ClsType[None] = kwargs.pop("cls", None)

        request = build_delete_request(
            resource_group_name=resource_group_name,
            virtual_network_name=virtual_network_name,
            subscription_id=self._config.subscription_id,
            force=force,
            api_version=api_version,
            template_url=self._delete_initial.metadata["url"],
            headers=_headers,
            params=_params,
        )
        request = _convert_request(request)
        request.url = self._client.format_url(request.url)

        pipeline_response: PipelineResponse = self._client._pipeline.run(  # pylint: disable=protected-access
            request, stream=False, **kwargs
        )

        response = pipeline_response.http_response

        if response.status_code not in [200, 202, 204]:
            map_error(status_code=response.status_code, response=response, error_map=error_map)
            error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
            raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

        if cls:
            return cls(pipeline_response, None, {})

    _delete_initial.metadata = {
        "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}"
    }

    @distributed_trace
    def begin_delete(
        self, resource_group_name: str, virtual_network_name: str, force: Optional[bool] = None, **kwargs: Any
    ) -> LROPoller[None]:
        """Deletes a virtual network.

        Implements virtual network DELETE method.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :param virtual_network_name: Name of the virtual network resource. Required.
        :type virtual_network_name: str
        :param force: Whether force delete was specified. Default value is None.
        :type force: bool
        :keyword callable cls: A custom type or function that will be passed the direct response
        :keyword str continuation_token: A continuation token to restart a poller from a saved state.
        :keyword polling: By default, your polling method will be ARMPolling. Pass in False for this
         operation to not poll, or pass in your own initialized polling object for a personal polling
         strategy.
        :paramtype polling: bool or ~azure.core.polling.PollingMethod
        :keyword int polling_interval: Default waiting time between two polls for LRO operations if no
         Retry-After header is present.
        :return: An instance of LROPoller that returns either None or the result of cls(response)
        :rtype: ~azure.core.polling.LROPoller[None]
        :raises ~azure.core.exceptions.HttpResponseError:
        """
        _headers = kwargs.pop("headers", {}) or {}
        _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

        api_version: Literal["2022-07-15-preview"] = kwargs.pop(
            "api_version", _params.pop("api-version", self._config.api_version)
        )
        cls: ClsType[None] = kwargs.pop("cls", None)
        polling: Union[bool, PollingMethod] = kwargs.pop("polling", True)
        lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
        cont_token: Optional[str] = kwargs.pop("continuation_token", None)
        if cont_token is None:
            raw_result = self._delete_initial(  # type: ignore
                resource_group_name=resource_group_name,
                virtual_network_name=virtual_network_name,
                force=force,
                api_version=api_version,
                cls=lambda x, y, z: x,
                headers=_headers,
                params=_params,
                **kwargs
            )
        kwargs.pop("error_map", None)

        def get_long_running_output(pipeline_response):  # pylint: disable=inconsistent-return-statements
            if cls:
                return cls(pipeline_response, None, {})

        if polling is True:
            polling_method: PollingMethod = cast(PollingMethod, ARMPolling(lro_delay, **kwargs))
        elif polling is False:
            polling_method = cast(PollingMethod, NoPolling())
        else:
            polling_method = polling
        if cont_token:
            return LROPoller.from_continuation_token(
                polling_method=polling_method,
                continuation_token=cont_token,
                client=self._client,
                deserialization_callback=get_long_running_output,
            )
        return LROPoller(self._client, raw_result, get_long_running_output, polling_method)  # type: ignore

    begin_delete.metadata = {
        "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks/{virtualNetworkName}"
    }

    @distributed_trace
    def list(self, **kwargs: Any) -> Iterable["_models.VirtualNetwork"]:
        """Implements GET virtualNetworks in a subscription.

        List of virtualNetworks in a subscription.

        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: An iterator like instance of either VirtualNetwork or the result of cls(response)
        :rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.connectedvmware.models.VirtualNetwork]
        :raises ~azure.core.exceptions.HttpResponseError:
        """
        _headers = kwargs.pop("headers", {}) or {}
        _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

        api_version: Literal["2022-07-15-preview"] = kwargs.pop(
            "api_version", _params.pop("api-version", self._config.api_version)
        )
        cls: ClsType[_models.VirtualNetworksList] = kwargs.pop("cls", None)

        error_map = {
            401: ClientAuthenticationError,
            404: ResourceNotFoundError,
            409: ResourceExistsError,
            304: ResourceNotModifiedError,
        }
        error_map.update(kwargs.pop("error_map", {}) or {})

        def prepare_request(next_link=None):
            if not next_link:

                request = build_list_request(
                    subscription_id=self._config.subscription_id,
                    api_version=api_version,
                    template_url=self.list.metadata["url"],
                    headers=_headers,
                    params=_params,
                )
                request = _convert_request(request)
                request.url = self._client.format_url(request.url)

            else:
                # make call to next link with the client's api-version
                _parsed_next_link = urllib.parse.urlparse(next_link)
                _next_request_params = case_insensitive_dict(
                    {
                        key: [urllib.parse.quote(v) for v in value]
                        for key, value in urllib.parse.parse_qs(_parsed_next_link.query).items()
                    }
                )
                _next_request_params["api-version"] = self._config.api_version
                request = HttpRequest(
                    "GET", urllib.parse.urljoin(next_link, _parsed_next_link.path), params=_next_request_params
                )
                request = _convert_request(request)
                request.url = self._client.format_url(request.url)
                request.method = "GET"
            return request

        def extract_data(pipeline_response):
            deserialized = self._deserialize("VirtualNetworksList", pipeline_response)
            list_of_elem = deserialized.value
            if cls:
                list_of_elem = cls(list_of_elem)  # type: ignore
            return deserialized.next_link or None, iter(list_of_elem)

        def get_next(next_link=None):
            request = prepare_request(next_link)

            pipeline_response: PipelineResponse = self._client._pipeline.run(  # pylint: disable=protected-access
                request, stream=False, **kwargs
            )
            response = pipeline_response.http_response

            if response.status_code not in [200]:
                map_error(status_code=response.status_code, response=response, error_map=error_map)
                error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
                raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

            return pipeline_response

        return ItemPaged(get_next, extract_data)

    list.metadata = {
        "url": "/subscriptions/{subscriptionId}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks"
    }

    @distributed_trace
    def list_by_resource_group(self, resource_group_name: str, **kwargs: Any) -> Iterable["_models.VirtualNetwork"]:
        """Implements GET virtualNetworks in a resource group.

        List of virtualNetworks in a resource group.

        :param resource_group_name: The Resource Group Name. Required.
        :type resource_group_name: str
        :keyword callable cls: A custom type or function that will be passed the direct response
        :return: An iterator like instance of either VirtualNetwork or the result of cls(response)
        :rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.connectedvmware.models.VirtualNetwork]
        :raises ~azure.core.exceptions.HttpResponseError:
        """
        _headers = kwargs.pop("headers", {}) or {}
        _params = case_insensitive_dict(kwargs.pop("params", {}) or {})

        api_version: Literal["2022-07-15-preview"] = kwargs.pop(
            "api_version", _params.pop("api-version", self._config.api_version)
        )
        cls: ClsType[_models.VirtualNetworksList] = kwargs.pop("cls", None)

        error_map = {
            401: ClientAuthenticationError,
            404: ResourceNotFoundError,
            409: ResourceExistsError,
            304: ResourceNotModifiedError,
        }
        error_map.update(kwargs.pop("error_map", {}) or {})

        def prepare_request(next_link=None):
            if not next_link:

                request = build_list_by_resource_group_request(
                    resource_group_name=resource_group_name,
                    subscription_id=self._config.subscription_id,
                    api_version=api_version,
                    template_url=self.list_by_resource_group.metadata["url"],
                    headers=_headers,
                    params=_params,
                )
                request = _convert_request(request)
                request.url = self._client.format_url(request.url)

            else:
                # make call to next link with the client's api-version
                _parsed_next_link = urllib.parse.urlparse(next_link)
                _next_request_params = case_insensitive_dict(
                    {
                        key: [urllib.parse.quote(v) for v in value]
                        for key, value in urllib.parse.parse_qs(_parsed_next_link.query).items()
                    }
                )
                _next_request_params["api-version"] = self._config.api_version
                request = HttpRequest(
                    "GET", urllib.parse.urljoin(next_link, _parsed_next_link.path), params=_next_request_params
                )
                request = _convert_request(request)
                request.url = self._client.format_url(request.url)
                request.method = "GET"
            return request

        def extract_data(pipeline_response):
            deserialized = self._deserialize("VirtualNetworksList", pipeline_response)
            list_of_elem = deserialized.value
            if cls:
                list_of_elem = cls(list_of_elem)  # type: ignore
            return deserialized.next_link or None, iter(list_of_elem)

        def get_next(next_link=None):
            request = prepare_request(next_link)

            pipeline_response: PipelineResponse = self._client._pipeline.run(  # pylint: disable=protected-access
                request, stream=False, **kwargs
            )
            response = pipeline_response.http_response

            if response.status_code not in [200]:
                map_error(status_code=response.status_code, response=response, error_map=error_map)
                error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
                raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)

            return pipeline_response

        return ItemPaged(get_next, extract_data)

    list_by_resource_group.metadata = {
        "url": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ConnectedVMwarevSphere/virtualNetworks"
    }
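# ---------------------------------------------------------------------------
# Editor's usage sketch (not part of the generated client): how calling code
# typically drives the operations above. The credential, subscription id, and
# resource names are placeholders; the client constructor follows the usual
# mgmt-SDK convention (credential, subscription_id) but is an assumption here,
# and actually running this requires a real Azure subscription.
# ---------------------------------------------------------------------------
def _example_create_and_list():  # illustrative only; never called on import
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.connectedvmware import AzureArcVMwareManagementServiceAPI

    client = AzureArcVMwareManagementServiceAPI(
        credential=DefaultAzureCredential(),
        subscription_id="00000000-0000-0000-0000-000000000000",  # placeholder
    )

    # begin_create returns an LROPoller; result() blocks until the
    # azure-async-operation final state is reached (see ARMPolling above).
    poller = client.virtual_networks.begin_create(
        resource_group_name="example-rg",
        virtual_network_name="example-vnet",
        body=None,  # the request payload is optional for this operation
    )
    vnet = poller.result()

    # list_by_resource_group returns an ItemPaged that follows next_link
    # transparently (see prepare_request/get_next above).
    names = [vnet.name] + [n.name for n in client.virtual_networks.list_by_resource_group("example-rg")]
    return names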
/vampire_analysis-0.0.1-py3-none-any.whl/vampire/analysis.py
import numpy as np
import pandas as pd
from scipy import spatial
from sklearn import preprocessing
from sklearn.cluster import KMeans

from . import amath


def pca_contours(contours):
    """
    Return the principal component of the contours.

    Parameters
    ----------
    contours : ndarray
        Object contours, with shape (num_contour, 2*num_points).

    Returns
    -------
    principal_directions : ndarray
        Loadings, weights, principal directions, principal axes,
        eigenvector of covariance matrix of mean-subtracted contours,
        with shape (2*num_points, 2*num_points).
    principal_components : ndarray
        PC score, principal components, coordinates of mean-subtracted
        contours in their principal directions,
        with shape (num_contours, 2*num_points).

    See Also
    --------
    vampire.amath.pca : Implementation of principal component analysis.

    """
    principal_directions, principal_components, variance = amath.pca(contours, 'eig')
    return principal_directions, principal_components


def pca_transform_contours(contours, mean_contour, principal_directions):
    """
    Transform contour coordinates to principal directions in the PC space.

    Parameters
    ----------
    contours : ndarray
        Object contours, with shape (num_contour, 2*num_points).
    mean_contour : ndarray
        Mean contour used to mean-center object contours.
    principal_directions : ndarray
        Loadings, weights, principal directions, principal axes,
        eigenvector of covariance matrix of mean-subtracted contours,
        with shape (2*num_points, 2*num_points).

    Returns
    -------
    principal_components : ndarray
        PC score, principal components, coordinates of mean-subtracted
        contours in their principal directions,
        with shape (num_contours, 2*num_points).

    """
    mean_centered_contours = contours - mean_contour
    principal_components = mean_centered_contours @ principal_directions
    return principal_components


def cluster_contours(pc, contours, num_clusters=5, num_pc=20, random_state=None):  # random: None
    """
    K-means clustering of contour principal components.

    Parameters
    ----------
    pc : ndarray
        Principal components of contours.
    contours : ndarray
        Object contours, with shape (num_contour, 2*num_points).
    num_clusters : int, optional
        Number of clusters.
    num_pc : int, optional
        Number of principal components used for approximation.
    random_state : None or int, optional
        Random state for K-means clustering.

    Returns
    -------
    contours_df : DataFrame
        DataFrame of objects' contour coordinates with cluster id.
    centroids : ndarray
        Coordinates of cluster centers of K-means clusters.

    See Also
    --------
    sklearn.cluster.KMeans : Implementation of K-means clustering.

    """
    pc_truncated = pc[:, :num_pc]
    pc_truncated_normalized = preprocessing.normalize(pc_truncated)
    # k-means clustering of normalized principal coordinates
    k_means = KMeans(n_clusters=num_clusters,
                     random_state=random_state,
                     init='k-means++',
                     n_init=3,
                     max_iter=300).fit(pc_truncated_normalized)
    centroids = k_means.cluster_centers_
    # distance = spatial.distance.cdist(pc_truncated_normalized, centroid)  # D, why not this line?
    distance = spatial.distance.cdist(pc_truncated, centroids)
    cluster_id = np.argmin(distance, axis=1)
    # tag each object with cluster id
    contours_df = pd.DataFrame(contours)
    contours_df['cluster_id'] = cluster_id
    return contours_df, centroids


def assign_clusters_id(pc, contours, centroids, num_pc=20):
    """
    Assign the contours with id of the closest centroid.

    Parameters
    ----------
    pc : ndarray
        Principal components of contours.
    contours : ndarray
        Object contours, with shape (num_contour, 2*num_points).
    centroids : ndarray
        Coordinates of cluster centers of K-means clusters.
    num_pc : int, optional
        Number of principal components used for approximation.

    Returns
    -------
    contours_df : DataFrame
        DataFrame of objects' contour coordinates with cluster id.
    min_distance : ndarray
        Distance of truncated principal components to the closest centroid.

    """
    # find closest centroid and get cluster id
    pc_truncated = pc[:, :num_pc]
    distance = spatial.distance.cdist(pc_truncated, centroids)
    cluster_id = np.argmin(distance, axis=1)
    min_distance = np.min(distance, axis=1)
    # tag each object with cluster id
    contours_df = pd.DataFrame(contours)
    contours_df['cluster_id'] = cluster_id
    return contours_df, min_distance
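# ---------------------------------------------------------------------------
# Editor's usage sketch (not part of the original module): clustering a batch
# of synthetic contours end to end. The random arrays are stand-ins for real
# (num_contour, 2*num_points) registration output, and raw contours are passed
# in place of true principal components purely to exercise the API; because of
# the relative import above, run this with `python -m vampire.analysis`.
# ---------------------------------------------------------------------------
if __name__ == '__main__':
    rng = np.random.default_rng(0)
    contours = rng.normal(size=(200, 100))  # 200 contours, 50 (x, y) points each

    # Fit 5 clusters on the first 20 "principal components".
    labeled_df, centroids = cluster_contours(contours, contours,
                                             num_clusters=5, num_pc=20,
                                             random_state=0)
    print(labeled_df['cluster_id'].value_counts())

    # New objects can then be assigned to the existing centroids.
    new_contours = rng.normal(size=(10, 100))
    assigned_df, min_dist = assign_clusters_id(new_contours, new_contours,
                                               centroids, num_pc=20)
    print(assigned_df['cluster_id'].to_numpy(), min_dist.round(2))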
/django-kelove-db-3.1.0.tar.gz/django-kelove-db-3.1.0/django_kelove_db/static/django_kelove_db/editor_md/lib/codemirror/mode/turtle/turtle.js
(function(mod) { if (typeof exports == "object" && typeof module == "object") // CommonJS mod(require("../../lib/codemirror")); else if (typeof define == "function" && define.amd) // AMD define(["../../lib/codemirror"], mod); else // Plain browser env mod(CodeMirror); })(function(CodeMirror) { "use strict"; CodeMirror.defineMode("turtle", function(config) { var indentUnit = config.indentUnit; var curPunc; function wordRegexp(words) { return new RegExp("^(?:" + words.join("|") + ")$", "i"); } var ops = wordRegexp([]); var keywords = wordRegexp(["@prefix", "@base", "a"]); var operatorChars = /[*+\-<>=&|]/; function tokenBase(stream, state) { var ch = stream.next(); curPunc = null; if (ch == "<" && !stream.match(/^[\s\u00a0=]/, false)) { stream.match(/^[^\s\u00a0>]*>?/); return "atom"; } else if (ch == "\"" || ch == "'") { state.tokenize = tokenLiteral(ch); return state.tokenize(stream, state); } else if (/[{}\(\),\.;\[\]]/.test(ch)) { curPunc = ch; return null; } else if (ch == "#") { stream.skipToEnd(); return "comment"; } else if (operatorChars.test(ch)) { stream.eatWhile(operatorChars); return null; } else if (ch == ":") { return "operator"; } else { stream.eatWhile(/[_\w\d]/); if(stream.peek() == ":") { return "variable-3"; } else { var word = stream.current(); if(keywords.test(word)) { return "meta"; } if(ch >= "A" && ch <= "Z") { return "comment"; } else { return "keyword"; } } var word = stream.current(); if (ops.test(word)) return null; else if (keywords.test(word)) return "meta"; else return "variable"; } } function tokenLiteral(quote) { return function(stream, state) { var escaped = false, ch; while ((ch = stream.next()) != null) { if (ch == quote && !escaped) { state.tokenize = tokenBase; break; } escaped = !escaped && ch == "\\"; } return "string"; }; } function pushContext(state, type, col) { state.context = {prev: state.context, indent: state.indent, col: col, type: type}; } function popContext(state) { state.indent = state.context.indent; state.context = state.context.prev; } return { startState: function() { return {tokenize: tokenBase, context: null, indent: 0, col: 0}; }, token: function(stream, state) { if (stream.sol()) { if (state.context && state.context.align == null) state.context.align = false; state.indent = stream.indentation(); } if (stream.eatSpace()) return null; var style = state.tokenize(stream, state); if (style != "comment" && state.context && state.context.align == null && state.context.type != "pattern") { state.context.align = true; } if (curPunc == "(") pushContext(state, ")", stream.column()); else if (curPunc == "[") pushContext(state, "]", stream.column()); else if (curPunc == "{") pushContext(state, "}", stream.column()); else if (/[\]\}\)]/.test(curPunc)) { while (state.context && state.context.type == "pattern") popContext(state); if (state.context && curPunc == state.context.type) popContext(state); } else if (curPunc == "." 
&& state.context && state.context.type == "pattern") popContext(state); else if (/atom|string|variable/.test(style) && state.context) { if (/[\}\]]/.test(state.context.type)) pushContext(state, "pattern", stream.column()); else if (state.context.type == "pattern" && !state.context.align) { state.context.align = true; state.context.col = stream.column(); } } return style; }, indent: function(state, textAfter) { var firstChar = textAfter && textAfter.charAt(0); var context = state.context; if (/[\]\}]/.test(firstChar)) while (context && context.type == "pattern") context = context.prev; var closing = context && firstChar == context.type; if (!context) return 0; else if (context.type == "pattern") return context.col; else if (context.align) return context.col + (closing ? 0 : 1); else return context.indent + (closing ? 0 : indentUnit); }, lineComment: "#" }; }); CodeMirror.defineMIME("text/turtle", "turtle"); });
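// A hypothetical usage sketch (the element id is an assumption): attach the
// mode defined above to an editor instance via the registered MIME type.
//
//   var editor = CodeMirror.fromTextArea(document.getElementById("ttl"), {
//     mode: "text/turtle",
//     lineNumbers: true
//   });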
PypiClean
/mlflow_by_johnsnowlabs-2.20.0-py3-none-any.whl/mlflow/store/artifact/mlflow_artifacts_repo.py
from urllib.parse import urlparse, urlunparse import re from mlflow.store.artifact.http_artifact_repo import HttpArtifactRepository from mlflow.tracking._tracking_service.utils import get_tracking_uri from mlflow.exceptions import MlflowException def _check_if_host_is_numeric(hostname): if hostname: try: float(hostname) return True except ValueError: return False else: return False def _validate_port_mapped_to_hostname(uri_parse): # This check is to catch an mlflow-artifacts uri that has a port designated but no # hostname specified. `urllib.parse.urlparse` will treat such a uri as a filesystem # definition, mapping the provided port as a hostname value if this condition is not # validated. if uri_parse.hostname and _check_if_host_is_numeric(uri_parse.hostname) and not uri_parse.port: raise MlflowException( "The mlflow-artifacts uri was supplied with a port number: " f"{uri_parse.hostname}, but no host was defined." ) def _validate_uri_scheme(scheme): allowable_schemes = {"http", "https"} if scheme not in allowable_schemes: raise MlflowException( f"The configured tracking uri scheme: '{scheme}' is invalid for use with the proxy " f"mlflow-artifact scheme. The allowed tracking schemes are: {allowable_schemes}" ) class MlflowArtifactsRepository(HttpArtifactRepository): """Scheme wrapper around HttpArtifactRepository for mlflow-artifacts server functionality""" def __init__(self, artifact_uri): super().__init__(self.resolve_uri(artifact_uri, get_tracking_uri())) @classmethod def resolve_uri(cls, artifact_uri, tracking_uri): base_url = "/api/2.0/mlflow-artifacts/artifacts" track_parse = urlparse(tracking_uri) uri_parse = urlparse(artifact_uri) # Check to ensure that a port is present with no hostname _validate_port_mapped_to_hostname(uri_parse) # Check that tracking uri is http or https _validate_uri_scheme(track_parse.scheme) if uri_parse.path == "/": # root directory; build simple path resolved = f"{base_url}{uri_parse.path}" elif uri_parse.path == base_url: # for operations like list artifacts resolved = base_url else: resolved = f"{track_parse.path}/{base_url}/{uri_parse.path}" resolved = re.sub("//+", "/", resolved) resolved_artifacts_uri = urlunparse( ( # scheme track_parse.scheme, # netloc uri_parse.netloc if uri_parse.netloc else track_parse.netloc, # path resolved, # params "", # query "", # fragment "", ) ) return resolved_artifacts_uri.replace("///", "/").rstrip("/")
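# A worked sketch of how resolve_uri composes the proxied artifact URI; the
# example URIs are assumptions for illustration only.
if __name__ == "__main__":
    print(
        MlflowArtifactsRepository.resolve_uri(
            "mlflow-artifacts:/0/abc/artifacts", "http://localhost:5000"
        )
    )
    # -> http://localhost:5000/api/2.0/mlflow-artifacts/artifacts/0/abc/artifacts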
PypiClean
/alipay_sdk_python-3.6.740-py3-none-any.whl/alipay/aop/api/response/AlipayOverseasTaxOrderQueryResponse.py
import json from alipay.aop.api.response.AlipayResponse import AlipayResponse class AlipayOverseasTaxOrderQueryResponse(AlipayResponse): def __init__(self): super(AlipayOverseasTaxOrderQueryResponse, self).__init__() self._identify_account_no = None self._identify_account_type = None self._out_order_no = None self._status = None self._success_time = None self._tax_no = None @property def identify_account_no(self): return self._identify_account_no @identify_account_no.setter def identify_account_no(self, value): self._identify_account_no = value @property def identify_account_type(self): return self._identify_account_type @identify_account_type.setter def identify_account_type(self, value): self._identify_account_type = value @property def out_order_no(self): return self._out_order_no @out_order_no.setter def out_order_no(self, value): self._out_order_no = value @property def status(self): return self._status @status.setter def status(self, value): self._status = value @property def success_time(self): return self._success_time @success_time.setter def success_time(self, value): self._success_time = value @property def tax_no(self): return self._tax_no @tax_no.setter def tax_no(self, value): self._tax_no = value def parse_response_content(self, response_content): response = super(AlipayOverseasTaxOrderQueryResponse, self).parse_response_content(response_content) if 'identify_account_no' in response: self.identify_account_no = response['identify_account_no'] if 'identify_account_type' in response: self.identify_account_type = response['identify_account_type'] if 'out_order_no' in response: self.out_order_no = response['out_order_no'] if 'status' in response: self.status = response['status'] if 'success_time' in response: self.success_time = response['success_time'] if 'tax_no' in response: self.tax_no = response['tax_no']
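# A minimal usage sketch; the field values below are made up for illustration.
if __name__ == "__main__":
    demo = AlipayOverseasTaxOrderQueryResponse()
    demo.status = "SUCCESS"   # property setters defined above
    demo.tax_no = "TAX123456"
    print(demo.status, demo.tax_no)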
PypiClean
/openpeerpower_frontend-20210523.2-py3-none-any.whl/opp_frontend/frontend_es5/chunk.0af871604d5c7462da2b.js
(self.webpackChunkopenpeerpower_frontend=self.webpackChunkopenpeerpower_frontend||[]).push([[7632,7172,9816,873,9978,272,4328,7082,129,1422,4823],{99257:function(t,e,r){"use strict";r(65233);var n=r(15112),o=r(9672),i=r(87156);(0,o.k)({is:"iron-iconset-svg",properties:{name:{type:String,observer:"_nameChanged"},size:{type:Number,value:24},rtlMirroring:{type:Boolean,value:!1},useGlobalRtlAttribute:{type:Boolean,value:!1}},created:function(){this._meta=new n.P({type:"iconset",key:null,value:null})},attached:function(){this.style.display="none"},getIconNames:function(){return this._icons=this._createIconMap(),Object.keys(this._icons).map((function(t){return this.name+":"+t}),this)},applyIcon:function(t,e){this.removeIcon(t);var r=this._cloneIcon(e,this.rtlMirroring&&this._targetIsRTL(t));if(r){var n=(0,i.vz)(t.root||t);return n.insertBefore(r,n.childNodes[0]),t._svgIcon=r}return null},removeIcon:function(t){t._svgIcon&&((0,i.vz)(t.root||t).removeChild(t._svgIcon),t._svgIcon=null)},_targetIsRTL:function(t){if(null==this.__targetIsRTL)if(this.useGlobalRtlAttribute){var e=document.body&&document.body.hasAttribute("dir")?document.body:document.documentElement;this.__targetIsRTL="rtl"===e.getAttribute("dir")}else t&&t.nodeType!==Node.ELEMENT_NODE&&(t=t.host),this.__targetIsRTL=t&&"rtl"===window.getComputedStyle(t).direction;return this.__targetIsRTL},_nameChanged:function(){this._meta.value=null,this._meta.key=this.name,this._meta.value=this,this.async((function(){this.fire("iron-iconset-added",this,{node:window})}))},_createIconMap:function(){var t=Object.create(null);return(0,i.vz)(this).querySelectorAll("[id]").forEach((function(e){t[e.id]=e})),t},_cloneIcon:function(t,e){return this._icons=this._icons||this._createIconMap(),this._prepareSvgClone(this._icons[t],this.size,e)},_prepareSvgClone:function(t,e,r){if(t){var n=t.cloneNode(!0),o=document.createElementNS("http://www.w3.org/2000/svg","svg"),i=n.getAttribute("viewBox")||"0 0 "+e+" "+e,l="pointer-events: none; display: block; width: 100%; height: 100%;";return r&&n.hasAttribute("mirror-in-rtl")&&(l+="-webkit-transform:scale(-1,1);transform:scale(-1,1);transform-origin:center;"),o.setAttribute("viewBox",i),o.setAttribute("preserveAspectRatio","xMidYMid meet"),o.setAttribute("focusable","false"),o.style.cssText=l,o.appendChild(n).removeAttribute("id"),o}return null}})},67810:function(t,e,r){"use strict";r.d(e,{o:function(){return i}});r(65233);var n=r(87156);function o(t){return(o="function"==typeof Symbol&&"symbol"==typeof Symbol.iterator?function(t){return typeof t}:function(t){return t&&"function"==typeof Symbol&&t.constructor===Symbol&&t!==Symbol.prototype?"symbol":typeof t})(t)}var i={properties:{scrollTarget:{type:HTMLElement,value:function(){return this._defaultScrollTarget}}},observers:["_scrollTargetChanged(scrollTarget, isAttached)"],_shouldHaveListener:!0,_scrollTargetChanged:function(t,e){if(this._oldScrollTarget&&(this._toggleScrollListener(!1,this._oldScrollTarget),this._oldScrollTarget=null),e)if("document"===t)this.scrollTarget=this._doc;else if("string"==typeof t){var r=this.domHost;this.scrollTarget=r&&r.$?r.$[t]:(0,n.vz)(this.ownerDocument).querySelector("#"+t)}else this._isValidScrollTarget()&&(this._oldScrollTarget=t,this._toggleScrollListener(this._shouldHaveListener,t))},_scrollHandler:function(){},get _defaultScrollTarget(){return this._doc},get _doc(){return this.ownerDocument.documentElement},get _scrollTop(){return 
this._isValidScrollTarget()?this.scrollTarget===this._doc?window.pageYOffset:this.scrollTarget.scrollTop:0},get _scrollLeft(){return this._isValidScrollTarget()?this.scrollTarget===this._doc?window.pageXOffset:this.scrollTarget.scrollLeft:0},set _scrollTop(t){this.scrollTarget===this._doc?window.scrollTo(window.pageXOffset,t):this._isValidScrollTarget()&&(this.scrollTarget.scrollTop=t)},set _scrollLeft(t){this.scrollTarget===this._doc?window.scrollTo(t,window.pageYOffset):this._isValidScrollTarget()&&(this.scrollTarget.scrollLeft=t)},scroll:function(t,e){var r;"object"===o(t)?(r=t.left,e=t.top):r=t,r=r||0,e=e||0,this.scrollTarget===this._doc?window.scrollTo(r,e):this._isValidScrollTarget()&&(this.scrollTarget.scrollLeft=r,this.scrollTarget.scrollTop=e)},get _scrollTargetWidth(){return this._isValidScrollTarget()?this.scrollTarget===this._doc?window.innerWidth:this.scrollTarget.offsetWidth:0},get _scrollTargetHeight(){return this._isValidScrollTarget()?this.scrollTarget===this._doc?window.innerHeight:this.scrollTarget.offsetHeight:0},_isValidScrollTarget:function(){return this.scrollTarget instanceof HTMLElement},_toggleScrollListener:function(t,e){var r=e===this._doc?window:e;t?this._boundScrollHandler||(this._boundScrollHandler=this._scrollHandler.bind(this),r.addEventListener("scroll",this._boundScrollHandler)):this._boundScrollHandler&&(r.removeEventListener("scroll",this._boundScrollHandler),this._boundScrollHandler=null)},toggleScrollListener:function(t){this._shouldHaveListener=t,this._toggleScrollListener(t,this.scrollTarget)}}},25782:function(t,e,r){"use strict";r(65233),r(65660),r(70019),r(97968);var n=r(9672),o=r(50856),i=r(33760);function l(){var t=function(t,e){e||(e=t.slice(0));return Object.freeze(Object.defineProperties(t,{raw:{value:Object.freeze(e)}}))}(['\n <style include="paper-item-shared-styles"></style>\n <style>\n :host {\n @apply --layout-horizontal;\n @apply --layout-center;\n @apply --paper-font-subhead;\n\n @apply --paper-item;\n @apply --paper-icon-item;\n }\n\n .content-icon {\n @apply --layout-horizontal;\n @apply --layout-center;\n\n width: var(--paper-item-icon-width, 56px);\n @apply --paper-item-icon;\n }\n </style>\n\n <div id="contentIcon" class="content-icon">\n <slot name="item-icon"></slot>\n </div>\n <slot></slot>\n']);return l=function(){return t},t}(0,n.k)({_template:(0,o.d)(l()),is:"paper-icon-item",behaviors:[i.U]})},89194:function(t,e,r){"use strict";r(65233),r(65660),r(70019);var n=r(9672),o=r(50856);function i(){var t=function(t,e){e||(e=t.slice(0));return Object.freeze(Object.defineProperties(t,{raw:{value:Object.freeze(e)}}))}(["\n <style>\n :host {\n overflow: hidden; /* needed for text-overflow: ellipsis to work on ff */\n @apply --layout-vertical;\n @apply --layout-center-justified;\n @apply --layout-flex;\n }\n\n :host([two-line]) {\n min-height: var(--paper-item-body-two-line-min-height, 72px);\n }\n\n :host([three-line]) {\n min-height: var(--paper-item-body-three-line-min-height, 88px);\n }\n\n :host > ::slotted(*) {\n overflow: hidden;\n text-overflow: ellipsis;\n white-space: nowrap;\n }\n\n :host > ::slotted([secondary]) {\n @apply --paper-font-body1;\n\n color: var(--paper-item-body-secondary-color, var(--secondary-text-color));\n\n @apply --paper-item-body-secondary;\n }\n </style>\n\n <slot></slot>\n"]);return i=function(){return t},t}(0,n.k)({_template:(0,o.d)(i()),is:"paper-item-body"})},1275:function(t,e,r){"use strict";r.d(e,{l:function(){return i}});var n=r(94707),o=new WeakMap,i=(0,n.XM)((function(t,e){return 
function(r){var n=o.get(r);if(Array.isArray(t)){if(Array.isArray(n)&&n.length===t.length&&t.every((function(t,e){return t===n[e]})))return}else if(n===t&&(void 0!==t||o.has(r)))return;r.setValue(e()),o.set(r,Array.isArray(t)?Array.from(t):t)}}))}}]); //# sourceMappingURL=chunk.0af871604d5c7462da2b.js.map
PypiClean
/simplebot_translator-1.4.0.tar.gz/simplebot_translator-1.4.0/CHANGELOG.rst
Changelog
=========

`1.4.0`_
--------

- warn user if the command has wrong syntax.
- warn user if the language code is invalid.
- added filter to translate text sent in private (user can set the language
  to which text will be translated)

`1.3.0`_
--------

- improved command description.
- try other engines if the default engine fails.

`1.2.0`_
--------

- moved check for correct engine from ``deltabot_init`` to ``deltabot_start``
  to allow changing the engine after a wrong engine was set.

`1.1.0`_
--------

- allow translating a quoted message (#1)
- quote the translated message
- allow setting the engine

1.0.0
-----

- initial release


.. _Unreleased: https://github.com/adbenitez/simplebot_translator/compare/v1.4.0...HEAD
.. _1.4.0: https://github.com/adbenitez/simplebot_translator/compare/v1.3.0...v1.4.0
.. _1.3.0: https://github.com/adbenitez/simplebot_translator/compare/v1.2.0...v1.3.0
.. _1.2.0: https://github.com/adbenitez/simplebot_translator/compare/v1.1.0...v1.2.0
.. _1.1.0: https://github.com/adbenitez/simplebot_translator/compare/v1.0.0...v1.1.0
PypiClean
/apache_airflow_providers_amazon-8.6.0-py3-none-any.whl/airflow/providers/amazon/aws/triggers/sagemaker.py
from __future__ import annotations

import asyncio
from collections import Counter
from enum import IntEnum
from functools import cached_property
from typing import Any, AsyncIterator

from botocore.exceptions import WaiterError

from airflow import AirflowException
from airflow.providers.amazon.aws.hooks.sagemaker import SageMakerHook
from airflow.providers.amazon.aws.utils.waiter_with_logging import async_wait
from airflow.triggers.base import BaseTrigger, TriggerEvent


class SageMakerTrigger(BaseTrigger):
    """
    SageMakerTrigger is fired as a deferred class with params to run the task in the triggerer.

    :param job_name: name of the job to check status
    :param job_type: Type of the SageMaker job, whether it is Transform or Training
    :param poke_interval: polling period in seconds to check for the status
    :param max_attempts: Number of times to poll for query state before returning the current state,
        defaults to 480.
    :param aws_conn_id: AWS connection ID for SageMaker
    """

    def __init__(
        self,
        job_name: str,
        job_type: str,
        poke_interval: int = 30,
        max_attempts: int = 480,
        aws_conn_id: str = "aws_default",
    ):
        super().__init__()
        self.job_name = job_name
        self.job_type = job_type
        self.poke_interval = poke_interval
        self.max_attempts = max_attempts
        self.aws_conn_id = aws_conn_id

    def serialize(self) -> tuple[str, dict[str, Any]]:
        """Serialize SageMakerTrigger arguments and classpath."""
        return (
            "airflow.providers.amazon.aws.triggers.sagemaker.SageMakerTrigger",
            {
                "job_name": self.job_name,
                "job_type": self.job_type,
                "poke_interval": self.poke_interval,
                "max_attempts": self.max_attempts,
                "aws_conn_id": self.aws_conn_id,
            },
        )

    @cached_property
    def hook(self) -> SageMakerHook:
        return SageMakerHook(aws_conn_id=self.aws_conn_id)

    @staticmethod
    def _get_job_type_waiter(job_type: str) -> str:
        return {
            "training": "TrainingJobComplete",
            "transform": "TransformJobComplete",
            "processing": "ProcessingJobComplete",
            "tuning": "TuningJobComplete",
            "endpoint": "endpoint_in_service",  # this one is provided by boto
        }[job_type.lower()]

    @staticmethod
    def _get_waiter_arg_name(job_type: str) -> str:
        return {
            "training": "TrainingJobName",
            "transform": "TransformJobName",
            "processing": "ProcessingJobName",
            "tuning": "HyperParameterTuningJobName",
            "endpoint": "EndpointName",
        }[job_type.lower()]

    @staticmethod
    def _get_response_status_key(job_type: str) -> str:
        return {
            "training": "TrainingJobStatus",
            "transform": "TransformJobStatus",
            "processing": "ProcessingJobStatus",
            "tuning": "HyperParameterTuningJobStatus",
            "endpoint": "EndpointStatus",
        }[job_type.lower()]

    async def run(self):
        self.log.info("job name is %s and job type is %s", self.job_name, self.job_type)
        async with self.hook.async_conn as client:
            waiter = self.hook.get_waiter(
                self._get_job_type_waiter(self.job_type), deferrable=True, client=client
            )
            await async_wait(
                waiter=waiter,
                waiter_delay=self.poke_interval,
                waiter_max_attempts=self.max_attempts,
                args={self._get_waiter_arg_name(self.job_type): self.job_name},
                failure_message=f"Error while waiting for {self.job_type} job",
                status_message=f"{self.job_type} job not done yet",
                status_args=[self._get_response_status_key(self.job_type)],
            )
            yield TriggerEvent({"status": "success", "message": "Job completed."})


class SageMakerPipelineTrigger(BaseTrigger):
    """Trigger to wait for a SageMaker pipeline execution to finish."""

    class Type(IntEnum):
        """Type of waiter to use."""

        COMPLETE = 1
        STOPPED = 2

    def __init__(
        self,
        waiter_type: Type,
        pipeline_execution_arn: str,
        waiter_delay: int,
        waiter_max_attempts: int,
        aws_conn_id: str,
    ):
self.waiter_type = waiter_type self.pipeline_execution_arn = pipeline_execution_arn self.waiter_delay = waiter_delay self.waiter_max_attempts = waiter_max_attempts self.aws_conn_id = aws_conn_id def serialize(self) -> tuple[str, dict[str, Any]]: return ( self.__class__.__module__ + "." + self.__class__.__qualname__, { "waiter_type": self.waiter_type.value, # saving the int value here "pipeline_execution_arn": self.pipeline_execution_arn, "waiter_delay": self.waiter_delay, "waiter_max_attempts": self.waiter_max_attempts, "aws_conn_id": self.aws_conn_id, }, ) _waiter_name = { Type.COMPLETE: "PipelineExecutionComplete", Type.STOPPED: "PipelineExecutionStopped", } async def run(self) -> AsyncIterator[TriggerEvent]: hook = SageMakerHook(aws_conn_id=self.aws_conn_id) async with hook.async_conn as conn: waiter = hook.get_waiter(self._waiter_name[self.waiter_type], deferrable=True, client=conn) for _ in range(self.waiter_max_attempts): try: await waiter.wait( PipelineExecutionArn=self.pipeline_execution_arn, WaiterConfig={"MaxAttempts": 1} ) # we reach this point only if the waiter met a success criteria yield TriggerEvent({"status": "success", "value": self.pipeline_execution_arn}) return except WaiterError as error: if "terminal failure" in str(error): raise self.log.info( "Status of the pipeline execution: %s", error.last_response["PipelineExecutionStatus"] ) res = await conn.list_pipeline_execution_steps( PipelineExecutionArn=self.pipeline_execution_arn ) count_by_state = Counter(s["StepStatus"] for s in res["PipelineExecutionSteps"]) running_steps = [ s["StepName"] for s in res["PipelineExecutionSteps"] if s["StepStatus"] == "Executing" ] self.log.info("State of the pipeline steps: %s", count_by_state) self.log.info("Steps currently in progress: %s", running_steps) await asyncio.sleep(int(self.waiter_delay)) raise AirflowException("Waiter error: max attempts reached")
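# A hypothetical sketch of deferring an operator onto the trigger above; the
# job name and callback method name are assumptions, not part of this module.
#
#     self.defer(
#         trigger=SageMakerTrigger(job_name="my-training-job", job_type="training"),
#         method_name="execute_complete",
#     )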
PypiClean
/ts_IntegrationTests-0.1.1.tar.gz/ts_IntegrationTests-0.1.1/python/lsst/ts/IntegrationTests/base_script.py
__all__ = ["BaseScript"] from lsst.ts import salobj from lsst.ts.idl.enums import ScriptQueue class BaseScript: """Defines the common attributes and functions for an AuxTel or MainTel script. Notes ----- Use index=1 for MainTel, 2 for AuxTel. The index is defined as a class attribute for simplicity. The sub-Classes define which index, if necessary. The BaseScript class defaults to index=1, as the most common option. Attributes ---------- index : `int` The index represents the Main Telescope, index=1, or the Auxilliary Telescope, index=2. configs : `tuple` The list of Yaml-formatted script configurations. They are stored in the configs.py module. scripts : `tuple` The list of Standard or External scripts to execute. """ # See Attributes for the definition. index = 1 configs = None scripts = None def __init__(self, isStandard=True, queue_placement="LAST"): """Initialize the given Standard or External script, with the given Yaml configuration, placed in the given ScriptQueue location. Parameters ---------- isStandard : `bool` If True, the script is in ts_standardscripts (True is the default, as it is the most common option). if False, the script is in ts_externalscripts. queue_placement : `str` Options are "FIRST" "LAST" "BEFORE" or "AFTER" and are case insensistive ("FIRST" is the default, for convenience). The BaseScript Class will convert to the appropriate ScriptQueue.Location enum object. """ self.isStandard = isStandard self.queue_placement = queue_placement async def run(self): """Run the specified standard or external script.""" async with salobj.Domain() as domain, salobj.Remote( domain=domain, name="ScriptQueue", index=self.index ) as remote: # Since `async with` is used, # you do NOT have to wait for the remote to start # Convert the queue_placement parameter to the approprirate # ScriptQueue.Location Enum object. queue_placement = getattr( ScriptQueue.Location, self.queue_placement.upper() ) # Wait for the next ScriptQueue heartbeat to ensure it is running. await remote.evt_heartbeat.next(flush=True, timeout=30) # Pause the ScriptQueue to load the scripts into the queue. await remote.cmd_pause.start(timeout=10) # Add scripts to the queue. for script, config in zip(self.scripts, self.configs): await remote.cmd_add.set_start( timeout=10, isStandard=self.isStandard, path=script, config=config, logLevel=10, location=queue_placement, ) # Resume the ScriptQueue to begin script execution. await remote.cmd_resume.set_start(timeout=10)
PypiClean
/snakeskin_fabric-0.1.1-py3-none-any.whl/snakeskin/config.py
import os
import json
from dataclasses import dataclass, field
from typing import List, Mapping

import yaml
import dacite

from .models import Peer, Channel, User, Orderer, ChaincodeSpec
from .models.gateway import Gateway
from .constants import ChaincodeLanguage


@dataclass()
class GatewayConfig:
    """ A gateway config object """
    channel: str
    requestor: str
    chaincode: str
    endorsing_peers: List[str]
    orderers: List[str]


@dataclass()
class BlockchainConfig:
    """ Top-level configuration for accessing the blockchain """

    @classmethod
    def from_file(cls, file_path: str):
        """ Loads the blockchain config from a static file """
        ext = os.path.splitext(file_path)[1]
        with open(file_path) as inf:
            if ext == '.json':
                return cls.from_dict(json.load(inf))
            if ext in {'.yaml', '.yml'}:
                return cls.from_dict(yaml.load(inf, Loader=yaml.SafeLoader))
            raise ValueError(
                f'Unrecognized file extension for file {file_path}'
            )

    @classmethod
    def from_dict(cls, value: dict):
        """ Creates a blockchain config from a dictionary """
        return dacite.from_dict(cls, value, config=dacite.Config(
            type_hooks={
                ChaincodeLanguage: ChaincodeLanguage
            }
        ))

    peers: Mapping[str, Peer] = field(default_factory=dict)
    orderers: Mapping[str, Orderer] = field(default_factory=dict)
    users: Mapping[str, User] = field(default_factory=dict)
    chaincodes: Mapping[str, ChaincodeSpec] = field(default_factory=dict)
    gateways: Mapping[str, GatewayConfig] = field(default_factory=dict)

    def __post_init__(self):
        # Set names to be the mapping key for all entities that weren't
        # provided names
        for name, peer in self.peers.items():
            if not peer.name:
                peer.name = name
        for name, orderer in self.orderers.items():
            if not orderer.name:
                orderer.name = name
        for name, user in self.users.items():
            if not user.name:
                user.name = name
        for name, chaincode in self.chaincodes.items():
            if not chaincode.name:
                chaincode.name = name

    def get_gateway(self, name: str):
        """ Gets a gateway using the config name """
        if name not in self.gateways:
            raise KeyError(f'No gateway defined with name "{name}"')
        config = self.gateways[name]
        return Gateway(
            endorsing_peers=[
                self.get_peer(peer) for peer in config.endorsing_peers
            ],
            chaincode=self.get_chaincode(config.chaincode),
            requestor=self.get_user(config.requestor),
            orderers=[
                self.get_orderer(orderer) for orderer in config.orderers
            ],
            channel=Channel(name=config.channel)
        )

    def get_peer(self, name: str):
        """ Gets a peer using the config name """
        if name not in self.peers:
            raise KeyError(f'No peer defined with name "{name}"')
        return self.peers[name]

    def get_orderer(self, name: str):
        """ Gets an orderer using the config name """
        if name not in self.orderers:
            raise KeyError(f'No orderer defined with name "{name}"')
        return self.orderers[name]

    def get_user(self, name: str):
        """ Gets a user using the config name """
        if name not in self.users:
            raise KeyError(f'No user defined with name "{name}"')
        return self.users[name]

    def get_chaincode(self, name: str):
        """ Gets a chaincode spec using the config name """
        if name not in self.chaincodes:
            raise KeyError(f'No chaincode defined with name "{name}"')
        return self.chaincodes[name]
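# A hypothetical usage sketch; the file name and gateway key are assumptions
# for illustration only.
if __name__ == "__main__":
    config = BlockchainConfig.from_file("blockchain.yaml")
    gateway = config.get_gateway("my-gateway")
    print(gateway.channel.name)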
PypiClean
/pulumi_azure_native-2.5.1a1693590910.tar.gz/pulumi_azure_native-2.5.1a1693590910/pulumi_azure_native/storage/private_endpoint_connection.py
import copy import warnings import pulumi import pulumi.runtime from typing import Any, Mapping, Optional, Sequence, Union, overload from .. import _utilities from . import outputs from ._enums import * from ._inputs import * __all__ = ['PrivateEndpointConnectionArgs', 'PrivateEndpointConnection'] @pulumi.input_type class PrivateEndpointConnectionArgs: def __init__(__self__, *, account_name: pulumi.Input[str], private_link_service_connection_state: pulumi.Input['PrivateLinkServiceConnectionStateArgs'], resource_group_name: pulumi.Input[str], private_endpoint_connection_name: Optional[pulumi.Input[str]] = None): """ The set of arguments for constructing a PrivateEndpointConnection resource. :param pulumi.Input[str] account_name: The name of the storage account within the specified resource group. Storage account names must be between 3 and 24 characters in length and use numbers and lower-case letters only. :param pulumi.Input['PrivateLinkServiceConnectionStateArgs'] private_link_service_connection_state: A collection of information about the state of the connection between service consumer and provider. :param pulumi.Input[str] resource_group_name: The name of the resource group within the user's subscription. The name is case insensitive. :param pulumi.Input[str] private_endpoint_connection_name: The name of the private endpoint connection associated with the Azure resource """ pulumi.set(__self__, "account_name", account_name) pulumi.set(__self__, "private_link_service_connection_state", private_link_service_connection_state) pulumi.set(__self__, "resource_group_name", resource_group_name) if private_endpoint_connection_name is not None: pulumi.set(__self__, "private_endpoint_connection_name", private_endpoint_connection_name) @property @pulumi.getter(name="accountName") def account_name(self) -> pulumi.Input[str]: """ The name of the storage account within the specified resource group. Storage account names must be between 3 and 24 characters in length and use numbers and lower-case letters only. """ return pulumi.get(self, "account_name") @account_name.setter def account_name(self, value: pulumi.Input[str]): pulumi.set(self, "account_name", value) @property @pulumi.getter(name="privateLinkServiceConnectionState") def private_link_service_connection_state(self) -> pulumi.Input['PrivateLinkServiceConnectionStateArgs']: """ A collection of information about the state of the connection between service consumer and provider. """ return pulumi.get(self, "private_link_service_connection_state") @private_link_service_connection_state.setter def private_link_service_connection_state(self, value: pulumi.Input['PrivateLinkServiceConnectionStateArgs']): pulumi.set(self, "private_link_service_connection_state", value) @property @pulumi.getter(name="resourceGroupName") def resource_group_name(self) -> pulumi.Input[str]: """ The name of the resource group within the user's subscription. The name is case insensitive. 
""" return pulumi.get(self, "resource_group_name") @resource_group_name.setter def resource_group_name(self, value: pulumi.Input[str]): pulumi.set(self, "resource_group_name", value) @property @pulumi.getter(name="privateEndpointConnectionName") def private_endpoint_connection_name(self) -> Optional[pulumi.Input[str]]: """ The name of the private endpoint connection associated with the Azure resource """ return pulumi.get(self, "private_endpoint_connection_name") @private_endpoint_connection_name.setter def private_endpoint_connection_name(self, value: Optional[pulumi.Input[str]]): pulumi.set(self, "private_endpoint_connection_name", value) class PrivateEndpointConnection(pulumi.CustomResource): @overload def __init__(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions] = None, account_name: Optional[pulumi.Input[str]] = None, private_endpoint_connection_name: Optional[pulumi.Input[str]] = None, private_link_service_connection_state: Optional[pulumi.Input[pulumi.InputType['PrivateLinkServiceConnectionStateArgs']]] = None, resource_group_name: Optional[pulumi.Input[str]] = None, __props__=None): """ The Private Endpoint Connection resource. Azure REST API version: 2022-09-01. Prior API version in Azure Native 1.x: 2021-02-01 :param str resource_name: The name of the resource. :param pulumi.ResourceOptions opts: Options for the resource. :param pulumi.Input[str] account_name: The name of the storage account within the specified resource group. Storage account names must be between 3 and 24 characters in length and use numbers and lower-case letters only. :param pulumi.Input[str] private_endpoint_connection_name: The name of the private endpoint connection associated with the Azure resource :param pulumi.Input[pulumi.InputType['PrivateLinkServiceConnectionStateArgs']] private_link_service_connection_state: A collection of information about the state of the connection between service consumer and provider. :param pulumi.Input[str] resource_group_name: The name of the resource group within the user's subscription. The name is case insensitive. """ ... @overload def __init__(__self__, resource_name: str, args: PrivateEndpointConnectionArgs, opts: Optional[pulumi.ResourceOptions] = None): """ The Private Endpoint Connection resource. Azure REST API version: 2022-09-01. Prior API version in Azure Native 1.x: 2021-02-01 :param str resource_name: The name of the resource. :param PrivateEndpointConnectionArgs args: The arguments to use to populate this resource's properties. :param pulumi.ResourceOptions opts: Options for the resource. """ ... 
def __init__(__self__, resource_name: str, *args, **kwargs): resource_args, opts = _utilities.get_resource_args_opts(PrivateEndpointConnectionArgs, pulumi.ResourceOptions, *args, **kwargs) if resource_args is not None: __self__._internal_init(resource_name, opts, **resource_args.__dict__) else: __self__._internal_init(resource_name, *args, **kwargs) def _internal_init(__self__, resource_name: str, opts: Optional[pulumi.ResourceOptions] = None, account_name: Optional[pulumi.Input[str]] = None, private_endpoint_connection_name: Optional[pulumi.Input[str]] = None, private_link_service_connection_state: Optional[pulumi.Input[pulumi.InputType['PrivateLinkServiceConnectionStateArgs']]] = None, resource_group_name: Optional[pulumi.Input[str]] = None, __props__=None): opts = pulumi.ResourceOptions.merge(_utilities.get_resource_opts_defaults(), opts) if not isinstance(opts, pulumi.ResourceOptions): raise TypeError('Expected resource options to be a ResourceOptions instance') if opts.id is None: if __props__ is not None: raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource') __props__ = PrivateEndpointConnectionArgs.__new__(PrivateEndpointConnectionArgs) if account_name is None and not opts.urn: raise TypeError("Missing required property 'account_name'") __props__.__dict__["account_name"] = account_name __props__.__dict__["private_endpoint_connection_name"] = private_endpoint_connection_name if private_link_service_connection_state is None and not opts.urn: raise TypeError("Missing required property 'private_link_service_connection_state'") __props__.__dict__["private_link_service_connection_state"] = private_link_service_connection_state if resource_group_name is None and not opts.urn: raise TypeError("Missing required property 'resource_group_name'") __props__.__dict__["resource_group_name"] = resource_group_name __props__.__dict__["name"] = None __props__.__dict__["private_endpoint"] = None __props__.__dict__["provisioning_state"] = None __props__.__dict__["type"] = None alias_opts = pulumi.ResourceOptions(aliases=[pulumi.Alias(type_="azure-native:storage/v20190601:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20200801preview:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20210101:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20210201:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20210401:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20210601:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20210801:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20210901:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20220501:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20220901:PrivateEndpointConnection"), pulumi.Alias(type_="azure-native:storage/v20230101:PrivateEndpointConnection")]) opts = pulumi.ResourceOptions.merge(opts, alias_opts) super(PrivateEndpointConnection, __self__).__init__( 'azure-native:storage:PrivateEndpointConnection', resource_name, __props__, opts) @staticmethod def get(resource_name: str, id: pulumi.Input[str], opts: Optional[pulumi.ResourceOptions] = None) -> 'PrivateEndpointConnection': """ Get an existing PrivateEndpointConnection resource's state with the given name, id, and optional extra properties used to qualify the lookup. :param str resource_name: The unique name of the resulting resource. 
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup. :param pulumi.ResourceOptions opts: Options for the resource. """ opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id)) __props__ = PrivateEndpointConnectionArgs.__new__(PrivateEndpointConnectionArgs) __props__.__dict__["name"] = None __props__.__dict__["private_endpoint"] = None __props__.__dict__["private_link_service_connection_state"] = None __props__.__dict__["provisioning_state"] = None __props__.__dict__["type"] = None return PrivateEndpointConnection(resource_name, opts=opts, __props__=__props__) @property @pulumi.getter def name(self) -> pulumi.Output[str]: """ The name of the resource """ return pulumi.get(self, "name") @property @pulumi.getter(name="privateEndpoint") def private_endpoint(self) -> pulumi.Output[Optional['outputs.PrivateEndpointResponse']]: """ The resource of private end point. """ return pulumi.get(self, "private_endpoint") @property @pulumi.getter(name="privateLinkServiceConnectionState") def private_link_service_connection_state(self) -> pulumi.Output['outputs.PrivateLinkServiceConnectionStateResponse']: """ A collection of information about the state of the connection between service consumer and provider. """ return pulumi.get(self, "private_link_service_connection_state") @property @pulumi.getter(name="provisioningState") def provisioning_state(self) -> pulumi.Output[str]: """ The provisioning state of the private endpoint connection resource. """ return pulumi.get(self, "provisioning_state") @property @pulumi.getter def type(self) -> pulumi.Output[str]: """ The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" """ return pulumi.get(self, "type")
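# A hypothetical usage sketch; the resource names and values below are
# assumptions for illustration only.
#
#     conn = PrivateEndpointConnection(
#         "example-connection",
#         account_name="mystorageacct",
#         resource_group_name="my-rg",
#         private_link_service_connection_state=PrivateLinkServiceConnectionStateArgs(
#             status="Approved",
#             description="Approved by admin",
#         ),
#     )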
PypiClean
/basx_bread-0.7.30-py3-none-any.whl/basxbread/forms/forms.py
import htmlgenerator as hg
from django import forms
from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation
from django.contrib.contenttypes.forms import (
    BaseGenericInlineFormSet,
    generic_inlineformset_factory,
)
from django.core.exceptions import FieldDoesNotExist
from django.db import models, transaction
from django.forms.formsets import DELETION_FIELD_NAME, ORDERING_FIELD_NAME
from guardian.shortcuts import get_objects_for_user

from .. import layout as _layout  # prevent name clashing
from ..utils import permissionname
from .fields import FormsetField, GenericForeignKeyField


# Shortcut; this should normally be used, but class-based views need the class itself separately
def generate_form(request, model, layout, instance, **kwargs):
    return modelform_factory(
        request,
        model=model,
        layout=layout,
        instance=instance,
    )(
        *([request.POST, request.FILES] if request.method == "POST" else []),
        instance=instance,
        **kwargs,
    )


def modelform_factory(  # noqa
    request,
    model,
    layout,
    instance=None,
    baseformclass=forms.models.ModelForm,
    baseinlineformclass=None,
    cache_querysets=False,
):
    """Returns a form class which can handle inline-modelform sets and generic foreign keys."""
    formfieldelements = _get_form_fields_from_layout(layout)
    baseinlineformclass = baseinlineformclass or {}

    class ModelFormBase(baseformclass):
        field_order = baseformclass.field_order or [
            f.fieldname for f in formfieldelements
        ]

        def __init__(self, data=None, files=None, initial=None, **kwargs):
            inst = kwargs.get("instance", instance)
            formsetinitial = {}
            for name, field in self.declared_fields.items():
                if isinstance(field, FormsetField):
                    formsetinitial[name] = {
                        "instance": inst,
                    }
                    if initial is not None:
                        field.initial = initial.get(name, [])
                if isinstance(field, GenericForeignKeyField):
                    modelfield = model._meta.get_field(name)
                    if hasattr(modelfield, "lazy_choices"):
                        field.choices = GenericForeignKeyField.objects_to_choices(
                            modelfield.lazy_choices(modelfield, request, inst)
                        )
                    init = getattr(inst, modelfield.name, None)
                    if init:
                        formsetinitial[name] = GenericForeignKeyField.object_to_choice(
                            init
                        )[0]
            if initial is not None:
                formsetinitial.update(initial)
            super().__init__(
                data=data,
                files=files,
                initial=formsetinitial,
                **kwargs,
            )

        def save(self, *args, **kwargs):
            with transaction.atomic():
                kwargs["commit"] = False
                forminstance = super().save(*args, **kwargs)
                # GenericForeignKey might need a re-save because we set the value
                for fieldname, field in self.fields.items():
                    if isinstance(field, GenericForeignKeyField):
                        setattr(forminstance, fieldname, self.cleaned_data[fieldname])
                    elif isinstance(field, FormsetField) and issubclass(
                        field.formsetclass, BaseGenericInlineFormSet
                    ):
                        self.cleaned_data.pop(fieldname).save()
                forminstance.save()
                self.save_m2m()

                for fieldname, field in self.fields.items():
                    if isinstance(field, FormsetField) and not issubclass(
                        field.formsetclass, BaseGenericInlineFormSet
                    ):
                        self.cleaned_data[fieldname].instance = forminstance
                        self.cleaned_data[fieldname].save()
                        if self.cleaned_data[fieldname].can_order:
                            order = [
                                f.instance.pk
                                for f in self.cleaned_data[fieldname].ordered_forms
                            ]
                            getattr(
                                forminstance,
                                f"set_{self.cleaned_data[fieldname].model._meta.model_name}_order",
                            )(order)
                forminstance.save()  # call save a second time to make related objects available in save method
            return forminstance

    # GenericForeignKey and one-to-n fields need to be added separately to the form
    attribs = {}
    for formfieldelement in formfieldelements:
        modelfield = None
        try:
            modelfield = 
model._meta.get_field(formfieldelement.fieldname)
        except FieldDoesNotExist:
            continue
        if isinstance(modelfield, GenericForeignKey):
            attribs[modelfield.name] = GenericForeignKeyField(
                required=not model._meta.get_field(modelfield.fk_field).blank
            )
        elif isinstance(formfieldelement, _layout.forms.Formset):
            attribs[modelfield.name] = FormsetField(
                _generate_formset_class(
                    request=request,
                    model=model,
                    modelfield=modelfield,
                    baseinlineformclass=baseinlineformclass.get(
                        modelfield.name, forms.models.ModelForm
                    ),
                    formsetfieldelement=formfieldelement,
                    instance=instance,
                    cache_querysets=cache_querysets,
                ),
                instance,
                formfieldelement.formsetinitial,
            )
    patched_formclass = type(f"{model.__name__}ModelForm", (ModelFormBase,), attribs)
    modelfields = {f.name for f in model._meta.get_fields()}
    ret = forms.modelform_factory(
        model,
        form=patched_formclass,
        fields=[
            f.fieldname
            for f in formfieldelements
            if isinstance(f, _layout.forms.fields.FormFieldMarker)
            and f.fieldname in modelfields
        ],
        formfield_callback=lambda field: _formfield_callback_with_request(
            field, request, model, instance, cache_querysets
        ),
    )
    return ret


class InlineFormSetWithLimits(forms.BaseInlineFormSet):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def get_queryset(self):
        self._queryset = super().get_queryset()[: self.max_num]
        return self._queryset


class GenericInlineFormSetWithLimits(BaseGenericInlineFormSet):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def get_queryset(self):
        self._queryset = super().get_queryset()[: self.max_num]
        return self._queryset


def _generate_formset_class(
    request,
    model,
    modelfield,
    baseinlineformclass,
    formsetfieldelement,
    instance,
    cache_querysets,
):
    """Returns a FormSet class which handles inline forms correctly."""
    formfieldelements = _get_form_fields_from_layout(
        hg.BaseElement(*formsetfieldelement)
    )  # make sure the _layout.forms.FormsetField is not considered recursively

    formclass = modelform_factory(
        request=request,
        model=modelfield.related_model,
        layout=formfieldelements,
        instance=instance,
        baseformclass=baseinlineformclass,
        cache_querysets=cache_querysets,
    )

    base_formset_kwargs = {
        "fields": [field.fieldname for field in formfieldelements],
        "form": formclass,
        "extra": 0,
        "can_delete": True,
    }
    if modelfield.one_to_one:
        base_formset_kwargs["absolute_max"] = 1
        base_formset_kwargs["min_num"] = 0
        base_formset_kwargs["max_num"] = 1
        base_formset_kwargs["extra"] = 1
    base_formset_kwargs.update(formsetfieldelement.formsetfactory_kwargs)
    if isinstance(modelfield, GenericRelation):
        return generic_inlineformset_factory(
            modelfield.related_model,
            ct_field=modelfield.content_type_field_name,
            fk_field=modelfield.object_id_field_name,
            formset=GenericInlineFormSetWithLimits,
            # BUG: instance for the callback is the parent instance, not the instance of the inline object
            formfield_callback=lambda field: _formfield_callback_with_request(
                field, request, modelfield.related_model, instance, cache_querysets
            ),
            **base_formset_kwargs,
        )
    else:
        return forms.models.inlineformset_factory(
            model,
            modelfield.related_model,
            formset=InlineFormSetWithLimits,
            # BUG: instance for the callback is the parent instance, not the instance of the inline object
            formfield_callback=lambda field: _formfield_callback_with_request(
                field, request, model, instance, cache_querysets
            ),
            fk_name=modelfield.field.name,
            **base_formset_kwargs,
        )


def _formfield_callback_with_request(field, request, model, instance, cache_querysets):
    kwargs = getattr(field, "formfield_kwargs", {})
    choices = None
    if 
hasattr(field, "lazy_choices"): choices = field.lazy_choices(field, request, instance) if not (choices is None or isinstance(choices, models.QuerySet)): field.choices = choices if hasattr(field, "lazy_initial"): kwargs["initial"] = field.lazy_initial(field, request, instance) ret = field.formfield(**kwargs) if isinstance(choices, models.QuerySet): ret.queryset = choices # apply permissions for querysets and chache the result if hasattr(ret, "queryset"): ret.queryset = get_objects_for_user( request.user, permissionname(ret.queryset.model, "view"), ret.queryset, with_superuser=True, ) if cache_querysets: if not hasattr(request, "formfield_cache"): request.formfield_cache = {} cache_key = f"{field}-query-cache" if cache_key not in request.formfield_cache: forms.models.apply_limit_choices_to_to_formfield(ret) request.formfield_cache[cache_key] = [*ret.choices] ret.choices = request.formfield_cache[cache_key] return ret def _get_form_fields_from_layout(layout): INTERNAL_FIELDS = [DELETION_FIELD_NAME, ORDERING_FIELD_NAME] def walk(element): # do not descend into formsets, they need to be gathered separately if isinstance(element, _layout.forms.Formset): yield element return # do not descend into script tags because we keep formset-empty form templates there if isinstance(element, hg.SCRIPT): return if ( isinstance(element, _layout.forms.fields.FormFieldMarker) and element.fieldname not in INTERNAL_FIELDS ): yield element for e in element: if isinstance(e, hg.BaseElement): yield from walk(e) return list(walk(layout))
PypiClean
/django-mssql-backend-2.8.1.tar.gz/django-mssql-backend-2.8.1/sql_server/pyodbc/schema.py
import binascii import datetime from django.db.backends.base.schema import ( BaseDatabaseSchemaEditor, _is_relevant_relation, _related_non_m2m_objects, logger, ) from django.db.backends.ddl_references import ( Columns, IndexName, Statement as DjStatement, Table, ) from django.db.models import Index from django.db.models.fields import AutoField, BigAutoField from django.db.transaction import TransactionManagementError from django.utils.encoding import force_str class Statement(DjStatement): def __hash__(self): return hash((self.template, str(self.parts['name']))) def __eq__(self, other): return self.template == other.template and str(self.parts['name']) == str(other.parts['name']) class DatabaseSchemaEditor(BaseDatabaseSchemaEditor): _sql_check_constraint = " CONSTRAINT %(name)s CHECK (%(check)s)" _sql_select_default_constraint_name = "SELECT" \ " d.name " \ "FROM sys.default_constraints d " \ "INNER JOIN sys.tables t ON" \ " d.parent_object_id = t.object_id " \ "INNER JOIN sys.columns c ON" \ " d.parent_object_id = c.object_id AND" \ " d.parent_column_id = c.column_id " \ "INNER JOIN sys.schemas s ON" \ " t.schema_id = s.schema_id " \ "WHERE" \ " t.name = %(table)s AND" \ " c.name = %(column)s" _sql_select_foreign_key_constraints = "SELECT" \ " po.name AS table_name," \ " co.name AS constraint_name " \ "FROM sys.foreign_key_columns fkc " \ "INNER JOIN sys.objects co ON" \ " fkc.constraint_object_id = co.object_id " \ "INNER JOIN sys.tables po ON" \ " fkc.parent_object_id = po.object_id " \ "INNER JOIN sys.tables ro ON" \ " fkc.referenced_object_id = ro.object_id " \ "WHERE ro.name = %(table)s" sql_alter_column_default = "ADD DEFAULT %(default)s FOR %(column)s" sql_alter_column_no_default = "DROP CONSTRAINT %(column)s" sql_alter_column_not_null = "ALTER COLUMN %(column)s %(type)s NOT NULL" sql_alter_column_null = "ALTER COLUMN %(column)s %(type)s NULL" sql_alter_column_type = "ALTER COLUMN %(column)s %(type)s" sql_create_column = "ALTER TABLE %(table)s ADD %(column)s %(definition)s" sql_delete_column = "ALTER TABLE %(table)s DROP COLUMN %(column)s" sql_delete_index = "DROP INDEX %(name)s ON %(table)s" sql_delete_table = "DROP TABLE %(table)s" sql_rename_column = "EXEC sp_rename '%(table)s.%(old_column)s', %(new_column)s, 'COLUMN'" sql_rename_table = "EXEC sp_rename %(old_table)s, %(new_table)s" sql_create_unique_null = "CREATE UNIQUE INDEX %(name)s ON %(table)s(%(columns)s) " \ "WHERE %(columns)s IS NOT NULL" def _alter_column_default_sql(self, model, old_field, new_field, drop=False): """ Hook to specialize column default alteration. Return a (sql, params) fragment to add or drop (depending on the drop argument) a default to new_field's column. """ new_default = self.effective_default(new_field) default = '%s' params = [new_default] column = self.quote_name(new_field.column) if drop: params = [] # SQL Server requires the name of the default constraint result = self.execute( self._sql_select_default_constraint_name % { "table": self.quote_value(model._meta.db_table), "column": self.quote_value(new_field.column), }, has_result=True ) if result: for row in result: column = self.quote_name(next(iter(row))) elif self.connection.features.requires_literal_defaults: # Some databases (Oracle) can't take defaults as a parameter # If this is the case, the SchemaEditor for that database should # implement prepare_default(). 
default = self.prepare_default(new_default) params = [] new_db_params = new_field.db_parameters(connection=self.connection) sql = self.sql_alter_column_no_default if drop else self.sql_alter_column_default return ( sql % { 'column': column, 'type': new_db_params['type'], 'default': default, }, params, ) def _alter_column_null_sql(self, model, old_field, new_field): """ Hook to specialize column null alteration. Return a (sql, params) fragment to set a column to null or non-null as required by new_field, or None if no changes are required. """ if (self.connection.features.interprets_empty_strings_as_nulls and new_field.get_internal_type() in ("CharField", "TextField")): # The field is nullable in the database anyway, leave it alone. return else: new_db_params = new_field.db_parameters(connection=self.connection) sql = self.sql_alter_column_null if new_field.null else self.sql_alter_column_not_null return ( sql % { 'column': self.quote_name(new_field.column), 'type': new_db_params['type'], }, [], ) def _alter_column_type_sql(self, model, old_field, new_field, new_type): new_type = self._set_field_new_type_null_status(old_field, new_type) return super()._alter_column_type_sql(model, old_field, new_field, new_type) def alter_unique_together(self, model, old_unique_together, new_unique_together): """ Deal with a model changing its unique_together. The input unique_togethers must be doubly-nested, not the single-nested ["foo", "bar"] format. """ olds = {tuple(fields) for fields in old_unique_together} news = {tuple(fields) for fields in new_unique_together} # Deleted uniques for fields in olds.difference(news): self._delete_composed_index(model, fields, {'unique': True}, self.sql_delete_index) # Created uniques for fields in news.difference(olds): columns = [model._meta.get_field(field).column for field in fields] condition = ' AND '.join(["[%s] IS NOT NULL" % col for col in columns]) sql = self._create_unique_sql(model, columns, condition=condition) self.execute(sql) def _model_indexes_sql(self, model): """ Return a list of all index SQL statements (field indexes, index_together, Meta.indexes) for the specified model. 
""" if not model._meta.managed or model._meta.proxy or model._meta.swapped: return [] output = [] for field in model._meta.local_fields: output.extend(self._field_indexes_sql(model, field)) for field_names in model._meta.index_together: fields = [model._meta.get_field(field) for field in field_names] output.append(self._create_index_sql(model, fields, suffix="_idx")) for field_names in model._meta.unique_together: columns = [model._meta.get_field(field).column for field in field_names] condition = ' AND '.join(["[%s] IS NOT NULL" % col for col in columns]) sql = self._create_unique_sql(model, columns, condition=condition) output.append(sql) for index in model._meta.indexes: output.append(index.create_sql(model, self)) return output def _alter_many_to_many(self, model, old_field, new_field, strict): """Alter M2Ms to repoint their to= endpoints.""" for idx in self._constraint_names(old_field.remote_field.through, index=True, unique=True): self.execute(self.sql_delete_index % {'name': idx, 'table': old_field.remote_field.through._meta.db_table}) return super()._alter_many_to_many(model, old_field, new_field, strict) def _db_table_constraint_names(self, db_table, column_names=None, unique=None, primary_key=None, index=None, foreign_key=None, check=None, type_=None, exclude=None): """Return all constraint names matching the columns and conditions.""" if column_names is not None: column_names = [ self.connection.introspection.identifier_converter(name) for name in column_names ] with self.connection.cursor() as cursor: constraints = self.connection.introspection.get_constraints(cursor, db_table) result = [] for name, infodict in constraints.items(): if column_names is None or column_names == infodict['columns']: if unique is not None and infodict['unique'] != unique: continue if primary_key is not None and infodict['primary_key'] != primary_key: continue if index is not None and infodict['index'] != index: continue if check is not None and infodict['check'] != check: continue if foreign_key is not None and not infodict['foreign_key']: continue if type_ is not None and infodict['type'] != type_: continue if not exclude or name not in exclude: result.append(name) return result def _db_table_delete_constraint_sql(self, template, db_table, name): return Statement( template, table=Table(db_table, self.quote_name), name=self.quote_name(name), ) def alter_db_table(self, model, old_db_table, new_db_table): index_names = self._db_table_constraint_names(old_db_table, index=True) for index_name in index_names: self.execute(self._db_table_delete_constraint_sql(self.sql_delete_index, old_db_table, index_name)) index_names = self._db_table_constraint_names(new_db_table, index=True) for index_name in index_names: self.execute(self._db_table_delete_constraint_sql(self.sql_delete_index, new_db_table, index_name)) return super().alter_db_table(model, old_db_table, new_db_table) def _alter_field(self, model, old_field, new_field, old_type, new_type, old_db_params, new_db_params, strict=False): """Actually perform a "physical" (non-ManyToMany) field update.""" # the backend doesn't support altering from/to (Big)AutoField # because of the limited capability of SQL Server to edit IDENTITY property for t in (AutoField, BigAutoField): if isinstance(old_field, t) or isinstance(new_field, t): raise NotImplementedError("the backend doesn't support altering from/to %s." 
% t.__name__) # Drop any FK constraints, we'll remake them later fks_dropped = set() if old_field.remote_field and old_field.db_constraint: # Drop index, SQL Server requires explicit deletion if not hasattr(new_field, 'db_constraint') or not new_field.db_constraint: index_names = self._constraint_names(model, [old_field.column], index=True) for index_name in index_names: self.execute(self._delete_constraint_sql(self.sql_delete_index, model, index_name)) fk_names = self._constraint_names(model, [old_field.column], foreign_key=True) if strict and len(fk_names) != 1: raise ValueError("Found wrong number (%s) of foreign key constraints for %s.%s" % ( len(fk_names), model._meta.db_table, old_field.column, )) for fk_name in fk_names: fks_dropped.add((old_field.column,)) self.execute(self._delete_constraint_sql(self.sql_delete_fk, model, fk_name)) # Has unique been removed? if old_field.unique and (not new_field.unique or self._field_became_primary_key(old_field, new_field)): # Find the unique constraint for this field constraint_names = self._constraint_names(model, [old_field.column], unique=True, primary_key=False) if strict and len(constraint_names) != 1: raise ValueError("Found wrong number (%s) of unique constraints for %s.%s" % ( len(constraint_names), model._meta.db_table, old_field.column, )) for constraint_name in constraint_names: self.execute(self._delete_constraint_sql(self.sql_delete_unique, model, constraint_name)) # Drop incoming FK constraints if the field is a primary key or unique, # which might be a to_field target, and things are going to change. drop_foreign_keys = ( ( (old_field.primary_key and new_field.primary_key) or (old_field.unique and new_field.unique) ) and old_type != new_type ) if drop_foreign_keys: # '_meta.related_field' also contains M2M reverse fields, these # will be filtered out for _old_rel, new_rel in _related_non_m2m_objects(old_field, new_field): rel_fk_names = self._constraint_names( new_rel.related_model, [new_rel.field.column], foreign_key=True ) for fk_name in rel_fk_names: self.execute(self._delete_constraint_sql(self.sql_delete_fk, new_rel.related_model, fk_name)) # Removed an index? (no strict check, as multiple indexes are possible) # Remove indexes if db_index switched to False or a unique constraint # will now be used in lieu of an index. The following lines from the # truth table show all True cases; the rest are False: # # old_field.db_index | old_field.unique | new_field.db_index | new_field.unique # ------------------------------------------------------------------------------ # True | False | False | False # True | False | False | True # True | False | True | True if (old_field.db_index and not old_field.unique and (not new_field.db_index or new_field.unique)) or ( # Drop indexes on nvarchar columns that are changing to a different type # SQL Server requires explicit deletion (old_field.db_index or old_field.unique) and ( (old_type.startswith('nvarchar') and not new_type.startswith('nvarchar')) )): # Find the index for this field meta_index_names = {index.name for index in model._meta.indexes} # Retrieve only BTREE indexes since this is what's created with # db_index=True. index_names = self._constraint_names(model, [old_field.column], index=True, type_=Index.suffix) for index_name in index_names: if index_name not in meta_index_names: # The only way to check if an index was created with # db_index=True or with Index(['field'], name='foo') # is to look at its name (refs #28053). 
                    self.execute(self._delete_constraint_sql(self.sql_delete_index, model, index_name))
        # Change check constraints?
        if (old_db_params['check'] != new_db_params['check'] and old_db_params['check']) or (
            # SQL Server requires explicit deletion before altering column type with the same constraint
            old_db_params['check'] == new_db_params['check'] and old_db_params['check'] and
            old_db_params['type'] != new_db_params['type']
        ):
            constraint_names = self._constraint_names(model, [old_field.column], check=True)
            if strict and len(constraint_names) != 1:
                raise ValueError("Found wrong number (%s) of check constraints for %s.%s" % (
                    len(constraint_names),
                    model._meta.db_table,
                    old_field.column,
                ))
            for constraint_name in constraint_names:
                self.execute(self._delete_constraint_sql(self.sql_delete_check, model, constraint_name))
        # Have they renamed the column?
        if old_field.column != new_field.column:
            # remove old indices
            self._delete_indexes(model, old_field, new_field)
            self.execute(self._rename_field_sql(model._meta.db_table, old_field, new_field, new_type))
            # Rename all references to the renamed column.
            for sql in self.deferred_sql:
                if isinstance(sql, DjStatement):
                    sql.rename_column_references(model._meta.db_table, old_field.column, new_field.column)
        # Next, start accumulating actions to do
        actions = []
        null_actions = []
        post_actions = []
        # Type change?
        if old_type != new_type:
            fragment, other_actions = self._alter_column_type_sql(model, old_field, new_field, new_type)
            actions.append(fragment)
            post_actions.extend(other_actions)
            # Drop unique constraint, SQL Server requires explicit deletion
            self._delete_unique_constraints(model, old_field, new_field, strict)
            # Drop indexes, SQL Server requires explicit deletion
            self._delete_indexes(model, old_field, new_field)
        # When changing a column NULL constraint to NOT NULL with a given
        # default value, we need to perform 4 steps:
        #  1. Add a default for new incoming writes
        #  2. Update existing NULL rows with new default
        #  3. Replace NULL constraint with NOT NULL
        #  4. Drop the default again.
        # Default change?
        old_default = self.effective_default(old_field)
        new_default = self.effective_default(new_field)
        needs_database_default = (
            old_field.null and
            not new_field.null and
            old_default != new_default and
            new_default is not None and
            not self.skip_default(new_field)
        )
        if needs_database_default:
            actions.append(self._alter_column_default_sql(model, old_field, new_field))
        # Nullability change?
        if old_field.null != new_field.null:
            fragment = self._alter_column_null_sql(model, old_field, new_field)
            if fragment:
                null_actions.append(fragment)
            if not new_field.null:
                # Drop unique constraint, SQL Server requires explicit deletion
                self._delete_unique_constraints(model, old_field, new_field, strict)
                # Drop indexes, SQL Server requires explicit deletion
                self._delete_indexes(model, old_field, new_field)
        # Only if we have a default and there is a change from NULL to NOT NULL
        four_way_default_alteration = (
            new_field.has_default() and
            (old_field.null and not new_field.null)
        )
        if actions or null_actions:
            if not four_way_default_alteration:
                # If we don't have to do a 4-way default alteration we can
                # directly run a (NOT) NULL alteration
                actions = actions + null_actions
            # Combine actions together if we can (e.g.
postgres) if self.connection.features.supports_combined_alters and actions: sql, params = tuple(zip(*actions)) actions = [(", ".join(sql), sum(params, []))] # Apply those actions for sql, params in actions: self._delete_indexes(model, old_field, new_field) self.execute( self.sql_alter_column % { "table": self.quote_name(model._meta.db_table), "changes": sql, }, params, ) if four_way_default_alteration: # Update existing rows with default value self.execute( self.sql_update_with_default % { "table": self.quote_name(model._meta.db_table), "column": self.quote_name(new_field.column), "default": "%s", }, [new_default], ) # Since we didn't run a NOT NULL change before we need to do it # now for sql, params in null_actions: self.execute( self.sql_alter_column % { "table": self.quote_name(model._meta.db_table), "changes": sql, }, params, ) if post_actions: for sql, params in post_actions: self.execute(sql, params) # If primary_key changed to False, delete the primary key constraint. if old_field.primary_key and not new_field.primary_key: self._delete_primary_key(model, strict) # Added a unique? if self._unique_should_be_added(old_field, new_field): if (self.connection.features.supports_nullable_unique_constraints and not new_field.many_to_many and new_field.null): self.execute( self._create_index_sql( model, [new_field], sql=self.sql_create_unique_null, suffix="_uniq" ) ) else: self.execute(self._create_unique_sql(model, [new_field.column])) # Added an index? # constraint will no longer be used in lieu of an index. The following # lines from the truth table show all True cases; the rest are False: # # old_field.db_index | old_field.unique | new_field.db_index | new_field.unique # ------------------------------------------------------------------------------ # False | False | True | False # False | True | True | False # True | True | True | False if (not old_field.db_index or old_field.unique) and new_field.db_index and not new_field.unique: self.execute(self._create_index_sql(model, [new_field])) # Restore indexes & unique constraints deleted above, SQL Server requires explicit restoration if (old_type != new_type or (old_field.null and not new_field.null)) and ( old_field.column == new_field.column ): # Restore unique constraints # Note: if nullable they are implemented via an explicit filtered UNIQUE INDEX (not CONSTRAINT) # in order to get ANSI-compliant NULL behaviour (i.e. NULL != NULL, multiple are allowed) if old_field.unique and new_field.unique: if new_field.null: self.execute( self._create_index_sql( model, [old_field], sql=self.sql_create_unique_null, suffix="_uniq" ) ) else: self.execute(self._create_unique_sql(model, columns=[old_field.column])) else: for fields in model._meta.unique_together: columns = [model._meta.get_field(field).column for field in fields] if old_field.column in columns: condition = ' AND '.join(["[%s] IS NOT NULL" % col for col in columns]) self.execute(self._create_unique_sql(model, columns, condition=condition)) # Restore indexes index_columns = [] if old_field.db_index and new_field.db_index: index_columns.append([old_field]) else: for fields in model._meta.index_together: columns = [model._meta.get_field(field) for field in fields] if old_field.column in [c.column for c in columns]: index_columns.append(columns) if index_columns: for columns in index_columns: self.execute(self._create_index_sql(model, columns, suffix='_idx')) # Type alteration on primary key? Then we need to alter the column # referring to us. 
rels_to_update = [] if old_field.primary_key and new_field.primary_key and old_type != new_type: rels_to_update.extend(_related_non_m2m_objects(old_field, new_field)) # Changed to become primary key? if self._field_became_primary_key(old_field, new_field): # Make the new one self.execute( self.sql_create_pk % { "table": self.quote_name(model._meta.db_table), "name": self.quote_name( self._create_index_name(model._meta.db_table, [new_field.column], suffix="_pk") ), "columns": self.quote_name(new_field.column), } ) # Update all referencing columns rels_to_update.extend(_related_non_m2m_objects(old_field, new_field)) # Handle our type alters on the other end of rels from the PK stuff above for old_rel, new_rel in rels_to_update: rel_db_params = new_rel.field.db_parameters(connection=self.connection) rel_type = rel_db_params['type'] fragment, other_actions = self._alter_column_type_sql( new_rel.related_model, old_rel.field, new_rel.field, rel_type ) self.execute( self.sql_alter_column % { "table": self.quote_name(new_rel.related_model._meta.db_table), "changes": fragment[0], }, fragment[1], ) for sql, params in other_actions: self.execute(sql, params) # Does it have a foreign key? if (new_field.remote_field and (fks_dropped or not old_field.remote_field or not old_field.db_constraint) and new_field.db_constraint): self.execute(self._create_fk_sql(model, new_field, "_fk_%(to_table)s_%(to_column)s")) # Rebuild FKs that pointed to us if we previously had to drop them if drop_foreign_keys: for rel in new_field.model._meta.related_objects: if _is_relevant_relation(rel, new_field) and rel.field.db_constraint: self.execute(self._create_fk_sql(rel.related_model, rel.field, "_fk")) # Does it have check constraints we need to add? if (old_db_params['check'] != new_db_params['check'] and new_db_params['check']) or ( # SQL Server requires explicit creation after altering column type with the same constraint old_db_params['check'] == new_db_params['check'] and new_db_params['check'] and old_db_params['type'] != new_db_params['type'] ): self.execute( self.sql_create_check % { "table": self.quote_name(model._meta.db_table), "name": self.quote_name( self._create_index_name(model._meta.db_table, [new_field.column], suffix="_check") ), "column": self.quote_name(new_field.column), "check": new_db_params['check'], } ) # Drop the default if we need to # (Django usually does not use in-database defaults) if needs_database_default: changes_sql, params = self._alter_column_default_sql(model, old_field, new_field, drop=True) sql = self.sql_alter_column % { "table": self.quote_name(model._meta.db_table), "changes": changes_sql, } self.execute(sql, params) # Reset connection if required if self.connection.features.connection_persists_old_columns: self.connection.close() def _delete_indexes(self, model, old_field, new_field): index_columns = [] if old_field.db_index and new_field.db_index: index_columns.append([old_field.column]) for fields in model._meta.index_together: columns = [model._meta.get_field(field).column for field in fields] if old_field.column in columns: index_columns.append(columns) for fields in model._meta.unique_together: columns = [model._meta.get_field(field).column for field in fields] if old_field.column in columns: index_columns.append(columns) if index_columns: for columns in index_columns: index_names = self._constraint_names(model, columns, index=True) for index_name in index_names: self.execute(self._delete_constraint_sql(self.sql_delete_index, model, index_name)) def 
_delete_unique_constraints(self, model, old_field, new_field, strict=False): unique_columns = [] if old_field.unique and new_field.unique: unique_columns.append([old_field.column]) if unique_columns: for columns in unique_columns: constraint_names_normal = self._constraint_names(model, columns, unique=True, index=False) constraint_names_index = self._constraint_names(model, columns, unique=True, index=True) constraint_names = constraint_names_normal + constraint_names_index if strict and len(constraint_names) != 1: raise ValueError("Found wrong number (%s) of unique constraints for %s.%s" % ( len(constraint_names), model._meta.db_table, old_field.column, )) for constraint_name in constraint_names_normal: self.execute(self._delete_constraint_sql(self.sql_delete_unique, model, constraint_name)) # Unique indexes which are not table constraints must be deleted using the appropriate SQL. # These may exist for example to enforce ANSI-compliant unique constraints on nullable columns. for index_name in constraint_names_index: self.execute(self._delete_constraint_sql(self.sql_delete_index, model, index_name)) def _rename_field_sql(self, table, old_field, new_field, new_type): new_type = self._set_field_new_type_null_status(old_field, new_type) return super()._rename_field_sql(table, old_field, new_field, new_type) def _set_field_new_type_null_status(self, field, new_type): """ Keep the null property of the old field. If it has changed, it will be handled separately. """ if field.null: new_type += " NULL" else: new_type += " NOT NULL" return new_type def add_field(self, model, field): """ Create a field on a model. Usually involves adding a column, but may involve adding a table instead (for M2M fields). """ # Special-case implicit M2M tables if field.many_to_many and field.remote_field.through._meta.auto_created: return self.create_model(field.remote_field.through) # Get the column's definition definition, params = self.column_sql(model, field, include_default=True) # It might not actually have a column behind it if definition is None: return if (self.connection.features.supports_nullable_unique_constraints and not field.many_to_many and field.null and field.unique): definition = definition.replace(' UNIQUE', '') self.deferred_sql.append(self._create_index_sql( model, [field], sql=self.sql_create_unique_null, suffix="_uniq" )) # Check constraints can go on the column SQL here db_params = field.db_parameters(connection=self.connection) if db_params['check']: definition += " CHECK (%s)" % db_params['check'] # Build the SQL and run it sql = self.sql_create_column % { "table": self.quote_name(model._meta.db_table), "column": self.quote_name(field.column), "definition": definition, } self.execute(sql, params) # Drop the default if we need to # (Django usually does not use in-database defaults) if not self.skip_default(field) and self.effective_default(field) is not None: changes_sql, params = self._alter_column_default_sql(model, None, field, drop=True) sql = self.sql_alter_column % { "table": self.quote_name(model._meta.db_table), "changes": changes_sql, } self.execute(sql, params) # Add an index, if required self.deferred_sql.extend(self._field_indexes_sql(model, field)) # Add any FK constraints later if field.remote_field and self.connection.features.supports_foreign_keys and field.db_constraint: self.deferred_sql.append(self._create_fk_sql(model, field, "_fk_%(to_table)s_%(to_column)s")) # Reset connection if required if self.connection.features.connection_persists_old_columns: 
self.connection.close() def _create_unique_sql(self, model, columns, name=None, condition=None): def create_unique_name(*args, **kwargs): return self.quote_name(self._create_index_name(*args, **kwargs)) table = Table(model._meta.db_table, self.quote_name) if name is None: name = IndexName(model._meta.db_table, columns, '_uniq', create_unique_name) else: name = self.quote_name(name) columns = Columns(table, columns, self.quote_name) if condition: return Statement( self.sql_create_unique_index, table=table, name=name, columns=columns, condition=' WHERE ' + condition, ) if self.connection.features.supports_partial_indexes else None else: return Statement( self.sql_create_unique, table=table, name=name, columns=columns, ) def _create_index_sql(self, model, fields, *, name=None, suffix='', using='', db_tablespace=None, col_suffixes=(), sql=None, opclasses=(), condition=None): """ Return the SQL statement to create the index for one or several fields. `sql` can be specified if the syntax differs from the standard (GIS indexes, ...). """ tablespace_sql = self._get_index_tablespace_sql(model, fields, db_tablespace=db_tablespace) columns = [field.column for field in fields] sql_create_index = sql or self.sql_create_index table = model._meta.db_table def create_index_name(*args, **kwargs): nonlocal name if name is None: name = self._create_index_name(*args, **kwargs) return self.quote_name(name) return Statement( sql_create_index, table=Table(table, self.quote_name), name=IndexName(table, columns, suffix, create_index_name), using=using, columns=self._index_columns(table, columns, col_suffixes, opclasses), extra=tablespace_sql, condition=(' WHERE ' + condition) if condition else '', ) def create_model(self, model): """ Takes a model and creates a table for it in the database. Will also create any accompanying indexes or unique constraints. 
""" # Create column SQL, add FK deferreds if needed column_sqls = [] params = [] for field in model._meta.local_fields: # SQL definition, extra_params = self.column_sql(model, field) if definition is None: continue if (self.connection.features.supports_nullable_unique_constraints and not field.many_to_many and field.null and field.unique): definition = definition.replace(' UNIQUE', '') self.deferred_sql.append(self._create_index_sql( model, [field], sql=self.sql_create_unique_null, suffix="_uniq" )) # Check constraints can go on the column SQL here db_params = field.db_parameters(connection=self.connection) if db_params['check']: # SQL Server requires a name for the check constraint definition += self._sql_check_constraint % { "name": self._create_index_name(model._meta.db_table, [field.column], suffix="_check"), "check": db_params['check'] } # Autoincrement SQL (for backends with inline variant) col_type_suffix = field.db_type_suffix(connection=self.connection) if col_type_suffix: definition += " %s" % col_type_suffix params.extend(extra_params) # FK if field.remote_field and field.db_constraint: to_table = field.remote_field.model._meta.db_table to_column = field.remote_field.model._meta.get_field(field.remote_field.field_name).column if self.sql_create_inline_fk: definition += " " + self.sql_create_inline_fk % { "to_table": self.quote_name(to_table), "to_column": self.quote_name(to_column), } elif self.connection.features.supports_foreign_keys: self.deferred_sql.append(self._create_fk_sql(model, field, "_fk_%(to_table)s_%(to_column)s")) # Add the SQL to our big list column_sqls.append("%s %s" % ( self.quote_name(field.column), definition, )) # Autoincrement SQL (for backends with post table definition variant) if field.get_internal_type() in ("AutoField", "BigAutoField"): autoinc_sql = self.connection.ops.autoinc_sql(model._meta.db_table, field.column) if autoinc_sql: self.deferred_sql.extend(autoinc_sql) # Add any unique_togethers (always deferred, as some fields might be # created afterwards, like geometry fields with some backends) for fields in model._meta.unique_together: columns = [model._meta.get_field(field).column for field in fields] condition = ' AND '.join(["[%s] IS NOT NULL" % col for col in columns]) self.deferred_sql.append(self._create_unique_sql(model, columns, condition=condition)) # Make the table sql = self.sql_create_table % { "table": self.quote_name(model._meta.db_table), "definition": ", ".join(column_sqls) } if model._meta.db_tablespace: tablespace_sql = self.connection.ops.tablespace_sql(model._meta.db_tablespace) if tablespace_sql: sql += ' ' + tablespace_sql # Prevent using [] as params, in the case a literal '%' is used in the definition self.execute(sql, params or None) # Add any field index and index_together's (deferred as SQLite3 _remake_table needs it) self.deferred_sql.extend(self._model_indexes_sql(model)) self.deferred_sql = list(set(self.deferred_sql)) # Make M2M tables for field in model._meta.local_many_to_many: if field.remote_field.through._meta.auto_created: self.create_model(field.remote_field.through) def delete_model(self, model): """ Deletes a model from the database. 
""" # Delete the foreign key constraints result = self.execute( self._sql_select_foreign_key_constraints % { "table": self.quote_value(model._meta.db_table), }, has_result=True ) if result: for table, constraint in result: sql = self.sql_alter_column % { "table": self.quote_name(table), "changes": self.sql_alter_column_no_default % { "column": self.quote_name(constraint), } } self.execute(sql) # Delete the table super().delete_model(model) # Remove all deferred statements referencing the deleted table. for sql in list(self.deferred_sql): if isinstance(sql, Statement) and sql.references_table(model._meta.db_table): self.deferred_sql.remove(sql) def execute(self, sql, params=(), has_result=False): """ Executes the given SQL statement, with optional parameters. """ result = None # Don't perform the transactional DDL check if SQL is being collected # as it's not going to be executed anyway. if not self.collect_sql and self.connection.in_atomic_block and not self.connection.features.can_rollback_ddl: raise TransactionManagementError( "Executing DDL statements while in a transaction on databases " "that can't perform a rollback is prohibited." ) # Account for non-string statement objects. sql = str(sql) # Log the command we're running, then run it logger.debug("%s; (params %r)", sql, params, extra={'params': params, 'sql': sql}) if self.collect_sql: ending = "" if sql.endswith(";") else ";" if params is not None: self.collected_sql.append((sql % tuple(map(self.quote_value, params))) + ending) else: self.collected_sql.append(sql + ending) else: cursor = self.connection.cursor() cursor.execute(sql, params) if has_result: result = cursor.fetchall() # the cursor can be closed only when the driver supports opening # multiple cursors on a connection because the migration command # has already opened a cursor outside this method if self.connection.supports_mars: cursor.close() return result def prepare_default(self, value): return self.quote_value(value) def quote_value(self, value): """ Returns a quoted version of the value so it's safe to use in an SQL string. This is not safe against injection from user code; it is intended only for use in making SQL scripts or preparing default values for particularly tricky backends (defaults are not user-defined, though, so this is safe). """ if isinstance(value, (datetime.datetime, datetime.date, datetime.time)): return "'%s'" % value elif isinstance(value, str): return "'%s'" % value.replace("'", "''") elif isinstance(value, (bytes, bytearray, memoryview)): return "0x%s" % force_str(binascii.hexlify(value)) elif isinstance(value, bool): return "1" if value else "0" else: return str(value) def remove_field(self, model, field): """ Removes a field from a model. Usually involves deleting a column, but for M2Ms may involve deleting a table. 
""" # Special-case implicit M2M tables if field.many_to_many and field.remote_field.through._meta.auto_created: return self.delete_model(field.remote_field.through) # It might not actually have a column behind it if field.db_parameters(connection=self.connection)['type'] is None: return # Drop any FK constraints, SQL Server requires explicit deletion with self.connection.cursor() as cursor: constraints = self.connection.introspection.get_constraints(cursor, model._meta.db_table) for name, infodict in constraints.items(): if field.column in infodict['columns'] and infodict['foreign_key']: self.execute(self._delete_constraint_sql(self.sql_delete_fk, model, name)) # Drop any indexes, SQL Server requires explicit deletion for name, infodict in constraints.items(): if field.column in infodict['columns'] and infodict['index']: self.execute(self.sql_delete_index % { "table": self.quote_name(model._meta.db_table), "name": self.quote_name(name), }) # Drop primary key constraint, SQL Server requires explicit deletion for name, infodict in constraints.items(): if field.column in infodict['columns'] and infodict['primary_key']: self.execute(self.sql_delete_pk % { "table": self.quote_name(model._meta.db_table), "name": self.quote_name(name), }) # Drop check constraints, SQL Server requires explicit deletion for name, infodict in constraints.items(): if field.column in infodict['columns'] and infodict['check']: self.execute(self.sql_delete_check % { "table": self.quote_name(model._meta.db_table), "name": self.quote_name(name), }) # Drop unique constraints, SQL Server requires explicit deletion for name, infodict in constraints.items(): if (field.column in infodict['columns'] and infodict['unique'] and not infodict['primary_key'] and not infodict['index']): self.execute(self.sql_delete_unique % { "table": self.quote_name(model._meta.db_table), "name": self.quote_name(name), }) # Delete the column sql = self.sql_delete_column % { "table": self.quote_name(model._meta.db_table), "column": self.quote_name(field.column), } self.execute(sql) # Reset connection if required if self.connection.features.connection_persists_old_columns: self.connection.close() # Remove all deferred statements referencing the deleted column. for sql in list(self.deferred_sql): if isinstance(sql, Statement) and sql.references_column(model._meta.db_table, field.column): self.deferred_sql.remove(sql)
PypiClean
/bitbank-client-0.1.1.tar.gz/bitbank-client-0.1.1/bitbank_client/sync.py
import hashlib
import requests
import time
import hmac
import json
import urllib.parse


class Client(object):
    def __init__(self, **kwargs):
        self.public_origin = kwargs.get('public_origin', 'https://public.bitbank.cc')
        self.private_origin = kwargs.get('private_origin', 'https://api.bitbank.cc')
        self.public_key = kwargs.get('public_key', None)
        if self.public_key is None:
            raise Exception('api key is absent.')
        self.private_key = kwargs.get('private_key', None)
        if self.private_key is None:
            raise Exception('secret key is absent.')
        self.timeout = kwargs.get('timeout', None)

    def _requests_public(self, path):
        uri = self.public_origin + path
        res = requests.get(uri, timeout=self.timeout)
        return res

    def _requests_private(self, path, method='GET', params=None):
        nonce = str(time.time())
        if method == 'GET':
            # GET requests are signed over the request path including the query
            # string, so build the query once and reuse it for both the
            # signature and the URI (previously the '?' separator was missing
            # and the params were appended a second time by requests).
            query = urllib.parse.urlencode(params) if params else ''
            path_with_query = path + ('?' + query if query else '')
            text = nonce + path_with_query
            uri = '{0}{1}'.format(self.private_origin, path_with_query)
            body = None
        else:  # method == 'POST'
            # POST requests are signed over the JSON body, so the exact string
            # that is signed must also be the string that is sent.
            body = json.dumps(params)
            text = nonce + body
            uri = '{0}{1}'.format(self.private_origin, path)
        headers = {
            'ACCESS-KEY': self.public_key,
            'ACCESS-NONCE': nonce,
            'ACCESS-SIGNATURE': self._signature(text),
            'Content-Type': 'application/json'
        }
        if method == 'GET':
            res = requests.get(uri, headers=headers, timeout=self.timeout)
        else:  # method == 'POST'
            res = requests.post(uri, headers=headers, timeout=self.timeout, data=body)
        return res

    def _signature(self, params):
        sign = hmac.new(self.private_key.encode('utf-8'), params.encode('utf-8'), hashlib.sha256).hexdigest()
        return sign

    def get_ticker(self, **kwargs):
        params = kwargs
        path = '/{0}/ticker'.format(params['pair'])
        data = self._requests_public(path)
        return data

    def get_depth(self, **kwargs):
        params = kwargs
        path = '/{0}/depth'.format(params['pair'])
        data = self._requests_public(path)
        return data

    def get_transactions(self, **kwargs):
        params = kwargs
        path = '/{0}/transactions'.format(params['pair'])
        yyyymmdd = params.get('yyyymmdd', None)
        if yyyymmdd is not None:
            path += '/{0}'.format(params['yyyymmdd'])
        data = self._requests_public(path)
        return data

    def get_candlestick(self, **kwargs):
        params = kwargs
        path = '/{0}/candlestick/{1}/{2}'.format(params['pair'], params['candle_type'], params['yyyymmdd'])
        data = self._requests_public(path)
        return data

    def get_assets(self, **kwargs):
        params = kwargs
        path = '/v1/user/assets'
        data = self._requests_private(path, params=params)
        return data

    def get_order(self, **kwargs):
        params = kwargs
        path = '/v1/user/spot/order'
        data = self._requests_private(path, params=params)
        return data

    def order(self, **kwargs):
        params = kwargs
        path = '/v1/user/spot/order'
        data = self._requests_private(path, method='POST', params=params)
        return data

    def cancel_order(self, **kwargs):
        params = kwargs
        path = '/v1/user/spot/cancel_order'
        data = self._requests_private(path, method='POST', params=params)
        return data

    def cancel_orders(self, **kwargs):
        params = kwargs
        path = '/v1/user/spot/cancel_orders'
        data = self._requests_private(path, method='POST', params=params)
        return data

    def orders_info(self, **kwargs):
        params = kwargs
        path = '/v1/user/spot/orders_info'
        data = self._requests_private(path, method='POST', params=params)
        return data

    def get_active_orders(self, **kwargs):
        params = kwargs
        path = '/v1/user/spot/active_orders'
        data = self._requests_private(path, params=params)
        return data

    def get_trade_history(self, **kwargs):
        params = kwargs
        path = '/v1/user/spot/trade_history'
        data = self._requests_private(path, params=params)
        return data

    def get_withdrawal_account(self, **kwargs):
        params = kwargs
        path = '/v1/user/withdrawal_account'
        data = self._requests_private(path, params=params)
        return data

    def request_withdrawal(self, **kwargs):
        params = kwargs
        path = '/v1/user/request_withdrawal'
        data = self._requests_private(path, method='POST', params=params)
        return data
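# Illustrative only: a minimal usage sketch for the Client above. The API key
# and secret are placeholders issued from the bitbank account settings; the
# pair name follows the public API convention (e.g. btc_jpy).
if __name__ == "__main__":
    client = Client(public_key="YOUR_API_KEY", private_key="YOUR_API_SECRET")
    # Public endpoint, no signature involved.
    print(client.get_ticker(pair="btc_jpy").json())
    # Private endpoint, signed with HMAC-SHA256 by _requests_private().
    print(client.get_assets().json())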
PypiClean
/lightnovel_crawler-3.2.10-py3-none-any.whl/sources/en/k/kolnovelnewsite.py
import logging from lncrawl.core.crawler import Crawler logger = logging.getLogger(__name__) search_url = "https://newsite.kolnovel.com/?s=%s&post_type=wp-manga" chapter_list_url = "https://newsite.kolnovel.com/wp-admin/admin-ajax.php" class kolnovelnewsite(Crawler): base_url = "https://newsite.kolnovel.com/" def search_novel(self, query): query = query.lower().replace(" ", "+") soup = self.get_soup(search_url % query) results = [] for tab in soup.select(".c-tabs-item__content"): a = tab.select_one(".post-title h3 a") latest = tab.select_one(".latest-chap .chapter a").text votes = tab.select_one(".rating .total_votes").text results.append( { "title": a.text.strip(), "url": self.absolute_url(a["href"]), "info": "%s | Rating: %s" % (latest, votes), } ) return results def read_novel_info(self): logger.debug("Visiting %s", self.novel_url) soup = self.get_soup(self.novel_url) possible_title = soup.select_one(".post-title h1") for span in possible_title.select("span"): span.extract() self.novel_title = possible_title.text.strip() logger.info("Novel title: %s", self.novel_title) possible_image = soup.select_one(".summary_image a img") if possible_image: self.novel_cover = self.absolute_url(possible_image["src"]) logger.info("Novel cover: %s", self.novel_cover) self.novel_author = " ".join( [ a.text.strip() for a in soup.select('.author-content a[href*="manga-author"]') ] ) logger.info("%s", self.novel_author) self.novel_id = soup.select_one("#manga-chapters-holder")["data-id"] logger.info("Novel id: %s", self.novel_id) response = self.submit_form( chapter_list_url, data={ "action": "manga_get_chapters", "manga": self.novel_id, }, ) soup = self.make_soup(response) for a in reversed(soup.select(".wp-manga-chapter a")): chap_id = len(self.chapters) + 1 vol_id = 1 + len(self.chapters) // 100 if chap_id % 100 == 1: self.volumes.append({"id": vol_id}) self.chapters.append( { "id": chap_id, "volume": vol_id, "title": a.text.strip(), "url": self.absolute_url(a["href"]), } ) def download_chapter_body(self, chapter): soup = self.get_soup(chapter["url"]) contents = soup.select(".reading-content p") return "".join([str(p) for p in contents])
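# Illustrative only: a rough sketch of driving this source outside the
# lightnovel-crawler CLI. It assumes the lncrawl Crawler base class sets up
# its HTTP session on construction and exposes the initialize() hook; the
# search term is a placeholder.
if __name__ == "__main__":
    crawler = kolnovelnewsite()
    crawler.initialize()
    for hit in crawler.search_novel("martial"):
        print("%s -> %s" % (hit["title"], hit["url"]))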
PypiClean
/convo6-0.1-py3-none-any.whl/convo/utils/common.py
import logging import os import shutil import warnings from types import TracebackType from typing import Any, Callable, Dict, List, Optional, Text, Type import convo.core.utils import convo.utils.io from convo.cli import utils from convo.cli.utils import bcolors from convo.constants import ( DEFAULT_LOG_LEVEL, DEFAULT_LOG_LEVEL_LIBRARIES, ENV_LOG_LEVEL, ENV_LOG_LEVEL_LIBRARIES, GLOBAL_USER_CONFIG_PATH, ) logger = logging.getLogger(__name__) class TempDirectoryPath(str): """Represents a path to an temporary directory. When used as a context manager, it erases the contents of the directory on exit. """ def __enter__(self) -> "TempDirectoryPath": return self def __exit__( self, _exc: Optional[Type[BaseException]], _value: Optional[Exception], _tb: Optional[TracebackType], ) -> bool: if os.path.exists(self): shutil.rmtree(self) def arguments_of(func: Callable) -> List[Text]: """Return the parameters of the function `func` as a list of names.""" import inspect return list(inspect.signature(func).parameters.keys()) def read_global_config() -> Dict[Text, Any]: """Read global Convo configuration.""" # noinspection PyBroadException try: return convo.utils.io.read_config_file(GLOBAL_USER_CONFIG_PATH) except Exception: # if things go south we pretend there is no config return {} def set_log_level(log_level: Optional[int] = None): """Set log level of Convo and Tensorflow either to the provided log level or to the log level specified in the environment variable 'LOG_LEVEL'. If none is set a default log level will be used.""" import logging if not log_level: log_level = os.environ.get(ENV_LOG_LEVEL, DEFAULT_LOG_LEVEL) log_level = logging.getLevelName(log_level) logging.getLogger("convo").setLevel(log_level) update_tensorflow_log_level() update_asyncio_log_level() update_apscheduler_log_level() update_socketio_log_level() os.environ[ENV_LOG_LEVEL] = logging.getLevelName(log_level) def update_apscheduler_log_level() -> None: log_level = os.environ.get(ENV_LOG_LEVEL_LIBRARIES, DEFAULT_LOG_LEVEL_LIBRARIES) apscheduler_loggers = [ "apscheduler", "apscheduler.scheduler", "apscheduler.executors", "apscheduler.executors.default", ] for logger_name in apscheduler_loggers: logging.getLogger(logger_name).setLevel(log_level) logging.getLogger(logger_name).propagate = False def update_socketio_log_level() -> None: log_level = os.environ.get(ENV_LOG_LEVEL_LIBRARIES, DEFAULT_LOG_LEVEL_LIBRARIES) socketio_loggers = ["websockets.protocol", "engineio.server", "socketio.server"] for logger_name in socketio_loggers: logging.getLogger(logger_name).setLevel(log_level) logging.getLogger(logger_name).propagate = False def update_tensorflow_log_level() -> None: """Set the log level of Tensorflow to the log level specified in the environment variable 'LOG_LEVEL_LIBRARIES'.""" # Disables libvinfer, tensorRT, cuda, AVX2 and FMA warnings (CPU support). This variable needs to be set before the # first import since some warnings are raised on the first import. 
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2" import tensorflow as tf log_level = os.environ.get(ENV_LOG_LEVEL_LIBRARIES, DEFAULT_LOG_LEVEL_LIBRARIES) if log_level == "DEBUG": tf_log_level = tf.compat.v1.logging.DEBUG elif log_level == "INFO": tf_log_level = tf.compat.v1.logging.INFO elif log_level == "WARNING": tf_log_level = tf.compat.v1.logging.WARN else: tf_log_level = tf.compat.v1.logging.ERROR tf.compat.v1.logging.set_verbosity(tf_log_level) logging.getLogger("tensorflow").propagate = False def update_sanic_log_level(log_file: Optional[Text] = None): """Set the log level of sanic loggers to the log level specified in the environment variable 'LOG_LEVEL_LIBRARIES'.""" from sanic.log import logger, error_logger, access_logger log_level = os.environ.get(ENV_LOG_LEVEL_LIBRARIES, DEFAULT_LOG_LEVEL_LIBRARIES) logger.setLevel(log_level) error_logger.setLevel(log_level) access_logger.setLevel(log_level) logger.propagate = False error_logger.propagate = False access_logger.propagate = False if log_file is not None: formatter = logging.Formatter("%(asctime)s [%(levelname)-5.5s] %(message)s") file_handler = logging.FileHandler(log_file) file_handler.setFormatter(formatter) logger.addHandler(file_handler) error_logger.addHandler(file_handler) access_logger.addHandler(file_handler) def update_asyncio_log_level() -> None: """Set the log level of asyncio to the log level specified in the environment variable 'LOG_LEVEL_LIBRARIES'.""" log_level = os.environ.get(ENV_LOG_LEVEL_LIBRARIES, DEFAULT_LOG_LEVEL_LIBRARIES) logging.getLogger("asyncio").setLevel(log_level) def set_log_and_warnings_filters() -> None: """ Set log filters on the root logger, and duplicate filters for warnings. Filters only propagate on handlers, not loggers. """ for handler in logging.getLogger().handlers: handler.addFilter(RepeatedLogFilter()) warnings.filterwarnings("once", category=UserWarning) def obtain_verbosity() -> int: """Returns a verbosity level according to the set log level.""" log_level = os.environ.get(ENV_LOG_LEVEL, DEFAULT_LOG_LEVEL) verbosity = 0 if log_level == "DEBUG": verbosity = 2 if log_level == "INFO": verbosity = 1 return verbosity def is_logging_disabled() -> bool: """Returns true, if log level is set to WARNING or ERROR, false otherwise.""" log_level = os.environ.get(ENV_LOG_LEVEL, DEFAULT_LOG_LEVEL) return log_level == "ERROR" or log_level == "WARNING" def sort_list_of_dicts_by_first_key(dicts: List[Dict]) -> List[Dict]: """Sorts a list of dictionaries by their first key.""" return sorted(dicts, key=lambda d: list(d.keys())[0]) # noinspection PyUnresolvedReferences def class_from_module_path( module_path: Text, lookup_path: Optional[Text] = None ) -> Any: """Given the module name and path of a class, tries to retrieve the class. The loaded class can be used to instantiate new objects. """ import importlib # load the module, will raise ImportError if module cannot be loaded if "." 
in module_path: module_name, _, class_name = module_path.rpartition(".") m = importlib.import_module(module_name) # get the class, will raise AttributeError if class cannot be found return getattr(m, class_name) else: module = globals().get(module_path, locals().get(module_path)) if module is not None: return module if lookup_path: # last resort: try to import the class from the lookup path m = importlib.import_module(lookup_path) return getattr(m, module_path) else: raise ImportError(f"Cannot retrieve class from path {module_path}.") def minimal_kwargs( kwargs: Dict[Text, Any], func: Callable, excluded_keys: Optional[List] = None ) -> Dict[Text, Any]: """Returns only the kwargs which are required by a function. Keys, contained in the exception list, are not included. Args: kwargs: All available kwargs. func: The function which should be called. excluded_keys: Keys to exclude from the result. Returns: Subset of kwargs which are accepted by `func`. """ excluded_keys = excluded_keys or [] possible_arguments = arguments_of(func) return { k: v for k, v in kwargs.items() if k in possible_arguments and k not in excluded_keys } def write_global_config_value(name: Text, value: Any) -> None: """Read global Convo configuration.""" try: os.makedirs(os.path.dirname(GLOBAL_USER_CONFIG_PATH), exist_ok=True) c = read_global_config() c[name] = value convo.core.utils.dump_obj_as_yaml_to_file(GLOBAL_USER_CONFIG_PATH, c) except Exception as e: logger.warning(f"Failed to write global config. Error: {e}. Skipping.") def read_global_config_value(name: Text, unavailable_ok: bool = True) -> Any: """Read a value from the global Convo configuration.""" def not_found(): if unavailable_ok: return None else: raise ValueError(f"Configuration '{name}' key not found.") if not os.path.exists(GLOBAL_USER_CONFIG_PATH): return not_found() c = read_global_config() if name in c: return c[name] else: return not_found() def mark_as_experimental_feature(feature_name: Text) -> None: """Warns users that they are using an experimental feature.""" logger.warning( f"The {feature_name} is currently experimental and might change or be " "removed in the future 🔬 Please share your feedback on it in the " "forum (https://forum.convo.com) to help us make this feature " "ready for production." ) def lazy_property(function: Callable) -> Any: """Allows to avoid recomputing a property over and over. The result gets stored in a local var. Computation of the property will happen once, on the first call of the property. 
All succeeding calls will use the value stored in the private property.""" attr_name = "_lazy_" + function.__name__ @property def _lazyprop(self): if not hasattr(self, attr_name): setattr(self, attr_name, function(self)) return getattr(self, attr_name) return _lazyprop def raise_warning( message: Text, category: Optional[Type[Warning]] = None, docs: Optional[Text] = None, **kwargs: Any, ) -> None: """Emit a `warnings.warn` with sensible defaults and a colored warning msg.""" original_formatter = warnings.formatwarning def should_show_source_line() -> bool: if "stacklevel" not in kwargs: if category == UserWarning or category is None: return False if category == FutureWarning: return False return True def formatwarning( message: Text, category: Optional[Type[Warning]], filename: Text, lineno: Optional[int], line: Optional[Text] = None, ): """Function to format a warning the standard way.""" if not should_show_source_line(): if docs: line = f"More info at {docs}" else: line = "" formatted_message = original_formatter( message, category, filename, lineno, line ) return utils.wrap_with_color(formatted_message, color=bcolors.WARNING) if "stacklevel" not in kwargs: # try to set useful defaults for the most common warning categories if category == DeprecationWarning: kwargs["stacklevel"] = 3 elif category == UserWarning: kwargs["stacklevel"] = 2 elif category == FutureWarning: kwargs["stacklevel"] = 3 warnings.formatwarning = formatwarning warnings.warn(message, category=category, **kwargs) warnings.formatwarning = original_formatter class RepeatedLogFilter(logging.Filter): """ Filter repeated log records. """ last_log = None def filter(self, record): current_log = ( record.levelno, record.pathname, record.lineno, record.msg, record.args, ) if current_log != self.last_log: self.last_log = current_log return True return False
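# Illustrative only: a small, self-contained demonstration of lazy_property
# above. The expensive computation runs once on first access; later accesses
# reuse the value cached on the instance.
if __name__ == "__main__":
    class Dataset:
        @lazy_property
        def stats(self):
            print("computing stats ...")  # printed only on the first access
            return {"rows": 42}

    d = Dataset()
    print(d.stats)  # triggers the computation
    print(d.stats)  # served from the _lazy_stats cache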
PypiClean
/benchling_api_client-2.0.207-py3-none-any.whl/benchling_api_client/v2/stable/models/folder_create.py
from typing import Any, cast, Dict, Type, TypeVar import attr from ..extensions import NotPresentError from ..types import UNSET, Unset T = TypeVar("T", bound="FolderCreate") @attr.s(auto_attribs=True, repr=False) class FolderCreate: """ """ _name: str _parent_folder_id: str def __repr__(self): fields = [] fields.append("name={}".format(repr(self._name))) fields.append("parent_folder_id={}".format(repr(self._parent_folder_id))) return "FolderCreate({})".format(", ".join(fields)) def to_dict(self) -> Dict[str, Any]: name = self._name parent_folder_id = self._parent_folder_id field_dict: Dict[str, Any] = {} # Allow the model to serialize even if it was created outside of the constructor, circumventing validation if name is not UNSET: field_dict["name"] = name if parent_folder_id is not UNSET: field_dict["parentFolderId"] = parent_folder_id return field_dict @classmethod def from_dict(cls: Type[T], src_dict: Dict[str, Any], strict: bool = False) -> T: d = src_dict.copy() def get_name() -> str: name = d.pop("name") return name try: name = get_name() except KeyError: if strict: raise name = cast(str, UNSET) def get_parent_folder_id() -> str: parent_folder_id = d.pop("parentFolderId") return parent_folder_id try: parent_folder_id = get_parent_folder_id() except KeyError: if strict: raise parent_folder_id = cast(str, UNSET) folder_create = cls( name=name, parent_folder_id=parent_folder_id, ) return folder_create @property def name(self) -> str: """ The name of the new folder. """ if isinstance(self._name, Unset): raise NotPresentError(self, "name") return self._name @name.setter def name(self, value: str) -> None: self._name = value @property def parent_folder_id(self) -> str: """ The ID of the parent folder. """ if isinstance(self._parent_folder_id, Unset): raise NotPresentError(self, "parent_folder_id") return self._parent_folder_id @parent_folder_id.setter def parent_folder_id(self, value: str) -> None: self._parent_folder_id = value
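# Illustrative only: a serialization round trip for FolderCreate. The name and
# parent folder ID are placeholders, not real Benchling identifiers; attrs
# strips the leading underscore, so the constructor takes name/parent_folder_id.
if __name__ == "__main__":
    folder = FolderCreate(name="Plasmid Maps", parent_folder_id="lib_placeholder")
    payload = folder.to_dict()
    print(payload)  # {'name': 'Plasmid Maps', 'parentFolderId': 'lib_placeholder'}
    restored = FolderCreate.from_dict(payload)
    print(restored)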
PypiClean
/obiba_opal-5.2.1.tar.gz/obiba_opal-5.2.1/obiba_opal/exports.py
import json

import obiba_opal.core as core
import obiba_opal.io as io


class ExportPluginCommand:
    """
    Data export to a datasource plugin.
    """

    def __init__(self, client: core.OpalClient, verbose: bool = False):
        self.client = client
        self.verbose = verbose

    @classmethod
    def add_arguments(cls, parser):
        """
        Add data command specific options
        """
        parser.add_argument('--datasource', '-d', required=True, help='Project name')
        parser.add_argument('--tables', '-t', nargs='+', required=True, help='The list of tables to be exported')
        parser.add_argument('--name', '-n', required=True, help='Opal datasource plugin name')
        parser.add_argument('--config', '-c', required=True, help='A JSON file containing the export configuration')
        parser.add_argument('--identifiers', '-id', required=False, help='Name of the ID mapping')
        parser.add_argument('--json', '-j', action='store_true', help='Pretty JSON formatting of the response')

    @classmethod
    def do_command(cls, args):
        """
        Execute export data command
        """
        # Build and send request
        client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args))
        config = json.loads(open(args.config).read())
        try:
            res = cls(client, args.verbose) \
                .export_data(args.name, args.datasource, args.tables, config, args.identifiers)
            # format response
            core.Formatter.print_json(res, args.json)
        finally:
            client.close()

    def export_data(self, name: str, project: str, tables: list, config: dict, identifiers: str = None) -> dict:
        """
        Export tables using a plugin.

        :param name: The name of the plugin.
        :param project: The project name
        :param tables: The table names to export
        :param config: The plugin configuration dictionary
        :param identifiers: The name of the ID mapping
        """
        configStr = json.dumps(config)
        exporter = io.OpalExporter.build(client=self.client, datasource=project, tables=tables,
                                         identifiers=identifiers, output=configStr, verbose=self.verbose)
        response = exporter.submit(name)
        return response.from_json()


class ExportCSVCommand:
    """
    Export some tables in CSV format.
    """

    def __init__(self, client: core.OpalClient, verbose: bool = False):
        self.client = client
        self.verbose = verbose

    @classmethod
    def add_arguments(cls, parser):
        """
        Add data command specific options
        """
        parser.add_argument('--datasource', '-d', required=True, help='Project name')
        parser.add_argument('--tables', '-t', nargs='+', required=True, help='The list of tables to be exported')
        parser.add_argument('--output', '-out', required=True, help='Output directory name')
        parser.add_argument('--id-name', '-in', required=False, help='Name of the ID column name')
        parser.add_argument('--identifiers', '-id', required=False, help='Name of the ID mapping')
        parser.add_argument('--no-multilines', '-nl', action='store_true', help='Do not write value sequences as multiple lines')
        parser.add_argument('--json', '-j', action='store_true', help='Pretty JSON formatting of the response')

    @classmethod
    def do_command(cls, args):
        """
        Execute export data command
        """
        # Build and send request
        client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args))
        try:
            res = cls(client, args.verbose) \
                .export_data(args.datasource, args.tables, args.output, args.id_name, args.identifiers, not args.no_multilines)
            # format response
            core.Formatter.print_json(res, args.json)
        finally:
            client.close()

    def export_data(self, project: str, tables: list, output: str, id_name: str = None, identifiers: str = None,
                    multilines: bool = True) -> dict:
        """
        Export tables in CSV files.
:param project: The project name :param tables: The table names to export :param output: The output directory path :param id_name: The name of the ID column name :param identifiers: The name of the ID mapping :param multilines: Write value sequences as multiple lines """ exporter = io.OpalExporter.build(client=self.client, datasource=project , tables=tables, entityIdNames = id_name, identifiers=identifiers, output=output, multilines=multilines, verbose=self.verbose) response = exporter.submit('csv') return response.from_json() class ExportRDSCommand: """ Data export in RDS (using R). """ def __init__(self, client: core.OpalClient, verbose: bool = False): self.client = client self.verbose = verbose @classmethod def add_arguments(cls, parser): """ Add data command specific options """ parser.add_argument('--datasource', '-d', required=True, help='Project name') parser.add_argument('--tables', '-t', nargs='+', required=True, help='The list of tables to be exported') parser.add_argument('--output', '-out', required=True, help='Output file name (.rds)') parser.add_argument('--id-name', '-in', required=False, help='Name of the ID column name') parser.add_argument('--identifiers', '-id', required=False, help='Name of the ID mapping') parser.add_argument('--no-multilines', '-nl', action='store_true', help='Do not write value sequences as multiple lines') parser.add_argument('--json', '-j', action='store_true', help='Pretty JSON formatting of the response') @classmethod def do_command(cls, args): """ Execute export data command """ # Build and send request client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args)) try: res = cls(client, args.verbose) \ .export_data(args.datasource, args.tables, args.output, args.id_name, args.identifiers, not args.no_multilines) # format response core.Formatter.print_json(res, args.json) finally: client.close() def export_data(self, project: str, tables: list, output: str, id_name: str = None, identifiers: str = None, multilines: bool = True) -> dict: """ Export tables in a RDS file. :param project: The project name :param tables: The table names to export :param output: The output file path (.rds) :param id_name: The name of the ID column name :param identifiers: The name of the ID mapping :param multilines: Write value sequences as multiple lines """ if not (output.endswith('.rds')): raise Exception('Output must be a RDS file (.rds).') exporter = io.OpalExporter.build(client=self.client, datasource=project , tables=tables, entityIdNames = id_name, identifiers=identifiers, output=output, multilines=multilines, verbose=self.verbose) response = exporter.submit('RDS') return response.from_json() class ExportRSASCommand: """ Data export in SAS (using R). 
""" def __init__(self, client: core.OpalClient, verbose: bool = False): self.client = client self.verbose = verbose @classmethod def add_arguments(cls, parser): """ Add data command specific options """ parser.add_argument('--datasource', '-d', required=True, help='Project name') parser.add_argument('--tables', '-t', nargs='+', required=True, help='The list of tables to be exported') parser.add_argument('--output', '-out', required=True, help='Output file name (.sas7bdat or .xpt (Transport format))') parser.add_argument('--id-name', '-in', required=False, help='Name of the ID column name') parser.add_argument('--identifiers', '-id', required=False, help='Name of the ID mapping') parser.add_argument('--no-multilines', '-nl', action='store_true', help='Do not write value sequences as multiple lines') parser.add_argument('--json', '-j', action='store_true', help='Pretty JSON formatting of the response') @classmethod def do_command(cls, args): """ Execute export data command """ # Build and send request client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args)) try: res = cls(client, args.verbose) \ .export_data(args.datasource, args.tables, args.output, args.id_name, args.identifiers, not args.no_multilines) # format response core.Formatter.print_json(res, args.json) finally: client.close() def export_data(self, project: str, tables: list, output: str, id_name: str = None, identifiers: str = None, multilines: bool = True) -> dict: """ Export tables in a SAS file. :param project: The project name :param tables: The table names to export :param output: The output file path (.sas7bdat or .xpt) :param id_name: The name of the ID column name :param identifiers: The name of the ID mapping :param multilines: Write value sequences as multiple lines """ if not (output.endswith('.sas7bdat')) and not (output.endswith('.xpt')): raise Exception('Output must be a SAS file (.sas7bdat or .xpt).') exporter = io.OpalExporter.build(client=self.client, datasource=project , tables=tables, entityIdNames = id_name, identifiers=identifiers, output=output, multilines=multilines, verbose=self.verbose) response = None if output.endswith('.sas7bdat'): response = exporter.submit('RSAS') else: response = exporter.submit('RXPT') return response.from_json() class ExportRSPSSCommand: """ Data export in SPSS (using R). 
""" def __init__(self, client: core.OpalClient, verbose: bool = False): self.client = client self.verbose = verbose @classmethod def add_arguments(cls, parser): """ Add data command specific options """ parser.add_argument('--datasource', '-d', required=True, help='Project name') parser.add_argument('--tables', '-t', nargs='+', required=True, help='The list of tables to be exported') parser.add_argument('--output', '-out', required=True, help='Output file name (.sav or .zsav (compressed format))') parser.add_argument('--id-name', '-in', required=False, help='Name of the ID column name') parser.add_argument('--identifiers', '-id', required=False, help='Name of the ID mapping') parser.add_argument('--no-multilines', '-nl', action='store_true', help='Do not write value sequences as multiple lines') parser.add_argument('--json', '-j', action='store_true', help='Pretty JSON formatting of the response') @classmethod def do_command(cls, args): """ Execute export data command """ # Build and send request client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args)) try: res = cls(client, args.verbose) \ .export_data(args.datasource, args.tables, args.output, args.id_name, args.identifiers, not args.no_multilines) # format response core.Formatter.print_json(res, args.json) finally: client.close() def export_data(self, project: str, tables: list, output: str, id_name: str = None, identifiers: str = None, multilines: bool = True) -> dict: """ Export tables in a SPSS file. :param project: The project name :param tables: The table names to export :param output: The output file path (.sav or .zsav) :param id_name: The name of the ID column name :param identifiers: The name of the ID mapping :param multilines: Write value sequences as multiple lines """ if not (output.endswith('.sav')) and not (output.endswith('.zsav')): raise Exception('Output must be a SPSS file (.sav or .zsav).') exporter = io.OpalExporter.build(client=self.client, datasource=project , tables=tables, entityIdNames = id_name, identifiers=identifiers, output=output, multilines=multilines, verbose=self.verbose) response = None if output.endswith('.sav'): response = exporter.submit('RSPSS') else: response = exporter.submit('RZSPSS') return response.from_json() class ExportRSTATACommand: """ Data export in SAS (using R). 
""" def __init__(self, client: core.OpalClient, verbose: bool = False): self.client = client self.verbose = verbose @classmethod def add_arguments(cls, parser): """ Add data command specific options """ parser.add_argument('--datasource', '-d', required=True, help='Project name') parser.add_argument('--tables', '-t', nargs='+', required=True, help='The list of tables to be exported') parser.add_argument('--output', '-out', required=True, help='Output file name (.dta)') parser.add_argument('--id-name', '-in', required=False, help='Name of the ID column name') parser.add_argument('--identifiers', '-id', required=False, help='Name of the ID mapping') parser.add_argument('--no-multilines', '-nl', action='store_true', help='Do not write value sequences as multiple lines') parser.add_argument('--json', '-j', action='store_true', help='Pretty JSON formatting of the response') @classmethod def do_command(cls, args): """ Execute export data command """ # Build and send request client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args)) try: res = cls(client, args.verbose) \ .export_data(args.datasource, args.tables, args.output, args.id_name, args.identifiers, not args.no_multilines) # format response core.Formatter.print_json(res, args.json) finally: client.close() def export_data(self, project: str, tables: list, output: str, id_name: str = None, identifiers: str = None, multilines: bool = True) -> dict: """ Export tables in a STATA file. :param project: The project name :param tables: The table names to export :param output: The output file path (.dta) :param id_name: The name of the ID column name :param identifiers: The name of the ID mapping :param multilines: Write value sequences as multiple lines """ if not (output.endswith('.dta')): raise Exception('Output must be a Stata file (.dta).') exporter = io.OpalExporter.build(client=self.client, datasource=project , tables=tables, entityIdNames = id_name, identifiers=identifiers, output=output, multilines=multilines, verbose=self.verbose) response = exporter.submit('RSTATA') return response.from_json() class ExportSQLCommand: """ Data export to a SQL database. """ def __init__(self, client: core.OpalClient, verbose: bool = False): self.client = client self.verbose = verbose @classmethod def add_arguments(cls, parser): """ Add data command specific options """ parser.add_argument('--datasource', '-d', required=True, help='Project name') parser.add_argument('--tables', '-t', nargs='+', required=True, help='The list of tables to be exported') parser.add_argument('--database', '-db', required=True, help='Name of the SQL database') parser.add_argument('--identifiers', '-id', required=False, help='Name of the ID mapping') parser.add_argument('--json', '-j', action='store_true', help='Pretty JSON formatting of the response') @classmethod def do_command(cls, args): """ Execute export data command """ # Build and send request client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args)) try: res = cls(client, args.verbose) \ .export_data(args.datasource, args.tables, args.database, args.identifiers) # format response core.Formatter.print_json(res, args.json) finally: client.close() def export_data(self, project: str, tables: list, database: str, identifiers: str = None): """ Export tables in a SQL database. :param project: The project name :param tables: The table names to export :param database: The SQL database name. See ProjectService.get_databases() for a list of databases with 'export' usage. 
        :param identifiers: The name of the ID mapping
        """
        exporter = io.OpalExporter.build(client=self.client, datasource=project, tables=tables,
                                         identifiers=identifiers, output=database, verbose=self.verbose)
        response = exporter.submit('jdbc')
        return response.from_json()


class ExportXMLCommand:
    """
    Data export in XML.
    """

    def __init__(self, client: core.OpalClient, verbose: bool = False):
        self.client = client
        self.verbose = verbose

    @classmethod
    def add_arguments(cls, parser):
        """
        Add data command specific options
        """
        parser.add_argument('--datasource', '-d', required=True, help='Project name')
        parser.add_argument('--tables', '-t', nargs='+', required=True, help='The list of tables to be exported')
        parser.add_argument('--output', '-out', required=True, help='Output zip file name that will be exported')
        parser.add_argument('--identifiers', '-id', required=False, help='Name of the ID mapping')
        parser.add_argument('--json', '-j', action='store_true', help='Pretty JSON formatting of the response')

    @classmethod
    def do_command(cls, args):
        """
        Execute export data command
        """
        # Build and send request
        client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args))
        try:
            res = cls(client, args.verbose) \
                .export_data(args.datasource, args.tables, args.output, args.identifiers)
            # format response
            core.Formatter.print_json(res, args.json)
        finally:
            client.close()

    def export_data(self, project: str, tables: list, output: str, identifiers: str = None) -> dict:
        """
        Export tables in an Opal archive file.

        :param project: The project name
        :param tables: The table names to export
        :param output: The output file path (.zip)
        :param identifiers: The name of the ID mapping
        """
        if not output.endswith('.zip'):
            raise Exception('Output must be a zip file.')
        exporter = io.OpalExporter.build(client=self.client, datasource=project, tables=tables,
                                         identifiers=identifiers, output=output, incremental=False,
                                         verbose=self.verbose)
        response = exporter.submit('xml')
        return response.from_json()


class ExportVCFCommand:
    """
    Export some VCF/BCF files.
""" def __init__(self, client: core.OpalClient, verbose: bool = False): self.client = client self.verbose = verbose @classmethod def add_arguments(cls, parser): """ Add command specific options """ parser.add_argument('--project', '-pr', required=True, help='Project name from which genotypes data will be exported') parser.add_argument('--vcf', '-vcf', nargs='+', required=True, help='List of VCF/BCF file names') parser.add_argument('--destination', '-d', required=True, help='Destination folder (in Opal file system)') parser.add_argument('--filter-table', '-f', required=False, help='Participant table name to be used to filter the samples by participant ID (only relevant if there is a sample-participant mapping defined)') parser.add_argument('--no-case-controls', '-nocc', action='store_true', help='Do not include case control samples (only relevant if there is a sample-participant mapping defined)') @classmethod def do_command(cls, args): """ Execute delete command """ # Build and send requests client = core.OpalClient.build(core.OpalClient.LoginInfo.parse(args)) try: res = ExportVCFCommand(client, args.verbose) \ .export_data(args.project, args.vcf, args.destination, not args.no_case_controls, args.filter_table) finally: client.close() def export_data(self, project: str, vcf: list, destination: str, case_controls: bool = True, filter_table: str = None) -> dict: """ Export VCF/BCF files. :param project: The project name :param vcf: The list of VCF/BCF file names :param destination: The output folder path :param case_controls: Include case control samples (only relevant if there is a sample-participant mapping defined) :param filter_table: Participant table name to be used to filter the samples by participant ID (only relevant if there is a sample-participant mapping defined) """ request = self.client.new_request() request.fail_on_error().accept_json().content_type_json() if self.verbose: request.verbose() options = { 'project': project, 'names': vcf, 'destination': destination, 'caseControl': case_controls } if filter_table: options['table'] = filter_table # send request uri = core.UriBuilder(['project', project, 'commands', '_export_vcf']).build() response = request.resource(uri).post().content(json.dumps(options)).send() return response.from_json()
PypiClean
/scf-0.2.8.tar.gz/scf-0.2.8/tcfcli/common/operation_msg.py
import os
import sys
import click
import logging
from logging.handlers import RotatingFileHandler
from builtins import str as text
from tcfcli.common.user_config import UserConfig

home = os.path.expanduser('~')
_LOG_FILE_PATH_ = os.path.join(home, '.scfcli_log')
if not os.path.exists(_LOG_FILE_PATH_):
    os.makedirs(_LOG_FILE_PATH_)
_LOG_FILE_ = os.path.join(_LOG_FILE_PATH_, 'scfcli.log')

logger = logging.getLogger()
# iterate over a copy: removing handlers from the live list while iterating it skips entries
for eve in list(logger.handlers):
    logger.removeHandler(eve)
fh = RotatingFileHandler(_LOG_FILE_, maxBytes=100000, backupCount=10)
fh.setFormatter(logging.Formatter('%(asctime)s - %(pathname)s[line:%(lineno)d] - %(levelname)s: %(message)s'))
# note: with the handler threshold at FATAL, lower-severity records are filtered out of the file
fh.setLevel(level=logging.FATAL)
logger.addHandler(fh)


class Operation(object):
    def __init__(self, message, fg=None, bg=None, bold=None, dim=None, underline=None, blink=None, reverse=None,
                 reset=True, file=None, nl=True, err=False, color=None, err_msg=None, level=None, tofile=True):
        self.message = message
        self.fg = fg
        self.bg = bg
        self.bold = bold
        self.dim = dim
        self.underline = underline
        self.blink = blink
        self.reverse = reverse
        self.reset = reset
        self.file = file
        self.nl = nl
        self.err = err
        self.color = color
        self.err_msg = err_msg
        self.level = level
        self.tofile = tofile

    def format_message(self):
        return text(self.message)

    def new_style(self, msg, bg=None, fg=None):
        if "--no-color" in sys.argv or "-nc" in sys.argv or UserConfig().section_map[UserConfig.OTHERS][
                'no_color'].upper() == 'TRUE':
            return click.style(u'%s' % msg)
        else:
            return click.style(u'%s' % msg, bg=bg, fg=fg)

    def success(self):
        if self.tofile:
            self.log(logs="INFO")
        click.secho(self.new_style("[o]", bg="green") + self.new_style(u' %s' % self.format_message(), fg="green"))

    def begin(self):
        self.log(logs="INFO")
        click.secho(self.new_style("[+]", bg="green") + self.new_style(u' %s' % self.format_message(), fg="green"))

    def warning(self):
        self.log(logs="WARNING")
        click.secho(self.new_style("[!]", bg="magenta") + self.new_style(u' %s' % self.format_message(), fg="magenta"))

    def information(self):
        self.log(logs="INFO")
        click.secho(self.new_style("[*]", bg="yellow") + self.new_style(u' %s' % self.format_message(), fg="yellow"))

    def process(self):
        self.log(logs="INFO")
        click.secho(self.new_style("[>]", bg="cyan") + self.new_style(u' %s' % self.format_message(), fg="cyan"))

    def out_infor(self):
        self.log(logs="INFO")
        click.secho(self.new_style(" ") + self.new_style(u' %s' % self.format_message(), fg="cyan"))

    def no_output(self):
        self.log(logs=self.level if self.level else "WARNING")

    def exception(self):
        self.log(logs="ERROR")
        click.secho(self.new_style("[x] [ERROR] ", bg="red") + self.new_style(u' %s' % self.format_message(), fg="red"))

    def echo(self):
        self.log(logs="INFO")
        if "--no-color" in sys.argv or "-nc" in sys.argv or UserConfig().section_map[UserConfig.OTHERS][
                'no_color'].startswith('True'):
            click.secho(u'%s' % self.format_message(), bold=self.bold, dim=self.dim, underline=self.underline,
                        blink=self.blink, reverse=self.reverse, reset=self.reset, file=self.file, nl=self.nl,
                        err=self.err, color=self.color)
        else:
            click.secho(u'%s' % self.format_message(), fg=self.fg, bg=self.bg, bold=self.bold, dim=self.dim,
                        underline=self.underline, blink=self.blink, reverse=self.reverse, reset=self.reset,
                        file=self.file, nl=self.nl, err=self.err, color=self.color)

    def style(self):
        if "--no-color" in sys.argv or "-nc" in sys.argv or UserConfig().section_map[UserConfig.OTHERS][
                'no_color'].startswith('True'):
            return click.style(u'%s' % self.format_message(), bold=self.bold, dim=self.dim, underline=self.underline,
                               blink=self.blink, reverse=self.reverse, reset=self.reset)
        else:
            return click.style(u'%s' % self.format_message(), fg=self.fg, bg=self.bg, bold=self.bold, dim=self.dim,
                               underline=self.underline, blink=self.blink, reverse=self.reverse, reset=self.reset)

    def log(self, logs):
        if logs == "DEBUG":
            logger.debug(self.message)
        elif logs == "INFO":
            logger.info(self.message)
        elif logs == "WARNING":
            logger.warning(self.message)
        elif logs == "ERROR":
            err_msg = text(self.err_msg) if self.err_msg else self.message
            logger.error(err_msg)
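
# --- Usage sketch (added for illustration) ---------------------------------
# How the status helpers above are typically combined. Assumes the scf user
# configuration read by UserConfig() is already initialised; note that with
# the file handler level set to FATAL above, most of these messages appear on
# the console but are not persisted to ~/.scfcli_log/scfcli.log.
if __name__ == '__main__':
    Operation("Starting deployment ...").begin()
    Operation("Uploading package ...").process()
    Operation("Function deployed.").success()
    Operation("Deployment failed.", err_msg="detailed cause goes here").exception()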
PypiClean
/boyd_bot_glasgow-1.6.1-py3-none-any.whl/boyd_bot/timetable.py
import pytz import requests from icalendar import Calendar from datetime import datetime from getpass import getpass from fuzzywuzzy import fuzz class Timetable: def __init__(self, cal_url, tmzn="UTC", fuzz_threshold=36): self.cal_url = cal_url self.tmzn = pytz.timezone(tmzn) self.fuzz_threshold = fuzz_threshold self.cal = Calendar() def login(self, uid=None, pw=None): if not (uid and pw): uid = input("University ID: ") pw = getpass("Password: ") try: self.cal = Calendar.from_ical(requests.get(self.cal_url, auth=(uid, pw)).content) except ValueError: print("Invalid login credentials provided.") self.login() except Exception as e: raise Exception("Something went wrong. {}".format(e.__str__())) from None def format_event(self, event): return "\n{}\nfrom {} to {}\nat {}.\n".format( event["summary"].split(")")[0] + ")" if "(" in event["summary"] else event["summary"], event["dtstart"].dt.strftime("%I:%M%p"), event["dtend"].dt.strftime("%I:%M%p"), event.get("location", "No Location Found"), ) def read(self, start_date=None, class_name=None): class_list = self.iterate(start_date, class_name) if not class_list: print("There seem to be no classes.") else: for event in class_list: print(self.format_event(event)) def iterate(self, start_date, class_name): class_list = [] date1 = ( start_date.replace(hour=0, minute=0, second=0, tzinfo=self.tmzn) if start_date else datetime.now(tz=self.tmzn) ) date2 = date1.replace(hour=23, minute=59, second=59) for event in self.cal.walk("vevent"): if event["dtstart"].dt >= date1 and event["dtend"].dt <= date2: if not start_date: class_list.append(event) break if class_name: if ( fuzz.token_set_ratio( class_name.lower(), event["summary"].lower() ) > self.fuzz_threshold ): class_list.append(event) else: class_list.append(event) return class_list def dateparse(self, date_entry): try: day, month, year = map(int, date_entry.split("/")) return datetime(year, month, day) except ValueError: raise Exception("Date entered in invalid format!") from None
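
# --- Usage sketch (added for illustration) ---------------------------------
# The calendar URL below is a placeholder; any HTTP(S) iCal feed protected by
# basic auth should work, since login() fetches it with requests' auth tuple.
if __name__ == "__main__":
    tt = Timetable(cal_url="https://timetable.example.ac.uk/feed.ics", tmzn="Europe/London")
    tt.login()  # prompts for University ID and password
    tt.read()  # next class from now
    tt.read(start_date=tt.dateparse("24/12/2024"))  # all classes on a given day
    tt.read(start_date=tt.dateparse("24/12/2024"), class_name="algorithms")  # fuzzy-matched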
PypiClean
/ovos_core-0.0.8a25-py3-none-any.whl/ovos_core/intent_services/commonqa_service.py
import re from threading import Lock, Event import time from itertools import chain from ovos_bus_client.message import Message, dig_for_message import ovos_core.intent_services from ovos_utils import flatten_list from ovos_utils.enclosure.api import EnclosureAPI from ovos_utils.log import LOG from ovos_utils.messagebus import get_message_lang from ovos_workshop.resource_files import CoreResources EXTENSION_TIME = 10 class CommonQAService: """Intent Service handling common query skills. All common query skills answer and the best answer is selected This is in contrast to triggering best intent directly. """ def __init__(self, bus): self.bus = bus self.skill_id = "common_query.openvoiceos" # fake skill self.query_replies = {} # cache of received replies self.query_extensions = {} # maintains query timeout extensions self.lock = Lock() self.searching = Event() self.waiting = True self.answered = False self.enclosure = EnclosureAPI(self.bus, self.skill_id) self._vocabs = {} self.bus.on('question:query.response', self.handle_query_response) self.bus.on('common_query.question', self.handle_question) def voc_match(self, utterance, voc_filename, lang, exact=False): """Determine if the given utterance contains the vocabulary provided. By default the method checks if the utterance contains the given vocab thereby allowing the user to say things like "yes, please" and still match against "Yes.voc" containing only "yes". An exact match can be requested. The method checks the "res/text/{lang}" folder of mycroft-core. The result is cached to avoid hitting the disk each time the method is called. Args: utterance (str): Utterance to be tested voc_filename (str): Name of vocabulary file (e.g. 'yes' for 'res/text/en-us/yes.voc') lang (str): Language code, defaults to self.lang exact (bool): Whether the vocab must exactly match the utterance Returns: bool: True if the utterance has the given vocabulary it """ match = False if lang not in self._vocabs: resources = CoreResources(language=lang) vocab = resources.load_vocabulary_file(voc_filename) self._vocabs[lang] = list(chain(*vocab)) if utterance: if exact: # Check for exact match match = any(i.strip() == utterance for i in self._vocabs[lang]) else: # Check for matches against complete words match = any([re.match(r'.*\b' + i + r'\b.*', utterance) for i in self._vocabs[lang]]) return match def is_question_like(self, utterance, lang): # skip utterances with less than 3 words if len(utterance.split(" ")) < 3: return False # skip utterances meant for common play if self.voc_match(utterance, "common_play", lang): return False return True def match(self, utterances, lang, message): """Send common query request and select best response Args: utterances (list): List of tuples, utterances and normalized version lang (str): Language code message: Message for session context Returns: IntentMatch or None """ # we call flatten in case someone is sending the old style list of tuples utterances = flatten_list(utterances) match = None for utterance in utterances: if self.is_question_like(utterance, lang): message.data["lang"] = lang # only used for speak message.data["utterance"] = utterance answered = self.handle_question(message) if answered: match = ovos_core.intent_services.IntentMatch('CommonQuery', None, {}, None) break return match def handle_question(self, message): """ Send the phrase to the CommonQuerySkills and prepare for handling the replies. 
""" self.searching.set() self.waiting = True self.answered = False utt = message.data.get('utterance') self.enclosure.mouth_think() self.query_replies[utt] = [] self.query_extensions[utt] = [] LOG.info(f'Searching for {utt}') # Send the query to anyone listening for them msg = message.reply('question:query', data={'phrase': utt}) if "skill_id" not in msg.context: msg.context["skill_id"] = self.skill_id self.bus.emit(msg) self.timeout_time = time.time() + 1 while self.searching.is_set(): if not self.waiting or time.time() > self.timeout_time + 1: break time.sleep(0.2) # forcefully timeout if search is still going self._query_timeout(message) return self.answered def handle_query_response(self, message): search_phrase = message.data['phrase'] skill_id = message.data['skill_id'] searching = message.data.get('searching') answer = message.data.get('answer') # Manage requests for time to complete searches if searching: # extend the timeout by 5 seconds self.timeout_time = time.time() + EXTENSION_TIME # TODO: Perhaps block multiple extensions? if (search_phrase in self.query_extensions and skill_id not in self.query_extensions[search_phrase]): self.query_extensions[search_phrase].append(skill_id) elif search_phrase in self.query_extensions: # Search complete, don't wait on this skill any longer if answer and search_phrase in self.query_replies: LOG.info(f'Answer from {skill_id}') self.query_replies[search_phrase].append(message.data) # Remove the skill from list of timeout extensions if skill_id in self.query_extensions[search_phrase]: self.query_extensions[search_phrase].remove(skill_id) # not waiting for any more skills if not self.query_extensions[search_phrase]: self._query_timeout(message.reply('question:query.timeout', message.data)) else: LOG.warning(f'{skill_id} Answered too slowly, will be ignored.') def _query_timeout(self, message): if not self.searching.is_set(): LOG.warning("got a common query response outside search window") return # not searching, ignore timeout event self.searching.clear() # Prevent any late-comers from retriggering this query handler with self.lock: LOG.info('Timeout occurred check responses') search_phrase = message.data.get('phrase', "") if search_phrase in self.query_extensions: self.query_extensions[search_phrase] = [] self.enclosure.mouth_reset() # Look at any replies that arrived before the timeout # Find response(s) with the highest confidence best = None ties = [] if search_phrase in self.query_replies: for handler in self.query_replies[search_phrase]: if not best or handler['conf'] > best['conf']: best = handler ties = [] elif handler['conf'] == best['conf']: ties.append(handler) if best: if ties: # TODO: Ask user to pick between ties or do it automagically pass # invoke best match LOG.info('Handling with: ' + str(best['skill_id'])) if not message.data.get("handles_speech", False): self.speak(best['answer']) cb = best.get('callback_data') or {} self.bus.emit(message.forward('question:action', data={'skill_id': best['skill_id'], 'phrase': search_phrase, 'callback_data': cb})) self.answered = True else: self.answered = False self.waiting = False if search_phrase in self.query_replies: del self.query_replies[search_phrase] if search_phrase in self.query_extensions: del self.query_extensions[search_phrase] def speak(self, utterance, message=None): """Speak a sentence. 
Args: utterance (str): sentence mycroft should speak """ # registers the skill as being active self.enclosure.register(self.skill_id) message = message or dig_for_message() lang = get_message_lang(message) data = {'utterance': utterance, 'expect_response': False, 'meta': {"skill": self.skill_id}, 'lang': lang} m = message.forward("speak", data) if message \ else Message("speak", data) m.context["skill_id"] = self.skill_id self.bus.emit(m)
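
# --- Wiring sketch (added for illustration) --------------------------------
# How this service is typically attached to a running OVOS message bus. The
# MessageBusClient import assumes the separate ovos-bus-client package and a
# running messagebus; inside ovos-core the intent pipeline constructs
# CommonQAService with its shared bus connection instead.
if __name__ == "__main__":
    from ovos_bus_client import MessageBusClient

    bus = MessageBusClient()
    bus.run_in_thread()

    service = CommonQAService(bus)
    utterance = "what is the speed of light"
    match = service.match([utterance], lang="en-us",
                          message=Message("recognizer_loop:utterance",
                                          {"utterances": [utterance]}))
    print("answered" if match else "no common-query answer")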
PypiClean
/hurry.jqplot-0.9.7.2.zip/hurry.jqplot-0.9.7.2/src/hurry/jqplot/jqplot-build/plugins/jqplot.cursor.js
(function($) { /** * Class: $.jqplot.Cursor * Plugin class representing the cursor as displayed on the plot. */ $.jqplot.Cursor = function(options) { // Group: Properties // // prop: style // CSS spec for cursor style this.style = 'crosshair'; this.previousCursor = 'auto'; // prop: show // wether to show the cursor or not. this.show = $.jqplot.config.enablePlugins; // prop: showTooltip // show a cursor position tooltip near the cursor this.showTooltip = true; // prop: followMouse // Tooltip follows the mouse, it is not at a fixed location. // Tooltip will show on the grid at the location given by // tooltipLocation, offset from the grid edge by tooltipOffset. this.followMouse = false; // prop: tooltipLocation // Where to position tooltip. If followMouse is true, this is // relative to the cursor, otherwise, it is relative to the grid. // One of 'n', 'ne', 'e', 'se', 's', 'sw', 'w', 'nw' this.tooltipLocation = 'se'; // prop: tooltipOffset // Pixel offset of tooltip from the grid boudaries or cursor center. this.tooltipOffset = 6; // prop: showTooltipGridPosition // show the grid pixel coordinates of the mouse. this.showTooltipGridPosition = false; // prop: showTooltipUnitPosition // show the unit (data) coordinates of the mouse. this.showTooltipUnitPosition = true; // prop: showTooltipDataPosition // Used with showVerticalLine to show intersecting data points in the tooltip. this.showTooltipDataPosition = false; // prop: tooltipFormatString // sprintf format string for the tooltip. // Uses Ash Searle's javascript sprintf implementation // found here: http://hexmen.com/blog/2007/03/printf-sprintf/ // See http://perldoc.perl.org/functions/sprintf.html for reference // Note, if showTooltipDataPosition is true, the default tooltipFormatString // will be set to the cursorLegendFormatString, not the default given here. this.tooltipFormatString = '%.4P, %.4P'; // prop: useAxesFormatters // Use the x and y axes formatters to format the text in the tooltip. this.useAxesFormatters = true; // prop: tooltipAxisGroups // Show position for the specified axes. // This is an array like [['xaxis', 'yaxis'], ['xaxis', 'y2axis']] // Default is to compute automatically for all visible axes. this.tooltipAxisGroups = []; // prop: zoom // Enable plot zooming. this.zoom = false; // zoomProxy and zoomTarget properties are not directly set by user. // They Will be set through call to zoomProxy method. this.zoomProxy = false; this.zoomTarget = false; // prop: clickReset // Will reset plot zoom if single click on plot without drag. this.clickReset = false; // prop: dblClickReset // Will reset plot zoom if double click on plot without drag. this.dblClickReset = true; // prop: showVerticalLine // draw a vertical line across the plot which follows the cursor. // When the line is near a data point, a special legend and/or tooltip can // be updated with the data values. this.showVerticalLine = false; // prop: showHorizontalLine // draw a horizontal line across the plot which follows the cursor. this.showHorizontalLine = false; // prop: constrainZoomTo // 'none', 'x' or 'y' this.constrainZoomTo = 'none'; // // prop: autoscaleConstraint // // when a constrained axis is specified, true will // // auatoscale the adjacent axis. 
// this.autoscaleConstraint = true; this.shapeRenderer = new $.jqplot.ShapeRenderer(); this._zoom = {start:[], end:[], started: false, zooming:false, isZoomed:false, axes:{start:{}, end:{}}}; this._tooltipElem; this.zoomCanvas; this.cursorCanvas; // prop: intersectionThreshold // pixel distance from data point or marker to consider cursor lines intersecting with point. // If data point markers are not shown, this should be >= 1 or will often miss point intersections. this.intersectionThreshold = 2; // prop: showCursorLegend // Replace the plot legend with an enhanced legend displaying intersection information. this.showCursorLegend = false; // prop: cursorLegendFormatString // Format string used in the cursor legend. If showTooltipDataPosition is true, // this will also be the default format string used by tooltipFormatString. this.cursorLegendFormatString = $.jqplot.Cursor.cursorLegendFormatString; $.extend(true, this, options); }; $.jqplot.Cursor.cursorLegendFormatString = '%s x:%s, y:%s'; // called with scope of plot $.jqplot.Cursor.init = function (target, data, opts){ // add a cursor attribute to the plot var options = opts || {}; this.plugins.cursor = new $.jqplot.Cursor(options.cursor); var c = this.plugins.cursor; if (c.show) { $.jqplot.eventListenerHooks.push(['jqplotMouseEnter', handleMouseEnter]); $.jqplot.eventListenerHooks.push(['jqplotMouseLeave', handleMouseLeave]); $.jqplot.eventListenerHooks.push(['jqplotMouseMove', handleMouseMove]); if (c.showCursorLegend) { opts.legend = opts.legend || {}; opts.legend.renderer = $.jqplot.CursorLegendRenderer; opts.legend.formatString = this.plugins.cursor.cursorLegendFormatString; opts.legend.show = true; } if (c.zoom) { $.jqplot.eventListenerHooks.push(['jqplotMouseDown', handleMouseDown]); $.jqplot.eventListenerHooks.push(['jqplotMouseUp', handleMouseUp]); if (c.clickReset) { $.jqplot.eventListenerHooks.push(['jqplotClick', handleClick]); } if (c.dblClickReset) { $.jqplot.eventListenerHooks.push(['jqplotDblClick', handleDblClick]); } } this.resetZoom = function() { var axes = this.axes; if (!c.zoomProxy) { for (var ax in axes) { axes[ax].reset(); } this.redraw(); } else { var ctx = this.plugins.cursor.zoomCanvas._ctx; ctx.clearRect(0,0,ctx.canvas.width, ctx.canvas.height); } this.plugins.cursor._zoom.isZoomed = false; this.target.trigger('jqplotResetZoom', [this, this.plugins.cursor]); }; if (c.showTooltipDataPosition) { c.showTooltipUnitPosition = false; c.showTooltipGridPosition = false; if (options.cursor.tooltipFormatString == undefined) { c.tooltipFormatString = $.jqplot.Cursor.cursorLegendFormatString; } } } }; // called with context of plot $.jqplot.Cursor.postDraw = function() { var c = this.plugins.cursor; // if (c.zoom) { c.zoomCanvas = new $.jqplot.GenericCanvas(); this.eventCanvas._elem.before(c.zoomCanvas.createElement(this._gridPadding, 'jqplot-zoom-canvas', this._plotDimensions)); var zctx = c.zoomCanvas.setContext(); // } c._tooltipElem = $('<div class="jqplot-cursor-tooltip" style="position:absolute;display:none"></div>'); c.zoomCanvas._elem.before(c._tooltipElem); if (c.showVerticalLine || c.showHorizontalLine) { c.cursorCanvas = new $.jqplot.GenericCanvas(); this.eventCanvas._elem.before(c.cursorCanvas.createElement(this._gridPadding, 'jqplot-cursor-canvas', this._plotDimensions)); var zctx = c.cursorCanvas.setContext(); } // if we are showing the positions in unit coordinates, and no axes groups // were specified, create a default set. 
if (c.showTooltipUnitPosition){ if (c.tooltipAxisGroups.length === 0) { var series = this.series; var s; var temp = []; for (var i=0; i<series.length; i++) { s = series[i]; var ax = s.xaxis+','+s.yaxis; if ($.inArray(ax, temp) == -1) { temp.push(ax); } } for (var i=0; i<temp.length; i++) { c.tooltipAxisGroups.push(temp[i].split(',')); } } } }; // Group: methods // // method: $.jqplot.Cursor.zoomProxy // links targetPlot to controllerPlot so that plot zooming of // targetPlot will be controlled by zooming on the controllerPlot. // controllerPlot will not actually zoom, but acts as an // overview plot. Note, the zoom options must be set to true for // zoomProxy to work. $.jqplot.Cursor.zoomProxy = function(targetPlot, controllerPlot) { var tc = targetPlot.plugins.cursor; var cc = controllerPlot.plugins.cursor; tc.zoomTarget = true; tc.zoom = true; tc.style = 'auto'; tc.dblClickReset = false; cc.zoom = true; cc.zoomProxy = true; controllerPlot.target.bind('jqplotZoom', plotZoom); controllerPlot.target.bind('jqplotResetZoom', plotReset); function plotZoom(ev, gridpos, datapos, plot, cursor) { tc.doZoom(gridpos, datapos, targetPlot, cursor); } function plotReset(ev, plot, cursor) { targetPlot.resetZoom(); } }; $.jqplot.Cursor.prototype.resetZoom = function(plot, cursor) { var axes = plot.axes; var cax = cursor._zoom.axes; if (!plot.plugins.cursor.zoomProxy && cursor._zoom.isZoomed) { for (var ax in axes) { axes[ax]._ticks = []; axes[ax].min = cax[ax].min; axes[ax].max = cax[ax].max; axes[ax].numberTicks = cax[ax].numberTicks; axes[ax].tickInterval = cax[ax].tickInterval; // for date axes axes[ax].daTickInterval = cax[ax].daTickInterval; } plot.redraw(); cursor._zoom.isZoomed = false; } else { var ctx = cursor.zoomCanvas._ctx; ctx.clearRect(0,0,ctx.canvas.width, ctx.canvas.height); } plot.target.trigger('jqplotResetZoom', [plot, cursor]); }; $.jqplot.Cursor.resetZoom = function(plot) { plot.resetZoom(); }; $.jqplot.Cursor.prototype.doZoom = function (gridpos, datapos, plot, cursor) { var c = cursor; var axes = plot.axes; var zaxes = c._zoom.axes; var start = zaxes.start; var end = zaxes.end; var min, max; var ctx = plot.plugins.cursor.zoomCanvas._ctx; // don't zoom is zoom area is too small (in pixels) if ((c.constrainZoomTo == 'none' && Math.abs(gridpos.x - c._zoom.start[0]) > 6 && Math.abs(gridpos.y - c._zoom.start[1]) > 6) || (c.constrainZoomTo == 'x' && Math.abs(gridpos.x - c._zoom.start[0]) > 6) || (c.constrainZoomTo == 'y' && Math.abs(gridpos.y - c._zoom.start[1]) > 6)) { if (!plot.plugins.cursor.zoomProxy) { for (var ax in datapos) { // make a copy of the original axes to revert back. if (c._zoom.axes[ax] == undefined) { c._zoom.axes[ax] = {}; c._zoom.axes[ax].numberTicks = axes[ax].numberTicks; c._zoom.axes[ax].tickInterval = axes[ax].tickInterval; // for date axes... c._zoom.axes[ax].daTickInterval = axes[ax].daTickInterval; c._zoom.axes[ax].min = axes[ax].min; c._zoom.axes[ax].max = axes[ax].max; } if ((c.constrainZoomTo == 'none') || (c.constrainZoomTo == 'x' && ax.charAt(0) == 'x') || (c.constrainZoomTo == 'y' && ax.charAt(0) == 'y')) { dp = datapos[ax]; if (dp != null) { if (dp > start[ax]) { axes[ax].min = start[ax]; axes[ax].max = dp; } else { span = start[ax] - dp; axes[ax].max = start[ax]; axes[ax].min = dp; } axes[ax].tickInterval = null; // for date axes... 
axes[ax].daTickInterval = null; axes[ax]._ticks = []; } } // if ((c.constrainZoomTo == 'x' && ax.charAt(0) == 'y' && c.autoscaleConstraint) || (c.constrainZoomTo == 'y' && ax.charAt(0) == 'x' && c.autoscaleConstraint)) { // dp = datapos[ax]; // if (dp != null) { // axes[ax].max == null; // axes[ax].min = null; // } // } } ctx.clearRect(0,0,ctx.canvas.width, ctx.canvas.height); plot.redraw(); c._zoom.isZoomed = true; } plot.target.trigger('jqplotZoom', [gridpos, datapos, plot, cursor]); } }; $.jqplot.preInitHooks.push($.jqplot.Cursor.init); $.jqplot.postDrawHooks.push($.jqplot.Cursor.postDraw); function updateTooltip(gridpos, datapos, plot) { var c = plot.plugins.cursor; var s = ''; var addbr = false; if (c.showTooltipGridPosition) { s = gridpos.x+', '+gridpos.y; addbr = true; } if (c.showTooltipUnitPosition) { var g; for (var i=0; i<c.tooltipAxisGroups.length; i++) { g = c.tooltipAxisGroups[i]; if (addbr) { s += '<br />'; } if (c.useAxesFormatters) { var xf = plot.axes[g[0]]._ticks[0].formatter; var yf = plot.axes[g[1]]._ticks[0].formatter; var xfstr = plot.axes[g[0]]._ticks[0].formatString; var yfstr = plot.axes[g[1]]._ticks[0].formatString; s += xf(xfstr, datapos[g[0]]) + ', '+ yf(yfstr, datapos[g[1]]); } else { s += $.jqplot.sprintf(c.tooltipFormatString, datapos[g[0]], datapos[g[1]]); } addbr = true; } } if (c.showTooltipDataPosition) { var series = plot.series; var ret = getIntersectingPoints(plot, gridpos.x, gridpos.y); var addbr = false; for (var i = 0; i< series.length; i++) { if (series[i].show) { var idx = series[i].index; var label = series[i].label.toString(); var cellid = $.inArray(idx, ret.indices); var sx = undefined; var sy = undefined; if (cellid != -1) { var data = ret.data[cellid].data; if (c.useAxesFormatters) { var xf = series[i]._xaxis._ticks[0].formatter; var yf = series[i]._yaxis._ticks[0].formatter; var xfstr = series[i]._xaxis._ticks[0].formatString; var yfstr = series[i]._yaxis._ticks[0].formatString; sx = xf(xfstr, data[0]); sy = yf(yfstr, data[1]); } else { sx = data[0]; sy = data[1]; } if (addbr) { s += '<br />'; } s += $.jqplot.sprintf(c.tooltipFormatString, label, sx, sy); addbr = true; } } } } c._tooltipElem.html(s); } function moveLine(gridpos, plot) { var c = plot.plugins.cursor; var ctx = c.cursorCanvas._ctx; ctx.clearRect(0,0,ctx.canvas.width, ctx.canvas.height); if (c.showVerticalLine) { c.shapeRenderer.draw(ctx, [[gridpos.x, 0], [gridpos.x, ctx.canvas.height]]); } if (c.showHorizontalLine) { c.shapeRenderer.draw(ctx, [[0, gridpos.y], [ctx.canvas.width, gridpos.y]]); } var ret = getIntersectingPoints(plot, gridpos.x, gridpos.y); if (c.showCursorLegend) { var cells = $(plot.targetId + ' td.jqplot-cursor-legend-label'); for (var i=0; i<cells.length; i++) { var idx = $(cells[i]).data('seriesIndex'); var series = plot.series[idx]; var label = series.label.toString(); var cellid = $.inArray(idx, ret.indices); var sx = undefined; var sy = undefined; if (cellid != -1) { var data = ret.data[cellid].data; if (c.useAxesFormatters) { var xf = series._xaxis._ticks[0].formatter; var yf = series._yaxis._ticks[0].formatter; var xfstr = series._xaxis._ticks[0].formatString; var yfstr = series._yaxis._ticks[0].formatString; sx = xf(xfstr, data[0]); sy = yf(yfstr, data[1]); } else { sx = data[0]; sy = data[1]; } } if (plot.legend.escapeHtml) { $(cells[i]).text($.jqplot.sprintf(c.cursorLegendFormatString, label, sx, sy)); } else { $(cells[i]).html($.jqplot.sprintf(c.cursorLegendFormatString, label, sx, sy)); } } } } function getIntersectingPoints(plot, x, y) { var ret = 
{indices:[], data:[]}; var s, i, d0, d, j, r; var threshold; var c = plot.plugins.cursor; for (var i=0; i<plot.series.length; i++) { s = plot.series[i]; r = s.renderer; if (s.show) { threshold = c.intersectionThreshold; if (s.showMarker) { threshold += s.markerRenderer.size/2; } for (var j=0; j<s.gridData.length; j++) { p = s.gridData[j]; // check vertical line if (c.showVerticalLine) { if (Math.abs(x-p[0]) <= threshold) { ret.indices.push(i); ret.data.push({seriesIndex: i, pointIndex:j, gridData:p, data:s.data[j]}); } } } } } return ret; } function moveTooltip(gridpos, plot) { var c = plot.plugins.cursor; var elem = c._tooltipElem; switch (c.tooltipLocation) { case 'nw': var x = gridpos.x + plot._gridPadding.left - elem.outerWidth(true) - c.tooltipOffset; var y = gridpos.y + plot._gridPadding.top - c.tooltipOffset - elem.outerHeight(true); break; case 'n': var x = gridpos.x + plot._gridPadding.left - elem.outerWidth(true)/2; var y = gridpos.y + plot._gridPadding.top - c.tooltipOffset - elem.outerHeight(true); break; case 'ne': var x = gridpos.x + plot._gridPadding.left + c.tooltipOffset; var y = gridpos.y + plot._gridPadding.top - c.tooltipOffset - elem.outerHeight(true); break; case 'e': var x = gridpos.x + plot._gridPadding.left + c.tooltipOffset; var y = gridpos.y + plot._gridPadding.top - elem.outerHeight(true)/2; break; case 'se': var x = gridpos.x + plot._gridPadding.left + c.tooltipOffset; var y = gridpos.y + plot._gridPadding.top + c.tooltipOffset; break; case 's': var x = gridpos.x + plot._gridPadding.left - elem.outerWidth(true)/2; var y = gridpos.y + plot._gridPadding.top + c.tooltipOffset; break; case 'sw': var x = gridpos.x + plot._gridPadding.left - elem.outerWidth(true) - c.tooltipOffset; var y = gridpos.y + plot._gridPadding.top + c.tooltipOffset; break; case 'w': var x = gridpos.x + plot._gridPadding.left - elem.outerWidth(true) - c.tooltipOffset; var y = gridpos.y + plot._gridPadding.top - elem.outerHeight(true)/2; break; default: var x = gridpos.x + plot._gridPadding.left + c.tooltipOffset; var y = gridpos.y + plot._gridPadding.top + c.tooltipOffset; break; } c._tooltipElem.css('left', x); c._tooltipElem.css('top', y); } function positionTooltip(plot) { // fake a grid for positioning var grid = plot._gridPadding; var c = plot.plugins.cursor; var elem = c._tooltipElem; switch (c.tooltipLocation) { case 'nw': var a = grid.left + c.tooltipOffset; var b = grid.top + c.tooltipOffset; elem.css('left', a); elem.css('top', b); break; case 'n': var a = (grid.left + (plot._plotDimensions.width - grid.right))/2 - elem.outerWidth(true)/2; var b = grid.top + c.tooltipOffset; elem.css('left', a); elem.css('top', b); break; case 'ne': var a = grid.right + c.tooltipOffset; var b = grid.top + c.tooltipOffset; elem.css({right:a, top:b}); break; case 'e': var a = grid.right + c.tooltipOffset; var b = (grid.top + (plot._plotDimensions.height - grid.bottom))/2 - elem.outerHeight(true)/2; elem.css({right:a, top:b}); break; case 'se': var a = grid.right + c.tooltipOffset; var b = grid.bottom + c.tooltipOffset; elem.css({right:a, bottom:b}); break; case 's': var a = (grid.left + (plot._plotDimensions.width - grid.right))/2 - elem.outerWidth(true)/2; var b = grid.bottom + c.tooltipOffset; elem.css({left:a, bottom:b}); break; case 'sw': var a = grid.left + c.tooltipOffset; var b = grid.bottom + c.tooltipOffset; elem.css({left:a, bottom:b}); break; case 'w': var a = grid.left + c.tooltipOffset; var b = (grid.top + (plot._plotDimensions.height - grid.bottom))/2 - elem.outerHeight(true)/2; 
elem.css({left:a, top:b}); break; default: // same as 'se' var a = grid.right - c.tooltipOffset; var b = grid.bottom + c.tooltipOffset; elem.css({right:a, bottom:b}); break; } } function handleClick (ev, gridpos, datapos, neighbor, plot) { ev.stopPropagation(); ev.preventDefault(); var c = plot.plugins.cursor; if (c.clickReset) { c.resetZoom(plot, c); } return false; } function handleDblClick (ev, gridpos, datapos, neighbor, plot) { ev.stopPropagation(); ev.preventDefault(); var c = plot.plugins.cursor; if (c.dblClickReset) { c.resetZoom(plot, c); } return false; } function handleMouseLeave(ev, gridpos, datapos, neighbor, plot) { var c = plot.plugins.cursor; if (c.show) { $(ev.target).css('cursor', c.previousCursor); if (c.showTooltip) { c._tooltipElem.hide(); } if (c.zoom) { c._zoom.started = false; c._zoom.zooming = false; if (!c.zoomProxy) { var ctx = c.zoomCanvas._ctx; ctx.clearRect(0,0,ctx.canvas.width, ctx.canvas.height); } } if (c.showVerticalLine || c.showHorizontalLine) { var ctx = c.cursorCanvas._ctx; ctx.clearRect(0,0,ctx.canvas.width, ctx.canvas.height); } if (c.showCursorLegend) { var cells = $(plot.targetId + ' td.jqplot-cursor-legend-label'); for (var i=0; i<cells.length; i++) { var idx = $(cells[i]).data('seriesIndex'); var series = plot.series[idx]; var label = series.label.toString(); if (plot.legend.escapeHtml) { $(cells[i]).text($.jqplot.sprintf(c.cursorLegendFormatString, label, undefined, undefined)); } else { $(cells[i]).html($.jqplot.sprintf(c.cursorLegendFormatString, label, undefined, undefined)); } } } } } function handleMouseEnter(ev, gridpos, datapos, neighbor, plot) { var c = plot.plugins.cursor; if (c.show) { c.previousCursor = ev.target.style.cursor; ev.target.style.cursor = c.style; if (c.showTooltip) { updateTooltip(gridpos, datapos, plot); if (c.followMouse) { moveTooltip(gridpos, plot); } else { positionTooltip(plot); } c._tooltipElem.show(); } if (c.showVerticalLine || c.showHorizontalLine) { moveLine(gridpos, plot); } } } function handleMouseMove(ev, gridpos, datapos, neighbor, plot) { var c = plot.plugins.cursor; var ctx = c.zoomCanvas._ctx; if (c.show) { if (c.showTooltip) { updateTooltip(gridpos, datapos, plot); if (c.followMouse) { moveTooltip(gridpos, plot); } } if (c.zoom && c._zoom.started && !c.zoomTarget) { c._zoom.zooming = true; if (c.constrainZoomTo == 'x') { c._zoom.end = [gridpos.x, ctx.canvas.height]; } else if (c.constrainZoomTo == 'y') { c._zoom.end = [ctx.canvas.width, gridpos.y]; } else { c._zoom.end = [gridpos.x, gridpos.y]; } drawZoomBox.call(c); } if (c.showVerticalLine || c.showHorizontalLine) { moveLine(gridpos, plot); } } } function handleMouseDown(ev, gridpos, datapos, neighbor, plot) { var c = plot.plugins.cursor; var axes = plot.axes; if (c.zoom) { if (!c.zoomProxy) { var ctx = c.zoomCanvas._ctx; ctx.clearRect(0,0,ctx.canvas.width, ctx.canvas.height); } if (c.constrainZoomTo == 'x') { c._zoom.start = [gridpos.x, 0]; } else if (c.constrainZoomTo == 'y') { c._zoom.start = [0, gridpos.y]; } else { c._zoom.start = [gridpos.x, gridpos.y]; } c._zoom.started = true; for (var ax in datapos) { // get zoom starting position. 
c._zoom.axes.start[ax] = datapos[ax]; } } } function handleMouseUp(ev, gridpos, datapos, neighbor, plot) { var c = plot.plugins.cursor; if (c.zoom && c._zoom.zooming && !c.zoomTarget) { c.doZoom(gridpos, datapos, plot, c); } c._zoom.started = false; c._zoom.zooming = false; } function drawZoomBox() { var start = this._zoom.start; var end = this._zoom.end; var ctx = this.zoomCanvas._ctx; var l, t, h, w; if (end[0] > start[0]) { l = start[0]; w = end[0] - start[0]; } else { l = end[0]; w = start[0] - end[0]; } if (end[1] > start[1]) { t = start[1]; h = end[1] - start[1]; } else { t = end[1]; h = start[1] - end[1]; } ctx.fillStyle = 'rgba(0,0,0,0.2)'; ctx.strokeStyle = '#999999'; ctx.lineWidth = 1.0; ctx.clearRect(0,0,ctx.canvas.width, ctx.canvas.height); ctx.fillRect(0,0,ctx.canvas.width, ctx.canvas.height); ctx.clearRect(l, t, w, h); // IE won't show transparent fill rect, so stroke a rect also. ctx.strokeRect(l,t,w,h); } $.jqplot.CursorLegendRenderer = function(options) { $.jqplot.TableLegendRenderer.call(this, options); this.formatString = '%s'; }; $.jqplot.CursorLegendRenderer.prototype = new $.jqplot.TableLegendRenderer(); $.jqplot.CursorLegendRenderer.prototype.constructor = $.jqplot.CursorLegendRenderer; // called in context of a Legend $.jqplot.CursorLegendRenderer.prototype.draw = function() { if (this.show) { var series = this._series; // make a table. one line label per row. this._elem = $('<table class="jqplot-legend jqplot-cursor-legend" style="position:absolute"></table>'); var pad = false; for (var i = 0; i< series.length; i++) { s = series[i]; if (s.show) { var lt = $.jqplot.sprintf(this.formatString, s.label.toString()); if (lt) { var color = s.color; if (s._stack && !s.fill) { color = ''; } addrow.call(this, lt, color, pad, i); pad = true; } // let plugins add more rows to legend. Used by trend line plugin. for (var j=0; j<$.jqplot.addLegendRowHooks.length; j++) { var item = $.jqplot.addLegendRowHooks[j].call(this, s); if (item) { addrow.call(this, item.label, item.color, pad); pad = true; } } } } } function addrow(label, color, pad, idx) { var rs = (pad) ? this.rowSpacing : '0'; var tr = $('<tr class="jqplot-legend jqplot-cursor-legend"></tr>').appendTo(this._elem); tr.data('seriesIndex', idx); $('<td class="jqplot-legend jqplot-cursor-legend-swatch" style="padding-top:'+rs+';">'+ '<div style="border:1px solid #cccccc;padding:0.2em;">'+ '<div class="jqplot-cursor-legend-swatch" style="background-color:'+color+';"></div>'+ '</div></td>').appendTo(tr); var td = $('<td class="jqplot-legend jqplot-cursor-legend-label" style="vertical-align:middle;padding-top:'+rs+';"></td>'); td.appendTo(tr); td.data('seriesIndex', idx); if (this.escapeHtml) { td.text(label); } else { td.html(label); } } return this._elem; }; })(jQuery);
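
// --- Usage sketch (added for illustration) --------------------------------
// Typical plot construction enabling this plugin. Assumes jQuery, the core
// jquery.jqplot.js and this cursor plugin are loaded on the page, and that a
// <div id="chart"></div> exists.
$(document).ready(function() {
    var plot = $.jqplot('chart', [[[1, 2], [2, 5], [3, 4], [4, 3]]], {
        cursor: {
            show: true,
            zoom: true,            // drag to zoom; double-click resets by default
            showTooltip: true,
            followMouse: true,
            constrainZoomTo: 'x'   // one of 'none', 'x' or 'y'
        }
    });
});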
PypiClean
/mockdjangosaml2-0.16.1.tar.gz/mockdjangosaml2-0.16.1/README.rst
Mock djangosaml2
================

In projects that use ``djangosaml2`` in a production environment it is useful
to have a mockup authentication system that can be used in development and
testing environments, i.e. when ``DEBUG = True``.

Install
-------

``pip install mockdjangosaml2``

Usage
-----

* in project's ``settings.py``::

    if DEBUG:
        INSTALLED_APPS += ('mockdjangosaml2',)
    else:
        INSTALLED_APPS += ('djangosaml2',)

* update project's ``urls.py`` file to include a separate set of patterns for
  the ``DEBUG = True`` case::

    if settings.DEBUG:
        urlpatterns += patterns('',
            (r'^saml2/', include('mockdjangosaml2.urls')),
        )
    else:
        urlpatterns += patterns('',
            (r'^saml2/', include('djangosaml2.urls')),
        )

* add mock users and their attributes to ``MOCK_SAML2_USERS`` in
  ``settings.py``. It should be formatted as the sample given in the
  application's ``settings.py`` file (an illustrative, hypothetical shape
  follows below).
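
* for illustration only, a plausible shape for ``MOCK_SAML2_USERS`` (the keys
  below are hypothetical; the authoritative sample ships in the application's
  ``settings.py``)::

    MOCK_SAML2_USERS = {
        'student': {
            'password': 'student',
            'attributes': {
                'mail': ['[email protected]'],
            },
        },
    }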
PypiClean
/jupyterhub_url_sharing-0.1.0.tar.gz/jupyterhub_url_sharing-0.1.0/node_modules/@jupyterlab/services/lib/serverconnection.d.ts
export declare namespace ServerConnection { /** * A Jupyter server settings object. * Note that all of the settings are optional when passed to * [[makeSettings]]. The default settings are given in [[defaultSettings]]. */ interface ISettings { /** * The base url of the server. */ readonly baseUrl: string; /** * The app url of the JupyterLab application. */ readonly appUrl: string; /** * The base ws url of the server. */ readonly wsUrl: string; /** * The default request init options. */ readonly init: RequestInit; /** * The authentication token for requests. Use an empty string to disable. */ readonly token: string; /** * Whether to append a token to a Websocket url. The default is `false` in the browser * and `true` in node or jest. */ readonly appendToken: boolean; /** * The `fetch` method to use. */ readonly fetch: (input: RequestInfo, init?: RequestInit) => Promise<Response>; /** * The `Request` object constructor. */ readonly Request: typeof Request; /** * The `Headers` object constructor. */ readonly Headers: typeof Headers; /** * The `WebSocket` object constructor. */ readonly WebSocket: typeof WebSocket; } /** * Create a settings object given a subset of options. * * @param options - An optional partial set of options. * * @returns The full settings object. */ function makeSettings(options?: Partial<ISettings>): ISettings; /** * Make an request to the notebook server. * * @param url - The url for the request. * * @param init - The initialization options for the request. * * @param settings - The server settings to apply to the request. * * @returns a Promise that resolves with the response. * * @throws If the url of the request is not a notebook server url. * * #### Notes * The `url` must start with `settings.baseUrl`. The `init` settings are * merged with `settings.init`, with `init` taking precedence. * The headers in the two objects are not merged. * If there is no body data, we set the content type to `application/json` * because it is required by the Notebook server. */ function makeRequest(url: string, init: RequestInit, settings: ISettings): Promise<Response>; /** * A wrapped error for a fetch response. */ class ResponseError extends Error { /** * Create a ResponseError from a response, handling the traceback and message * as appropriate. * * @param response The response object. * * @returns A promise that resolves with a `ResponseError` object. */ static create(response: Response): Promise<ResponseError>; /** * Create a new response error. */ constructor(response: Response, message?: string, traceback?: string); /** * The response associated with the error. */ response: Response; /** * The traceback associated with the error. */ traceback: string; } /** * A wrapped error for a network error. */ class NetworkError extends TypeError { /** * Create a new network error. */ constructor(original: TypeError); } }
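
// --- Usage sketch (added for illustration) --------------------------------
// Making an authenticated request with the API declared above. Assumes this
// runs somewhere '@jupyterlab/services' is importable (e.g. a JupyterLab
// extension); the kernels endpoint is just an example target.
import { ServerConnection } from '@jupyterlab/services';

async function listKernels(): Promise<void> {
  const settings = ServerConnection.makeSettings({ token: 'my-token' });
  // the default baseUrl typically ends with a trailing slash
  const url = settings.baseUrl + 'api/kernels';
  const response = await ServerConnection.makeRequest(url, {}, settings);
  if (!response.ok) {
    throw await ServerConnection.ResponseError.create(response);
  }
  console.log(await response.json());
}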
PypiClean
/ivy-testing-release-0.0.0.1.tar.gz/ivy-testing-release-0.0.0.1/ivy/functional/frontends/jax/numpy/indexing.py
import inspect
import abc

# local
import ivy
from ivy.functional.frontends.jax.func_wrapper import (
    to_ivy_arrays_and_back,
)
from .creation import linspace, arange, array
from .manipulations import transpose, concatenate, expand_dims


@to_ivy_arrays_and_back
def diagonal(a, offset=0, axis1=0, axis2=1):
    return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)


@to_ivy_arrays_and_back
def diag(v, k=0):
    return ivy.diag(v, k=k)


@to_ivy_arrays_and_back
def diag_indices(n, ndim=2):
    idx = ivy.arange(n, dtype=int)
    return (idx,) * ndim


# take_along_axis
@to_ivy_arrays_and_back
def take_along_axis(arr, indices, axis, mode="fill"):
    return ivy.take_along_axis(arr, indices, axis, mode=mode)


@to_ivy_arrays_and_back
def tril_indices(n, k=0, m=None):
    return ivy.tril_indices(n, m, k)


@to_ivy_arrays_and_back
def triu_indices(n, k=0, m=None):
    return ivy.triu_indices(n, m, k)


@to_ivy_arrays_and_back
def triu_indices_from(arr, k=0):
    return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)


@to_ivy_arrays_and_back
def tril_indices_from(arr, k=0):
    return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)


# unravel_index
@to_ivy_arrays_and_back
def unravel_index(indices, shape):
    ret = [x.astype(indices.dtype) for x in ivy.unravel_index(indices, shape)]
    return tuple(ret)


@to_ivy_arrays_and_back
def mask_indices(n, mask_func, k=0):
    mask_func_obj = inspect.unwrap(mask_func)
    mask_func_name = mask_func_obj.__name__
    try:
        ivy_mask_func_obj = getattr(ivy.functional.frontends.jax.numpy, mask_func_name)
        a = ivy.ones((n, n))
        mask = ivy_mask_func_obj(a, k=k)
        indices = ivy.argwhere(mask.ivy_array)
        return indices[:, 0], indices[:, 1]
    except AttributeError as e:
        print(f"Attribute error: {e}")


@to_ivy_arrays_and_back
def diag_indices_from(arr):
    n = arr.shape[0]
    ndim = ivy.get_num_dims(arr)
    if not all(arr.shape[i] == n for i in range(ndim)):
        raise ValueError("All dimensions of input must be of equal length")
    idx = ivy.arange(n, dtype=int)
    return (idx,) * ndim


@to_ivy_arrays_and_back
def indices(dimensions, dtype=int, sparse=False):
    if sparse:
        return tuple(
            ivy.arange(dim)
            .expand_dims(
                axis=[j for j in range(len(dimensions)) if i != j],
            )
            .astype(dtype)
            for i, dim in enumerate(dimensions)
        )
    else:
        grid = ivy.meshgrid(*[ivy.arange(dim) for dim in dimensions], indexing="ij")
        return ivy.stack(grid, axis=0).astype(dtype)


def _make_1d_grid_from_slice(s):
    step = 1 if s.step is None else s.step
    start = 0 if s.start is None else s.start
    if s.step is not None and ivy.is_complex_dtype(s.step):
        newobj = linspace(start, s.stop, int(abs(step)))
    else:
        newobj = arange(start, s.stop, step)
    return newobj


class _AxisConcat(abc.ABC):
    axis: int
    ndmin: int
    trans1d: int

    def __getitem__(self, key):
        key_tup = key if isinstance(key, tuple) else (key,)

        params = [self.axis, self.ndmin, self.trans1d, -1]

        directive = key_tup[0]
        if isinstance(directive, str):
            key_tup = key_tup[1:]
            # check two special cases: matrix directives
            if directive == "r":
                params[-1] = 0
            elif directive == "c":
                params[-1] = 1
            else:
                vec = directive.split(",")
                k = len(vec)
                if k < 4:
                    vec += params[k:]
                else:
                    # ignore everything after the first three comma-separated ints
                    vec = vec[:3] + [params[-1]]
                try:
                    params = list(map(int, vec))
                except ValueError as err:
                    raise ValueError(
                        f"could not understand directive {directive!r}"
                    ) from err

        axis, ndmin, trans1d, matrix = params

        output = []
        for item in key_tup:
            if isinstance(item, slice):
                newobj = _make_1d_grid_from_slice(item)
                item_ndim = 0
            elif isinstance(item, str):
                raise ValueError("string directive must be placed at the beginning")
            else:
                newobj = array(item, copy=False)
                item_ndim = newobj.ndim

            newobj = array(newobj, copy=False, ndmin=ndmin)

            if trans1d != -1 and ndmin - item_ndim > 0:
                shape_obj = tuple(range(ndmin))
                # Calculate number of left shifts, with overflow protection by mod
                num_lshifts = ndmin - abs(ndmin + trans1d + 1) % ndmin
                shape_obj = tuple(shape_obj[num_lshifts:] + shape_obj[:num_lshifts])

                newobj = transpose(newobj, shape_obj)

            output.append(newobj)

        res = concatenate(tuple(output), axis=axis)

        if matrix != -1 and res.ndim == 1:
            # insert 2nd dim at axis 0 or 1
            res = expand_dims(res, matrix)

        return res

    def __len__(self) -> int:
        return 0


class RClass(_AxisConcat):
    axis = 0
    ndmin = 1
    trans1d = -1


r_ = RClass()


class CClass(_AxisConcat):
    axis = -1
    ndmin = 2
    trans1d = 0


c_ = CClass()
PypiClean
/pulsar_sdk_py-0.1.35-py3-none-any.whl/pulsar_sdk_py/pulsar_sdk.py
import re import json import uuid import logging from dataclasses import asdict import requests import websockets from typing import AsyncGenerator from pulsar_sdk_py.helpers import filter_non_empty_params from pulsar_sdk_py.schemas.serializer import serialize_to_dataclass from pulsar_sdk_py.enums import ( TierKeys, ChainKeys, TokenSort, TokenType, NFTItemSort, ProtocolSort, NFTCollectionSort, replace_enums_with_values, ) from pulsar_sdk_py.exceptions import ( WebSocketClosed, SerializationError, WrongResponseFormat, ) from pulsar_sdk_py.schemas.schemas import ( NFTItem, Timeseries, ResolvedName, ProtocolData, NFTCollection, ExtendedToken, ResolvedAddress, NFTTraitsFilter, PaginatedNFTItem, ProtocolTimeseries, TimeseriesWithStats, TokenPriceTimeseries, AggregateWalletTokens, WalletRequestSettings, PaginatedNFTCollection, PaginatedTokenWithStats, PaginatedProtocolWithStats, AggregateWalletIntegrations, ) class PulsarSDK: """ A client for interacting with the Pulsar Third Party API. This class provides a high-level interface for interacting with the API, including the ability to retrieve data about tokens, domain names, NFTs, protocols, and wallet balances. The class serves typified ways to interact with the endpoints, through websockets or REST. Args: api_key (str): The API key to use for making requests to the Pulsar Third Party API. """ _BASE_URL = "qa-api.pulsar.finance" @property def REST_API_URL(self): return f"{self._PROTOCOL}://{self._BASE_URL}/v1/thirdparty" @property def WS_API_URL(self): return f"{self._WS_PROTOCOL}://{self._BASE_URL}/v1/thirdparty/ws" def __init__(self, api_key, base_url: str | None = None, use_ssl: bool = True): if base_url: self._BASE_URL = base_url self._PROTOCOL = "https" if use_ssl else "http" self._WS_PROTOCOL = "wss" if use_ssl else "ws" headers = {"Authorization": f"Bearer {api_key}"} # Rest clients self.tokens = self._TokenRestClient(rest_api_url=self.REST_API_URL, headers=headers) self.name_service = self._NameServiceRestClient(rest_api_url=self.REST_API_URL, headers=headers) self.nfts = self._NFTRestClient(rest_api_url=self.REST_API_URL, headers=headers) self.protocols = self._ProtocolRestClient(rest_api_url=self.REST_API_URL, headers=headers) self.wallets = self._WalletRestClient(rest_api_url=self.REST_API_URL, headers=headers) # Websocket clients ws_client = PulsarSDK._WebsocketClient(ws_url=self.WS_API_URL, api_key=api_key) self.balances = self._WalletBalancesClient(ws_client=ws_client) class _WebsocketClient: # noinspection PyUnresolvedReferences """ A helper class for making WebSocket connections to a third-party service. This class provides methods for establishing and managing WebSocket connections to a third-party service. It includes a method for generating responses from a WebSocket connection, as well as methods for handling responses and processing payload data. Attributes: headers (dict): A dictionary of headers to include in WebSocket connection requests sent by instances of this class. uri (str): The URI for the WebSocket connection. websocket_conn: The WebSocket connection object. """ headers = {} websocket_conn = None def __init__(self, ws_url, api_key): self.WS_URL = ws_url self.headers = {"Authorization": api_key} async def __connect_websocket(self): """ Establish a WebSocket connection. This method establishes a WebSocket connection to the specified URI, using the headers provided to the instance. Returns: The WebSocket connection object. 
""" if self.websocket_conn is not None and self.websocket_conn.open: # WebSocket connection is already open, return the existing connection return self.websocket_conn # Create a new WebSocket connection self.websocket_conn = await websockets.connect(self.WS_URL, extra_headers=self.headers) return self.websocket_conn async def response_generator(self, msg): """ Generate responses from a WebSocket connection. This method sends a message to the WebSocket connection, and then waits for responses to be received from the connection. It generates each response as it is received. Args: msg (dict): A dictionary representing the message to send over the WebSocket connection. Yields: str: The response received from the WebSocket connection. Raises: WebSocketClosed: If the WebSocket connection is unexpectedly closed while waiting for responses. """ try: websocket_conn = await self.__connect_websocket() serialized_data = json.dumps(msg) await websocket_conn.send(serialized_data) while True: yield await websocket_conn.recv() except websockets.ConnectionClosed as e: # Handle connection closed error raise WebSocketClosed( f"Connection unexpectedly closed while waiting for responses from request ID: {msg['request_id']}" ) from e except Exception as e: # Handle other exceptions that may occur logging.error(f"Exception occurred while waiting for responses from request ID: {msg['request_id']}") raise e async def handle_response(self, request_id, msg, finished_event_type): """ Handle responses received from a WebSocket connection. This method generates responses from a WebSocket connection, and then processes the payload data in each response. If the response contains a "finished" event of the specified type, the method returns. Args: request_id (str): The ID of the request associated with the WebSocket connection. msg (dict): A dictionary representing the message to send over the WebSocket connection. finished_event_type (str): The event type indicating that the request has finished. Yields: Any: The processed payload data from the response. Raises: WebSocketClosed: If the WebSocket connection is unexpectedly closed while waiting for responses. WrongResponseFormat: If a response does not contain the expected data format. """ async for response in self.response_generator(msg): try: event_dict = await self.__get_event_dict(request_id=request_id, response=response) if event_dict: event_type = event_dict["key"] if "PREFETCH" not in event_type: if event_payload := event_dict["payload"]: payload_type = event_payload["type"] payload_data = event_payload["data"] async for item in self.__process_payload(payload_type, payload_data): yield item if event_type == finished_event_type: return # TODO this is missing error handling if the response field is_error is True except WrongResponseFormat as e: logging.error(f"Response from API is not valid. Request ID: {request_id}") raise e @staticmethod async def __process_payload(payload_type, payload_data): """ Process the payload data in a WebSocket response. This method processes the payload data in a WebSocket response, converting it to a more easily usable format. Args: payload_type (str): The type of the payload data in the response. payload_data (dict): The payload data to process. Yields: Any: The processed payload data from the response. Raises: SerializationError: If an error occurs during serialization of the payload data. 
""" try: if payload_type.startswith("Timeseries"): yield serialize_to_dataclass(payload_data[0], Timeseries) elif payload_type.startswith("AggregateWalletIntegrations"): yield serialize_to_dataclass(payload_data, AggregateWalletIntegrations) elif payload_type.startswith("NFTItem"): yield [serialize_to_dataclass(item, NFTItem) for item in payload_data["items"]] elif payload_type.startswith("AggregateWalletTokens"): yield serialize_to_dataclass(payload_data, AggregateWalletTokens) except Exception as e: # Handle serialization error raise SerializationError( f"An error occurred during serialization: {str(e)}\nSerializing {payload_type}." ) from e @staticmethod async def __get_event_dict(request_id, response): """ A coroutine that returns the event dictionary from the WebSocket server response. This method is responsible for parsing the response from the WebSocket server and returning the event dictionary contained within. Args: request_id (str): The ID of the request being made. response (str): The response received from the WebSocket server. Returns: The event dictionary contained within the WebSocket server response. Raises: WrongResponseFormat: If the response from the WebSocket server is not in the expected format. """ json_response = json.loads(response) event_dict = json_response.get("event") if not event_dict: raise WrongResponseFormat("Response does not contain 'event' dictionary.") if "request_id" not in event_dict: raise WrongResponseFormat("Response 'event' dictionary does not contain 'request_id' key.") if event_dict["request_id"] == request_id: return event_dict class _WalletBalancesClient: __KEYS = { "WALLET_BALANCES": { "COMMAND": "WALLET_BALANCES", "FINISHED": "WALLET_BALANCES_FINISHED", }, "GET_WALLET_TIMESERIES": { "COMMAND": "GET_WALLET_TIMESERIES", "FINISHED": "GET_WALLET_TIMESERIES_FINISHED", }, } def __init__(self, ws_client): self.__ws_client = ws_client @replace_enums_with_values async def get_wallet_balances( self, wallet_addr: str, chain: ChainKeys, wallet_request_settings: WalletRequestSettings | None = None ) -> AsyncGenerator[AggregateWalletIntegrations | list[NFTItem] | AggregateWalletTokens | None, None]: request_id = str(uuid.uuid4()) data_dict = {"address": wallet_addr, "chain": chain} if wallet_request_settings: self.__convert_sets_to_lists(wallet_request_settings) data_dict |= asdict(wallet_request_settings) msg = { "method": "COMMAND", "command": { "key": f"{self.__KEYS['WALLET_BALANCES']['COMMAND']}", "data": data_dict, }, "request_id": request_id, } finished_event_type = self.__KEYS["WALLET_BALANCES"]["FINISHED"] async for response in self.__ws_client.handle_response( request_id=request_id, msg=msg, finished_event_type=finished_event_type, ): yield response @replace_enums_with_values async def get_wallet_timeseries( self, wallet_addr: str, chain: ChainKeys, tier: TierKeys ) -> AsyncGenerator[Timeseries, None]: request_id = str(uuid.uuid4()) msg = { "method": "COMMAND", "command": { "key": f"{self.__KEYS['GET_WALLET_TIMESERIES']['COMMAND']}", "data": { "address": f"{wallet_addr}", "chain": f"{chain}", "tier": f"{tier}", }, }, "request_id": request_id, } finished_event_type = self.__KEYS["GET_WALLET_TIMESERIES"]["FINISHED"] async for response in self.__ws_client.handle_response( request_id=request_id, msg=msg, finished_event_type=finished_event_type, ): yield response def __convert_sets_to_lists(self, wallet_request_settings: WalletRequestSettings): wallet_request_settings.hide_nfts = list(wallet_request_settings.hide_nfts) 
        wallet_request_settings.hide_tokens = list(wallet_request_settings.hide_tokens)
        wallet_request_settings.hide_integrations = list(wallet_request_settings.hide_integrations)


class _RestClient:
    # noinspection PyUnresolvedReferences
    """
    A helper class for making HTTP requests to a REST API.

    This class provides several methods for sending HTTP requests to Pulsar REST API endpoints.
    It includes a static method for filtering out any key-value pairs from a dictionary where the value is None,
    as well as a method for sending an HTTP request to a Pulsar REST API endpoint and returning the JSON response body.

    Attributes:
        headers (dict): A dictionary of headers to include in HTTP requests sent by instances of this class.
    """

    headers = {}

    def __init__(self, rest_api_url, headers):
        self.REST_API_URL = rest_api_url
        self.headers = headers

    @replace_enums_with_values
    def __get_request_on_endpoint(self, func_name, request_type, request_body=None, **kwargs):
        """
        Send an HTTP request to a specific REST API endpoint and return the JSON response body.

        Args:
            func_name (str): The name of a function that corresponds to a specific REST API endpoint.
            request_type (str): The HTTP method to use for the request (e.g. "GET", "POST", "PUT").
            request_body (dict, optional): The JSON payload to include in the request body (default: {}).
            **kwargs: Key-value pairs to include as path or query parameters in the request URL.

        Returns:
            dict: The JSON response body as a dictionary.

        Raises:
            HTTPError: If the response from the API endpoint indicates an error status code (e.g. 4xx or 5xx).
        """
        if request_body is None:
            request_body = {}
        endpoint_url = self.endpoints[func_name]

        # This code extracts named parameters from a string (endpoint_url) using regular expressions,
        # and populates them with corresponding values from a dictionary (kwargs). The resulting string is formed
        # by substituting the named parameters with their corresponding values, and concatenating the result with
        # the base URL (REST_API_URL).
param_names = re.findall(r"\{([^{}]*)\}", endpoint_url) params = {} for param_name in param_names: if param_name in kwargs: param_value = kwargs.pop(param_name) params[param_name] = param_value formatted_url = endpoint_url.format(**params) endpoint_url = self.REST_API_URL + formatted_url if kwargs: # If there are any remaining kwargs, construct them as query parameters for the endpoint URL query_params = [] for key, value in kwargs.items(): if isinstance(value, list): query_params.extend(f"{key}={item}" for item in value) else: query_params.append(f"{key}={value}") query_params_string = "&".join(query_params) endpoint_url += f"?{query_params_string}" # Add the query parameters to the endpoint URL response = requests.request( method=request_type, url=endpoint_url, json=request_body, headers=self.headers, ) response.raise_for_status() return response.json() class _NameServiceRestClient(_RestClient): endpoints = { "resolve_name": "/name-service/resolve-name", "resolve_address": "/name-service/resolve-address", } # NAME SERVICE def resolve_name(self, name: str) -> ResolvedName: response = self._RestClient__get_request_on_endpoint( func_name="resolve_name", request_type="GET", name=name ) return serialize_to_dataclass(response, ResolvedName) def resolve_address(self, address: str) -> ResolvedAddress: response = self._RestClient__get_request_on_endpoint( func_name="resolve_address", request_type="GET", address=address ) return serialize_to_dataclass(response, ResolvedAddress) class _ProtocolRestClient(_RestClient): endpoints = { "get_protocol": "/protocols/protocols/{protocol_key}", "list_protocols": "/protocols/all-protocols", "get_number_protocols": "/protocols/total-protocols", "get_filtered_protocols": "/protocols", "get_protocol_timeseries": "/protocols/{protocol_key}/timeseries", } def get_protocol(self, protocol_key: str) -> ProtocolData: response = self._RestClient__get_request_on_endpoint( func_name="get_protocol", request_type="GET", protocol_key=protocol_key ) return serialize_to_dataclass(response, ProtocolData) def list_protocols(self, chain: ChainKeys | None = None) -> list[ProtocolData]: params_filtered = filter_non_empty_params(chain=chain) response = self._RestClient__get_request_on_endpoint( func_name="list_protocols", request_type="GET", **params_filtered ) return [serialize_to_dataclass(protocol, ProtocolData) for protocol in response] def get_number_protocols(self) -> int: return self._RestClient__get_request_on_endpoint("get_number_protocols", request_type="GET") def get_filtered_protocols( self, name: str | None = None, chains: list[ChainKeys] | None = None, tvl: str | None = None, sort_by: ProtocolSort | None = None, offset: int = 0, limit: int = 10, ) -> PaginatedProtocolWithStats: params_filtered = filter_non_empty_params( name=name, chains=chains, tvl=tvl, sort_by=sort_by, offset=offset, limit=limit, ) response = self._RestClient__get_request_on_endpoint( func_name="get_filtered_protocols", request_type="GET", **params_filtered, ) return serialize_to_dataclass(response, PaginatedProtocolWithStats) def get_protocol_timeseries(self, protocol_key: str, tier_name: TierKeys) -> ProtocolTimeseries: response = self._RestClient__get_request_on_endpoint( func_name="get_protocol_timeseries", request_type="GET", protocol_key=protocol_key, tier_name=tier_name, ) return serialize_to_dataclass(response, ProtocolTimeseries) class _NFTRestClient(_RestClient): endpoints = { "fetch_collection_by_address": "/nfts/collections/{chain}/{collection_address}", "fetch_nft_by_address": 
"/nfts/collections/{chain}/{collection_address}/nfts", "list_collection_nfts": "/nfts/collections/{collection_id}/nfts", "fetch_nft": "/nfts/collections/{collection_id}/nfts/{token_id}", "fetch_collection": "/nfts/collections/{collection_id}", "list_nfts": "/nfts", } def list_collection_nfts( self, collection_id: str, search_string: str | None = None, rarity_score: str | None = None, rank_minimum: int | None = None, rank_maximum: int | None = None, traits: NFTTraitsFilter | None = None, sort_by: NFTItemSort | None = None, offset: int = 0, limit: int = 10, ) -> PaginatedNFTItem: params_filtered = filter_non_empty_params( collection_id=collection_id, search_string=search_string, rarity_score=rarity_score, rank_minimum=rank_minimum, rank_maximum=rank_maximum, sort_by=sort_by, offset=offset, limit=limit, ) traits_dict = {"traits": [] if traits is None else traits.traits} response = self._RestClient__get_request_on_endpoint( func_name="list_collection_nfts", request_type="POST", request_body=traits_dict, **params_filtered, ) return serialize_to_dataclass(response, PaginatedNFTItem) def fetch_collection(self, collection_id: str) -> NFTCollection: response = self._RestClient__get_request_on_endpoint( func_name="fetch_collection", request_type="GET", collection_id=collection_id, ) return serialize_to_dataclass(response, NFTCollection) def fetch_collection_by_address(self, collection_address: str, chain: ChainKeys) -> NFTCollection: response = self._RestClient__get_request_on_endpoint( func_name="fetch_collection_by_address", request_type="GET", collection_address=collection_address, chain=chain, ) return serialize_to_dataclass(response, NFTCollection) def fetch_nft(self, collection_id: str, token_id: str) -> NFTItem: response = self._RestClient__get_request_on_endpoint( func_name="fetch_nft", request_type="GET", collection_id=collection_id, token_id=token_id, ) return serialize_to_dataclass(response, NFTItem) def fetch_nft_by_address(self, collection_address: str, chain: ChainKeys, token_id: str) -> NFTItem: response = self._RestClient__get_request_on_endpoint( func_name="fetch_nft_by_address", request_type="GET", collection_address=collection_address, chain=chain, token_id=token_id, ) return serialize_to_dataclass(response, NFTItem) def list_nfts( self, name: str | None = None, chains: list[ChainKeys] | None = None, sort_by: NFTCollectionSort | None = None, offset: int = 0, limit: int = 10, is_fully_index: bool = True, ) -> PaginatedNFTCollection: params_filtered = filter_non_empty_params( name=name, chains=chains, sort_by=sort_by, offset=offset, limit=limit, is_fully_index=is_fully_index, ) response = self._RestClient__get_request_on_endpoint( func_name="list_nfts", request_type="GET", **params_filtered ) return serialize_to_dataclass(response, PaginatedNFTCollection) class _TokenRestClient(_RestClient): endpoints = { "get_token_info_by_id": "/token/{token_id}", "get_token_info_by_address_and_chain": "/token/{token_type}/{address}", "list_tokens": "/tokens", "get_token_timeseries": "/tokens/{token_id}/timeseries", } # TOKENS def get_token_info_by_id(self, token_id: str) -> ExtendedToken: response = self._RestClient__get_request_on_endpoint( func_name="get_token_info_by_id", request_type="GET", token_id=token_id ) return serialize_to_dataclass(response, ExtendedToken) def get_token_info_by_address_and_chain( self, token_type: TokenType, address: str, chain: ChainKeys ) -> ExtendedToken: response = self._RestClient__get_request_on_endpoint( func_name="get_token_info_by_address_and_chain", 
request_type="GET", token_type=token_type, address=address, chain=chain, ) return serialize_to_dataclass(response, ExtendedToken) def list_tokens( self, text: str | None = None, chains: list[ChainKeys] | None = None, minimum_liquidity: int = 0, sort_by: TokenSort | None = None, whitelisted_only: bool = False, remove_blacklisted: bool = False, offset: int = 0, limit: int = 10, ) -> PaginatedTokenWithStats: params_filtered = filter_non_empty_params( text=text, chains=chains, sort_by=sort_by, offset=offset, limit=limit, minimum_liquidity=minimum_liquidity, whitelisted_only=whitelisted_only, remove_blacklisted=remove_blacklisted, ) response = self._RestClient__get_request_on_endpoint( func_name="list_tokens", request_type="GET", **params_filtered ) return serialize_to_dataclass(response, PaginatedTokenWithStats) def get_token_timeseries(self, token_id: str, tier_name: TierKeys) -> TokenPriceTimeseries: response = self._RestClient__get_request_on_endpoint( func_name="get_token_timeseries", request_type="GET", token_id=token_id, tier_name=tier_name, ) return serialize_to_dataclass(response, TokenPriceTimeseries) class _WalletRestClient(_RestClient): endpoints = { "get_wallet_timeseries": "/wallet/{address}/timeseries", } def get_wallet_timeseries( self, address: str, chain: ChainKeys, tier: TierKeys = TierKeys.ONE_DAY ) -> TimeseriesWithStats: response = self._RestClient__get_request_on_endpoint( func_name="get_wallet_timeseries", request_type="GET", address=address, chain=chain, tier=tier, ) return serialize_to_dataclass(response, TimeseriesWithStats)
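# --- Hypothetical usage sketch (not part of this module) ---
# The public client that wires these helper classes together is defined
# elsewhere, so the "ws_client" object and the ChainKeys member below are
# assumptions for illustration only:
#
#     import asyncio
#
#     async def main():
#         balances = _WalletBalancesClient(ws_client)
#         async for item in balances.get_wallet_balances(
#             wallet_addr="0x...", chain=ChainKeys.ETHEREUM  # assumed enum member
#         ):
#             # yields AggregateWalletIntegrations, list[NFTItem],
#             # AggregateWalletTokens, ... as payloads arrive
#             print(type(item).__name__)
#
#     asyncio.run(main())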
PypiClean
/taskcc-alipay-sdk-python-3.3.398.tar.gz/taskcc-alipay-sdk-python-3.3.398/alipay/aop/api/request/AlipayMobilePublicAccountDeleteRequest.py
import json from alipay.aop.api.FileItem import FileItem from alipay.aop.api.constant.ParamConstants import * class AlipayMobilePublicAccountDeleteRequest(object): def __init__(self, biz_model=None): self._biz_model = biz_model self._biz_content = None self._version = "1.0" self._terminal_type = None self._terminal_info = None self._prod_code = None self._notify_url = None self._return_url = None self._udf_params = None self._need_encrypt = False @property def biz_model(self): return self._biz_model @biz_model.setter def biz_model(self, value): self._biz_model = value @property def biz_content(self): return self._biz_content @biz_content.setter def biz_content(self, value): self._biz_content = value @property def version(self): return self._version @version.setter def version(self, value): self._version = value @property def terminal_type(self): return self._terminal_type @terminal_type.setter def terminal_type(self, value): self._terminal_type = value @property def terminal_info(self): return self._terminal_info @terminal_info.setter def terminal_info(self, value): self._terminal_info = value @property def prod_code(self): return self._prod_code @prod_code.setter def prod_code(self, value): self._prod_code = value @property def notify_url(self): return self._notify_url @notify_url.setter def notify_url(self, value): self._notify_url = value @property def return_url(self): return self._return_url @return_url.setter def return_url(self, value): self._return_url = value @property def udf_params(self): return self._udf_params @udf_params.setter def udf_params(self, value): if not isinstance(value, dict): return self._udf_params = value @property def need_encrypt(self): return self._need_encrypt @need_encrypt.setter def need_encrypt(self, value): self._need_encrypt = value def add_other_text_param(self, key, value): if not self.udf_params: self.udf_params = dict() self.udf_params[key] = value def get_params(self): params = dict() params[P_METHOD] = 'alipay.mobile.public.account.delete' params[P_VERSION] = self.version if self.biz_model: params[P_BIZ_CONTENT] = json.dumps(obj=self.biz_model.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':')) if self.biz_content: if hasattr(self.biz_content, 'to_alipay_dict'): params['biz_content'] = json.dumps(obj=self.biz_content.to_alipay_dict(), ensure_ascii=False, sort_keys=True, separators=(',', ':')) else: params['biz_content'] = self.biz_content if self.terminal_type: params['terminal_type'] = self.terminal_type if self.terminal_info: params['terminal_info'] = self.terminal_info if self.prod_code: params['prod_code'] = self.prod_code if self.notify_url: params['notify_url'] = self.notify_url if self.return_url: params['return_url'] = self.return_url if self.udf_params: params.update(self.udf_params) return params def get_multipart_params(self): multipart_params = dict() return multipart_params
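# --- Hypothetical usage sketch (not part of this module) ---
# The client that signs and sends the request lives elsewhere in the SDK,
# and the biz_content payload below is a made-up placeholder:
#
#     request = AlipayMobilePublicAccountDeleteRequest()
#     request.biz_content = '{"...": "..."}'
#     request.add_other_text_param("app_auth_token", "...")
#     print(request.get_params())  # includes 'alipay.mobile.public.account.delete'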
PypiClean
/HarmonyDecoder-1.0.9.tar.gz/HarmonyDecoder-1.0.9/README.md
HARMONY DECODER

A project made for fun, to help people with harmony tasks. The project itself is very simple: it takes its input as functions in a specific format (each function separated by a space) and solves the harmony task. All basic harmony rules are checked while generating the solutions, and the solutions are sorted by their quality.

FORMAT:

    Symbol^Position/Root+Added+-Deleted-*Alterations*=Suspensions=

E.g.:

    T^/++--**==         (C E G in C major)
    mT^1/1++--**==      (C Eb G in C major, in position of root)
    S^3/5++--**==       (F A C in C major, in position of third and fifth in root)
    D^/+9+-5-**==       (G B F A in C major, with added ninth and deleted fifth)
    Sii^/++--*3#,5b*==  (D F# Ab in C major, with sharp third and lowered fifth)

| Version | Description | Creation time |
| ------- | ----------- | ------------- |
| 1.0.*   | The first version of the project: basic functions, additional intervals, alterations and deleted notes are working. Everything else is in progress. Documentation is not created... yet. I'm working on it. | 18.06.2023 |
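The package's public API is not shown in this README, so purely as an illustration, here is a rough sketch (not part of HarmonyDecoder) of how the documented input format could be tokenized in Python; the regex and the function name are assumptions:

```python
import re

# Hypothetical tokenizer for the documented format:
#   Symbol^Position/Root+Added+-Deleted-*Alterations*=Suspensions=
FUNCTION_RE = re.compile(
    r"(?P<symbol>[A-Za-z]+)"        # e.g. T, mT, S, D, Sii
    r"\^(?P<position>[^/]*)"        # position (may be empty)
    r"/(?P<root>[^+]*)"             # root (may be empty)
    r"\+(?P<added>[^+]*)\+"         # added intervals, e.g. 9
    r"-(?P<deleted>[^-]*)-"         # deleted notes, e.g. 5
    r"\*(?P<alterations>[^*]*)\*"   # alterations, e.g. 3#,5b
    r"=(?P<suspensions>[^=]*)="     # suspensions
)

def parse_function(token):
    match = FUNCTION_RE.fullmatch(token)
    if match is None:
        raise ValueError("not a valid function string: %r" % token)
    return match.groupdict()

# Each function in the input is separated by a space:
for token in "T^/++--**== D^/+9+-5-**== Sii^/++--*3#,5b*==".split():
    print(parse_function(token))
```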
PypiClean
/lns-icloudpy-0.0.3.tar.gz/lns-icloudpy-0.0.3/icloudpy/services/reminders.py
import json import time import uuid from datetime import datetime from tzlocal import get_localzone class RemindersService: """The 'Reminders' iCloud service.""" def __init__(self, service_root, session, params): self.session = session self._params = params self._service_root = service_root self.lists = {} self.collections = {} self.refresh() def refresh(self): """Refresh data.""" params_reminders = dict(self._params) params_reminders.update( { "clientVersion": "4.0", "lang": "en-us", "usertz": get_localzone().zone, "dsid": self.session.service.data["dsInfo"]["dsid"], } ) # Open reminders req = self.session.get( self._service_root + "/rd/startup", params=params_reminders ) data = req.json() self.lists = {} self.collections = {} for collection in data["Collections"]: temp = [] self.collections[collection["title"]] = { "guid": collection["guid"], "ctag": collection["ctag"], } for reminder in data["Reminders"]: if reminder["pGuid"] != collection["guid"]: continue if reminder.get("dueDate"): due = datetime( reminder["dueDate"][1], reminder["dueDate"][2], reminder["dueDate"][3], reminder["dueDate"][4], reminder["dueDate"][5], ) else: due = None temp.append( { "title": reminder["title"], "desc": reminder.get("description"), "due": due, } ) self.lists[collection["title"]] = temp def post(self, title, description="", collection=None, due_date=None): """Adds a new reminder.""" pguid = "tasks" if collection: if collection in self.collections: pguid = self.collections[collection]["guid"] params_reminders = dict(self._params) params_reminders.update( {"clientVersion": "4.0", "lang": "en-us", "usertz": get_localzone().zone} ) due_dates = None if due_date: due_dates = [ int(str(due_date.year) + str(due_date.month) + str(due_date.day)), due_date.year, due_date.month, due_date.day, due_date.hour, due_date.minute, ] req = self.session.post( self._service_root + "/rd/reminders/tasks", data=json.dumps( { "Reminders": { "title": title, "description": description, "pGuid": pguid, "etag": None, "order": None, "priority": 0, "recurrence": None, "alarms": [], "startDate": None, "startDateTz": None, "startDateIsAllDay": False, "completedDate": None, "dueDate": due_dates, "dueDateIsAllDay": False, "lastModifiedDate": None, "createdDate": None, "isFamily": None, "createdDateExtended": int(time.time() * 1000), "guid": str(uuid.uuid4()), }, "ClientState": {"Collections": list(self.collections.values())}, } ), params=params_reminders, ) return req.ok
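# --- Hypothetical usage sketch (not part of this module) ---
# The service is normally constructed by the icloudpy client with an
# authenticated session, so the "api" object below is an assumption for
# illustration only:
#
#     from datetime import datetime
#
#     reminders = api.reminders  # an authenticated RemindersService
#     for title, tasks in reminders.lists.items():
#         print(title, len(tasks))
#     reminders.post("Buy milk",
#                    description="2 liters",
#                    collection="Shopping",  # any key of reminders.collections
#                    due_date=datetime(2024, 1, 1, 9, 0))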
PypiClean
/robotFramework-DebugUiLibrary-0.9.0.zip/robotFramework-DebugUiLibrary-0.9.0/DebugUiLibrary/DebugUiLibrary.py
from RfInterface import RfInterface  # This allows access to robotFramework calls and data
from DebugUI import DebugUI          # This is the pop up UI


# This is the main library class used in robotFramework scripts
class DebugUiLibrary:
    """ This test library provides a single keyword 'Debug' to allow debugging of robotFramework scripts

    *Before running tests*

    Prior to running tests, DebugUiLibrary must be added as an imported library in your Robot test suite.

    Example:
        | Library | DebugUiLibrary |
    """

    def Debug(self):
        """The command "Debug" surfaces the debugger user interface.
        This suggests possible commands and lets you edit and try them.
        The commands use valid xpaths found from the page by the debugger.
        The debugger also lists the current variables in case you want to use them.

        Add "Library    DebugUiLibrary" to your project settings section.
        Add "Debug" in your script where you want to start debugging test steps (the line before a problem).
        Experiment with commands until you fix the problem.
        You can manually interact with the page and use browser tools to view xpaths and page source while you debug.
        When a command works, "Save" it.
        When you are finished, paste the saved commands into your script.

        NOTES:
        Paste buffer content vanishes when the Debugger closes, so paste your new commands into your script before exiting it.
        The debugger may take around 15-30 seconds to surface or update while it searches the page content.
        Not all controls are found by the debugger - you may have to add or tinker with the command you try.
        """
        # Create an interface to robotFramework - so we can send commands
        rfInterface = RfInterface()
        # Open up the debug UI
        debugUI = DebugUI(rfInterface)
        # Print a message on exiting the debugger
        print("Debug complete")


# Testing for this code
if __name__ == '__main__':
    DebugUiLibrary().Debug()

# ------------------ End of File ------------------
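# --- Hypothetical usage sketch (not part of this module) ---
# A minimal Robot Framework test file using this library might look like the
# following; the browser keywords assume SeleniumLibrary and are
# illustrations only:
#
#     *** Settings ***
#     Library    DebugUiLibrary
#     Library    SeleniumLibrary
#
#     *** Test Cases ***
#     Debug My Page
#         Open Browser    https://example.com    chrome
#         Debug    # surfaces the debugger UI at this step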
PypiClean
/django-classic-user-accounts-1.0.39.tar.gz/django-classic-user-accounts-1.0.39/ClassicUserAccounts/static/matrix-admin-v2/assets/libs/flot/jquery.flot.navigate.js
(function(a){function e(h){var k,j=this,l=h.data||{};if(l.elem)j=h.dragTarget=l.elem,h.dragProxy=d.proxy||j,h.cursorOffsetX=l.pageX-l.left,h.cursorOffsetY=l.pageY-l.top,h.offsetX=h.pageX-h.cursorOffsetX,h.offsetY=h.pageY-h.cursorOffsetY;else if(d.dragging||l.which>0&&h.which!=l.which||a(h.target).is(l.not))return;switch(h.type){case"mousedown":return a.extend(l,a(j).offset(),{elem:j,target:h.target,pageX:h.pageX,pageY:h.pageY}),b.add(document,"mousemove mouseup",e,l),i(j,!1),d.dragging=null,!1;case!d.dragging&&"mousemove":if(g(h.pageX-l.pageX)+g(h.pageY-l.pageY)<l.distance)break;h.target=l.target,k=f(h,"dragstart",j),k!==!1&&(d.dragging=j,d.proxy=h.dragProxy=a(k||j)[0]);case"mousemove":if(d.dragging){if(k=f(h,"drag",j),c.drop&&(c.drop.allowed=k!==!1,c.drop.handler(h)),k!==!1)break;h.type="mouseup"}case"mouseup":b.remove(document,"mousemove mouseup",e),d.dragging&&(c.drop&&c.drop.handler(h),f(h,"dragend",j)),i(j,!0),d.dragging=d.proxy=l.elem=!1}return!0}function f(b,c,d){b.type=c;var e=a.event.dispatch.call(d,b);return e===!1?!1:e||b.result}function g(a){return Math.pow(a,2)}function h(){return d.dragging===!1}function i(a,b){a&&(a.unselectable=b?"off":"on",a.onselectstart=function(){return b},a.style&&(a.style.MozUserSelect=b?"":"none"))}a.fn.drag=function(a,b,c){return b&&this.bind("dragstart",a),c&&this.bind("dragend",c),a?this.bind("drag",b?b:a):this.trigger("drag")};var b=a.event,c=b.special,d=c.drag={not:":input",distance:0,which:1,dragging:!1,setup:function(c){c=a.extend({distance:d.distance,which:d.which,not:d.not},c||{}),c.distance=g(c.distance),b.add(this,"mousedown",e,c),this.attachEvent&&this.attachEvent("ondragstart",h)},teardown:function(){b.remove(this,"mousedown",e),this===d.dragging&&(d.dragging=d.proxy=!1),i(this,!0),this.detachEvent&&this.detachEvent("ondragstart",h)}};c.dragstart=c.dragend={setup:function(){},teardown:function(){}}})(jQuery); /* jquery.mousewheel.min.js * Copyright (c) 2011 Brandon Aaron (http://brandonaaron.net) * Licensed under the MIT License (LICENSE.txt). * Thanks to: http://adomas.org/javascript-mouse-wheel/ for some pointers. * Thanks to: Mathias Bank(http://www.mathias-bank.de) for a scope bug fix. 
* Thanks to: Seamus Leahy for adding deltaX and deltaY * * Version: 3.0.6 * * Requires: 1.2.2+ */ (function(d){function e(a){var b=a||window.event,c=[].slice.call(arguments,1),f=0,e=0,g=0,a=d.event.fix(b);a.type="mousewheel";b.wheelDelta&&(f=b.wheelDelta/120);b.detail&&(f=-b.detail/3);g=f;void 0!==b.axis&&b.axis===b.HORIZONTAL_AXIS&&(g=0,e=-1*f);void 0!==b.wheelDeltaY&&(g=b.wheelDeltaY/120);void 0!==b.wheelDeltaX&&(e=-1*b.wheelDeltaX/120);c.unshift(a,f,e,g);return(d.event.dispatch||d.event.handle).apply(this,c)}var c=["DOMMouseScroll","mousewheel"];if(d.event.fixHooks)for(var h=c.length;h;)d.event.fixHooks[c[--h]]=d.event.mouseHooks;d.event.special.mousewheel={setup:function(){if(this.addEventListener)for(var a=c.length;a;)this.addEventListener(c[--a],e,!1);else this.onmousewheel=e},teardown:function(){if(this.removeEventListener)for(var a=c.length;a;)this.removeEventListener(c[--a],e,!1);else this.onmousewheel=null}};d.fn.extend({mousewheel:function(a){return a?this.bind("mousewheel",a):this.trigger("mousewheel")},unmousewheel:function(a){return this.unbind("mousewheel",a)}})})(jQuery); (function ($) { var options = { xaxis: { zoomRange: null, // or [number, number] (min range, max range) panRange: null // or [number, number] (min, max) }, zoom: { interactive: false, trigger: "dblclick", // or "click" for single click amount: 1.5 // how much to zoom relative to current position, 2 = 200% (zoom in), 0.5 = 50% (zoom out) }, pan: { interactive: false, cursor: "move", frameRate: 20 } }; function init(plot) { function onZoomClick(e, zoomOut) { var c = plot.offset(); c.left = e.pageX - c.left; c.top = e.pageY - c.top; if (zoomOut) plot.zoomOut({ center: c }); else plot.zoom({ center: c }); } function onMouseWheel(e, delta) { e.preventDefault(); onZoomClick(e, delta < 0); return false; } var prevCursor = 'default', prevPageX = 0, prevPageY = 0, panTimeout = null; function onDragStart(e) { if (e.which != 1) // only accept left-click return false; var c = plot.getPlaceholder().css('cursor'); if (c) prevCursor = c; plot.getPlaceholder().css('cursor', plot.getOptions().pan.cursor); prevPageX = e.pageX; prevPageY = e.pageY; } function onDrag(e) { var frameRate = plot.getOptions().pan.frameRate; if (panTimeout || !frameRate) return; panTimeout = setTimeout(function () { plot.pan({ left: prevPageX - e.pageX, top: prevPageY - e.pageY }); prevPageX = e.pageX; prevPageY = e.pageY; panTimeout = null; }, 1 / frameRate * 1000); } function onDragEnd(e) { if (panTimeout) { clearTimeout(panTimeout); panTimeout = null; } plot.getPlaceholder().css('cursor', prevCursor); plot.pan({ left: prevPageX - e.pageX, top: prevPageY - e.pageY }); } function bindEvents(plot, eventHolder) { var o = plot.getOptions(); if (o.zoom.interactive) { eventHolder[o.zoom.trigger](onZoomClick); eventHolder.mousewheel(onMouseWheel); } if (o.pan.interactive) { eventHolder.bind("dragstart", { distance: 10 }, onDragStart); eventHolder.bind("drag", onDrag); eventHolder.bind("dragend", onDragEnd); } } plot.zoomOut = function (args) { if (!args) args = {}; if (!args.amount) args.amount = plot.getOptions().zoom.amount; args.amount = 1 / args.amount; plot.zoom(args); }; plot.zoom = function (args) { if (!args) args = {}; var c = args.center, amount = args.amount || plot.getOptions().zoom.amount, w = plot.width(), h = plot.height(); if (!c) c = { left: w / 2, top: h / 2 }; var xf = c.left / w, yf = c.top / h, minmax = { x: { min: c.left - xf * w / amount, max: c.left + (1 - xf) * w / amount }, y: { min: c.top - yf * h / amount, max: c.top + (1 - 
yf) * h / amount } }; $.each(plot.getAxes(), function(_, axis) { var opts = axis.options, min = minmax[axis.direction].min, max = minmax[axis.direction].max, zr = opts.zoomRange, pr = opts.panRange; if (zr === false) // no zooming on this axis return; min = axis.c2p(min); max = axis.c2p(max); if (min > max) { // make sure min < max var tmp = min; min = max; max = tmp; } //Check that we are in panRange if (pr) { if (pr[0] != null && min < pr[0]) { min = pr[0]; } if (pr[1] != null && max > pr[1]) { max = pr[1]; } } var range = max - min; if (zr && ((zr[0] != null && range < zr[0] && amount >1) || (zr[1] != null && range > zr[1] && amount <1))) return; opts.min = min; opts.max = max; }); plot.setupGrid(); plot.draw(); if (!args.preventEvent) plot.getPlaceholder().trigger("plotzoom", [ plot, args ]); }; plot.pan = function (args) { var delta = { x: +args.left, y: +args.top }; if (isNaN(delta.x)) delta.x = 0; if (isNaN(delta.y)) delta.y = 0; $.each(plot.getAxes(), function (_, axis) { var opts = axis.options, min, max, d = delta[axis.direction]; min = axis.c2p(axis.p2c(axis.min) + d), max = axis.c2p(axis.p2c(axis.max) + d); var pr = opts.panRange; if (pr === false) // no panning on this axis return; if (pr) { // check whether we hit the wall if (pr[0] != null && pr[0] > min) { d = pr[0] - min; min += d; max += d; } if (pr[1] != null && pr[1] < max) { d = pr[1] - max; min += d; max += d; } } opts.min = min; opts.max = max; }); plot.setupGrid(); plot.draw(); if (!args.preventEvent) plot.getPlaceholder().trigger("plotpan", [ plot, args ]); }; function shutdown(plot, eventHolder) { eventHolder.unbind(plot.getOptions().zoom.trigger, onZoomClick); eventHolder.unbind("mousewheel", onMouseWheel); eventHolder.unbind("dragstart", onDragStart); eventHolder.unbind("drag", onDrag); eventHolder.unbind("dragend", onDragEnd); if (panTimeout) clearTimeout(panTimeout); } plot.hooks.bindEvents.push(bindEvents); plot.hooks.shutdown.push(shutdown); } $.plot.plugins.push({ init: init, options: options, name: 'navigate', version: '1.3' }); })(jQuery);
PypiClean
/bbc1-1.5.1-py3-none-any.whl/bbc1-1.5.1.data/scripts/file_proof.py
# -*- coding: utf-8 -*- """ Copyright (c) 2017 beyond-blockchain.org. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import argparse import binascii import datetime import hashlib import os import json import sys sys.path.extend(["../../"]) from bbc1.core import bbc_app from bbc1.core.bbc_config import DEFAULT_CORE_PORT from bbc1.core import bbclib from bbc1.core.message_key_types import KeyType from bbc1.core.bbc_error import * MAPPING_FILE = ".bbc_id_mappings" PRIVATE_KEY = ".private_key" PUBLIC_KEY = ".public_key" domain_id = bbclib.get_new_id("file_proof_test_domain", include_timestamp=False) asset_group_id = bbclib.get_new_id("file_proof_asset_group", include_timestamp=False) user_name = "user_default" user_id = bbclib.get_new_id(user_name, include_timestamp=False) key_pair = None def store_id_mappings(name, asset_group_id, transaction_id=None, asset_ids=None): if transaction_id is None and asset_ids is None: return mapping = dict() asset_group_id_str = binascii.b2a_hex(asset_group_id).decode() if os.path.exists(MAPPING_FILE): with open(MAPPING_FILE, "r") as f: mapping = json.load(f) mapping.setdefault(asset_group_id_str, dict()).setdefault(name, dict()) if transaction_id is not None: mapping[asset_group_id_str][name]['transaction_id'] = binascii.b2a_hex(transaction_id).decode() if asset_ids is not None: if isinstance(asset_ids, list): entry = [] for ast in asset_ids: entry.append(binascii.b2a_hex(ast)) mapping[asset_group_id_str][name]['asset_id'] = entry else: mapping[asset_group_id_str][name]['asset_id'] = binascii.b2a_hex(asset_ids).decode() with open(MAPPING_FILE, "w") as f: json.dump(mapping, f, indent=4) def remove_id_mappings(name, asset_group_id): mapping = dict() asset_group_id_str = binascii.b2a_hex(asset_group_id).decode() if os.path.exists(MAPPING_FILE): with open(MAPPING_FILE, "r") as f: mapping = json.load(f) if asset_group_id_str in mapping: mapping[asset_group_id_str].pop(name, None) if len(mapping[asset_group_id_str].keys()) == 0: del mapping[asset_group_id_str] with open(MAPPING_FILE, "w") as f: json.dump(mapping, f, indent=4) def get_id_from_mappings(name, asset_group_id): if not os.path.exists(MAPPING_FILE): return None asset_group_id_str = binascii.b2a_hex(asset_group_id).decode() with open(MAPPING_FILE, "r") as f: mapping = json.load(f) if mapping is None: return None if asset_group_id_str in mapping and name in mapping[asset_group_id_str]: result = dict() if 'transaction_id' in mapping[asset_group_id_str][name]: result['transaction_id'] = binascii.a2b_hex(mapping[asset_group_id_str][name]['transaction_id']) if 'asset_id' in mapping[asset_group_id_str][name]: if isinstance(mapping[asset_group_id_str][name]['asset_id'], list): entry = [] for ast in mapping[asset_group_id_str][name]['asset_id']: entry.append(binascii.a2b_hex(ast)) result['asset_id'] = entry else: result['asset_id'] = binascii.a2b_hex(mapping[asset_group_id_str][name]['asset_id']) return result return None def get_list_from_mappings(asset_group_id): if not os.path.exists(MAPPING_FILE): return None asset_group_id_str = 
binascii.b2a_hex(asset_group_id).decode() with open(MAPPING_FILE, "r") as f: mapping = json.load(f) if mapping is None: return None if asset_group_id_str in mapping: result = [] for name in mapping[asset_group_id_str]: result.append(name) return result return None def domain_setup(): tmpclient = bbc_app.BBcAppClient(port=DEFAULT_CORE_PORT, multiq=False, loglevel="all") if os.path.exists("node_key.pem"): tmpclient.set_node_key("node_key.pem") tmpclient.domain_setup(domain_id) tmpclient.callback.synchronize() tmpclient.unregister_from_core() print("Domain %s is created." % (binascii.b2a_hex(domain_id[:4]).decode())) print("Setup is done.") def setup_bbc_client(): bbc_app_client = bbc_app.BBcAppClient(port=DEFAULT_CORE_PORT, multiq=False, loglevel="all") bbc_app_client.set_user_id(user_id) bbc_app_client.set_domain_id(domain_id) bbc_app_client.set_callback(bbc_app.Callback()) ret = bbc_app_client.register_to_core() assert ret return bbc_app_client def require_receiver_info_for(filename): print("Your name is [", user_name, "] and user_id is [", binascii.b2a_hex(user_id).decode(), "]") print("Please enter the receiver user name for file %s." % filename) receiver_name = input('>> ') receiver_user_id = bbclib.get_new_id(receiver_name, include_timestamp=False) return receiver_name, receiver_user_id def search_reference_txid_from_mappings(filename): reference_txid = None file_info = get_id_from_mappings(os.path.basename(filename), asset_group_id) if file_info: reference_txid = file_info["transaction_id"] return reference_txid def send_signreq(receiver_name, receiver_user_id, ref_txids=None, file_data=None, bbc_app_client=None): transaction = bbclib.make_transaction(relation_num=1, witness=True) user_info_msg = "Ownership is transfered from %s to %s" % (user_name, receiver_name) bbclib.add_relation_asset(transaction, relation_idx=0, asset_group_id=asset_group_id, user_id=receiver_user_id, asset_body=user_info_msg, asset_file=file_data) transaction.witness.add_witness(user_id) transaction.witness.add_witness(receiver_user_id) for i, ref_txid in enumerate(ref_txids): bbc_app_client.search_transaction(ref_txid) response_data = bbc_app_client.callback.synchronize() if response_data[KeyType.status] < ESUCCESS: print("ERROR: ", response_data[KeyType.reason].decode()) sys.exit(0) prev_tx, fmt_type = bbclib.deserialize(response_data[KeyType.transaction_data]) bbclib.add_relation_pointer(transaction, 0, ref_transaction_id=prev_tx.digest()) asset_id = transaction.relations[0].asset.asset_id asset_files = {asset_id: file_data} ret = bbc_app_client.gather_signatures(transaction, destinations=[receiver_user_id], asset_files=asset_files) if not ret: print("Failed to send sign request") sys.exit(0) return transaction def wait_for_signs(transaction, bbc_app_client): response_data = bbc_app_client.callback.synchronize() if response_data[KeyType.status] < ESUCCESS: print("Rejected because ", response_data[KeyType.reason].decode(), "") sys.exit(0) result = response_data[KeyType.result] transaction.witness.add_signature(user_id=result[1], signature=result[2]) sig_mine = transaction.sign(private_key=key_pair.private_key, public_key=key_pair.public_key) transaction.witness.add_signature(user_id=user_id, signature=sig_mine) transaction.digest() return transaction def create_transaction_object_for_filedata(receiver_name, receiver_user_id, ref_txids=None, file_data=None, bbc_app_client=None): if ref_txids is None or ref_txids[0] is None: ref_txids = [] transaction = send_signreq(receiver_name, receiver_user_id, ref_txids, 
file_data, bbc_app_client) return wait_for_signs(transaction, bbc_app_client) def insert_signed_transaction_to_bbc_core(transaction=None, bbc_app_client=None, file_name=None): print("Insert the transaction into BBc-1") ret = bbc_app_client.insert_transaction(transaction) assert ret response_data = bbc_app_client.callback.synchronize() if response_data[KeyType.status] < ESUCCESS: print("ERROR: ", response_data[KeyType.reason].decode()) sys.exit(0) def send_transaction_info_msg(bbc_app_client=None, transaction=None, file_name=None, receiver_user_id=None): transaction_info = [os.path.basename(file_name), transaction.transaction_id] bbc_app_client.send_message(transaction_info, receiver_user_id) def wait_for_transaction_msg(bbc_app_client=None): print("Your name is [", user_name, "] and user_id is [", binascii.b2a_hex(user_id).decode(), "]") print("Waiting for file transfer.....") response_data = bbc_app_client.callback.synchronize() if KeyType.transaction_data not in response_data or KeyType.all_asset_files not in response_data: print("**** Invalid message is received...") print(response_data) bbc_app_client.sendback_denial_of_sign(response_data[KeyType.source_user_id], response_data[KeyType.transaction_id], "Invalid message is received.") sys.exit(1) return response_data def pick_valid_transaction_info(received_data=None, bbc_app_client=None): transaction, fmt_type = bbclib.deserialize(received_data[KeyType.transaction_data]) asset_files = received_data[KeyType.all_asset_files] asset_id = transaction.relations[0].asset.asset_id if asset_id not in asset_files: print("**** No valid file is received...") print(received_data) bbc_app_client.sendback_denial_of_sign(received_data[KeyType.source_user_id], transaction.transaction_id, "No valid file is received.") sys.exit(1) file_to_obtain = asset_files[asset_id] file_digest = hashlib.sha256(file_to_obtain).digest() print("--------------------------") print("File digest written in the transaction data: ", binascii.b2a_hex(transaction.relations[0].asset.asset_file_digest).decode()) print("File digest calculated from the received file:", binascii.b2a_hex(file_digest).decode()) print("--------------------------") return transaction, received_data[KeyType.source_user_id] def prompt_user_to_accept_the_file(bbc_app_client=None, source_id=None, transaction_id=None): print("====> Do you want to accept the file?") answer = input('(Y/N) >> ') if answer != "Y": bbc_app_client.sendback_denial_of_sign(source_id, transaction_id, "Denied to accept the file") sys.exit(1) def wait_for_file_info_msg(bbc_app_client=None): print("Waiting for the message from the sender...") response_data = bbc_app_client.callback.synchronize(timeout=10) if response_data is None: print("No final message received... 
Ask the sender about the filename and transaction_id") sys.exit(0) if KeyType.message not in response_data: print("Received invalid message....") sys.exit(0) filename, transaction_id = response_data[KeyType.message] print("--> file name is %s and the transaction_id is %s" % (filename.decode(), binascii.b2a_hex(transaction_id).decode())) return filename, transaction_id def store_proc(file, txid=None): with open(file, "rb") as fin: data = fin.read() bbc_app_client = setup_bbc_client() store_transaction = bbclib.make_transaction(relation_num=1, witness=True) user_info = "Owner is %s" % user_name bbclib.add_relation_asset(store_transaction, relation_idx=0, asset_group_id=asset_group_id, user_id=user_id, asset_body=user_info, asset_file=data) store_transaction.witness.add_witness(user_id) if txid: bbc_app_client.search_transaction(txid) response_data = bbc_app_client.callback.synchronize() if response_data[KeyType.status] < ESUCCESS: print("ERROR: ", response_data[KeyType.reason].decode()) sys.exit(0) prev_tx, fmt_type = bbclib.deserialize(response_data[KeyType.transaction_data]) bbclib.add_relation_pointer(transaction=store_transaction, relation_idx=0, ref_transaction_id=prev_tx.transaction_id) sig = store_transaction.sign(private_key=key_pair.private_key, public_key=key_pair.public_key) store_transaction.get_sig_index(user_id) store_transaction.add_signature_object(user_id=user_id, signature=sig) store_transaction.digest() print(store_transaction) ret = bbc_app_client.insert_transaction(store_transaction) assert ret response_data = bbc_app_client.callback.synchronize() if response_data[KeyType.status] < ESUCCESS: print("ERROR: ", response_data[KeyType.reason].decode()) sys.exit(0) store_id_mappings(os.path.basename(file), asset_group_id, transaction_id=response_data[KeyType.transaction_id], asset_ids=store_transaction.relations[0].asset.asset_id) def store_file(file): fileinfo = get_id_from_mappings(os.path.basename(file), asset_group_id) if fileinfo is not None: print("the file already stored : %s" % os.path.basename(file)) sys.exit(0) store_proc(file=file, txid=None) print("file stored : %s" % os.path.basename(file)) print("done store %s" % file) def get_file(file): fileinfo = get_id_from_mappings(os.path.basename(file), asset_group_id) if fileinfo is None: print("Not exists in local mapping cache. 
So, asset_id is not known...") sys.exit(1) bbc_app_client = setup_bbc_client() ret = bbc_app_client.search_transaction_with_condition(asset_group_id=asset_group_id, asset_id=fileinfo["asset_id"]) assert ret response_data = bbc_app_client.callback.synchronize() if response_data[KeyType.status] < ESUCCESS: print("ERROR: ", response_data[KeyType.reason].decode()) sys.exit(0) get_transaction, fmt_type = bbclib.deserialize(response_data[KeyType.transactions][0]) if KeyType.all_asset_files in response_data: asset_file_dict = response_data[KeyType.all_asset_files] asset_id = get_transaction.relations[0].asset.asset_id data = asset_file_dict[asset_id] else: data = get_transaction.relations[0].asset.asset_body out_file_name = file if os.path.exists(out_file_name): current_datetime = datetime.datetime.now() time_str = current_datetime.strftime('_%Y%m%d%H%M%S') out_file_name += time_str with open(out_file_name, "wb") as outfile: outfile.write(data) print("done get %s" % out_file_name) def remove_file(file): fileinfo = get_id_from_mappings(os.path.basename(file), asset_group_id) if fileinfo is None: print("File does not exist: %s" % os.path.basename(file)) sys.exit(0) fileinfo = remove_id_mappings(os.path.basename(file), asset_group_id) print("done remove %s" % file) def list_file(): fileinfo = get_list_from_mappings(asset_group_id) if fileinfo is None: print("No files present in local mapping cache. So, asset_id is not known...") sys.exit(1) print("%s" % '\n'.join(fileinfo)) def update_file(file): fileinfo = get_id_from_mappings(os.path.basename(file), asset_group_id) if fileinfo is None: print("Not exists in local mapping cache. So, transaction_id is not known...") sys.exit(1) transaction_id = fileinfo["transaction_id"] # TODO consider whether to check existence of the transaction object store_proc(file=file, txid=transaction_id) print("done update %s" % os.path.basename(file)) def verify_file(file): fileinfo = get_id_from_mappings(os.path.basename(file), asset_group_id) if fileinfo is None: print("Not exists in local mapping cache. 
So, asset_id is not known...") sys.exit(1) bbc_app_client = setup_bbc_client() ret = bbc_app_client.search_transaction_with_condition(asset_group_id=asset_group_id, asset_id=fileinfo["asset_id"]) assert ret response_data = bbc_app_client.callback.synchronize() if response_data[KeyType.status] < ESUCCESS: print("ERROR: ", response_data[KeyType.reason].decode()) sys.exit(0) transaction, fmt_type = bbclib.deserialize(response_data[KeyType.transactions][0]) digest = transaction.digest() ret = transaction.signatures[0].verify(digest) if not ret: print("Transaction data is invalid.") sys.exit(1) with open(file, "rb") as fin: data = fin.read() file_digest = hashlib.sha256(data).digest() if file_digest == transaction.relations[0].asset.asset_file_digest: print("%s is valid" % file) else: print("%s is invalid" % file) print("done verify %s" % os.path.basename(file)) print("Content of the transaction:::") print(transaction) def create_keypair(): keypair = bbclib.KeyPair() keypair.generate() with open(PRIVATE_KEY, "wb") as fout: fout.write(keypair.private_key) with open(PUBLIC_KEY, "wb") as fout: fout.write(keypair.public_key) print("created private_key and public_key : %s, %s" % (PRIVATE_KEY, PUBLIC_KEY)) def enter_file_wait_mode(): bbc_app_client = setup_bbc_client() recvdat = wait_for_transaction_msg(bbc_app_client=bbc_app_client) transaction, source_id = pick_valid_transaction_info(received_data=recvdat, bbc_app_client=bbc_app_client) prompt_user_to_accept_the_file(bbc_app_client=bbc_app_client, source_id=source_id, transaction_id=transaction.transaction_id) signature = transaction.sign(keypair=key_pair) bbc_app_client.sendback_signature(source_id, transaction.transaction_id, -1, signature) filename, transaction_id = wait_for_file_info_msg(bbc_app_client=bbc_app_client) store_id_mappings(os.path.basename(filename.decode()), asset_group_id, transaction_id=transaction_id, asset_ids=transaction.relations[0].asset.asset_id) def enter_file_send_mode(filename): receiver_name, receiver_user_id = require_receiver_info_for(filename) with open(filename, "rb") as fin: file_data = fin.read() txid_for_reference = search_reference_txid_from_mappings(filename) bbc_app_client = setup_bbc_client() transfer_transaction = create_transaction_object_for_filedata(receiver_name, receiver_user_id, ref_txids=[txid_for_reference], file_data=file_data, bbc_app_client=bbc_app_client) insert_signed_transaction_to_bbc_core(transaction=transfer_transaction, bbc_app_client=bbc_app_client, file_name=filename) remove_id_mappings(os.path.basename(filename), asset_group_id) send_transaction_info_msg(bbc_app_client=bbc_app_client, transaction=transfer_transaction, file_name=filename, receiver_user_id=receiver_user_id) print("Transfer is done.....") def sys_check(args): if args.command_type in ("store", "update", "verify") and \ not os.path.exists(args.target_file): raise Exception("file not found : %s" % args.target_file) # TODO consider whether to check core accessibility if args.command_type != "keypair": if not os.path.exists(PRIVATE_KEY): message = "not exist private key\n" message += "create a key pair with keypair option" raise Exception(message) if not os.path.exists(PUBLIC_KEY): message = "not exist public key\n" message += "create a key pair with keypair option" raise Exception(message) with open(PRIVATE_KEY, "rb") as fin: private_key = fin.read() with open(PUBLIC_KEY, "rb") as fin: public_key = fin.read() global key_pair key_pair = bbclib.KeyPair(privkey=private_key, pubkey=public_key) def argument_parser(): argparser = 
argparse.ArgumentParser()
    subparsers = argparser.add_subparsers(dest="command_type", help='commands')

    # store command
    store_parser = subparsers.add_parser('store', help='Store a file')
    store_parser.add_argument('target_file', action='store', help='A target file')
    store_parser.add_argument('-o', '--user', action='store', help='Your name (for calculating user_id)',
                              default='user_default')

    # get command
    get_parser = subparsers.add_parser('get', help='Get a file')
    get_parser.add_argument('target_file', action='store', help='A target file')
    get_parser.add_argument('-o', '--user', action='store', help='Your name (for calculating user_id)',
                            default='user_default')

    # remove command
    remove_parser = subparsers.add_parser('remove', help='Remove a file')
    remove_parser.add_argument('target_file', action='store', help='A target file')
    remove_parser.add_argument('-o', '--user', action='store', help='Your name (for calculating user_id)',
                               default='user_default')

    # list command
    list_parser = subparsers.add_parser('list', help='Get a file list')
    list_parser.add_argument('-o', '--user', action='store', help='Your name (for calculating user_id)',
                             default='user_default')

    # update command
    update_parser = subparsers.add_parser('update', help='Update a file')
    update_parser.add_argument('target_file', action='store', help='A target file')
    update_parser.add_argument('-o', '--user', action='store', help='Your name (for calculating user_id)',
                               default='user_default')

    # verify command
    verify_parser = subparsers.add_parser('verify', help='Verify a file')
    verify_parser.add_argument('target_file', action='store', help='A target file')
    verify_parser.add_argument('-o', '--user', action='store', help='Your name (for calculating user_id)',
                               default='user_default')

    # key pair command
    subparsers.add_parser('keypair', help='Create a key pair')

    # wait mode for receiving file
    wait_parser = subparsers.add_parser('wait', help='Wait for receiving a file')
    wait_parser.add_argument('-o', '--user', action='store', help='Your name (for calculating user_id)',
                             default='user_default')

    # send mode
    send_parser = subparsers.add_parser('send', help='Send a file')
    send_parser.add_argument('target_file', action='store', help='A target file')
    send_parser.add_argument('-o', '--user', action='store', help='Your name (for calculating user_id)',
                             default='user_default')

    # setup command
    subparsers.add_parser('setup', help='Setup domain and asset group')

    return argparser.parse_args()


if __name__ == '__main__':
    parsed_args = argument_parser()
    try:
        sys_check(parsed_args)
    except Exception as e:
        print(str(e))
        sys.exit(0)

    if parsed_args.command_type == "keypair":
        create_keypair()
    elif parsed_args.command_type == "setup":
        domain_setup()
    else:
        user_name = parsed_args.user
        user_id = bbclib.get_new_id(user_name, include_timestamp=False)
        if parsed_args.command_type == "store":
            store_file(file=parsed_args.target_file)
        elif parsed_args.command_type == "get":
            get_file(file=parsed_args.target_file)
        elif parsed_args.command_type == "remove":
            remove_file(file=parsed_args.target_file)
        elif parsed_args.command_type == "list":
            list_file()
        elif parsed_args.command_type == "update":
            update_file(file=parsed_args.target_file)
        elif parsed_args.command_type == "verify":
            verify_file(file=parsed_args.target_file)
        elif parsed_args.command_type == "wait":
            enter_file_wait_mode()
        elif parsed_args.command_type == "send":
            enter_file_send_mode(filename=parsed_args.target_file)
    sys.exit(0)
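# --- Example command-line workflow ---
# These sub-commands are the ones registered in argument_parser(); the file
# name is a placeholder:
#
#     python file_proof.py keypair                      # writes .private_key / .public_key
#     python file_proof.py setup                        # creates the BBc-1 domain
#     python file_proof.py store  document.txt -o alice
#     python file_proof.py verify document.txt -o alice
#     python file_proof.py wait -o bob                  # receiver waits for a transfer
#     python file_proof.py send  document.txt -o alice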
PypiClean
/ckls_testlib4-4.2.8.tar.gz/ckls_testlib4-4.2.8/sparknlp_jsl/annotator/chunker/assertion_filterer.py
from sparknlp_jsl.common import *
from sparknlp_jsl.utils.licensed_annotator_type import InternalAnnotatorType


class AssertionFilterer(AnnotatorModelInternal):
    """Filters entities coming from ASSERTION type annotations and returns the CHUNKS.

    Filters can be set via a white list on the extracted chunk, the assertion or a regular expression.
    White list for assertion is enabled by default. To use chunk white list, ``criteria`` has to be set to ``isin``.
    For regex, ``criteria`` has to be set to ``regex``.

    ============================== ======================
    Input Annotation types         Output Annotation type
    ============================== ======================
    ``DOCUMENT, CHUNK, ASSERTION`` ``CHUNK``
    ============================== ======================

    Parameters
    ----------
    whiteList
        If defined, list of entities to process. The rest will be ignored.
    regex
        If defined, list of regex to process. The rest will be ignored.
    criteria
        Tag representing the criteria used to filter the chunks. Possible values: (assertion|isIn|regex)
            assertion: Filter by the assertion
            isIn : Filter by the chunk
            regex : Filter using a regex
    entitiesConfidence
        Entity pairs to remove based on the confidence level

    Examples
    --------
    >>> import sparknlp
    >>> from sparknlp.base import *
    >>> from sparknlp_jsl.common import *
    >>> from sparknlp.annotator import *
    >>> from sparknlp.training import *
    >>> import sparknlp_jsl
    >>> from sparknlp_jsl.base import *
    >>> from sparknlp_jsl.annotator import *
    >>> from pyspark.ml import Pipeline

    To see how the assertions are extracted, see the example for AssertionDLModel.
    Define an extra step where the assertions are filtered

    >>> assertionFilterer = AssertionFilterer() \\
    ...     .setInputCols(["sentence","ner_chunk","assertion"]) \\
    ...     .setOutputCol("filtered") \\
    ...     .setCriteria("assertion") \\
    ...     .setWhiteList(["present"])
    ...
    >>> assertionPipeline = Pipeline(stages=[
    ...     documentAssembler,
    ...     sentenceDetector,
    ...     tokenizer,
    ...     embeddings,
    ...     nerModel,
    ...     nerConverter,
    ...     clinicalAssertion,
    ...     assertionFilterer
    ... ])
    ...
    >>> assertionModel = assertionPipeline.fit(data)
    >>> result = assertionModel.transform(data)
    >>> result.selectExpr("ner_chunk.result", "assertion.result").show(3, truncate=False)
    +--------------------------------+--------------------------------+
    |result                          |result                          |
    +--------------------------------+--------------------------------+
    |[severe fever, sore throat]     |[present, present]              |
    |[stomach pain]                  |[absent]                        |
    |[an epidural, PCA, pain control]|[present, present, hypothetical]|
    +--------------------------------+--------------------------------+
    >>> result.select("filtered.result").show(3, truncate=False)
    +---------------------------+
    |result                     |
    +---------------------------+
    |[severe fever, sore throat]|
    |[]                         |
    |[an epidural, PCA]         |
    +---------------------------+
    """

    inputAnnotatorTypes = [AnnotatorType.DOCUMENT, AnnotatorType.CHUNK, InternalAnnotatorType.ASSERTION]
    outputAnnotatorType = AnnotatorType.CHUNK

    name = "ChunksFilter"

    whiteList = Param(
        Params._dummy(),
        "whiteList",
        "If defined, list of entities to process. The rest will be ignored.",
        typeConverter=TypeConverters.toListString
    )
    caseSensitive = Param(
        Params._dummy(),
        "caseSensitive",
        "Determines whether the definitions of the white listed entities are case sensitive.",
        typeConverter=TypeConverters.toBoolean
    )
    regex = Param(
        Params._dummy(),
        "regex",
        "If defined, list of regex to process. The rest will be ignored.",
        typeConverter=TypeConverters.toListString
    )
    criteria = Param(Params._dummy(), "criteria",
                     "Tag representing the criteria used to filter the chunks (assertion|isIn|regex)",
                     TypeConverters.toString)

    entitiesConfidence = Param(Params._dummy(),
                               "entitiesConfidence",
                               "Entity pairs to remove based on the confidence level",
                               typeConverter=TypeConverters.identity)

    def setWhiteList(self, value):
        """Sets the list of entities to process. The rest will be ignored.

        Parameters
        ----------
        value : list
            If defined, list of entities to process. The rest will be ignored.
        """
        return self._set(whiteList=value)

    def setCaseSensitive(self, value):
        """Determines whether the definitions of the white listed entities are case sensitive.

        Parameters
        ----------
        value : bool
            Whether white listed entities are case sensitive or not.
        """
        return self._set(caseSensitive=value)

    def setRegex(self, value):
        """Sets the list of regex to process. The rest will be ignored.

        Parameters
        ----------
        value : list
            List of regular expressions to process
        """
        return self._set(regex=value)

    def setCriteria(self, s):
        """Sets the tag representing the criteria used to filter the chunks. Possible values: (assertion|isIn|regex)

        Parameters
        ----------
        s : str
            The filtering criteria
        """
        return self._set(criteria=s)

    # TODO set confidence
    def __init__(self, classname="com.johnsnowlabs.nlp.annotators.chunker.AssertionFilterer", java_model=None):
        super(AssertionFilterer, self).__init__(
            classname=classname,
            java_model=java_model
        )
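# --- Hypothetical usage sketch (not part of this module) ---
# Besides the assertion-based filtering shown in the docstring, the chunk
# white list can be used by switching ``criteria`` to "isin"; the column
# names below follow the docstring example:
#
#     chunk_filterer = AssertionFilterer() \
#         .setInputCols(["sentence", "ner_chunk", "assertion"]) \
#         .setOutputCol("filtered") \
#         .setCriteria("isin") \
#         .setCaseSensitive(False) \
#         .setWhiteList(["severe fever", "sore throat"])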
PypiClean
/love_course_2016_2019-2023.3.1.0-py3-none-any.whl/LoveCourse20162019/docs/liang-nan/富饶私教课:私教正课:10.富饶私教——如何聊天.md
# Furao (富饶) Private Coaching — Core Lesson 10: How to Chat

OK, the recording is on — good, then let's begin. Today's topic is how to chat with women. The basic understanding to start from is this: chatting is a way of interacting with a woman — in fact, the main way you and she interact.

Before we talk about chatting itself, there is a prior step: you must tell apart the women who can actually be chatted with from those who can't. Everything in this lesson assumes the woman already has some baseline goodwill — she is willing to talk to you and to communicate as an equal. If she won't answer a single word, none of this applies; that is usually a problem with your profile and presentation (展示面), which we cover in a later lesson. Today is strictly about the chat.

One concept first: I strongly advise you not to memorize routines and canned lines (话术、惯例). The reason is simple — your counterpart is never the same. Every woman is a different type and prefers a different chatting style. If you memorize scripts, you only pay attention to your own lines and none to her, and your failures will far outnumber your successes. Think about it: isn't it true that only a small minority of women chat well with you while the large majority go nowhere? That tells you your method is wrong, or at least far too uniform. Even if you polish one routine to perfection, it only works on the one type of woman it happens to suit. So don't memorize scripts or hard-code "what to reply to this exact line" — learn the way of thinking instead.

I also advise against opening with something exotic that makes you look wildly different from everyone else. It's like boxing: a boxer never throws every knockout combination in the opening seconds. He starts with jabs and probing punches, trades a little, spends a round or two reading his opponent, then commits. If you unload every trick at once, you hand her your weaknesses and she will know how to counter you. Chatting like a normal person — no flashy, flamboyant technique — at least loses no points so long as you avoid beginner mistakes. An extreme, clownish opener is a gamble: it might land spectacularly, or it might make her dislike you on the spot.

So the first move is to probe: learn to judge the window (判断窗口) — can this woman be chatted with right now or not? After judging the window, there are two modes of chatting: offensive chatting (进攻式), for when your level is clearly above hers and any flourish will work; and reactive counter-play, for when you are evenly matched — there you go step by step, deepen gradually, find one opening, and take it.

Now the biggest piece of value I want to share today: why do your chats die? Why do you run out of things to say, why does she stop replying? The root cause is that everything you send is single-celled. Each of your messages drastically limits how she can reply — your own way of speaking constricts her answers. "What are you up to?" — all she can say is "watching TV." "Have you eaten?" — all she can say is "yes" or "no." The typical death spiral: "What are you doing?" "Watching a movie." "Which movie?" "Iron Man." "Which one?" "The third." "How far in are you?" "57 minutes." — and now there is nothing left to say.

Here is the fix: three message types, nine characters you must carve into memory. A — the question (提问式). B — the share (分享式). C — the comment (评论式). Suppose you add a woman on a dating app whose Moments show she used to be in Qingdao and is now in Chengdu. Send all three at once. Question: "Did you come to Chengdu to travel?" Share: "I've been to Qingdao — it's beautiful." Comment: "The hotpot in Chengdu is excellent." Because you have given her three prompts, she will answer all three: a "yes" or a "no"; something like "Qingdao is right on the sea"; and "I haven't had a chance to try the hotpot yet."

Every one of her replies can then be answered with the same three types. She says "yes"? Question: "Where else have you traveled?" She says she isn't here for travel? Question: "So you're a Chengdu local?" Share: "The clubs in Chengdu are also among the hottest in the country," or "Xiaolongkan hotpot is great — I recommend you try it," or "If you've done Chengdu, take a trip to Chongqing too; it's the best." She says "Qingdao is on the sea"? Question: "I hear northern girls are tall — how tall are you?" Share: "I've also been to the seaside in Dalian; it's even better." Comment: "Qingdao has lovely weather, and the seafood is excellent too." She says she hasn't tried the hotpot? Question: "So what did you just eat? What's the plan after dinner?" Share: "Xiaolongkan and Chuanxibazi are both great hotpot places." Comment: "Chengdu hotpot is very good, but Chongqing's is even better."

Do you see the dialogue model? Whatever she says, you throw out all three prongs at once; she gives three responses; each of her responses can again be met with A, B, and C. You said A-B-C; she returned D-E-F; against D you answer A1, A2, A3; against each of those she responds again, and you answer B1, B2, B3, and so on. (I can't share the live examples from the music festival right now — we're in a sensitive stretch — I'll share them with you once it passes.)

Take any topic at random — say, hobbies. Question: "Do you like traveling?" Share: "The best place I've ever been is Bali." Comment: "I think the most livable city is Chengdu." She might answer: "I do," "I haven't been," and "I agree." From "I do": question — "Where do you like best? Where have you been?"; share — "I set myself one goal every year: a trip to Lijiang. Whenever something stresses or troubles me, I go stay there a few days"; comment — "I think Lijiang and Dali are the best medicine for healing." From "I haven't been": question — "Where have you been, then? What work keeps you so busy?"; share — "I haven't been to your city either"; comment — "I imagine your hometown is pretty nice." From "I agree": question — "Then your life must be pretty colorful; you must have an eye for beauty"; share — "Looks like the two of us share the same values and like the same people and things"; comment — "So you're a girl who loves to travel everywhere — girls like that are usually bold in love and bold in hate."

Three prongs; each prong answered with three more; that makes nine points, and her replies to those branch into dozens. You will chat until your hands are tired, until she wonders why the two of you click so well — "I've never met anyone this easy to talk to; there's an endless stream of things to say." Whenever you chat with a woman — any scene, any setting, on WeChat or off — keep asking yourself: in what she just said, is there a point I can question, a point I can share on, a point I can comment on? Then take each of her responses and do it again. Topics multiply — three becomes nine, nine becomes more — and you will never again hit a cold silence. Those who've understood, type "1." Good; we're half an hour in, let's keep going. Mastering this guarantees one thing: you will never be stuck for a topic again, never watch a chat die, never have to ask me "what do I reply next." If anything, you'll soon be asking how to reply less, because so many women will be talking to you that it gets tiring.

Of course, things won't always run that smoothly — maybe you throw out three prongs and she answers only two, or she stops replying altogether. Take "she doesn't reply." In my view a woman fails to reply for only three reasons. One: she has no time — didn't see it, no signal, busy at work. Two: she isn't interested in you — can't be bothered, doesn't rate you, sees no point in answering. Three: she isn't interested in the topic — you're talking politics, military affairs, car parts, Transformers.

How do you tell which one it is? Simple: chat like a normal man, because if your messages are too flowery you can't tell whether it's you or the topic she's rejecting. Right after adding her on WeChat, open with what I call no-interest tests (无兴趣测试): send your name — say, "Furao" — and she sends hers; send her a contact note and she responds; say "nice to meet you," and she sends back an emoji. These three messages can't test whether she likes your topics — there is no topic yet — and they can't test whether she likes you, because even a woman who doesn't like you will answer them. But they establish one thing beyond doubt: at this moment she has time. She answered your name, your note, your "nice to meet you" — so "no time" is ruled out, and if she goes silent after this, it must be one of the other two reasons. Is the logic clear?

Next, test whether she is interested in you as a person. Three signals. First, her investment in the chat. When you share something, does she answer only the literal sentence, or does she extend it? Think about why we learned to question, share, and comment: because when we like a woman, we want to give her plenty to respond to. A woman who likes you does the same — she generates material for you. You send "lying on the sofa spacing out." Does she reply "how boring," or does she reply "I just got back from shopping — a whole day in high heels, my feet are dead, I'm finally in bed"? You say "eating dinner"; does she answer with one word, or with "I had a grilled pork cutlet today that burned my mouth off, then a fried chicken fillet..."? Second — and this goes hand in hand with the first — the length of her replies. A woman interested in you is willing to type a lot. It's like talking to your boss: the lower-status party always says more. "Boss, I'd like to ask for leave because I have to go film in Sanya, because..." — and the boss answers "fine" or "no." The pursuing side talks; the pursued side nods. So watch how many characters she sends back. Third, does she proactively ask you questions?

And here I must flag something important. In the early stage a woman will ask you certain questions, and some are must-answer items (必答题) — get them wrong and the chat can die — while others are harmless whatever you say. Whether you smoke, drink, or eat spicy food is irrelevant to whether this goes anywhere; whether you've ever dated is irrelevant too. But these four you must answer, and answer well: What do you do for work? How tall are you? Will you be staying in this city? How old are you? Answer these well and she concludes, at baseline, that the two of you might be possible.

Work first. When she asks what you do, she doesn't actually care about the occupation; she uses the answer to estimate what you earn and which tier you occupy. A glamorous-sounding title at three thousand a month is not OK; selling sweet potatoes at twenty thousand a month is OK. Department manager, founder, ordinary white-collar clerk, public-sector employee — that's what she's computing. Height: taller guys have no issue here; the rest of us should state the number we reach with lifts in our shoes. I'm about 1.68m; with a lifted insole plus the shoe I'm around 1.74m, so I say 1.74 — when we actually meet I'll be wearing the lifts and nothing will be exposed. Saying 1.80 would be pushing it. Age: too young is bad, too old is bad; if the gap is only a few years, just state it plainly. If you really are young, I don't recommend lying; balance it with framing instead. With women of 24 or 25 I say: "My friends all say I'm quite mature. I went out into society at 18, while most guys only leave campus at 22, so mentally I'm on par with a man four years older." If you're older: "When I was younger I was buried in my career and never gave my feelings much attention. Over New Year my parents and relatives kept bringing it up, and it was a little awkward — I realized it's time to find someone to be with." And the last one — will you stay in this city, or are you always on the road? Here everyone should give one standard answer: yes, absolutely, I'll be staying.

Answer those correctly, run the dialogue model from earlier, and apply the interest tests: her reply length, her investment, whether she asks you things, and also how fast and how often she replies — within seconds, or half an hour later and only when she feels like it? Those all measure her interest in you. The third diagnosis is interest in the topic. Switch topics, up to three. If she was cold on the first two but warms up on the third, the coldness was topical, not personal. But if she clearly has time, you've run three different topics, and she's cold on all of them, you can conclude she isn't into you. What to do then belongs to the presentation lesson, so we won't cover it tonight.

Now two points. First: escalating the relationship (升级关系). The whole of escalation reduces to one move — steer any topic, in a teasing way, toward the subject of sex. Any word can do it. She mentions working out: "I happen to like girls who train — your butt must be nice and perky." Eating: "Eat that much and you'll get chubby — then no bikini for you this summer, and no little strappy top with shorts either." Give me any word in the comments right now and I'll turn it into a flirt line in one sentence. See — "dating apps are everywhere now and plenty of girls hook up casually, but something tells me you're definitely not that type." Another, "eating": "A lot of girls starve themselves skinny at the gym, but honestly I prefer a girl with some flesh on her — cute with a touch of sexy. I don't like hugging two chopsticks."

What I'm teaching, brothers, is the reflex: one sentence to swing any word onto a sexual topic — that is the engine of escalation — and one sentence to swing back out whenever the response is bad. Don't assume everything goes well; if it goes well, you'll have no trouble continuing, so let's discuss the case where her reactions are poor. For that you need exactly one thing: a polite exit line. "You're really pretty — exactly my type, cute and sexy." "I have a boyfriend." "Relax, it was just a compliment." "I think we'd look pretty good together." "No way, you're too short." "I was only teasing — don't take it so seriously." "When are we holding hands at the movies?" "I don't meet strangers." "Fine — then we'll keep sending our feelings a thousand li across WeChat." One sentence up, one sentence back, always with manners.

Here's a full exchange. "Tomorrow let's hold hands and watch a movie from the back row." "Who'd hold hands with you?" "Then let's sit ten meters apart like perfect strangers." "Fine by me." "Won't people find that strange? Maybe I'd better just hold you while we watch." "Keep that up and I'm not going out with you." "I'm joking — why so serious? As if I'd actually hold you." "Say it again and I'm angry." "Honestly, I was teasing. I just wanted you closer so you'd enjoy the film more — makes no difference to me." "No thanks, things are fine as they are." "Are you a nerve ending or a funny bone?" Back and forth, up and back — that is what someone who can really chat does: attack whenever he chooses, and the moment the reaction sours, step back instantly. Practice until the switch is seamless and you will never freeze mid-chat again. Most of your awkward silences come from exactly two failures: you escalated, got a bad reaction, and didn't know what to say next; or you pulled back and didn't know how to escalate again. The retreat mindset, internally, is always: "Can't you take a joke? You're not keeping up with my rhythm — it's just humor."

Now suppose escalation failed and, even though you stepped back politely — "ten meters apart," say — she still doesn't reply. How do you keep the interaction alive? You re-open the conversation (二次开启话题). Some openings fall in your lap. First, coincidence: at the Sanya festival I'd approached a girl, but there was no signal on site and I couldn't find her — then we ran into each other at the hotel door and discovered we were staying in the same hotel on the same floor. Or she needs something in your domain: she loves shopping, you work in clothing, she asks where to buy something. Second, discovered mutual connections: a colleague knows her, a friend knows her, a relative knows her, your buddy's girlfriend turns out to be her best friend. Third, simply meeting in daily life: the subway, KTV, a café, the street, campus, a mall. All of that we can file under fate (缘分) — the best kind of opening, but rare, especially when you met her through an app or a cold approach and share no life or work circles.

So here are four deliberate re-openers. One: holiday messages. Every month has a holiday — Women's Day, Labor Day, Lantern Festival, Spring Festival, Dragon Boat Festival... Never mass-broadcast; use her name and make it personal. "Pingguo (苹果), it's the holiday and it made me think of when we met near Chunxi Road... I've been swamped with work and haven't kept in touch. It's Dragon Boat Festival today — wishing you prettier by the month and great health." Two: ask-for-help openers. Pull something from her Moments and ask about it: "Where was this shot?" "What's the route?" "Which app is that filter from?" "What's per-head spending at that place?" Or: "That dress looks great — my sister's birthday is coming and I want to get her one like it." "That speaker looks solid; I've been meaning to buy one." Three: the did-you-delete-me check. Send a "?", a "hey," or a single sticker. If she answers, say "just checking you hadn't deleted me." If she doesn't answer, you've lost nothing and you haven't exposed any neediness. Four: a Moments photo plus one line. Scroll her Moments, pick a photo, attach a comment. She posts a dog: "I have a French bulldog too — he snores in his sleep and sheds all over my place." She posted Sanya: "I stayed at that very hotel; honestly it underwhelmed me — nothing good to eat anywhere nearby."

Those are the four ways to re-open when the chat has gone cold: the holiday message, the ask-for-help, the deletion check, and the Moments photo with a line. There's a replay if you missed any of it, and I'll post it in the group.

Next, the question I always get: "How do I come across fun and witty? How do I seem humorous?" The key is that humor is an emotion you transmit, not a line you recite. Whatever the situation — especially when her reaction is bad — respond from a cheerful, playful mood, and layer on the techniques of comic misreading (曲解) and exaggeration. Go watch funny videos, memes, and stand-up bits to build the instinct; I can't install it in two sentences. One dated example so you see the shape. She says "you're too short." You: "Actually I'm very tall — 1.80m." "Really, 1.80?" "That's me cut in half." Or: "that's me folded in two." An outdated joke, but look at the anatomy: she pokes at you, you answer from a happy frame, you exaggerate, and you twist her premise. Exaggeration plus misreading plus a cheerful mood — that is the formula, and everything built this way lands as humor.

Finally, three parting tips — my chat "gift bag" — since this closes the chatting portion of the private course; next lesson we move to building your presentation and sourcing new contacts. Tip one: fast invitations, via Moments. If you want a woman out tonight for a movie or whatever, post it: "Movie tonight — hands up if you're in." "Hotpot in an hour — who's coming?" Fair warning: this only gets you the meet-up; it guarantees nothing afterward. Tip two: late at night, steer toward spicier topics. For the women still posting selfies and leg shots after 11 p.m., message them directly and play games — rock-paper-scissors, truth-or-dare — loser answers a question, with the questions going from shallow to deep. Tip three can't be said on stream; I'll put it in the group.

Any remaining questions, type them below; we'll wrap in about five minutes. Let me re-thread the whole lesson. First, chat with women who are willing to talk to you as an equal — a melon twisted off the vine is never sweet. Open with the no-interest tests to probe the window. Diagnose which of the three reasons explains any silence: no time, no interest in you, or no interest in the topic. Run the question-share-comment dialogue model so topics never run dry. Escalate with one sentence, retreat with one sentence, and keep the switch fluid. When things cool, re-open with the four openers. When reactions are bad, answer from the cheerful frame with exaggeration and misreading.

One point I skipped — let me patch it in. If this is a woman you genuinely want as a girlfriend, someone solid you could see marrying, then the single core of your chatting is to come across sincere (走心): earnest about feelings, dependable, a man with his life in order. Concretely: tell stories. Tell the story of you and a previous girlfriend and something genuinely moving that happened between you; then talk about how you picture a future relationship. One caveat: anything negative, attribute to a friend — "a friend of mine and his girlfriend did such-and-such; I don't think he handled it right" — because if you own the bad story and she dislikes it, you lose points, whereas your "friend" can afford to. A real anecdote: this year, scrolling Moments, I saw that a girl I'd met three years ago — I'd even gone to her birthday, and we never spoke again after — had just posted her wedding photos. It hit me with that "so much has changed" feeling, and I mentioned it to a woman I was chatting with: I sent her the wedding photo and said watching someone marry like that feels genuinely good. After that exchange her whole view of me shifted — suddenly I wasn't the unreliable type she'd assumed. And keep a few framing lines ready, for example: "Jay Chou and Hannah Quinlivan married because it was the right time — the right person, the right moment, the right feeling. Same for me: I'm not waiting to finish playing around before I marry; I simply haven't met the right person. When the time, the place, and the feeling line up, I'll marry too."

Some of you ask why I don't teach chatting about star signs or celebrity gossip. Honestly, in practice those rarely come up — and they are third-party topics. They're about neither of you, so for escalating the relationship they're useless. Maybe after you're already together you'll chat about them.

Last stage. What I covered so far was the probing stage and the escalation stage; here is the together stage. Once she is actually your girlfriend, your WeChat life needs full ritual (仪式感). Good morning and good night, every single day — "let me be the one who says goodnight to you every night, OK?" Her birthday, your anniversaries: you show up in person, no excuses, and you give the best gift your means allow. One governing idea: the ritual must be done in full.

Questions, then. "Will you do the Sanya case study?" Next time — there's too much to unpack today; I may fold it into the next session. "How many lessons in the private course?" About eight, covering every stage from first meeting a woman to the final stage. Last week was the principles of attraction and how to understand relationships; this week was chatting; the next two weeks are building your presentation and how to meet new, reliable women; after that, how to run the date — one lesson each week. A question from Xiaode: "When chatting, how do I provide value instead of extracting it?" Providing value, inside a chat, means being the sharing side: send something genuinely funny, say something fun, make her happy; tell her about a distinctive experience of yours so she feels she grows or learns something from talking with you. Extracting value is interrogating her — prying into private questions. So avoid probing at people's privacy and offer your own distinctive experiences instead. "Will the recording be posted?" Yes, I'll post it in the group. It's quarter past; we'll close in five minutes — ask anything remaining in the group. Next session is Sunday at 8:30 p.m. That's all for today — bye-bye, everyone.
PypiClean
/ensmallen_graph-0.6.0-cp37-cp37m-manylinux2010_x86_64.whl/ensmallen_graph/datasets/string/nocardiasp348mftsu51.py
from typing import Dict

from ..automatic_graph_retrieval import AutomaticallyRetrievedGraph
from ...ensmallen_graph import EnsmallenGraph  # pylint: disable=import-error


def NocardiaSp348mftsu51(
    directed: bool = False,
    verbose: int = 2,
    cache_path: str = "graphs/string",
    **additional_graph_kwargs: Dict
) -> EnsmallenGraph:
    """Return new instance of the Nocardia sp. 348MFTsu51 graph.

    The graph is automatically retrieved from the STRING repository.

    Parameters
    -------------------
    directed: bool = False,
        Whether to load the graph as directed or undirected.
        By default false.
    verbose: int = 2,
        Whether to show loading bars during the retrieval and building
        of the graph.
    cache_path: str = "graphs/string",
        Where to store the downloaded graphs.
    additional_graph_kwargs: Dict,
        Additional graph kwargs.

    Returns
    -----------------------
    Instance of Nocardia sp. 348MFTsu51 graph.

    Report
    ---------------------
    At the time of rendering these methods (please see datetime below),
    the graph had the following characteristics:

    Datetime: 2021-02-02 23:07:21.512251

    The undirected graph Nocardia sp. 348MFTsu51 has 5155 nodes and 779420
    weighted edges, of which none are self-loops. The graph is dense as it
    has a density of 0.05867 and has 19 connected components, where the
    component with most nodes has 5114 nodes and the component with the
    least nodes has 2 nodes. The graph median node degree is 251, the mean
    node degree is 302.39, and the node degree mode is 3. The top 5 most
    central nodes are 1172185.KB911526_gene740 (degree 2258),
    1172185.KB911514_gene4126 (degree 2191), 1172185.KB911515_gene2303
    (degree 1756), 1172185.KB911511_gene3233 (degree 1544) and
    1172185.KB911522_gene1877 (degree 1533).

    References
    ---------------------
    Please cite the following if you use the data:

    @article{szklarczyk2019string,
        title={STRING v11: protein--protein association networks with
        increased coverage, supporting functional discovery in genome-wide
        experimental datasets},
        author={Szklarczyk, Damian and Gable, Annika L and Lyon, David and
        Junge, Alexander and Wyder, Stefan and Huerta-Cepas, Jaime and
        Simonovic, Milan and Doncheva, Nadezhda T and Morris, John H and
        Bork, Peer and others},
        journal={Nucleic acids research},
        volume={47},
        number={D1},
        pages={D607--D613},
        year={2019},
        publisher={Oxford University Press}
    }

    Usage example
    ----------------------
    The usage of this graph is relatively straightforward:

    .. code:: python

        # First import the function to retrieve the graph from the datasets
        from ensmallen_graph.datasets.string import NocardiaSp348mftsu51

        # Then load the graph
        graph = NocardiaSp348mftsu51()

        # Finally, you can do anything with it, for instance, compute its report:
        print(graph)

        # If you need to run a link prediction task with validation,
        # you can split the graph using a connected holdout as follows:
        train_graph, validation_graph = graph.connected_holdout(
            # You can use an 80/20 split for the holdout, for example.
            train_size=0.8,
            # The random state is used to reproduce the holdout.
            random_state=42,
            # Whether to show a loading bar.
            verbose=True
        )

        # Remember that, if you need, you can enable the memory-time trade-offs:
        train_graph.enable(
            vector_sources=True,
            vector_destinations=True,
            vector_outbounds=True
        )

        # Consider using the methods made available in the Embiggen package
        # to run graph embedding or link prediction tasks.
    """
    return AutomaticallyRetrievedGraph(
        graph_name="NocardiaSp348mftsu51",
        dataset="string",
        directed=directed,
        verbose=verbose,
        cache_path=cache_path,
        additional_graph_kwargs=additional_graph_kwargs
    )()
PypiClean
/alipay-sdk-python-pycryptodome-3.3.202.tar.gz/alipay-sdk-python-pycryptodome-3.3.202/alipay/aop/api/domain/AlipayTradePeerpayprodRelationCreateModel.py
import json from alipay.aop.api.constant.ParamConstants import * class AlipayTradePeerpayprodRelationCreateModel(object): def __init__(self): self._aliapy_related_id = None self._alipay_user_id = None self._relation_name = None self._relation_type = None self._taobao_related_id = None self._taobao_user_id = None @property def aliapy_related_id(self): return self._aliapy_related_id @aliapy_related_id.setter def aliapy_related_id(self, value): self._aliapy_related_id = value @property def alipay_user_id(self): return self._alipay_user_id @alipay_user_id.setter def alipay_user_id(self, value): self._alipay_user_id = value @property def relation_name(self): return self._relation_name @relation_name.setter def relation_name(self, value): self._relation_name = value @property def relation_type(self): return self._relation_type @relation_type.setter def relation_type(self, value): self._relation_type = value @property def taobao_related_id(self): return self._taobao_related_id @taobao_related_id.setter def taobao_related_id(self, value): self._taobao_related_id = value @property def taobao_user_id(self): return self._taobao_user_id @taobao_user_id.setter def taobao_user_id(self, value): self._taobao_user_id = value def to_alipay_dict(self): params = dict() if self.aliapy_related_id: if hasattr(self.aliapy_related_id, 'to_alipay_dict'): params['aliapy_related_id'] = self.aliapy_related_id.to_alipay_dict() else: params['aliapy_related_id'] = self.aliapy_related_id if self.alipay_user_id: if hasattr(self.alipay_user_id, 'to_alipay_dict'): params['alipay_user_id'] = self.alipay_user_id.to_alipay_dict() else: params['alipay_user_id'] = self.alipay_user_id if self.relation_name: if hasattr(self.relation_name, 'to_alipay_dict'): params['relation_name'] = self.relation_name.to_alipay_dict() else: params['relation_name'] = self.relation_name if self.relation_type: if hasattr(self.relation_type, 'to_alipay_dict'): params['relation_type'] = self.relation_type.to_alipay_dict() else: params['relation_type'] = self.relation_type if self.taobao_related_id: if hasattr(self.taobao_related_id, 'to_alipay_dict'): params['taobao_related_id'] = self.taobao_related_id.to_alipay_dict() else: params['taobao_related_id'] = self.taobao_related_id if self.taobao_user_id: if hasattr(self.taobao_user_id, 'to_alipay_dict'): params['taobao_user_id'] = self.taobao_user_id.to_alipay_dict() else: params['taobao_user_id'] = self.taobao_user_id return params @staticmethod def from_alipay_dict(d): if not d: return None o = AlipayTradePeerpayprodRelationCreateModel() if 'aliapy_related_id' in d: o.aliapy_related_id = d['aliapy_related_id'] if 'alipay_user_id' in d: o.alipay_user_id = d['alipay_user_id'] if 'relation_name' in d: o.relation_name = d['relation_name'] if 'relation_type' in d: o.relation_type = d['relation_type'] if 'taobao_related_id' in d: o.taobao_related_id = d['taobao_related_id'] if 'taobao_user_id' in d: o.taobao_user_id = d['taobao_user_id'] return o
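# ---------------------------------------------------------------------------
# Usage sketch (not part of the generated SDK file above). A minimal,
# hypothetical example of how this request-model class is typically used:
# populate the fields, serialize with to_alipay_dict() to obtain the payload
# dict for the API request, and rebuild an instance from such a dict with
# from_alipay_dict(). All field values below are invented for illustration.
if __name__ == "__main__":
    model = AlipayTradePeerpayprodRelationCreateModel()
    model.alipay_user_id = "2088000000000000"  # hypothetical Alipay user id
    model.relation_name = "Alice"              # hypothetical display name
    model.relation_type = "FRIEND"             # assumed relation-type value

    # Unset fields are skipped, so the payload contains only what was set.
    payload = model.to_alipay_dict()

    # Round-trip: rebuild a model instance from the payload dict.
    restored = AlipayTradePeerpayprodRelationCreateModel.from_alipay_dict(payload)
    assert restored.relation_type == model.relation_type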
PypiClean