Column summary — dtype and value range (for string columns the range is the string length; `language` has a single class, Python):

column          dtype          min     max
id              int64          20      338k
vocab_size      int64          2       671
ast_levels      int64          4       32
nloc            int64          1       451
n_ast_nodes     int64          12      5.6k
n_identifiers   int64          1       186
n_ast_errors    int64          0       10
n_words         int64          2       2.17k
n_whitespaces   int64          2       13.8k
fun_name        stringlengths  2       73
commit_message  stringlengths  51      15.3k
url             stringlengths  31      59
code            stringlengths  51      31k
ast_errors      stringlengths  0       1.46k
token_counts    int64          6       3.32k
file_name       stringlengths  5       56
language        stringclasses  1 value (Python)
path            stringlengths  7       134
commit_id       stringlengths  40      40
repo            stringlengths  3       28
complexity      int64          1       153
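Each record below pairs a Python function (code, fun_name, file_name, path, repo, commit_id, url) with the commit message that introduced it and per-function statistics such as nloc, token_counts, n_ast_errors and complexity. As a rough illustration only — the dataset's published name is not given in this dump, so the identifier below is a placeholder and the thresholds are arbitrary — here is a sketch of how a table with this schema could be loaded and queried with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Placeholder identifier: substitute the real dataset repo name or a local path.
ds = load_dataset("user/python-functions-with-commits", split="train")

# Columns mirror the schema summarized above.
print(ds.column_names)  # ['id', 'vocab_size', 'ast_levels', 'nloc', ..., 'repo', 'complexity']

# Example query: short, simple functions that parsed cleanly.
simple = ds.filter(
    lambda row: row["nloc"] <= 10
    and row["complexity"] <= 2
    and row["n_ast_errors"] == 0
)

# Inspect one record: function name, origin, and the source text itself.
row = simple[0]
print(row["fun_name"], row["repo"], row["path"])
print(row["code"])
```

The filter only uses column names visible in the schema; the dataset identifier, split name and thresholds are assumptions made for the example.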
262,663
46
10
8
91
7
0
58
122
check_values
Adding OverFlow (#2183) * Adding encoder * currently modifying hmm * Adding hmm * Adding overflow * Adding overflow setting up flat start * Removing runs * adding normalization parameters * Fixing models on same device * Training overflow and plotting evaluations * Adding inference * At the end of epoch the test sentences are coming on cpu instead of gpu * Adding figures from model during training to monitor * reverting tacotron2 training recipe * fixing inference on gpu for test sentences on config * moving helpers and texts within overflows source code * renaming to overflow * moving loss to the model file * Fixing the rename * Model training but not plotting the test config sentences's audios * Formatting logs * Changing model name to camelcase * Fixing test log * Fixing plotting bug * Adding some tests * Adding more tests to overflow * Adding all tests for overflow * making changes to camel case in config * Adding information about parameters and docstring * removing compute_mel_statistics moved statistic computation to the model instead * Added overflow in readme * Adding more test cases, now it doesn't saves transition_p like tensor and can be dumped as json
https://github.com/coqui-ai/TTS.git
def check_values(self):
    assert self.ar_order > 0, "AR order must be greater than 0 it is an autoregressive model."
    assert (
        len(self.outputnet_size) >= 1
    ), f"Parameter Network must have atleast one layer check the config file for parameter network. Provided: {self.parameternetwork}"
    assert (
        0 < self.flat_start_params["transition_p"] < 1
    ), f"Transition probability must be between 0 and 1. Provided: {self.flat_start_params['transition_p']}"
44
overflow_config.py
Python
TTS/tts/configs/overflow_config.py
3b8b105b0d6539ac12972de94e0b2a5077fa1ce2
TTS
1
128,067
8
7
118
25
6
0
8
14
test_exclusion
[RuntimeEnv, Windows] Fix working_dir, pip & conda for windows (#28589) * Fix working_dir, conda and pip options for Windows Signed-off-by: Jeroen Bédorf <[email protected]> * More test fixes Signed-off-by: Jeroen Bédorf <[email protected]> * More test fixes Signed-off-by: Jeroen Bédorf <[email protected]> * More test fixes and style format fixes Signed-off-by: Jeroen Bédorf <[email protected]> * Style fixes Signed-off-by: Jeroen Bédorf <[email protected]> * Restructure, enable more tests Signed-off-by: Jeroen Bédorf <[email protected]> * Initial fixes to tests Signed-off-by: Jeroen Bédorf <[email protected]> * Fix lint errors Signed-off-by: Jeroen Bédorf <[email protected]> * Fix style Signed-off-by: Jeroen Bédorf <[email protected]> Signed-off-by: Jeroen Bédorf <[email protected]>
https://github.com/ray-project/ray.git
def test_exclusion(start_cluster, tmp_working_dir, option):
    cluster, address = start_cluster
526
test_runtime_env_working_dir.py
Python
python/ray/tests/test_runtime_env_working_dir.py
de79e6df4ba98773c5a32531dcdda27ceb957247
ray
5
166,238
17
10
6
81
10
0
20
46
from_dataframe
ENH: Implement DataFrame interchange protocol (#46141)
https://github.com/pandas-dev/pandas.git
def from_dataframe(df, allow_copy=True):
    if isinstance(df, pd.DataFrame):
        return df

    if not hasattr(df, "__dataframe__"):
        raise ValueError("`df` does not support __dataframe__")

    return _from_dataframe(df.__dataframe__(allow_copy=allow_copy))
48
from_dataframe.py
Python
pandas/core/exchange/from_dataframe.py
90140f055892a46f473bd26affab88a7f171e394
pandas
3
289,502
36
8
11
44
5
0
40
77
test_recorder_bad_commit
Add CI job which runs recorder tests on MariaDB (#80586) Co-authored-by: Franck Nijhof <[email protected]>
https://github.com/home-assistant/core.git
def test_recorder_bad_commit(hass_recorder, recorder_db_url):
    if recorder_db_url.startswith("mysql://"):
        # This test is specific for SQLite: mysql does not raise an OperationalError
        # which triggers retries for the bad query below, it raises ProgrammingError
        # on which we give up
        return

    hass = hass_recorder()
63
test_util.py
Python
tests/components/recorder/test_util.py
f4951a4f31217568cd467901db9aa2524fd2b697
core
2
248,016
5
6
10
17
2
0
5
12
_set_expiration_date_when_missing
Add some type hints to datastore (#12423) * Add some type hints to datastore * newsfile * change `Collection` to `List` * refactor return type of `select_users_txn` * correct type hint in `stream.py` * Remove `Optional` in `select_users_txn` * remove not needed return type in `__init__` * Revert change in `get_stream_id_for_event_txn` * Remove import from `Literal`
https://github.com/matrix-org/synapse.git
async def _set_expiration_date_when_missing(self) -> None:
22
registration.py
Python
synapse/storage/databases/main/registration.py
1783156dbcf4164692e66275d1c29857c434995b
synapse
1
154,575
28
13
12
138
18
0
29
137
agg
FEAT-#4946: Replace OmniSci with HDK (#4947) Co-authored-by: Iaroslav Igoshev <[email protected]> Signed-off-by: Andrey Pavlenko <[email protected]>
https://github.com/modin-project/modin.git
def agg(self, agg):
    assert isinstance(agg, str)

    agg_exprs = OrderedDict()
    for col in self.columns:
        agg_exprs[col] = AggregateExpr(agg, self.ref(col))

    return self.__constructor__(
        columns=self.columns,
        dtypes=self._dtypes_for_exprs(agg_exprs),
        op=GroupbyAggNode(self, [], agg_exprs, {"sort": False}),
        index_cols=None,
        force_execution_mode=self._force_execution_mode,
    )
92
dataframe.py
Python
modin/experimental/core/execution/native/implementations/hdk_on_native/dataframe/dataframe.py
e5b1888cd932909e49194d58035da34b210b91c4
modin
2
199,988
40
17
8
209
16
0
59
102
R_nl
applied backtick correction to the remainder of the project
https://github.com/sympy/sympy.git
def R_nl(n, l, nu, r):
    n, l, nu, r = map(S, [n, l, nu, r])

    # formula uses n >= 1 (instead of nodal n >= 0)
    n = n + 1
    C = sqrt(
        ((2*nu)**(l + Rational(3, 2))*2**(n + l + 1)*factorial(n - 1))/
        (sqrt(pi)*(factorial2(2*n + 2*l - 1)))
    )
    return C*r**(l)*exp(-nu*r**2)*assoc_laguerre(n - 1, l + S.Half, 2*nu*r**2)
140
sho.py
Python
sympy/physics/sho.py
a0989bcfd26470833cf03737941bfd80f511c745
sympy
1
11,745
27
12
12
203
21
0
40
100
test_segmenter
test: fix tests failing after new docarray patch (#4449) * test: fix tests failing after new docarray patch * test: fix failing tests
https://github.com/jina-ai/jina.git
def test_segmenter(segmenter_doc_array, tmpdir):
    create_test_img(path=str(tmpdir), file_name='1.jpg')
    create_test_img(path=str(tmpdir), file_name='2.jpg')
    with Flow().add(uses=Segmenter) as f:
        res = f.index(inputs=segmenter_doc_array)

    assert len(res) == 2
    for doc in res:
        assert len(doc.chunks) == 2
        assert doc.chunks[0].mime_type == 'text/plain'
        assert doc.chunks[1].mime_type == 'image/jpeg'
        assert doc.chunks[1].mime_type == 'image/jpeg'
        assert doc.uri.startswith('data')
121
test_executors.py
Python
tests/unit/helloworld/multimodal/test_executors.py
217a11bb8dc613ed1136b8b541a68e6d53ca4fc1
jina
2
101,603
52
12
12
219
20
0
71
185
_parse_arguments
Overhaul sort: - Standardize image data reading and writing - Optimize loading (just one pass required) - Make all sort groups binnable (to greater or lesser results) - Add sort by pitch - Deprecate multiple options - linting, docs + locales
https://github.com/deepfakes/faceswap.git
def _parse_arguments(self, arguments):
    logger.debug("Cleaning arguments: %s", arguments)
    if arguments.sort_method == "none" and arguments.group_method == "none":
        logger.error("Both sort-by and group-by are 'None'. Nothing to do.")
        sys.exit(1)

    # Prepare sort, group and final process method names
    arguments.sort_method = arguments.sort_method.lower().replace("-", "_")
    arguments.group_method = arguments.group_method.lower().replace("-", "_")

    arguments = self._set_output_folder(arguments)

    if arguments.log_changes and arguments.log_file_path == "sort_log.json":
        # Assign default sort_log.json value if user didn't specify one
        arguments.log_file_path = os.path.join(self._args.input_dir, 'sort_log.json')

    logger.debug("Cleaned arguments: %s", arguments)
    return arguments
124
sort.py
Python
tools/sort/sort.py
98d01760e469fd2108eed8d0b0a1ba6297c3177c
faceswap
5
32,000
6
7
5
29
4
0
6
12
get_supported_tasks
feat: add pipeline registry abstraction (#17905) * feat: add pipeline registry abstraction - added `PipelineRegistry` abstraction - updates `add_new_pipeline.mdx` (english docs) to reflect the api addition - migrate `check_task` and `get_supported_tasks` from transformers/pipelines/__init__.py to transformers/pipelines/base.py#PipelineRegistry.{check_task,get_supported_tasks} Signed-off-by: Aaron Pham <[email protected]> * fix: update with upstream/main chore: Apply suggestions from sgugger's code review Signed-off-by: Aaron Pham <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]> * chore: PR updates - revert src/transformers/dependency_versions_table.py from upstream/main - updates pipeline registry to use global variables Signed-off-by: Aaron Pham <[email protected]> * tests: add tests for pipeline registry Signed-off-by: Aaron Pham <[email protected]> * tests: add test for output warning. Signed-off-by: Aaron Pham <[email protected]> * chore: fmt and cleanup unused imports Signed-off-by: Aaron Pham <[email protected]> * fix: change imports to top of the file and address comments Signed-off-by: Aaron Pham <[email protected]> Co-authored-by: Sylvain Gugger <[email protected]>
https://github.com/huggingface/transformers.git
def get_supported_tasks() -> List[str]:
    return PIPELINE_REGISTRY.get_supported_tasks()
16
__init__.py
Python
src/transformers/pipelines/__init__.py
49cd736a288a315d741e5c337790effa4c9fa689
transformers
1
297,083
5
6
4
17
3
0
5
12
async_ipv6_active
Add IPv6 sensor to fritz component (#75708) * Add IPv6 sensor to fritz component * Cast return type to string * Make ipv6 sensor suitable * simplify cast to str * use extisting property Co-authored-by: chemelli74 <[email protected]> Co-authored-by: mib1185 <[email protected]>
https://github.com/home-assistant/core.git
async def async_ipv6_active(self) -> bool:
23
common.py
Python
homeassistant/components/fritz/common.py
64a72daa277a4575006c6f9cb635711417b6a751
core
1
176,103
11
10
7
51
3
0
11
46
test_model_optional_prop_01
Pull assert_data_shape out of testbase.server and use it for model tests (#3315)
https://github.com/edgedb/edgedb.git
def test_model_optional_prop_01(self):
    self.assert_test_query(
        r,  # raw query string not included in this snippet
        {(False, 'boxing'), (True, 'unboxing'), (False, 'dynamic')}
    )
32
test_eval_model.py
Python
tests/test_eval_model.py
20ca6e2fa7bab2adc8c37d8c42049076c692782e
edgedb
1
104,908
5
9
2
34
5
0
5
19
download_and_extract
Add API code examples for Builder classes (#4313) * 📝 add examples for builder classes * 📝 apply quentin review
https://github.com/huggingface/datasets.git
def download_and_extract(self, url_or_urls):
    return self.extract(self.download(url_or_urls))
20
streaming_download_manager.py
Python
src/datasets/utils/streaming_download_manager.py
d1d4f1065fd4ab91b2c8682643dbd12f86d66fcd
datasets
1
94,383
14
9
2
54
9
0
15
29
generate_random_issue_events
feat(suspect-resolutions): Add suspect resolutions backend logic (#37588)
https://github.com/getsentry/sentry.git
def generate_random_issue_events(self, start, end, window):
    return [(t, random.randint(0, 30)) for t in range(start, end, window)]
38
test_metric_correlation.py
Python
tests/sentry/utils/suspect_resolutions/test_metric_correlation.py
6e75cfca0319b5e4e3448cf52469acf6482ba659
sentry
2
177,896
17
19
9
217
21
1
21
79
remove_annotation_update_counters
fix: DEV-2410: Separate remove annotation signal (#2409) * fix: DEV-2406: Fix counters for skipped annotations (#2366) * fix: DEV-2406: Fix counters for skipped annotations * Fixes Co-authored-by: makseq-ubnt <[email protected]> Co-authored-by: Konstantin Korotaev <[email protected]> Co-authored-by: makseq-ubnt <[email protected]> * fix: DEV-2417: Remove migration to django command (#2371) * fix: DEV-2410: Fix Total Annotations counter not decremented after annotation delete (#2387) * fix: DEV-2410: Fix Total Annotations counter not decremented after annotation delete * fix: DEV-2410: Fix total annotations counter is not decremented * fix: DEV-2410: Add debug logs to signals * fix: DEV-2410: Separate remove annotation signal Co-authored-by: Nikita Belonogov <[email protected]> Co-authored-by: makseq-ubnt <[email protected]> Co-authored-by: Sergey Zhuk <[email protected]>
https://github.com/heartexlabs/label-studio.git
def remove_annotation_update_counters(sender, instance, **kwargs):
    if instance.was_cancelled:
        Task.objects.filter(id=instance.task.id).update(
            cancelled_annotations=instance.task.annotations.all().filter(was_cancelled=True).count())
        logger.debug(f"Updated cancelled_annotations for {instance.task.id}.")
    else:
        Task.objects.filter(id=instance.task.id).update(
            total_annotations=instance.task.annotations.all().filter(was_cancelled=False).count())
        logger.debug(f"Updated total_annotations for {instance.task.id}.")


@receiver(pre_delete, sender=Prediction)
@receiver(pre_delete, sender=Prediction)
112
models.py
Python
label_studio/tasks/models.py
f343a7a5b91dad169d49accb47bb264cab535701
label-studio
2
322,052
4
7
2
22
4
0
4
18
task_path
Unified customization for Taskflow (#1517) * Add custom model inferface for lac & user dict interface for wordtag * Update README.md * Update term-linking * Update README.md * Update README.md * add custom method * Update README.md * Update README.md * Unified custom interface for Taskflow * Update model inference * Add config files * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * remove unused code * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update README.md * Update main cls
https://github.com/PaddlePaddle/PaddleNLP.git
def task_path(self):
    return self.task_instance._task_path
12
taskflow.py
Python
paddlenlp/taskflow/taskflow.py
c1d5241d581569b544c04f5d23b069a29a6e6209
PaddleNLP
1
176,372
36
11
29
100
11
0
56
120
maximal_matching
Update matching functions for error validation and speed (#4897) * First steps to update matching functions for #4644 Expand tests Change API to raise NetworkXError when matching involves nodes not in G Update is_*_matching to 100+ times faster. * improve matching_dict_to_set and docs for min_weight_matching * fix sphinx error
https://github.com/networkx/networkx.git
def maximal_matching(G):
    r  # raw docstring not included in this snippet
    matching = set()
    nodes = set()
    for edge in G.edges():
        # If the edge isn't covered, add it to the matching
        # then remove neighborhood of u and v from consideration.
        u, v = edge
        if u not in nodes and v not in nodes and u != v:
            matching.add(edge)
            nodes.update(edge)
    return matching
60
matching.py
Python
networkx/algorithms/matching.py
28b3014d68d2b4e40d3e02219770296a827bd55c
networkx
5
224,543
5
6
2
18
3
0
5
19
run_validation
Move some documentation into code, add misc API docs page (#2934)
https://github.com/mkdocs/mkdocs.git
def run_validation(self, value):
    return value
10
base.py
Python
mkdocs/config/base.py
9c0a8e50b11b70f803500cd73e7256b63f64b5e3
mkdocs
1
323,196
39
14
22
191
25
1
44
116
_write_setup_file
Add model parallel for FasterGPT. (#1755) * Add model parallel for FasterGPT. * Make GPT model parallel runable * Make FT model parallel optional. * Fix _write_setup_file when kwargs is not empty. * Fix ext_utils.load * Add file lock for model parallel. * Fix model_parallel.flag in CMakeLists.txt. * Use a separated lib for model parallel. * Make from_pretrained get partial model. * Make model parallel support layer group in python. * Fix fit_partial_model when model having keys state not including. Add api docs for model parallel. * Fix the default world_size when mpi is not available. * Add demo for GPT model parallel. * Fix default global ft_para_conf. * Fix GPTModel state_dict wrapper for layer parallel. * Set seed for tensor parallel. * Fix location of gpt.h in cmake. * Fix seed in gpt.h * Fix NV FT GPT embedding. * Add more cases in gpt_mp_sample.py * Fix seed in ker_curand_setupLauncher. Put build dir of FG in PPNLP_HOME with digest of current path. * Refine gpt_mp_sample.py
https://github.com/PaddlePaddle/PaddleNLP.git
def _write_setup_file(name, file_path, build_dir, **kwargs):
    template = textwrap.dedent().lstrip()  # template string not included in this snippet
    kwargs_str = ""
    for key, value in kwargs.items():
        kwargs_str += key + "=" + (f"'{value}'" if isinstance(value, str) else str(value)) + ","
    content = template.format(
        name=name, kwargs_str=kwargs_str, build_dir=build_dir)
    with open(file_path, 'w') as f:
        f.write(content)


@file_lock(os.path.join(PPNLP_HOME, "load_ext.lock"))
@file_lock(os.path.join(PPNLP_HOME, "load_ext.lock"))
97
ext_utils.py
Python
paddlenlp/ops/ext_utils.py
c541f4ba1fcab8304c7ac4efdce3d63a2e478176
PaddleNLP
3
22,109
10
8
10
48
6
0
11
24
put
Rename notpip to pip. Vendor in pip-22.2.1 and latest requirementslib and vistir.
https://github.com/pypa/pipenv.git
def put(self, url, data=None, **kwargs):
    r  # raw docstring not included in this snippet
    return self.request("PUT", url, data=data, **kwargs)
32
sessions.py
Python
pipenv/patched/pip/_vendor/requests/sessions.py
cd5a9683be69c86c8f3adcd13385a9bc5db198ec
pipenv
1
81,872
112
13
46
831
49
1
179
558
test_combine_prompts_WFJT_to_node
Write logic to combing workflow labels, IGs with nodes Additionally, move the inventory-specific hacks of yesteryear into the prompts_dict method of the WorkflowJob model try to make it clear exactly what this is hacking and why Correctly summarize label prompts, and add missing EE Expand unit tests to apply more fields adding missing fields to preserve during copy to workflow.py Fix bug where empty workflow job vars blanked node vars (#12904) * Fix bug where empty workflow job vars blanked node vars * Fix bug where workflow job has no extra_vars, add test * Add empty workflow job extra vars to assure fix
https://github.com/ansible/awx.git
def test_combine_prompts_WFJT_to_node(self, project, inventory, organization):
    jt = JobTemplate.objects.create(
        project=project,
        inventory=inventory,
        ask_variables_on_launch=True,
        ask_credential_on_launch=True,
        ask_instance_groups_on_launch=True,
        ask_labels_on_launch=True,
        ask_limit_on_launch=True,
    )
    wj = WorkflowJob.objects.create(name='test-wf-job', extra_vars='{}')
    common_ig = InstanceGroup.objects.create(name='common')
    common_ct = CredentialType.objects.create(name='common')

    node = WorkflowJobNode.objects.create(workflow_job=wj, unified_job_template=jt, extra_vars={'node_key': 'node_val'})
    node.limit = 'node_limit'
    node.save()
    node_cred_unique = Credential.objects.create(credential_type=CredentialType.objects.create(name='node'))
    node_cred_conflicting = Credential.objects.create(credential_type=common_ct)
    node.credentials.add(node_cred_unique, node_cred_conflicting)
    node_labels = [Label.objects.create(name='node1', organization=organization), Label.objects.create(name='node2', organization=organization)]
    node.labels.add(*node_labels)
    node_igs = [common_ig, InstanceGroup.objects.create(name='node')]
    for ig in node_igs:
        node.instance_groups.add(ig)

    # assertions for where node has prompts but workflow job does not
    data = node.get_job_kwargs()
    assert data['extra_vars'] == {'node_key': 'node_val'}
    assert set(data['credentials']) == set([node_cred_conflicting, node_cred_unique])
    assert data['instance_groups'] == node_igs
    assert set(data['labels']) == set(node_labels)
    assert data['limit'] == 'node_limit'

    # add prompts to the WorkflowJob
    wj.limit = 'wj_limit'
    wj.extra_vars = {'wj_key': 'wj_val'}
    wj.save()
    wj_cred_unique = Credential.objects.create(credential_type=CredentialType.objects.create(name='wj'))
    wj_cred_conflicting = Credential.objects.create(credential_type=common_ct)
    wj.credentials.add(wj_cred_unique, wj_cred_conflicting)
    wj.labels.add(Label.objects.create(name='wj1', organization=organization), Label.objects.create(name='wj2', organization=organization))
    wj_igs = [InstanceGroup.objects.create(name='wj'), common_ig]
    for ig in wj_igs:
        wj.instance_groups.add(ig)

    # assertions for behavior where node and workflow jobs have prompts
    data = node.get_job_kwargs()
    assert data['extra_vars'] == {'node_key': 'node_val', 'wj_key': 'wj_val'}
    assert set(data['credentials']) == set([wj_cred_unique, wj_cred_conflicting, node_cred_unique])
    assert data['instance_groups'] == wj_igs
    assert set(data['labels']) == set(node_labels)  # as exception, WFJT labels not applied
    assert data['limit'] == 'wj_limit'


@pytest.mark.django_db
@pytest.mark.django_db
494
test_workflow.py
Python
awx/main/tests/functional/models/test_workflow.py
b38e08174a29bb4d5bb80b2587271203dae9ba74
awx
3
106,050
54
20
51
341
19
0
91
517
flatten
Clean up remaining Main Classes docstrings (#5349) clean up docstrings
https://github.com/huggingface/datasets.git
def flatten(self, max_depth=16) -> "Features":
    for depth in range(1, max_depth):
        no_change = True
        flattened = self.copy()
        for column_name, subfeature in self.items():
            if isinstance(subfeature, dict):
                no_change = False
                flattened.update({f"{column_name}.{k}": v for k, v in subfeature.items()})
                del flattened[column_name]
            elif isinstance(subfeature, Sequence) and isinstance(subfeature.feature, dict):
                no_change = False
                flattened.update(
                    {
                        f"{column_name}.{k}": Sequence(v) if not isinstance(v, dict) else [v]
                        for k, v in subfeature.feature.items()
                    }
                )
                del flattened[column_name]
            elif hasattr(subfeature, "flatten") and subfeature.flatten() != subfeature:
                no_change = False
                flattened.update({f"{column_name}.{k}": v for k, v in subfeature.flatten().items()})
                del flattened[column_name]
        self = flattened
        if no_change:
            break
    return self
201
features.py
Python
src/datasets/features/features.py
c78559cacbb0ca6e0bc8bfc313cc0359f8c23ead
datasets
13
287,856
30
11
28
252
18
0
58
234
test_service_set_person_away
Netatmo refactor to use pyatmo 7.0.1 (#73482) (#78523) Co-authored-by: Robert Svensson <[email protected]>
https://github.com/home-assistant/core.git
async def test_service_set_person_away(hass, config_entry, netatmo_auth):
    with selected_platforms(["camera"]):
        await hass.config_entries.async_setup(config_entry.entry_id)
        await hass.async_block_till_done()

    await hass.async_block_till_done()

    data = {
        "entity_id": "camera.hall",
        "person": "Richard Doe",
    }

    with patch("pyatmo.home.Home.async_set_persons_away") as mock_set_persons_away:
        await hass.services.async_call(
            "netatmo", SERVICE_SET_PERSON_AWAY, service_data=data
        )
        await hass.async_block_till_done()
        mock_set_persons_away.assert_called_once_with(
            person_id="91827376-7e04-5298-83af-a0cb8372dff3",
        )

    data = {
        "entity_id": "camera.hall",
    }

    with patch("pyatmo.home.Home.async_set_persons_away") as mock_set_persons_away:
        await hass.services.async_call(
            "netatmo", SERVICE_SET_PERSON_AWAY, service_data=data
        )
        await hass.async_block_till_done()
        mock_set_persons_away.assert_called_once_with(
            person_id=None,
        )
137
test_camera.py
Python
tests/components/netatmo/test_camera.py
81abeac83ed85c5753cb8f2ac317caf079cf1868
core
1
41,848
139
13
12
178
16
0
215
425
_adjust_cat_axis
Improve legend for categorical scatterplots (#2828) * Improve legend for categorical scatterplots * Move legend attribute assignment to fix empty plot * Don't create axis labels inside plotting functions * Add slight hack to enable catplot with empty x/y vectors * Don't set axis limits for empty categorical plot * Avoid expensive and uncessary computation when stripplot is not dodged * Add tests
https://github.com/mwaskom/seaborn.git
def _adjust_cat_axis(self, ax, axis):
    # Note: in theory, this could happen in _attach for all categorical axes
    # But two reasons not to do that:
    #   - If it happens before plotting, autoscaling messes up the plot limits
    #   - It would change existing plots from other seaborn functions
    if self.var_types[axis] != "categorical":
        return

    # If both x/y data are empty, the correct way to set up the plot is
    # somewhat undefined; because we don't add null category data to the plot in
    # this case we don't *have* a categorical axis (yet), so best to just bail.
    if self.plot_data[axis].empty:
        return

    # We can infer the total number of categories (including those from previous
    # plots that are not part of the plot we are currently making) from the number
    # of ticks, which matplotlib sets up while doing unit conversion. This feels
    # slightly risky, as if we are relying on something that may be a matplotlib
    # implementation detail. But I cannot think of a better way to keep track of
    # the state from previous categorical calls (see GH2516 for context)
    n = len(getattr(ax, f"get_{axis}ticks")())

    if axis == "x":
        ax.xaxis.grid(False)
        ax.set_xlim(-.5, n - .5, auto=None)
    else:
        ax.yaxis.grid(False)
        # Note limits that correspond to previously-inverted y axis
        ax.set_ylim(n - .5, -.5, auto=None)
103
categorical.py
Python
seaborn/categorical.py
dce315003b5bfba69e4bfe0a9d8e3ed9c46ea7fa
seaborn
4
160,486
10
10
4
61
10
0
11
43
test_property
ENH: Add support for symbol to polynomial package (#16154) Adds a symbol attribute to the polynomials from the np.polynomial package to allow the user to control/modify the symbol used to represent the independent variable for a polynomial expression. This attribute corresponds to the variable attribute of the poly1d class from the old np.lib.polynomial module. Marked as draft for now as it depends on #15666 - all _str* and _repr* methods of ABCPolyBase and derived classes would need to be modified (and tested) to support this change. Co-authored-by: Warren Weckesser <[email protected]>
https://github.com/numpy/numpy.git
def test_property(self):
    p = poly.Polynomial(self.c, symbol='x')
    with pytest.raises(AttributeError):
        p.symbol = 'z'
33
test_symbol.py
Python
numpy/polynomial/tests/test_symbol.py
b84a53df9346a73fe8f6df0aaad8727f9bf56076
numpy
1
249,082
24
13
11
91
12
0
24
103
test_unknown_devices
Use literals in place of `HTTPStatus` constants in tests (#13469)
https://github.com/matrix-org/synapse.git
def test_unknown_devices(self) -> None:
    channel = self.make_request(
        "POST",
        self.url,
        access_token=self.admin_user_tok,
        content={"devices": ["unknown_device1", "unknown_device2"]},
    )

    # Delete unknown devices returns status 200
    self.assertEqual(200, channel.code, msg=channel.json_body)
55
test_device.py
Python
tests/rest/admin/test_device.py
c97042f7eef3748e17c90e48a4122389a89c4735
synapse
1
176,452
40
13
16
141
11
0
56
155
reversed
Minor improvements from general code readthrough (#5414) * Add deprecated directive to reversed docstring. * Add missing dep directives to shpfiles. * Remove defn of INF sentinel. * typo. * str -> comment in forloop. * STY: appropriate casing for var name.
https://github.com/networkx/networkx.git
def reversed(G):
    msg = (
        "context manager reversed is deprecated and to be removed in 3.0."
        "Use G.reverse(copy=False) if G.is_directed() else G instead."
    )
    warnings.warn(msg, DeprecationWarning)

    directed = G.is_directed()
    if directed:
        G._pred, G._succ = G._succ, G._pred
        G._adj = G._succ
    try:
        yield
    finally:
        if directed:
            # Reverse the reverse.
            G._pred, G._succ = G._succ, G._pred
            G._adj = G._succ
82
contextmanagers.py
Python
networkx/utils/contextmanagers.py
cc1db275efc709cb964ce88abbfa877798d58c10
networkx
4
259,776
9
10
3
57
7
0
9
18
strip_accents_ascii
DOC Ensures that strip_accents_ascii passes numpydoc validation (#23250)
https://github.com/scikit-learn/scikit-learn.git
def strip_accents_ascii(s):
    nkfd_form = unicodedata.normalize("NFKD", s)
    return nkfd_form.encode("ASCII", "ignore").decode("ASCII")
30
text.py
Python
sklearn/feature_extraction/text.py
49a5e0ce59168ead764224abc2cb07e77914cd24
scikit-learn
1
265,891
5
6
2
21
4
0
5
19
clear
Closes #10560: New global search (#10676) * Initial work on new search backend * Clean up search backends * Return only the most relevant result per object * Clear any pre-existing cached entries on cache() * #6003: Implement global search functionality for custom field values * Tweak field weights & document guidance * Extend search() to accept a lookup type * Move get_registry() out of SearchBackend * Enforce object permissions when returning search results * Add indexers for remaining models * Avoid calling remove() on non-cacheable objects * Use new search backend by default * Extend search backend to filter by object type * Clean up search view form * Enable specifying lookup logic * Add indexes for value field * Remove object type selector from search bar * Introduce SearchTable and enable HTMX for results * Enable pagination * Remove legacy search backend * Cleanup * Use a UUID for CachedValue primary key * Refactoring search methods * Define max search results limit * Extend reindex command to support specifying particular models * Add clear() and size to SearchBackend * Optimize bulk caching performance * Highlight matched portion of field value * Performance improvements for reindexing * Started on search tests * Cleanup & docs * Documentation updates * Clean up SearchIndex * Flatten search registry to register by app_label.model_name * Clean up search backend classes * Clean up RestrictedGenericForeignKey and RestrictedPrefetch * Resolve migrations conflict
https://github.com/netbox-community/netbox.git
def clear(self, object_types=None):
    raise NotImplementedError
12
backends.py
Python
netbox/netbox/search/backends.py
9628dead07ccef9608b32906aa8194bc948e5a09
netbox
1
247,484
52
11
22
293
36
0
56
231
test_knock
Add some type hints to the tests.handlers module. (#12207)
https://github.com/matrix-org/synapse.git
def test_knock(self) -> None:
    # create a knockable v7 room
    room_id = self.helper.create_room_as(
        self.user1, room_version=RoomVersions.V7.identifier, tok=self.token1
    )
    self.helper.send_state(
        room_id,
        EventTypes.JoinRules,
        {"join_rule": JoinRules.KNOCK},
        tok=self.token1,
    )

    self.helper.send(room_id, body="Hello!", tok=self.token1)
    self.helper.knock(room_id, self.user2, tok=self.token2)

    writer = Mock()

    self.get_success(self.admin_handler.export_user_data(self.user2, writer))

    writer.write_events.assert_not_called()
    writer.write_state.assert_not_called()
    writer.write_knock.assert_called_once()

    args = writer.write_knock.call_args[0]
    self.assertEqual(args[0], room_id)
    self.assertEqual(args[1].content["membership"], "knock")
    self.assertTrue(args[2])  # Assert there is at least one bit of state
186
test_admin.py
Python
tests/handlers/test_admin.py
e10a2fe0c28ec9206c0e2275df492f61ff5025f2
synapse
1
144,201
10
10
3
40
6
0
10
35
_check_valid
[runtime env] Local uri caching for working_dir, py_modules and conda (#20273) Previously, local files corresponding to runtime env URIs were eagerly garbage collected as soon as there were no more references to them. In this PR, we store this data in a cache instead, so when the reference count for a URI drops to zero, instead of deleting it we simple mark it as unused in the cache. When the cache exceeds its size limit (default 10 GB) it will delete unused URIs until the cache is back under the size limit or there are no more unused URIs. Design doc: https://docs.google.com/document/d/1x1JAHg7c0ewcOYwhhclbuW0B0UC7l92WFkF4Su0T-dk/edit - Adds unit tests for caching and integration tests for working_dir caching
https://github.com/ray-project/ray.git
def _check_valid(self):
    if self._debug_mode:
        assert self._used_uris & self._unused_uris == set()
23
uri_cache.py
Python
python/ray/_private/runtime_env/uri_cache.py
78f882dbbc4d09128bc421eb8206d15d6d2c45a2
ray
2
126,955
158
17
60
517
48
0
248
1,120
training_step
[RLlib] Move learning_starts logic from buffers into `training_step()`. (#26032)
https://github.com/ray-project/ray.git
def training_step(self) -> ResultDict:
    train_results = {}

    # We alternate between storing new samples and sampling and training
    store_weight, sample_and_train_weight = calculate_rr_weights(self.config)

    for _ in range(store_weight):
        # Sample (MultiAgentBatch) from workers.
        new_sample_batch = synchronous_parallel_sample(
            worker_set=self.workers, concat=True
        )

        # Update counters
        self._counters[NUM_AGENT_STEPS_SAMPLED] += new_sample_batch.agent_steps()
        self._counters[NUM_ENV_STEPS_SAMPLED] += new_sample_batch.env_steps()

        # Store new samples in replay buffer.
        self.local_replay_buffer.add_batch(new_sample_batch)

    global_vars = {
        "timestep": self._counters[NUM_ENV_STEPS_SAMPLED],
    }

    # Update target network every `target_network_update_freq` sample steps.
    cur_ts = self._counters[
        NUM_AGENT_STEPS_SAMPLED if self._by_agent_steps else NUM_ENV_STEPS_SAMPLED
    ]

    if cur_ts > self.config["num_steps_sampled_before_learning_starts"]:
        for _ in range(sample_and_train_weight):
            # Sample training batch (MultiAgentBatch) from replay buffer.
            train_batch = sample_min_n_steps_from_buffer(
                self.local_replay_buffer,
                self.config["train_batch_size"],
                count_by_agent_steps=self._by_agent_steps,
            )

            # Postprocess batch before we learn on it
            post_fn = self.config.get("before_learn_on_batch") or (lambda b, *a: b)
            train_batch = post_fn(train_batch, self.workers, self.config)

            # for policy_id, sample_batch in train_batch.policy_batches.items():
            #     print(len(sample_batch["obs"]))
            #     print(sample_batch.count)

            # Learn on training batch.
            # Use simple optimizer (only for multi-agent or tf-eager; all other
            # cases should use the multi-GPU optimizer, even if only using 1 GPU)
            if self.config.get("simple_optimizer") is True:
                train_results = train_one_step(self, train_batch)
            else:
                train_results = multi_gpu_train_one_step(self, train_batch)

            # Update replay buffer priorities.
            update_priorities_in_replay_buffer(
                self.local_replay_buffer,
                self.config,
                train_batch,
                train_results,
            )

            last_update = self._counters[LAST_TARGET_UPDATE_TS]
            if cur_ts - last_update >= self.config["target_network_update_freq"]:
                to_update = self.workers.local_worker().get_policies_to_train()
                self.workers.local_worker().foreach_policy_to_train(
                    lambda p, pid: pid in to_update and p.update_target()
                )
                self._counters[NUM_TARGET_UPDATES] += 1
                self._counters[LAST_TARGET_UPDATE_TS] = cur_ts

            # Update weights and global_vars - after learning on the local worker -
            # on all remote workers.
            with self._timers[SYNCH_WORKER_WEIGHTS_TIMER]:
                self.workers.sync_weights(global_vars=global_vars)

    # Return all collected metrics for the iteration.
    return train_results


# Deprecated: Use ray.rllib.algorithms.dqn.DQNConfig instead!
316
dqn.py
Python
rllib/algorithms/dqn/dqn.py
0dceddb912ed92286032b5563dd2e541a8a7031f
ray
9
192,979
65
12
13
244
24
0
86
133
sequence_loss
Upgrade usort to `1.0.2` and black to 22.3.0 (#5106) * upgrade usort to * Also update black * Actually use 1.0.2 * Apply pre-commit Co-authored-by: Nicolas Hug <[email protected]>
https://github.com/pytorch/vision.git
def sequence_loss(flow_preds, flow_gt, valid_flow_mask, gamma=0.8, max_flow=400):
    if gamma > 1:
        raise ValueError(f"Gamma should be < 1, got {gamma}.")

    # exlude invalid pixels and extremely large diplacements
    flow_norm = torch.sum(flow_gt**2, dim=1).sqrt()
    valid_flow_mask = valid_flow_mask & (flow_norm < max_flow)

    valid_flow_mask = valid_flow_mask[:, None, :, :]

    flow_preds = torch.stack(flow_preds)  # shape = (num_flow_updates, batch_size, 2, H, W)

    abs_diff = (flow_preds - flow_gt).abs()
    abs_diff = (abs_diff * valid_flow_mask).mean(axis=(1, 2, 3, 4))

    num_predictions = flow_preds.shape[0]
    weights = gamma ** torch.arange(num_predictions - 1, -1, -1).to(flow_gt.device)
    flow_loss = (abs_diff * weights).sum()

    return flow_loss
157
utils.py
Python
references/optical_flow/utils.py
6ca9c76adb6daf2695d603ad623a9cf1c4f4806f
vision
2
124,015
7
9
7
32
6
0
7
21
_new_batch_builder
[RLlib] EnvRunnerV2 and EpisodeV2 that support Connectors. (#25922)
https://github.com/ray-project/ray.git
def _new_batch_builder(self, _) -> _PolicyCollectorGroup:
    return _PolicyCollectorGroup(self._worker.policy_map)
19
env_runner_v2.py
Python
rllib/evaluation/env_runner_v2.py
52bb8e47d483082e528fc8595005e0813a46efb8
ray
1
274,738
32
11
4
79
14
1
37
60
top_k_categorical_accuracy
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def top_k_categorical_accuracy(y_true, y_pred, k=5):
    # Note: wraps metrics_utils.top_k_categorical_matches. This seperates
    # public facing top_k_categorical_accuracy behavior from the vital behavior
    # of the top_k_categorical_matches method needed in backend dependencies.
    return metrics_utils.sparse_top_k_categorical_matches(
        tf.math.argmax(y_true, axis=-1), y_pred, k
    )


@keras_export("keras.metrics.sparse_top_k_categorical_accuracy")
@tf.__internal__.dispatch.add_dispatch_support
@keras_export("keras.metrics.sparse_top_k_categorical_accuracy") @tf.__internal__.dispatch.add_dispatch_support
35
metrics.py
Python
keras/metrics/metrics.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
272,922
20
10
4
49
8
0
20
39
compress
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def compress(summary, epsilon):
    # TODO(b/184863356): remove the numpy escape hatch here.
    return tf.numpy_function(
        lambda s: _compress_summary_numpy(s, epsilon), [summary], tf.float32
    )
31
discretization.py
Python
keras/layers/preprocessing/discretization.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
267,729
160
18
82
892
63
0
332
1,630
execute_list_collection
Fix listing collections that are missing the metadata required by build (#76596) * Rethread pr/70185 through the dependency resolver Hang optional metadata toggle on the ConcreteArtifactsManager instead of threading it through whole list codepath Don't error while listing collections if a collection's metadata is missing keys required for building a collection. Give an informative warning if metadata has been badly formatted. Co-authored-by: Sam Doran <[email protected]>
https://github.com/ansible/ansible.git
def execute_list_collection(self, artifacts_manager=None):
    if artifacts_manager is not None:
        artifacts_manager.require_build_metadata = False

    output_format = context.CLIARGS['output_format']
    collections_search_paths = set(context.CLIARGS['collections_path'])
    collection_name = context.CLIARGS['collection']
    default_collections_path = AnsibleCollectionConfig.collection_paths
    collections_in_paths = {}

    warnings = []
    path_found = False
    collection_found = False
    for path in collections_search_paths:
        collection_path = GalaxyCLI._resolve_path(path)
        if not os.path.exists(path):
            if path in default_collections_path:
                # don't warn for missing default paths
                continue

            warnings.append("- the configured path {0} does not exist.".format(collection_path))
            continue

        if not os.path.isdir(collection_path):
            warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
            continue

        path_found = True

        if collection_name:
            # list a specific collection
            validate_collection_name(collection_name)
            namespace, collection = collection_name.split('.')

            collection_path = validate_collection_path(collection_path)
            b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')

            if not os.path.exists(b_collection_path):
                warnings.append("- unable to find {0} in collection paths".format(collection_name))
                continue

            if not os.path.isdir(collection_path):
                warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
                continue

            collection_found = True

            try:
                collection = Requirement.from_dir_path_as_unknown(
                    b_collection_path,
                    artifacts_manager,
                )
            except ValueError as val_err:
                six.raise_from(AnsibleError(val_err), val_err)

            if output_format in {'yaml', 'json'}:
                collections_in_paths[collection_path] = {
                    collection.fqcn: {'version': collection.ver}
                }
                continue

            fqcn_width, version_width = _get_collection_widths([collection])

            _display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
            _display_collection(collection, fqcn_width, version_width)
        else:
            # list all collections
            collection_path = validate_collection_path(path)
            if os.path.isdir(collection_path):
                display.vvv("Searching {0} for collections".format(collection_path))
                collections = list(find_existing_collections(
                    collection_path, artifacts_manager,
                ))
            else:
                # There was no 'ansible_collections/' directory in the path, so there
                # or no collections here.
                display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
                continue

            if not collections:
                display.vvv("No collections found at {0}".format(collection_path))
                continue

            if output_format in {'yaml', 'json'}:
                collections_in_paths[collection_path] = {
                    collection.fqcn: {'version': collection.ver} for collection in collections
                }
                continue

            # Display header
            fqcn_width, version_width = _get_collection_widths(collections)
            _display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)

            # Sort collections by the namespace and name
            for collection in sorted(collections, key=to_text):
                _display_collection(collection, fqcn_width, version_width)

    # Do not warn if the specific collection was found in any of the search paths
    if collection_found and collection_name:
        warnings = []

    for w in warnings:
        display.warning(w)

    if not path_found:
        raise AnsibleOptionsError("- None of the provided paths were usable. "
                                  "Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))

    if output_format == 'json':
        display.display(json.dumps(collections_in_paths))
    elif output_format == 'yaml':
        display.display(yaml_dump(collections_in_paths))

    return 0
529
galaxy.py
Python
lib/ansible/cli/galaxy.py
05608b20e8f875d51866a184f8c579fe60498e05
ansible
22
248,305
6
6
5
19
3
0
6
20
can_native_upsert
Tidy up and type-hint the database engine modules (#12734) Co-authored-by: Sean Quah <[email protected]>
https://github.com/matrix-org/synapse.git
def can_native_upsert(self) -> bool:
    return True
10
postgres.py
Python
synapse/storage/engines/postgres.py
1fe202a1a3343fad77da270ffe0923a46f1944dd
synapse
1
309,149
87
17
109
819
33
0
211
1,162
test_doorbell_update_via_pubnub
Split august motion and image capture binary sensors (#62154)
https://github.com/home-assistant/core.git
async def test_doorbell_update_via_pubnub(hass):
    doorbell_one = await _mock_doorbell_from_fixture(hass, "get_doorbell.json")
    pubnub = AugustPubNub()

    await _create_august_with_devices(hass, [doorbell_one], pubnub=pubnub)
    assert doorbell_one.pubsub_channel == "7c7a6672-59c8-3333-ffff-dcd98705cccc"

    binary_sensor_k98gidt45gul_name_motion = hass.states.get(
        "binary_sensor.k98gidt45gul_name_motion"
    )
    assert binary_sensor_k98gidt45gul_name_motion.state == STATE_OFF
    binary_sensor_k98gidt45gul_name_ding = hass.states.get(
        "binary_sensor.k98gidt45gul_name_ding"
    )
    assert binary_sensor_k98gidt45gul_name_ding.state == STATE_OFF

    pubnub.message(
        pubnub,
        Mock(
            channel=doorbell_one.pubsub_channel,
            timetoken=_timetoken(),
            message={
                "status": "imagecapture",
                "data": {
                    "result": {
                        "created_at": "2021-03-16T01:07:08.817Z",
                        "secure_url": "https://dyu7azbnaoi74.cloudfront.net/zip/images/zip.jpeg",
                    },
                },
            },
        ),
    )
    await hass.async_block_till_done()

    binary_sensor_k98gidt45gul_name_image_capture = hass.states.get(
        "binary_sensor.k98gidt45gul_name_image_capture"
    )
    assert binary_sensor_k98gidt45gul_name_image_capture.state == STATE_ON

    pubnub.message(
        pubnub,
        Mock(
            channel=doorbell_one.pubsub_channel,
            timetoken=_timetoken(),
            message={
                "status": "doorbell_motion_detected",
                "data": {
                    "event": "doorbell_motion_detected",
                    "image": {
                        "height": 640,
                        "width": 480,
                        "format": "jpg",
                        "created_at": "2021-03-16T02:36:26.886Z",
                        "bytes": 14061,
                        "secure_url": "https://dyu7azbnaoi74.cloudfront.net/images/1f8.jpeg",
                        "url": "https://dyu7azbnaoi74.cloudfront.net/images/1f8.jpeg",
                        "etag": "09e839331c4ea59eef28081f2caa0e90",
                    },
                    "doorbellName": "Front Door",
                    "callID": None,
                    "origin": "mars-api",
                    "mutableContent": True,
                },
            },
        ),
    )
    await hass.async_block_till_done()

    binary_sensor_k98gidt45gul_name_motion = hass.states.get(
        "binary_sensor.k98gidt45gul_name_motion"
    )
    assert binary_sensor_k98gidt45gul_name_motion.state == STATE_ON

    binary_sensor_k98gidt45gul_name_ding = hass.states.get(
        "binary_sensor.k98gidt45gul_name_ding"
    )
    assert binary_sensor_k98gidt45gul_name_ding.state == STATE_OFF

    new_time = dt_util.utcnow() + datetime.timedelta(seconds=40)
    native_time = datetime.datetime.now() + datetime.timedelta(seconds=40)
    with patch(
        "homeassistant.components.august.binary_sensor._native_datetime",
        return_value=native_time,
    ):
        async_fire_time_changed(hass, new_time)
        await hass.async_block_till_done()

    binary_sensor_k98gidt45gul_name_image_capture = hass.states.get(
        "binary_sensor.k98gidt45gul_name_image_capture"
    )
    assert binary_sensor_k98gidt45gul_name_image_capture.state == STATE_OFF

    pubnub.message(
        pubnub,
        Mock(
            channel=doorbell_one.pubsub_channel,
            timetoken=_timetoken(),
            message={
                "status": "buttonpush",
            },
        ),
    )
    await hass.async_block_till_done()

    binary_sensor_k98gidt45gul_name_ding = hass.states.get(
        "binary_sensor.k98gidt45gul_name_ding"
    )
    assert binary_sensor_k98gidt45gul_name_ding.state == STATE_ON

    new_time = dt_util.utcnow() + datetime.timedelta(seconds=40)
    native_time = datetime.datetime.now() + datetime.timedelta(seconds=40)
    with patch(
        "homeassistant.components.august.binary_sensor._native_datetime",
        return_value=native_time,
    ):
        async_fire_time_changed(hass, new_time)
        await hass.async_block_till_done()

    binary_sensor_k98gidt45gul_name_ding = hass.states.get(
        "binary_sensor.k98gidt45gul_name_ding"
    )
    assert binary_sensor_k98gidt45gul_name_ding.state == STATE_OFF
475
test_binary_sensor.py
Python
tests/components/august/test_binary_sensor.py
ea5b18c1ef16b64cd7916f2540692ab5de2d2edf
core
1
337,864
21
11
13
119
14
0
28
95
gather
FSDP integration enhancements and fixes (#522) * FSDP integration enhancements and fixes * bug fixes 1. fix circular dependency 2. Add model print statement in FSDP example 3. minor fixes * removing `always_wrap` as it is rarely useful * removing comment * resolving comments * fsdp fp16 mp uses ShardedGradScaler * fix import * fix check * add exception when class to wrap not found in model * adding `FSDP_BACKWARD_PREFETCH` * fix
https://github.com/huggingface/accelerate.git
def gather(tensor):
    if AcceleratorState().distributed_type == DistributedType.TPU:
        return _tpu_gather(tensor, name="accelerate.utils.gather")
    elif AcceleratorState().distributed_type in [
        DistributedType.DEEPSPEED,
        DistributedType.MULTI_GPU,
        DistributedType.FSDP,
    ]:
        return _gpu_gather(tensor)
    elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:
        return _cpu_gather(tensor)
    else:
        return tensor
73
operations.py
Python
src/accelerate/utils/operations.py
c93b3eb5d714ab9a2743f6f567e2f5599f1f8316
accelerate
4
80,591
90
18
20
260
26
0
120
402
update_model
Decoupled callback functions from BaseTask Class --- Removed all callback functions from 'jobs.py' and put them in a new file '/awx/main/tasks/callback.py' --- Modified Unit tests unit moved --- Moved 'update_model' from jobs.py to /awx/main/utils/update_model.py
https://github.com/ansible/awx.git
def update_model(model, pk, _attempt=0, **updates):
    try:
        with transaction.atomic():
            # Retrieve the model instance.
            instance = model.objects.get(pk=pk)

            # Update the appropriate fields and save the model
            # instance, then return the new instance.
            if updates:
                update_fields = ['modified']
                for field, value in updates.items():
                    setattr(instance, field, value)
                    update_fields.append(field)
                    if field == 'status':
                        update_fields.append('failed')
                instance.save(update_fields=update_fields)
            return instance
    except DatabaseError as e:
        # Log out the error to the debug logger.
        logger.debug('Database error updating %s, retrying in 5 seconds (retry #%d): %s', model._meta.object_name, _attempt + 1, e)

        # Attempt to retry the update, assuming we haven't already
        # tried too many times.
        if _attempt < 5:
            time.sleep(5)
            return update_model(model, pk, _attempt=_attempt + 1, **updates)
        else:
            logger.error('Failed to update %s after %d retries.', model._meta.object_name, _attempt)
156
update_model.py
Python
awx/main/utils/update_model.py
443bdc1234682dd0004bae372078512fcf37cce9
awx
6
132,830
21
11
8
79
9
0
24
92
should_checkpoint
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def should_checkpoint(self):
    result = self.last_result or {}
    if result.get(DONE) and self.checkpoint_at_end:
        return True
    return (
        self.checkpoint_freq
        and result.get(TRAINING_ITERATION, 0) % self.checkpoint_freq == 0
    )
49
trial.py
Python
python/ray/tune/trial.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
5
100,808
61
12
16
169
15
0
83
258
freeze
Refactoring and TravisCI to Github Actions (#1239) * refactor training * travis to actions
https://github.com/deepfakes/faceswap.git
def freeze(self) -> None:
    # Blanket unfreeze layers, as checking the value of :attr:`layer.trainable` appears to
    # return ``True`` even when the weights have been frozen
    for layer in get_all_sub_models(self._model):
        layer.trainable = True

    if not self._do_freeze:
        logger.debug("Freeze weights deselected. Not freezing")
        return

    for layer in get_all_sub_models(self._model):
        if layer.name in self._freeze_layers:
            logger.info("Freezing weights for '%s' in model '%s'", layer.name, self._name)
            layer.trainable = False
            self._freeze_layers.remove(layer.name)

    if self._freeze_layers:
        logger.warning("The following layers were set to be frozen but do not exist in the "
                       "model: %s", self._freeze_layers)
100
io.py
Python
plugins/train/model/_base/io.py
ff6b0209dd5ad57b81b0aca570df7f39a7119bfb
faceswap
6
20,843
14
17
6
70
10
0
18
57
_numbers_column_width
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def _numbers_column_width(self) -> int:
    column_width = 0
    if self.line_numbers:
        column_width = len(str(self.start_line + self.code.count("\n"))) + 2
    return column_width
40
syntax.py
Python
pipenv/patched/notpip/_vendor/rich/syntax.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
315,626
18
9
9
99
13
0
26
53
test_set_dateandtime_button
Add SetSystemDateandTime Button (#66419) * add SetSystemDateandTime * fix * address review * follow recommendation to set date and time on start * add set date and time button test
https://github.com/home-assistant/core.git
async def test_set_dateandtime_button(hass):
    await setup_onvif_integration(hass)

    state = hass.states.get("button.testcamera_set_system_date_and_time")
    assert state
    assert state.state == STATE_UNKNOWN

    registry = er.async_get(hass)
    entry = registry.async_get("button.testcamera_set_system_date_and_time")
    assert entry
    assert entry.unique_id == f"{MAC}_setsystemdatetime"
54
test_button.py
Python
tests/components/onvif/test_button.py
4e2de2479a3f1ae861c7683fe3c5c0881a242c17
core
1
102,202
44
17
9
214
31
0
53
174
test_observers_not_touched_by_tracing
dbr quant: break up test class into multiple classes (#70246) Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/70246 Breaks up the large `TestQuantizeDBR` test case into 1. `TestQuantizeDBRIndividualOps` for testing functionality of ops 2. `TestQuantizeDBRMultipleOps` for testing non-fusion interactions between ops 3. `TestQuantizeDBR` for everything else We may need to refactor this more in the future, but this should unblock things for the near future. Test Plan: ``` python test/test_quantization.py TestQuantizeDBR python test/test_quantization.py TestQuantizeDBRIndividualOps python test/test_quantization.py TestQuantizeDBRMultipleOps ``` Reviewed By: jerryzh168 Differential Revision: D33255925 Pulled By: vkuzo fbshipit-source-id: 82db1a644867e9303453cfedffed2d81d083c9cd
https://github.com/pytorch/pytorch.git
def test_observers_not_touched_by_tracing(self):
    m = nn.Sequential(nn.Conv2d(1, 1, 1)).eval()
    qconfig = torch.quantization.default_qconfig
    mp = _quantize_dbr.prepare(m, {'': qconfig}, (torch.randn(1, 1, 1, 1),))
    for _, mod in mp.named_modules():
        if isinstance(mod, (ObserverBase, FakeQuantizeBase)):
            scale, zp = mod.calculate_qparams()
            # Assume that if scale is 1.0 and zp is 0, no calibration
            # has happened.
            self.assertTrue(torch.allclose(scale, torch.ones(1)))
            self.assertTrue(torch.equal(zp, torch.zeros(1, dtype=torch.long)))
138
test_quantize_dbr.py
Python
test/quantization/dbr/test_quantize_dbr.py
4e90fa6a8c15eb92578da5ef77b2c515cb592abf
pytorch
3
275,052
75
13
15
136
13
0
99
229
strategy_supports_loss_scaling
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def strategy_supports_loss_scaling():
    if not tf.distribute.has_strategy():
        return True
    strategy = tf.distribute.get_strategy()
    # Strategies are supported if either there is only one replica or if variables
    # are replicated per device. Otherwise, the current model.fit() implementation
    # and most custom training loops incorrectly unscale the gradients. Currently,
    # gradients are unscaled once per compute replica, but they should be unscaled
    # once per variable replica. When there is one variable replica for each
    # compute replica, this works fine, but otherwise issues will occur.
    # TODO(reedwm): Support all strategies.
    return isinstance(
        strategy,
        (
            tf.distribute.MultiWorkerMirroredStrategy,
            tf.compat.v1.distribute.experimental.MultiWorkerMirroredStrategy,
            tf.distribute.OneDeviceStrategy,
            tf.compat.v1.distribute.OneDeviceStrategy,
            tf.distribute.MirroredStrategy,
            tf.compat.v1.distribute.MirroredStrategy,
        ),
    )
85
loss_scale_optimizer.py
Python
keras/mixed_precision/loss_scale_optimizer.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
81,182
7
10
3
41
6
0
7
32
_get_local_with_cache
Only use in-memory cache for database settings, set ttl=5 (#12166) * Only use in-memory cache for database settings Make necessary adjustments to monkeypatch as it is very vunerable to recursion Remove migration exception that is now redundant Clear cache if a setting is changed * Use dedicated middleware for setting cache stuff Clear cache for each request * Add tests for in-memory cache
https://github.com/ansible/awx.git
def _get_local_with_cache(self, name):
    with _ctit_db_wrapper(trans_safe=True):
        return self._get_local(name)
23
settings.py
Python
awx/conf/settings.py
aaad634483b0f293a07be1863220e3bfdce2a8ba
awx
1
30,696
9
10
7
41
5
0
9
82
compute_loss_context_manager
Support compilation via Torchdynamo, AOT Autograd, NVFuser (#17308) * Support compilation via Torchdynamo, AOT Autograd, NVFuser * Address comments * Lint * Stas comments - missing quality test * Lintere * Quality test * Doc lint * Reset CUDA peak mem * Add CustomTrainer * require a single gpu Co-authored-by: Stas Bekman <[email protected]>
https://github.com/huggingface/transformers.git
def compute_loss_context_manager(self):
    return ContextManagers(
        [
            self.torchdynamo_smart_context_manager(),
            self.autocast_smart_context_manager(),
        ]
    )
24
trainer.py
Python
src/transformers/trainer.py
897a8dd89f40817201bc4aebe532a096405bdeb1
transformers
1
20,070
11
9
2
32
3
0
11
32
like
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def like(self):
    # type: () -> str
    return self.os_release_attr("id_like") or ""
15
distro.py
Python
pipenv/patched/notpip/_vendor/distro.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
133,366
57
14
18
192
23
0
67
268
validate
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def validate(self, val_iterator, info=None):
    if not hasattr(self, "model"):
        raise RuntimeError(
            "Either set self.model in setup function or "
            "override this method to implement a custom "
            "validation loop."
        )
    info = info or {}
    model = self.model
    metric_meters = AverageMeterCollection()

    # switch to evaluate mode
    model.eval()
    with torch.no_grad():
        for batch_idx, batch in enumerate(val_iterator):
            batch_info = {"batch_idx": batch_idx}
            batch_info.update(info)
            metrics = self.validate_batch(batch, batch_info)
            metric_meters.update(metrics, n=metrics.pop(NUM_SAMPLES, 1))

    return metric_meters.summary()
112
training_operator.py
Python
python/ray/util/sgd/torch/training_operator.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
4
194,709
20
11
6
60
9
0
20
74
pause
Android Lifecycle convergence (#7989) * Android pause on back key/gesture Replaces previous divergence between Android and Kivy state machines. * Android exit app when on_pause returns False Corrects previous behavior where Kivy app stops, but Android does not stop. * stop() - set Android state Let Kivy app state follow Android state. * Add pause() * typo * Disambiguate It is unclear it "default case" refers to existence of a return statement, or existence of a method. * add versionadded * pep8 * pep8 * pep8 * Update kivy/app.py Co-authored-by: Mirko Galimberti <[email protected]> * Update app.py Co-authored-by: Mirko Galimberti <[email protected]>
https://github.com/kivy/kivy.git
def pause(self, *largs): if platform == 'android': from android import mActivity mActivity.moveTaskToBack(True) else: Logger.info('App.pause() is not available on this OS.')
32
app.py
Python
kivy/app.py
fd12906efa5b4eb3f56c6b3d2c737b79228aa48a
kivy
2
155,213
51
12
12
139
19
0
63
132
get_query_info
FEAT-#5053: Add pandas on unidist execution with MPI backend (#5059) Signed-off-by: Igoshev, Iaroslav <[email protected]>
https://github.com/modin-project/modin.git
def get_query_info(sql, con, partition_column): engine = create_engine(con) if is_table(engine, sql): table_metadata = get_table_metadata(engine, sql) query = build_query_from_table(sql) cols = get_table_columns(table_metadata) else: check_query(sql) query = sql.replace(";", "") cols = get_query_columns(engine, query) # TODO allow validation that takes into account edge cases of pandas e.g. "[index]" # check_partition_column(partition_column, cols) # TODO partition_column isn't used; we need to use it; cols_names = list(cols.keys()) return cols_names, query
82
sql.py
Python
modin/experimental/core/execution/unidist/implementations/pandas_on_unidist/io/sql.py
193505fdf0c984743397ba3df56262f30aee13a8
modin
2
292,555
30
12
11
119
17
0
35
137
_try_connect
Expose Samsung wrapper as async (#67042) Co-authored-by: epenet <[email protected]>
https://github.com/home-assistant/core.git
async def _try_connect(self) -> None: for method in SUPPORTED_METHODS: self._bridge = SamsungTVBridge.get_bridge(self.hass, method, self._host) result = await self._bridge.async_try_connect() if result == RESULT_SUCCESS: return if result != RESULT_CANNOT_CONNECT: raise data_entry_flow.AbortFlow(result) LOGGER.debug("No working config found") raise data_entry_flow.AbortFlow(RESULT_CANNOT_CONNECT)
72
config_flow.py
Python
homeassistant/components/samsungtv/config_flow.py
a60c37cdb8cc9d0b9bad1dedb92b6068cd9d1244
core
4
292,792
8
9
4
51
7
0
9
45
_handle_event
Add support for rfxtrx sirens and chimes (#66416) * Add support for sirens and chimes * Fixup testing * Fixup comments * Hook up existing off delay * Add docs for off delay. * Rename mixin
https://github.com/home-assistant/core.git
def _handle_event(self, event, device_id): if self._event_applies(event, device_id): self._apply_event(event) self.async_write_ha_state()
31
siren.py
Python
homeassistant/components/rfxtrx/siren.py
8a74295d6f9140e448c47d55aa3939c44630b2b7
core
2
196,926
9
9
10
42
6
0
9
62
_smat
Update the deprecation of the _mat and _smat Matrix properties
https://github.com/sympy/sympy.git
def _smat(self):
    sympy_deprecation_warning(
        # NOTE: the original multi-line deprecation message is not captured in
        # this record; the empty placeholder below only keeps the call
        # syntactically valid.
        "",
        deprecated_since_version="1.9",
        active_deprecations_target="deprecated-private-matrix-attributes"
    )
    return self.todok()
23
sparse.py
Python
sympy/matrices/sparse.py
0b4d5fa57d64b1102e51e03ed80013e16053bf96
sympy
1
86,530
83
14
22
355
28
0
119
320
as_dict
ref: use dict instead of OrderedDict since sentry is >python3.6 (#39695) partially automated (especially the fixtures) also via `\(([^]+), (.*)\),$` -> `\1: \2,`
https://github.com/getsentry/sentry.git
def as_dict(self) -> Mapping[str, Any]: data: MutableMapping[str, Any] = {} data["event_id"] = self.event_id data["project"] = self.project_id data["release"] = self.release data["dist"] = self.dist data["platform"] = self.platform data["message"] = self.message data["datetime"] = self.datetime data["tags"] = [(k.split("sentry:", 1)[-1], v) for (k, v) in self.tags] for k, v in sorted(self.data.items()): if k in data: continue if k == "sdk": v = {v_k: v_v for v_k, v_v in v.items() if v_k != "client_ip"} data[k] = v # for a long time culprit was not persisted. In those cases put # the culprit in from the group. if data.get("culprit") is None and self.group_id and self.group: data["culprit"] = self.group.culprit # Override title and location with dynamically generated data data["title"] = self.title data["location"] = self.location return data
213
models.py
Python
src/sentry/eventstore/models.py
286bf2ae7ecfdd6698d8fb1cd4753f107159d4d2
sentry
10
282,068
47
13
36
202
28
0
55
379
call_drdapps
Crypto features: Replace coingecko scrapping (#1156) * replaced cgcategories with api * added coingecko categories * refactoring commands to use api, added coins to cryptocontroller and merged find and coins * autocompletion for coins * removed unused vars * added dappradar features * refactoring commands position * refactoring commands position * adding visual commands and fixed report * skipped tests for now * lint notebook * correct report * black formatter keeps crying because notebook * removed unused imports * Fixed black * Keep kernel metadata 'cause it's required by papermill * Change jupyter cleanup hook to one based on nbconvert * Try fix the hook I just broke * Fix trailing commas in the crypto notebook * Change the jupyter hook to a one that's featured on pre-commit's page * Format report notebook and test new notebook hook * Black the notebook * Remove deleted functions from the crypto discovery API * Remove deleted functions from the crypto overview API * replaced print for console print and removed print from table * replaced print for console print and removed print from table * auto completion + sort for all discovery commands * replacing help messages * fix linting * added docs and removed unused commands * added todos and fixed help messages * lint * pr issues fixed * updated tests * tests merge * replaced with new rich table function Co-authored-by: Colin Delahunty <[email protected]> Co-authored-by: Theodore Aptekarev <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def call_drdapps(self, other_args):
    parser = argparse.ArgumentParser(
        prog="drdapps",
        add_help=False,
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        # NOTE: the original multi-line help text is not captured in this
        # record; an empty placeholder keeps the call syntactically valid.
        description="",
    )
    parser.add_argument(
        "-l",
        "--limit",
        dest="limit",
        type=check_positive,
        help="Number of records to display",
        default=15,
    )
    parser.add_argument(
        "-s",
        "--sort",
        dest="sortby",
        nargs="+",
        help="Sort by given column. Default: Daily Volume [$]",
        default="Daily Volume [$]",
    )
    ns_parser = parse_known_args_and_warn(
        parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED
    )
    if ns_parser:
        dappradar_view.display_top_dapps(
            sortby=" ".join(ns_parser.sortby),
            top=ns_parser.limit,
            export=ns_parser.export,
        )
124
discovery_controller.py
Python
gamestonk_terminal/cryptocurrency/discovery/discovery_controller.py
4501dfd442d371150b8785d379c5354095b6954b
OpenBBTerminal
2
32,824
11
11
4
65
7
0
11
35
clean_files_for
Use new huggingface_hub tools for download models (#18438) * Draft new cached_file * Initial draft for config and model * Small fixes * Fix first batch of tests * Look in cache when internet is down * Fix last tests * Bad black, not fixing all quality errors * Make diff less * Implement change for TF and Flax models * Add tokenizer and feature extractor * For compatibility with main * Add utils to move the cache and auto-do it at first use. * Quality * Deal with empty commit shas * Deal with empty etag * Address review comments
https://github.com/huggingface/transformers.git
def clean_files_for(file): for f in [file, f"{file}.json", f"{file}.lock"]: if os.path.isfile(f): os.remove(f)
35
hub.py
Python
src/transformers/utils/hub.py
5cd40323684c183c30b34758aea1e877996a7ac9
transformers
3
100,557
22
12
5
103
10
0
24
68
names
Refactor lib.gpu_stats (#1218) * inital gpu_stats refactor * Add dummy CPU Backend * Update Sphinx documentation
https://github.com/deepfakes/faceswap.git
def names(self) -> List[str]: return [f"{device.get('vendor', 'unknown')} - {device.get('name', 'unknown')} " f"({ 'supported' if idx in self._supported_indices else 'experimental'})" for idx, device in enumerate(self._device_details)]
29
amd.py
Python
lib/gpu_stats/amd.py
bdbbad4d310fb606b6f412aa81e9f57ccd994e97
faceswap
2
285,666
12
9
13
53
8
0
14
36
get_user_timezone
New path for styles and add timezone as environment variable (#2509) * add log path * add test to check if log file is in correct dir * env path * black * mypy fix * add styles folder and styles from repo * add timezone as env variable * fix changes with main * fix test * flake8 * fix linting * fix linting * fix issue with light mpl stylesheet * change timezone variable name * change names * change names * names * simplify paths.py * change some names * fix error in logic * remove 3.11 from testing for now
https://github.com/OpenBB-finance/OpenBBTerminal.git
def get_user_timezone() -> str: dotenv.load_dotenv(USER_ENV_FILE) user_tz = os.getenv("OPENBB_TIMEZONE") if user_tz: return user_tz return ""
28
helper_funcs.py
Python
openbb_terminal/helper_funcs.py
f18c44b0668ef8e40d14d79780558521b2c02304
OpenBBTerminal
2
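get_user_timezone above simply reads the OPENBB_TIMEZONE variable that dotenv loads from the user env file. A short usage sketch follows; the environment value is set directly here to stand in for the .env file, which is an assumption made for illustration.

import os

# Simulate what dotenv.load_dotenv(USER_ENV_FILE) would have put into the
# environment; in the real helper the value comes from the user's .env file.
os.environ["OPENBB_TIMEZONE"] = "America/New_York"


def get_user_timezone_demo() -> str:
    user_tz = os.getenv("OPENBB_TIMEZONE")
    return user_tz or ""


print(get_user_timezone_demo())  # America/New_York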
209,856
40
17
15
193
24
0
45
205
svgdump
[Hinty] Core typing: windows (#3684) * Core typing: windows Co-authored-by: Pierre <[email protected]>
https://github.com/secdev/scapy.git
def svgdump(self, filename=None, **kargs): # type: (Optional[str], **Any) -> None from scapy.config import conf from scapy.utils import get_temp_file, ContextManagerSubprocess canvas = self.canvas_dump(**kargs) if filename is None: fname = get_temp_file(autoext=kargs.get("suffix", ".svg")) canvas.writeSVGfile(fname) if WINDOWS and not conf.prog.svgreader: os.startfile(fname) else: with ContextManagerSubprocess(conf.prog.svgreader): subprocess.Popen([conf.prog.svgreader, fname]) else: canvas.writeSVGfile(filename) print()
115
base_classes.py
Python
scapy/base_classes.py
a2b7a28faff1db058dd22ce097a268e0ad5d1d33
scapy
4
216,517
3
6
40
15
3
0
3
10
test_directory_max_depth
various changes and fixes needed to add PhotonOS into CICD.
https://github.com/saltstack/salt.git
def test_directory_max_depth(self, grains):
315
test_file.py
Python
tests/integration/states/test_file.py
00ee5eed1d75417faaaa185e27947b268239698e
salt
7
191,534
16
11
6
105
11
0
17
36
test_multi_input_errors
Harrison/sequential chains (#168) add support for basic sequential chains
https://github.com/hwchase17/langchain.git
def test_multi_input_errors() -> None: chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"]) chain_2 = FakeChain(input_variables=["bar", "foo"], output_variables=["baz"]) with pytest.raises(ValueError): SimpleSequentialChain(chains=[chain_1, chain_2])
59
test_sequential.py
Python
tests/unit_tests/chains/test_sequential.py
4a4dfbfbed5ca271fc74f61a0b3387314dda8703
langchain
1
19,149
36
13
14
243
21
0
50
176
load
Improve evaluation api (#5256) * init Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update doc Signed-off-by: Weichen Xu <[email protected]> * update doc Signed-off-by: Weichen Xu <[email protected]> * address comments Signed-off-by: Weichen Xu <[email protected]> * update doc Signed-off-by: Weichen Xu <[email protected]> * add shap limitation on value type Signed-off-by: Weichen Xu <[email protected]> * fix format Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]> * update Signed-off-by: Weichen Xu <[email protected]>
https://github.com/mlflow/mlflow.git
def load(cls, path): with open(os.path.join(path, "metrics.json"), "r") as fp: metrics = json.load(fp) with open(os.path.join(path, "artifacts_metadata.json"), "r") as fp: artifacts_metadata = json.load(fp) artifacts = {} artifacts_dir = os.path.join(path, "artifacts") for artifact_name, meta in artifacts_metadata.items(): uri = meta["uri"] ArtifactCls = _get_class_from_string(meta["class_name"]) artifact = ArtifactCls(uri=uri) artifact._load(os.path.join(artifacts_dir, artifact_name)) artifacts[artifact_name] = artifact return EvaluationResult(metrics=metrics, artifacts=artifacts)
144
base.py
Python
mlflow/models/evaluation/base.py
4c58179509e6f6047789efb0a95c2b0e20cb6c8f
mlflow
2
259,040
87
12
22
249
31
0
119
407
fit
MNT Drops Python 3.7 in CI, wheel building, and docs (#22617) * MNT Drops Python 3.7 * MNT Bump NumPy and SciPy * FIX Fix build * FIX Bump versions improved * DOC Fixes numpy version [pypy] * BLD [pypy] [icc-build] * Update docs * MAINT use scipy.optimize.LinearConstraint in test * MAINT scipy 1.1.0 related code clean-up * scipy>=1.3.2 in pyproject.toml's build deps * [cd build] * DOC Adds comment about pypy * MAINT remove _astype_copy_false * FIX Update check for python version in setup.py Co-authored-by: Olivier Grisel <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def fit(self, X, y=None): # large sparse data is not supported for 32bit platforms because # _document_frequency uses np.bincount which works on arrays of # dtype NPY_INTP which is int32 for 32bit platforms. See #20923 X = self._validate_data( X, accept_sparse=("csr", "csc"), accept_large_sparse=not _IS_32BIT ) if not sp.issparse(X): X = sp.csr_matrix(X) dtype = X.dtype if X.dtype in FLOAT_DTYPES else np.float64 if self.use_idf: n_samples, n_features = X.shape df = _document_frequency(X) df = df.astype(dtype, copy=False) # perform idf smoothing if required df += int(self.smooth_idf) n_samples += int(self.smooth_idf) # log+1 instead of log makes sure terms with zero idf don't get # suppressed entirely. idf = np.log(n_samples / df) + 1 self._idf_diag = sp.diags( idf, offsets=0, shape=(n_features, n_features), format="csr", dtype=dtype, ) return self
156
text.py
Python
sklearn/feature_extraction/text.py
f1d3417b086550be670cbfbb5b3c1760ac99203f
scikit-learn
4
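The fit method above derives smoothed inverse document frequencies as idf = log(n_samples / df) + 1, where smooth_idf pretends one extra document contains every term. A small NumPy sketch of that arithmetic, with illustrative values:

import numpy as np

# Term/document count matrix: 4 documents, 3 terms.
X = np.array([
    [1, 0, 2],
    [0, 1, 1],
    [3, 0, 0],
    [0, 0, 1],
])

df = (X > 0).sum(axis=0)           # document frequency per term -> [2, 1, 3]
n_samples = X.shape[0]

smooth_idf = True
df = df + int(smooth_idf)          # pretend one extra doc contains every term
n = n_samples + int(smooth_idf)

idf = np.log(n / df) + 1           # log+1 keeps zero-idf terms from being suppressed
print(idf)                         # approximately [1.51, 1.92, 1.22]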
314,328
42
14
23
398
8
0
72
298
async_update
Use attributes in keba locks and binary sensors (#73894) Co-authored-by: Franck Nijhof <[email protected]>
https://github.com/home-assistant/core.git
async def async_update(self) -> None: if self._key == "Online": self._attr_is_on = self._keba.get_value(self._key) elif self._key == "Plug": self._attr_is_on = self._keba.get_value("Plug_plugged") self._attributes["plugged_on_wallbox"] = self._keba.get_value( "Plug_wallbox" ) self._attributes["plug_locked"] = self._keba.get_value("Plug_locked") self._attributes["plugged_on_EV"] = self._keba.get_value("Plug_EV") elif self._key == "State": self._attr_is_on = self._keba.get_value("State_on") self._attributes["status"] = self._keba.get_value("State_details") self._attributes["max_charging_rate"] = str( self._keba.get_value("Max curr") ) elif self._key == "Tmo FS": self._attr_is_on = not self._keba.get_value("FS_on") self._attributes["failsafe_timeout"] = str(self._keba.get_value("Tmo FS")) self._attributes["fallback_current"] = str(self._keba.get_value("Curr FS")) elif self._key == "Authreq": self._attr_is_on = self._keba.get_value(self._key) == 0
224
binary_sensor.py
Python
homeassistant/components/keba/binary_sensor.py
186141ee4df74de2d0b6cb744652360e8b4cb558
core
6
92,239
15
14
2
54
5
0
15
21
percentile_fn
feat(dynamic-sampling): Adds endpoint that returns onboarding flow trace info [TET-176] (#36113) This PR adds an endpoint for the dynamic sampling onboarding flow that: - Does a query to the transactions table to fetch a random sampleSize over the last passed statsPeriod date range. - If distrubutedTracing mode is enabled, then it runs a subsequent query to fetch the project breakdown in the traces from the first query - Calculates distribution function values like p50, p90, p95, p99, avg, max, min on the client side sample rates returned from the first query - Returns the percentage of transactions that did not have a sample rate
https://github.com/getsentry/sentry.git
def percentile_fn(data, percentile): return data[int((len(data) - 1) * percentile)] if len(data) > 0 else None
34
project_dynamic_sampling.py
Python
src/sentry/api/endpoints/project_dynamic_sampling.py
923658b395545abc1b7f7a39cf64d198c9feea74
sentry
2
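percentile_fn above indexes into data at (len(data) - 1) * percentile, which only yields a percentile if the sequence is already sorted, and it truncates the index rather than interpolating. A quick illustration with made-up sample rates:

def percentile_fn(data, percentile):
    return data[int((len(data) - 1) * percentile)] if len(data) > 0 else None


rates = sorted([0.1, 0.5, 0.25, 1.0, 0.75])   # must be sorted first
print(percentile_fn(rates, 0.5))   # 0.5   (index int(4 * 0.5) = 2)
print(percentile_fn(rates, 0.95))  # 0.75  (index int(4 * 0.95) = 3 -> truncation)
print(percentile_fn([], 0.5))      # None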
93,936
24
12
19
88
9
0
31
264
_get_string_indexer_log_records
ref(metrics_indexer): Improve typing, introduce more dataclasses, fix org_id namespacing bug in metadata [INGEST-1380] (#37170)
https://github.com/getsentry/sentry.git
def _get_string_indexer_log_records(caplog): return [ ( rec.message, { k: v for k, v in rec.__dict__.items() if k in ( "string_type", "is_global_quota", "num_global_quotas", "num_global_quotas", "org_batch_size", ) }, ) for rec in caplog.records ]
54
test_batch.py
Python
tests/sentry/sentry_metrics/test_batch.py
f31b57cbc5ec359c8ef9c6459d3d9d8ffcd6e8d9
sentry
4
269,587
35
16
18
245
21
1
69
232
normalize_batch_in_training
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def normalize_batch_in_training(x, gamma, beta, reduction_axes, epsilon=1e-3): if ndim(x) == 4 and list(reduction_axes) in [[0, 1, 2], [0, 2, 3]]: if not _has_nchw_support() and list(reduction_axes) == [0, 2, 3]: return _broadcast_normalize_batch_in_training( x, gamma, beta, reduction_axes, epsilon=epsilon ) return _fused_normalize_batch_in_training( x, gamma, beta, reduction_axes, epsilon=epsilon ) else: if sorted(reduction_axes) == list(range(ndim(x)))[:-1]: return _regular_normalize_batch_in_training( x, gamma, beta, reduction_axes, epsilon=epsilon ) else: return _broadcast_normalize_batch_in_training( x, gamma, beta, reduction_axes, epsilon=epsilon ) @keras_export("keras.backend.batch_normalization") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
@keras_export("keras.backend.batch_normalization") @tf.__internal__.dispatch.add_dispatch_support @doc_controls.do_not_generate_docs
154
backend.py
Python
keras/backend.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
6
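normalize_batch_in_training dispatches between fused, broadcast, and regular implementations depending on the tensor layout and reduction axes; each path is expected to return the normalized tensor together with the batch mean and variance. A hedged usage sketch against the public keras.backend API follows; the shapes are illustrative only.

import tensorflow as tf
from tensorflow.keras import backend as K

# NHWC batch: 2 images, 4x4 spatial, 3 channels; per-channel gamma/beta.
x = tf.random.normal((2, 4, 4, 3))
gamma = tf.ones((3,))
beta = tf.zeros((3,))

# Reducing over batch/height/width leaves per-channel statistics, which is
# the case the fused path above is written for.
normed, mean, variance = K.normalize_batch_in_training(
    x, gamma, beta, reduction_axes=[0, 1, 2], epsilon=1e-3
)
print(normed.shape, mean.shape, variance.shape)  # (2, 4, 4, 3) (3,) (3,)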
321,982
31
10
9
126
14
0
44
125
post_attention
use paddle.einsum instead of paddlenlp.ops.einsum (#1557)
https://github.com/PaddlePaddle/PaddleNLP.git
def post_attention(self, h, attn_vec, residual=True): # Post-attention projection (back to 'd_model') # Compute einsum4x4("ibnd,hnd->ibh", attn_vec, self.o) shape = attn_vec.shape attn_vec = attn_vec.reshape([shape[0], shape[1], -1]) attn_out = paddle.einsum("ibm,hm->ibh", attn_vec, self.o) attn_out = self.dropout(attn_out) if residual: attn_out = attn_out + h output = self.layer_norm(attn_out) return output
80
modeling.py
Python
paddlenlp/transformers/xlnet/modeling.py
3d3bc82c85da3df001d44d36a4caf0291a7d877f
PaddleNLP
2
213,059
42
12
12
162
15
0
62
182
fhash
fix: Py27hash fix (#2182) * Add third party py27hash code * Add Py27UniStr and unit tests * Add py27hash_fix utils and tests * Add to_py27_compatible_template and tests * Apply py27hash fix to wherever it is needed * Apply py27hash fix, all tests pass except api_with_any_method_in_swagger * apply py27hash fix in openapi + run black * remove py27 testing * remove other py27 references * black fixes * fixes/typos * remove py27 from tox.ini * refactoring * third party notice * black * Fix py27hash fix to deal with null events * Fix Py27UniStr repr for unicode literals * black reformat * Update _template_has_api_resource to check data type more defensively * Apply py27Dict in _get_authorizers * Apply Py27Dict to authorizers and gateway responses which will go into swagger * Update to_py27_compatible_template to handle parameter_values; Add Py27LongInt class * Rename _convert_to_py27_dict to _convert_to_py27_type * Apply Py27UniStr to path param name * Handle HttpApi resource under to_py27_compatible_template * Fix InvalidDocumentException to not sort different exceptions * black reformat * Remove unnecessary test files Co-authored-by: Wing Fung Lau <[email protected]>
https://github.com/aws/serverless-application-model.git
def fhash(value): fpart = math.modf(value) if fpart[0] == 0.0: return hash(int(fpart[1])) v, e = math.frexp(value) # 2**31 v *= 2147483648.0 # Top 32 bits hipart = int(v) # Next 32 bits v = (v - float(hipart)) * 2147483648.0 x = hipart + int(v) + (e << 15) if x == -1: x = -2 # Convert to C long type return ctypes.c_long(x).value
105
hash.py
Python
samtranslator/third_party/py27hash/hash.py
a5db070f446b7cfebdaa6ad2e3dcf78f6105a272
serverless-application-model
3
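fhash above reproduces CPython 2.7's float hashing: integral floats hash like the corresponding int, otherwise the mantissa and exponent from math.frexp are folded into a C long. A small usage sketch; exact hash values depend on the platform's long width, so none are asserted here.

import ctypes
import math


def fhash(value):
    fpart = math.modf(value)
    if fpart[0] == 0.0:
        return hash(int(fpart[1]))          # integral floats defer to int hashing
    v, e = math.frexp(value)
    v *= 2147483648.0                       # top 32 bits
    hipart = int(v)
    v = (v - float(hipart)) * 2147483648.0  # next 32 bits
    x = hipart + int(v) + (e << 15)
    if x == -1:
        x = -2
    return ctypes.c_long(x).value


print(fhash(42.0) == hash(42))  # True
print(fhash(3.14159))           # platform-dependent Py2.7-style hash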
176,305
6
9
2
35
5
0
6
12
draw_kamada_kawai
Update `draw_<layout>` docstrings with usage examples (#5264) * Update descriptions, add Notes and SeeAlso sections. Notify users that the draw_ fns are shorthand for using the layouts explicitly. * Fix gh-5215. * Rm unused warnings. * Rm >>> to prevent doctest from running. * Fix planar layout example with planar graph. * Fix links to layout functions (cross-module. * Fix misnamed layout in example Co-authored-by: Mridul Seth <[email protected]> Co-authored-by: Mridul Seth <[email protected]>
https://github.com/networkx/networkx.git
def draw_kamada_kawai(G, **kwargs): draw(G, kamada_kawai_layout(G), **kwargs)
21
nx_pylab.py
Python
networkx/drawing/nx_pylab.py
69db45cdb6f56a3e337cdc2cc54386270ab18308
networkx
1
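draw_kamada_kawai is shorthand for computing the layout and passing it to draw. The two-step form below makes that explicit and lets the positions be reused; the module aliases are the conventional ones and are assumed here.

import networkx as nx
import matplotlib.pyplot as plt

G = nx.petersen_graph()

# Shorthand: the layout is computed internally and discarded.
nx.draw_kamada_kawai(G, node_color="lightblue")

# Equivalent explicit form: positions can be reused for labels, edges, etc.
pos = nx.kamada_kawai_layout(G)
nx.draw(G, pos, node_color="lightblue")
nx.draw_networkx_labels(G, pos)

plt.show()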
22,590
7
7
3
33
4
0
7
28
add
refactor: clean code Signed-off-by: slowy07 <[email protected]>
https://github.com/geekcomputers/Python.git
def add(self, group): group.add(self) self.__isInGroup = True
19
brickout-game.py
Python
brickout-game/brickout-game.py
f0af0c43340763724f139fa68aa1e5a9ffe458b4
Python
1
294,684
8
7
3
28
5
0
8
22
name_suffix
Use device properties for WeMo Insight sensors (#63525)
https://github.com/home-assistant/core.git
def name_suffix(self) -> str | None: return self.entity_description.name
16
sensor.py
Python
homeassistant/components/wemo/sensor.py
c6ba987995e4c726614ffbde2b31ce01a034aab3
core
1
216,094
55
22
38
317
23
1
70
435
test_filesystem_present_properties
Speed up zfs.filesystem_present Only look up the requested properties in ZFS, instead of all properties. This affects zfs.volume_present, too. Sponsored by: Axcient
https://github.com/saltstack/salt.git
def test_filesystem_present_properties(utils_patch): ret = { "name": "myzpool/filesystem", "result": True, "comment": "filesystem myzpool/filesystem is uptodate", "changes": {}, } mock_exists = MagicMock(return_value=True) mock_get = MagicMock( return_value=OrderedDict( [ ( "myzpool/filesystem", OrderedDict( [ ("type", OrderedDict([("value", "filesystem")])), ("compression", OrderedDict([("value", "lz4")])), ] ), ), ] ) ) with patch.dict(zfs.__salt__, {"zfs.exists": mock_exists}), patch.dict( zfs.__salt__, {"zfs.get": mock_get} ), patch.dict(zfs.__utils__, utils_patch): assert ret == zfs.filesystem_present( "myzpool/filesystem", properties={"type":"filesystem", "compression":"lz4" } ) mock_get.assert_called_with( "myzpool/filesystem", depth=0, properties="type,compression", fields="value", parsable=True, type="filesystem" ) @pytest.mark.slow_test
@pytest.mark.slow_test
177
test_zfs.py
Python
tests/pytests/unit/states/test_zfs.py
c51093a503b9c4e68fae71fd77c6deb6013c841c
salt
1
278,689
29
11
5
64
14
0
32
61
_in_functional_construction_mode
Remove pylint comments. PiperOrigin-RevId: 452353044
https://github.com/keras-team/keras.git
def _in_functional_construction_mode(layer, inputs, args, kwargs, input_list): # We are constructing a functional model if any of the inputs # are KerasTensors return any( isinstance(tensor, keras_tensor.KerasTensor) for tensor in tf.nest.flatten([inputs, args, kwargs]) )
43
base_layer.py
Python
keras/engine/base_layer.py
3613c3defc39c236fb1592c4f7ba1a9cc887343a
keras
2
297,739
17
10
6
90
9
0
22
40
test_create_area_with_id_already_in_use
Add aliases to area registry items (#84294) * Add aliases to area registry items * Update test * Fix WS API
https://github.com/home-assistant/core.git
async def test_create_area_with_id_already_in_use(registry): area1 = registry.async_create("mock") updated_area1 = registry.async_update(area1.id, name="New Name") assert updated_area1.id == area1.id area2 = registry.async_create("mock") assert area2.id == "mock_2"
50
test_area_registry.py
Python
tests/helpers/test_area_registry.py
1a42bd5c4cb51ffbfcaf8d5389b80a228712ac81
core
1
137,335
7
6
3
28
6
0
7
21
get_optimizers
[RLlib] New `RLOptimizer` API for local (torch+tf) optimizers and losses. Used in combination with RLModules. Initial PR. (#29737)
https://github.com/ray-project/ray.git
def get_optimizers(self) -> Mapping[str, Any]: return self._optimizers
17
rl_optimizer.py
Python
rllib/core/optim/rl_optimizer.py
ca3d89139afb887a01948106c2bceb7f02a944c0
ray
1
271,150
11
10
4
69
6
0
13
29
get_data_handler
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def get_data_handler(*args, **kwargs): if getattr(kwargs["model"], "_cluster_coordinator", None): return _ClusterCoordinatorDataHandler(*args, **kwargs) return DataHandler(*args, **kwargs)
41
data_adapter.py
Python
keras/engine/data_adapter.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
146,190
2
6
13
13
2
0
2
5
test_actor_autocomplete
[Hotfix]Fix test_actor failure caused by interface change (#23000)
https://github.com/ray-project/ray.git
def test_actor_autocomplete(ray_start_regular_shared):
135
test_actor.py
Python
python/ray/tests/test_actor.py
bc14512471736a95b08aa02311d00d99b25c0629
ray
7
156,105
17
8
5
62
9
0
18
57
compute_current_divisions
absolufy-imports - No relative - PEP8 (#8796) Conversation in https://github.com/dask/distributed/issues/5889
https://github.com/dask/dask.git
def compute_current_divisions(self, col=None): if col is None and self.known_divisions: return self.divisions from dask.dataframe.shuffle import compute_divisions return compute_divisions(self, col=col)
40
core.py
Python
dask/dataframe/core.py
cccb9d8d8e33a891396b1275c2448c352ef40c27
dask
3
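compute_current_divisions returns the cached divisions when they are already known and otherwise falls back to dask.dataframe.shuffle.compute_divisions. A hedged usage sketch; availability of this method depends on the Dask version, and the printed divisions are only an example.

import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({"x": range(10)}, index=range(10))
ddf = dd.from_pandas(pdf, npartitions=3)

# Divisions are known after from_pandas, so the cached values are returned
# without triggering a shuffle-based recomputation.
print(ddf.known_divisions)              # True
print(ddf.compute_current_divisions())  # e.g. (0, 4, 8, 9)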
248,525
24
11
12
131
15
0
25
122
test_get_state_event_cancellation
Test cancellation at every `await` during request handling (#12674) * Add tests for `/rooms/<room_id>/members` cancellation. * Add tests for `/rooms/<room_id>/state` cancellation. Signed-off-by: Sean Quah <[email protected]>
https://github.com/matrix-org/synapse.git
def test_get_state_event_cancellation(self) -> None: room_id = self.helper.create_room_as(self.user_id) channel = make_request_with_cancellation_test( "test_state_cancellation", self.reactor, self.site, "GET", "/rooms/%s/state/m.room.member/%s" % (room_id, self.user_id), ) self.assertEqual(200, channel.code, msg=channel.result["body"]) self.assertEqual(channel.json_body, {"membership": "join"})
80
test_rooms.py
Python
tests/rest/client/test_rooms.py
a10cc5f82480c4905979f753d3734e822a064669
synapse
1
100,555
19
12
12
86
11
0
20
66
_get_device_details
Refactor lib.gpu_stats (#1218) * inital gpu_stats refactor * Add dummy CPU Backend * Update Sphinx documentation
https://github.com/deepfakes/faceswap.git
def _get_device_details(self) -> List[dict]: details = [json.loads(d.details.decode("utf-8")) for d in self._all_devices if d.details] self._log("debug", f"Obtained Device details: {details}") return details
49
amd.py
Python
lib/gpu_stats/amd.py
bdbbad4d310fb606b6f412aa81e9f57ccd994e97
faceswap
3
286,317
37
13
23
303
15
0
81
231
lambda_long_number_format
[SDK] Allow silencing verbose output in commands that use stocks/load (#3180) * remove verbose on load * Revert implementation of the verbosity setting in stocks controller * Edit docstrings to comply with pydocstyle linting rules * Fix typos in variable names and help text * Add verbosity setting to forex load helper as it uses the stocks helper * Update docstrings to comply with pydocstyle linting rules * Update tests * Fix test relying on local sources settings * Remove old test cassettes * Add new test data * WIP: Fix futures tests * Clean up test file * Fix futures tests having a time component * Fix futures model tests Co-authored-by: James Maslek <[email protected]> Co-authored-by: Theodore Aptekarev <[email protected]>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def lambda_long_number_format(num, round_decimal=3) -> str: if isinstance(num, float): magnitude = 0 while abs(num) >= 1000: magnitude += 1 num /= 1000.0 string_fmt = f".{round_decimal}f" num_str = int(num) if num.is_integer() else f"{num:{string_fmt}}" return f"{num_str} {' KMBTP'[magnitude]}".strip() if isinstance(num, int): num = str(num) if isinstance(num, str) and num.lstrip("-").isdigit(): num = int(num) num /= 1.0 magnitude = 0 while abs(num) >= 1000: magnitude += 1 num /= 1000.0 string_fmt = f".{round_decimal}f" num_str = int(num) if num.is_integer() else f"{num:{string_fmt}}" return f"{num_str} {' KMBTP'[magnitude]}".strip() return num
159
helper_funcs.py
Python
openbb_terminal/helper_funcs.py
47549cbd9f52a436c06b040fda5b88a7d2bf700a
OpenBBTerminal
9
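lambda_long_number_format walks the magnitude down in steps of 1000 and picks a suffix from ' KMBTP'. A few illustrative calls, assuming the helper defined above is in scope (in the real project it is importable from openbb_terminal.helper_funcs):

# Assumes lambda_long_number_format from the record above is defined in scope.
print(lambda_long_number_format(1_234_567))       # '1.235 M'
print(lambda_long_number_format(950))             # '950'  (magnitude 0, blank suffix stripped)
print(lambda_long_number_format(2.5e9, 1))        # '2.5 B'
print(lambda_long_number_format("not a number"))  # returned unchanged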
26,803
13
8
4
42
9
1
14
30
subscription_gift_card_updated_webhook
New events related to gift card changes (#9588) * GiftCards webhook events * Changes after review. * GIFT_CARD_STATUS_CHANGED enum value fix * Fix tests coverage * Revert last commit * Graphql schema update
https://github.com/saleor/saleor.git
def subscription_gift_card_updated_webhook(subscription_webhook):
    return subscription_webhook(
        GIFT_CARD_UPDATED_SUBSCRIPTION_QUERY, WebhookEventAsyncType.GIFT_CARD_UPDATED
    )


GIFT_CARD_DELETED_SUBSCRIPTION_QUERY = (
    GIFT_CARD_DETAILS_FRAGMENT
    # NOTE: the subscription query body is not captured in this record; the
    # empty placeholder keeps the expression syntactically valid.
    + ""
)


@pytest.fixture
@pytest.fixture
14
fixtures.py
Python
saleor/plugins/webhook/tests/subscription_webhooks/fixtures.py
52adcd10d4e0a4d0026afc51b89a72bd0e53cc78
saleor
1
288,120
34
10
9
71
9
0
40
131
_check_if_became_offline
Refactor duplicate code in switchbee (#79209) * Refactored duplicate code * Update homeassistant/components/switchbee/entity.py Co-authored-by: epenet <[email protected]> * Update homeassistant/components/switchbee/entity.py Co-authored-by: epenet <[email protected]> * Update homeassistant/components/switchbee/entity.py Co-authored-by: epenet <[email protected]> * more Co-authored-by: epenet <[email protected]>
https://github.com/home-assistant/core.git
def _check_if_became_offline(self) -> None: # This specific call will refresh the state of the device in the CU self.hass.async_create_task(self.async_refresh_state()) if self._is_online: _LOGGER.warning( "%s device is not responding, check the status in the SwitchBee mobile app", self.name, ) self._is_online = False
41
entity.py
Python
homeassistant/components/switchbee/entity.py
5262f44b8101bc06389a7ccb9a188af57129b8fb
core
2
216,511
12
6
20
17
2
0
14
23
_yum
various changes and fixes needed to add PhotonOS into CICD.
https://github.com/saltstack/salt.git
def _yum():
    # Do import due to function cloning to kernelpkg_linux_yum mod
    import os
126
yumpkg.py
Python
salt/modules/yumpkg.py
00ee5eed1d75417faaaa185e27947b268239698e
salt
7
7,221
45
13
19
223
32
0
53
186
run_test_gbm_multiple_outputs
feat: Added model type GBM (LightGBM tree learner), as an alternative to ECD (#2027)
https://github.com/ludwig-ai/ludwig.git
def run_test_gbm_multiple_outputs(backend_config): input_features = [number_feature(), category_feature(reduce_output="sum")] output_features = [ category_feature(vocab_size=3), binary_feature(), category_feature(vocab_size=3), ] with tempfile.TemporaryDirectory() as tmpdir: csv_filename = os.path.join(tmpdir, "training.csv") dataset_filename = generate_data(input_features, output_features, csv_filename, num_examples=100) config = { MODEL_TYPE: "gbm", "input_features": input_features, "output_features": output_features, TRAINER: {"num_boost_round": 2}, } model = LudwigModel(config, backend=backend_config) with pytest.raises(ValueError, match="Only single task currently supported"): model.train(dataset=dataset_filename, output_directory=tmpdir)
135
test_gbm.py
Python
tests/integration_tests/test_gbm.py
aa0c63bf2ed825eb3ca8eff8a002d5ccbe395173
ludwig
1
20,760
6
6
3
22
4
0
6
20
map
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def map(self) -> RenderMap: return self._render_map
12
layout.py
Python
pipenv/patched/notpip/_vendor/rich/layout.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
1
118,588
91
11
35
407
43
0
115
427
test_handle_save_request
Rename and refactor `Report` machinery (#4141) This refactor renames (almost) everything related to the outdated "report" concept with more precise concepts that we use throughout our code, primarily "script run", "session", and "app".
https://github.com/streamlit/streamlit.git
def test_handle_save_request(self, _1): # Create a AppSession with some mocked bits rs = AppSession( self.io_loop, SessionData("mock_report.py", ""), UploadedFileManager(), None ) rs._session_data.script_run_id = "TestReportID" orig_ctx = get_script_run_ctx() ctx = ScriptRunContext( "TestSessionID", rs._session_data.enqueue, "", SessionState(), UploadedFileManager(), ) add_script_run_ctx(ctx=ctx) rs._scriptrunner = MagicMock() storage = MockStorage() rs._storage = storage # Send two deltas: empty and markdown st.empty() st.markdown("Text!") yield rs.handle_save_request(_create_mock_websocket()) # Check the order of the received files. Manifest should be last. self.assertEqual(3, len(storage.files)) self.assertEqual("reports/TestReportID/0.pb", storage.get_filename(0)) self.assertEqual("reports/TestReportID/1.pb", storage.get_filename(1)) self.assertEqual("reports/TestReportID/manifest.pb", storage.get_filename(2)) # Check the manifest manifest = storage.get_message(2, StaticManifest) self.assertEqual("mock_report", manifest.name) self.assertEqual(2, manifest.num_messages) self.assertEqual(StaticManifest.DONE, manifest.server_status) # Check that the deltas we sent match messages in storage sent_messages = rs._session_data._master_queue._queue received_messages = [ storage.get_message(0, ForwardMsg), storage.get_message(1, ForwardMsg), ] self.assertEqual(sent_messages, received_messages) add_script_run_ctx(ctx=orig_ctx)
246
app_session_test.py
Python
lib/tests/streamlit/app_session_test.py
704eab3478cf69847825b23dabf15813a8ac9fa2
streamlit
1
165,332
63
16
12
254
23
0
79
132
test_rolling_max_gh6297
ENH: Rolling window with step size (GH-15354) (#45765)
https://github.com/pandas-dev/pandas.git
def test_rolling_max_gh6297(step): indices = [datetime(1975, 1, i) for i in range(1, 6)] # So that we can have 2 datapoints on one of the days indices.append(datetime(1975, 1, 3, 6, 0)) series = Series(range(1, 7), index=indices) # Use floats instead of ints as values series = series.map(lambda x: float(x)) # Sort chronologically series = series.sort_index() expected = Series( [1.0, 2.0, 6.0, 4.0, 5.0], index=DatetimeIndex([datetime(1975, 1, i, 0) for i in range(1, 6)], freq="D"), )[::step] x = series.resample("D").max().rolling(window=1, step=step).max() tm.assert_series_equal(expected, x)
177
test_rolling_functions.py
Python
pandas/tests/window/test_rolling_functions.py
6caefb19f4d7c05451fafca182c6eb39fe9901ed
pandas
3
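The test above exercises the step argument added to rolling windows in GH-15354, which keeps only every step-th window result (equivalent to slicing the full result with [::step]). A minimal illustration of that behaviour; it requires a pandas version with rolling step support, roughly 1.5 or newer.

import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

full = s.rolling(window=2).max()
stepped = s.rolling(window=2, step=2).max()

print(full.tolist())     # [nan, 2.0, 3.0, 4.0, 5.0, 6.0]
print(stepped.tolist())  # [nan, 3.0, 5.0] -- every second window, like full[::2]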
295,626
7
10
4
47
8
0
8
40
check_interface
Motion Blinds auto interface (#68852) Co-authored-by: J. Nick Koston <[email protected]>
https://github.com/home-assistant/core.git
def check_interface(self): with contextlib.suppress(socket.timeout): return self.gateway_device.Check_gateway_multicast() return False
26
gateway.py
Python
homeassistant/components/motion_blinds/gateway.py
4dade9668ac3b9a50b0320ca6fc2b30de61db85e
core
1
260,443
13
9
4
60
8
0
13
45
partial_fit
MAINT validate parameters for MLPRregressor and MLPClassifier (#23789) Co-authored-by: jeremie du boisberranger <[email protected]>
https://github.com/scikit-learn/scikit-learn.git
def partial_fit(self, X, y): if not hasattr(self, "coefs_"): self._validate_params() return self._fit(X, y, incremental=True)
37
_multilayer_perceptron.py
Python
sklearn/neural_network/_multilayer_perceptron.py
0206d3e08c0f0917ba2f1c65cb55569b97d9a9ba
scikit-learn
2
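partial_fit above initialises the network on the first call (after validating parameters) and then updates it incrementally. A short sketch of the usual mini-batch loop; the data and hyperparameters are made up for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

model = MLPRegressor(hidden_layer_sizes=(16,), random_state=0)

# Feed the data in mini-batches; the first call also triggers parameter
# validation and network initialisation, as in the method above.
for start in range(0, len(X), 50):
    model.partial_fit(X[start:start + 50], y[start:start + 50])

print(model.score(X, y))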
37,482
7
10
2
37
5
0
7
13
require_timm
Update all require decorators to use skipUnless when possible (#16999)
https://github.com/huggingface/transformers.git
def require_timm(test_case): return unittest.skipUnless(is_timm_available(), "test requires Timm")(test_case)
20
testing_utils.py
Python
src/transformers/testing_utils.py
57e6464ac9a31156f1c93e59107323e6ec01309e
transformers
1
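require_timm wraps unittest.skipUnless so a test (or test class) is skipped when the optional dependency is missing. A generic sketch of the same pattern for an arbitrary package; the require_numpy name here is made up for illustration.

import importlib.util
import unittest


def require_numpy(test_case):
    """Skip the decorated test unless numpy is importable."""
    available = importlib.util.find_spec("numpy") is not None
    return unittest.skipUnless(available, "test requires numpy")(test_case)


class ExampleTest(unittest.TestCase):
    @require_numpy
    def test_uses_numpy(self):
        import numpy as np
        self.assertEqual(np.arange(3).sum(), 3)


if __name__ == "__main__":
    unittest.main()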
100,567
12
9
11
50
6
0
13
41
_get_device_names
Refactor lib.gpu_stats (#1218) * inital gpu_stats refactor * Add dummy CPU Backend * Update Sphinx documentation
https://github.com/deepfakes/faceswap.git
def _get_device_names(self) -> List[str]: names = [] self._log("debug", f"GPU Devices: {names}") return names
26
cpu.py
Python
lib/gpu_stats/cpu.py
bdbbad4d310fb606b6f412aa81e9f57ccd994e97
faceswap
1
100,307
52
17
21
284
25
0
80
358
get_data
Update code to support Tensorflow versions up to 2.8 (#1213) * Update maximum tf version in setup + requirements * - bump max version of tf version in launcher - standardise tf version check * update keras get_custom_objects for tf>2.6 * bugfix: force black text in GUI file dialogs (linux) * dssim loss - Move to stock tf.ssim function * Update optimizer imports for compatibility * fix logging for tf2.8 * Fix GUI graphing for TF2.8 * update tests * bump requirements.txt versions * Remove limit on nvidia-ml-py * Graphing bugfixes - Prevent live graph from displaying if data not yet available * bugfix: Live graph. Collect loss labels correctly * fix: live graph - swallow inconsistent loss errors * Bugfix: Prevent live graph from clearing during training * Fix graphing for AMD
https://github.com/deepfakes/faceswap.git
def get_data(self, session_id, metric): if session_id is None: raw = self._data else: data = self._data.get(session_id) if not data: return None raw = {session_id: data} dtype = "float32" if metric == "loss" else "float64" retval = {} for idx, data in raw.items(): val = {metric: np.frombuffer(zlib.decompress(data[metric]), dtype=dtype).reshape(data[f"{metric}_shape"])} if metric == "loss": val["labels"] = data["labels"] retval[idx] = val logger.debug("Obtained cached data: %s", {session_id: {k: v.shape if isinstance(v, np.ndarray) else v for k, v in data.items()} for session_id, data in retval.items()}) return retval
173
event_reader.py
Python
lib/gui/analysis/event_reader.py
c1512fd41d86ef47a5d1ce618d6d755ef7cbacdf
faceswap
9
248,639
14
10
10
60
9
0
14
79
_get_min_device_lists_changes_in_room
Use new `device_list_changes_in_room` table when getting device list changes (#13045)
https://github.com/matrix-org/synapse.git
async def _get_min_device_lists_changes_in_room(self) -> int: return await self.db_pool.simple_select_one_onecol( table="device_lists_changes_in_room", keyvalues={}, retcol="COALESCE(MIN(stream_id), 0)", desc="get_min_device_lists_changes_in_room", )
34
devices.py
Python
synapse/storage/databases/main/devices.py
5099b5ecc735b98ac9d559ef6191554bafff964b
synapse
1
76,941
40
10
19
125
19
0
55
179
test_custom_rendition_backend_setting
Allow specifying an alternative storage backend for image renditions - add setting `WAGTAILIMAGES_RENDITION_STORAGE` - add AbstractRendition file storage to use new setting - resolves #3183
https://github.com/wagtail/wagtail.git
def test_custom_rendition_backend_setting(self): # when setting is not set, instance.get_storage() returns DefaultStorage from django.conf import settings bkp = settings self.assertFalse(hasattr(settings, "WAGTAILIMAGES_RENDITION_STORAGE")) rendition1 = self.image.get_rendition("min-120x120") self.assertIsInstance(rendition1.image.file.storage, DefaultStorage) # when setting is set to a path setattr( settings, "WAGTAILIMAGES_RENDITION_STORAGE", "wagtail.images.tests.test_models.CustomStorage", ) backend = get_rendition_storage() self.assertIsInstance(backend, CustomStorage) # when setting is set directly, get_rendition_storage() returns the custom storage backend
106
test_models.py
Python
wagtail/images/tests/test_models.py
1c7c5cfc0b4a323acc79cd73ade4823621335a9b
wagtail
1
129,469
23
9
6
86
12
0
24
70
find_rel_checkpoint_dir
[tune] only sync up and sync down checkpoint folder for cloud checkpoint. (#21658) By default, ~/ray_results/exp_name/trial_name/checkpoint_name. Instead of the whole trial checkpoint (~/ray_results/exp_name/trial_name/) directory. Stuff like progress.csv, result.json, params.pkl, params.json, events.out etc are coming from driver process. This could also enable us to de-couple sync up and delete - they don't have to wait for each other to finish.
https://github.com/ray-project/ray.git
def find_rel_checkpoint_dir(logdir, checkpoint_path): assert checkpoint_path.startswith( logdir), "expecting `logdir` to be a prefix of `checkpoint_path`" rel_path = os.path.relpath(checkpoint_path, logdir) tokens = rel_path.split(os.sep) return os.path.join(tokens[0], "")
53
trainable.py
Python
python/ray/tune/utils/trainable.py
0abcd5eea529fc84c4398620f2808087e4d8c6b6
ray
1
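find_rel_checkpoint_dir keeps only the first path component of the checkpoint path relative to the trial logdir, so just that checkpoint folder needs to be synced. Illustrative calls; the paths are hypothetical.

import os


def find_rel_checkpoint_dir(logdir, checkpoint_path):
    assert checkpoint_path.startswith(
        logdir), "expecting `logdir` to be a prefix of `checkpoint_path`"
    rel_path = os.path.relpath(checkpoint_path, logdir)
    tokens = rel_path.split(os.sep)
    return os.path.join(tokens[0], "")


logdir = "/home/user/ray_results/exp/trial_0"
ckpt = "/home/user/ray_results/exp/trial_0/checkpoint_000005/model.pkl"
print(find_rel_checkpoint_dir(logdir, ckpt))  # 'checkpoint_000005/' (trailing os.sep)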