status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 36,219 | ["airflow/www/static/js/dag/details/taskInstance/taskActions/MarkInstanceAs.tsx"] | "Mark state as..." button options grayed out | ### Apache Airflow version
2.7.3
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Since a few versions ago, the button to mark a task state as success is grayed out when the task is already in a success state. Conversely, whenever a task is in a failed state, the mark-as-failed button is grayed out.
![Screenshot 2023-12-14 at 11 15 31](https://github.com/apache/airflow/assets/5096835/d263c7d6-8a3f-4e81-a310-dcb790365a73)
### What you think should happen instead?
This is inconvenient. These buttons bring up another dialog where you may select past/future/downstream/upstream tasks. These tasks may not match the state of the task you currently have selected. Frequently it is useful to be able to set all downstream tasks of an already succeeded task to success.
![Screenshot 2023-12-14 at 11 21 01](https://github.com/apache/airflow/assets/5096835/b2d87cde-a7a6-48a1-8b64-73d4b6830546)
The current workaround is to first set the task to the opposite of the desired state, then to mark it as the desired state with added past/future/downstream/upstream tasks. This is clunky.
The buttons should not be grayed out depending on the current task state.
### How to reproduce
Mark a task as success. Then try to do it again.
### Operating System
n/a
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/36219 | https://github.com/apache/airflow/pull/36254 | a68b4194fe7201bba0544856b60c7d6724da60b3 | 20d547ecd886087cd89bcdf0015ce71dd0a12cef | "2023-12-14T10:26:39Z" | python | "2023-12-16T14:25:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 36,187 | ["airflow/io/__init__.py", "tests/io/test_path.py"] | Add unit tests to retrieve fsspec from providers including backwards compatibility | ### Body
We currently miss test coverage for fsspec retrieval from providers, and #36186 fixed a compatibility issue with it, so we should likely add unit tests covering it.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/36187 | https://github.com/apache/airflow/pull/36199 | 97e8f58673769d3c06bce397882375020a139cee | 6c94ddf2bc123bfc7a59df4ce05f2b4e980f7a15 | "2023-12-12T15:58:07Z" | python | "2023-12-13T17:56:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 36,132 | ["airflow/providers/google/cloud/operators/cloud_run.py", "tests/providers/google/cloud/operators/test_cloud_run.py"] | Add overrides in the template field for the Google Cloud Run Jobs Execute operator | ### Description
The `overrides` parameter is not in the list of template fields, so it's impossible to pass runtime values (start/end dates, custom dag parameters, ...) to Cloud Run.
### Use case/motivation
I would like to use Cloud Run Jobs with DBT and pass Airflow parameters (start/end dates) to the Cloud Run jobs. For that, I need to use the context (`**kwargs`) in a template field.
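For illustration, here is a hedged sketch of why templating the overrides matters. Real Airflow renders `template_fields` with Jinja; the tiny `{placeholder}`-style renderer and the `overrides` shape below are illustrative assumptions, not the provider's actual code:

```python
# Illustrative only: Airflow renders template_fields with Jinja; this tiny
# recursive renderer just shows how runtime context values would flow into
# a nested Cloud Run "overrides" structure if the field were templated.
def render(value, ctx):
    if isinstance(value, str):
        return value.format(**ctx)
    if isinstance(value, dict):
        return {k: render(v, ctx) for k, v in value.items()}
    if isinstance(value, list):
        return [render(v, ctx) for v in value]
    return value

overrides = {
    "container_overrides": [
        {"args": ["--start", "{ds}", "--end", "{data_interval_end}"]}
    ],
    "task_count": 1,
}

rendered = render(overrides, {"ds": "2023-12-08", "data_interval_end": "2023-12-09"})
```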
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/36132 | https://github.com/apache/airflow/pull/36133 | df23df53155c7a3a9b30d206c962913d74ad3754 | 3dddfb4a4ae112544fd02e09a5633961fa725a36 | "2023-12-08T23:54:53Z" | python | "2023-12-11T15:27:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 36,102 | ["airflow/decorators/branch_external_python.py", "airflow/decorators/branch_python.py", "airflow/decorators/branch_virtualenv.py", "airflow/decorators/external_python.py", "airflow/decorators/python_virtualenv.py", "airflow/decorators/short_circuit.py", "airflow/models/abstractoperator.py", "tests/decorators/test_branch_virtualenv.py", "tests/decorators/test_external_python.py", "tests/decorators/test_python_virtualenv.py"] | Using requirements file in VirtualEnvPythonOperation appears to be broken | ### Discussed in https://github.com/apache/airflow/discussions/36076
<div type='discussions-op-text'>
<sup>Originally posted by **timc** December 5, 2023</sup>
### Apache Airflow version
2.7.3
### What happened
When creating a virtual env task and passing in a requirements file like this:
```
@task.virtualenv(
    use_dill=True,
    system_site_packages=False,
    requirements='requirements.txt',
)
```
The result is that the content used to populate the venv requirements is the literal string `requirements.txt`, which is wrong, and you get this:
```
[2023-12-05, 12:33:06 UTC] {{process_utils.py:181}} INFO - Executing cmd: python3 /usr/local/***/.local/lib/python3.10/site-packages/virtualenv /tmp/venv3cdlqjlq
[2023-12-05, 12:33:06 UTC] {{process_utils.py:185}} INFO - Output:
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - created virtual environment CPython3.10.9.final.0-64 in 397ms
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - creator CPython3Posix(dest=/tmp/venv3cdlqjlq, clear=False, no_vcs_ignore=False, global=False)
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/usr/local/***/.local/share/virtualenv)
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - added seed packages: pip==23.3.1, setuptools==69.0.2, wheel==0.42.0
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
[2023-12-05, 12:33:07 UTC] {{process_utils.py:181}} INFO - Executing cmd: /tmp/venv3cdlqjlq/bin/pip install -r /tmp/venv3cdlqjlq/requirements.txt
[2023-12-05, 12:33:07 UTC] {{process_utils.py:185}} INFO - Output:
[2023-12-05, 12:33:09 UTC] {{process_utils.py:189}} INFO - ERROR: Could not find a version that satisfies the requirement requirements.txt (from versions: none)
[2023-12-05, 12:33:09 UTC] {{process_utils.py:189}} INFO - HINT: You are attempting to install a package literally named "requirements.txt" (which cannot exist). Consider using the '-r' flag to install the packages listed in requirements.txt
[2023-12-05, 12:33:09 UTC] {{process_utils.py:189}} INFO - ERROR: No matching distribution found for requirements.txt
[2023-12-05, 12:33:09 UTC] {{taskinstance.py:1824}} ERROR - Task failed with exception
```
The issue appears to be that the requirements parameter is added to a list on construction of the operator so the templating never happens.
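To make the suspected failure mode concrete, a minimal sketch with hypothetical helper names (this is not the actual operator source): normalizing the string at construction time hands pip a literal package name, while the expected behaviour would pass a rendered `*.txt` value with `-r`:

```python
def build_pip_args(requirements):
    # Suspected bug pattern: the operator wraps the raw string in a list at
    # construction, before Jinja templating runs, so "requirements.txt"
    # becomes a literal package name.
    reqs = [requirements] if isinstance(requirements, str) else list(requirements)
    return ["pip", "install", *reqs]


def build_pip_args_expected(requirements):
    # Expected behaviour (sketch): keep the string intact until after
    # rendering, and install a *.txt value with -r.
    if isinstance(requirements, str) and requirements.endswith(".txt"):
        return ["pip", "install", "-r", requirements]
    return ["pip", "install", *requirements]
```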
### What you think should happen instead
The provided requirements file should be used in the pip command to set up the venv.
### How to reproduce
Create a dag:
```
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule_interval=None, start_date=datetime(2021, 1, 1), catchup=False, tags=['example'])
def virtualenv_task():
    @task.virtualenv(
        use_dill=True,
        system_site_packages=False,
        requirements='requirements.txt',
    )
    def extract():
        import pandas
        x = pandas.DataFrame()

    extract()

dag = virtualenv_task()
```
And a requirements.txt file
```
pandas
```
Run AirFlow
### Operating System
Ubuntu 23.04
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.2.0
apache-airflow-providers-celery==3.2.1
apache-airflow-providers-common-sql==1.5.2
apache-airflow-providers-ftp==3.4.2
apache-airflow-providers-http==4.4.2
apache-airflow-providers-imap==3.2.2
apache-airflow-providers-postgres==5.5.1
apache-airflow-providers-sqlite==3.4.2
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
Everytime.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/36102 | https://github.com/apache/airflow/pull/36103 | 76d26f453000aa67f4e755c5e8f4ccc0eac7b5a4 | 3904206b69428525db31ff7813daa0322f7b83e8 | "2023-12-07T06:49:53Z" | python | "2023-12-07T09:19:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 36,070 | ["airflow/providers/airbyte/hooks/airbyte.py", "tests/providers/airbyte/hooks/test_airbyte.py"] | AirbyteTriggerSyncOperator should kill job upon timeout | ### Apache Airflow version
2.7.3
### What happened
When calling the AirbyteTriggerSyncOperator in a non-asynchronous way ([here](https://github.com/apache/airflow/blob/main/airflow/providers/airbyte/operators/airbyte.py#L79)) and the timeout is reached [here](https://github.com/apache/airflow/blob/main/airflow/providers/airbyte/hooks/airbyte.py#L66), the job should be killed; otherwise Airbyte will keep running.
It is just a matter of calling the cancel-job method which is already there: https://github.com/apache/airflow/blob/main/airflow/providers/airbyte/hooks/airbyte.py#L110C9-L110C9
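A hedged sketch of what the wait loop could look like with cancellation added; `get_job_status` and `cancel_job` are injected stand-ins for the real hook methods, and the state names are assumptions:

```python
import time

def wait_for_job(get_job_status, cancel_job, job_id, timeout, wait_seconds=3):
    """Poll until the job reaches a terminal state, cancelling on timeout."""
    start = time.monotonic()
    while True:
        state = get_job_status(job_id)
        if state in ("succeeded", "failed", "cancelled"):
            return state
        if time.monotonic() - start > timeout:
            # The missing piece from the issue: cancel before raising so
            # Airbyte and Airflow stay consistent.
            cancel_job(job_id)
            raise TimeoutError(f"Job {job_id} still running after {timeout}s; cancelled it")
        time.sleep(wait_seconds)
```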
### What you think should happen instead
I think that if the Airbyte operator has not finished within the defined timeout, the Airbyte job should also stop. Otherwise the Airbyte job may continue to operate and even finish (after the timeout). That way Airflow will have failed but Airbyte will look successful, which is an inconsistency between Airflow and Airbyte.
### How to reproduce
It's very easy to reproduce by triggering a connection sync with a very small timeout:
```
from airflow import DAG
from airflow.utils.dates import days_ago
from airflow.providers.airbyte.operators.airbyte import AirbyteTriggerSyncOperator

with DAG(dag_id='trigger_airbyte_job_example',
         default_args={'owner': 'airflow'},
         schedule_interval='@daily',
         start_date=days_ago(1)
         ) as dag:
    money_to_json = AirbyteTriggerSyncOperator(
        task_id='airbyte_money_json_example',
        airbyte_conn_id='airbyte_conn_example',
        connection_id='1e3b5a72-7bfd-4808-a13c-204505490110',  # change this to something that works
        asynchronous=False,  # important to have this as False
        timeout=10,  # something really small
        wait_seconds=3
    )
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-airbyte 3.4.0
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/36070 | https://github.com/apache/airflow/pull/36241 | a7a6a9d6ea69418755c6a0829e474580cc751f00 | ceab840f31e2dcf591390bbace0ff9d74c6fc8fd | "2023-12-05T13:50:31Z" | python | "2023-12-16T18:11:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 36,054 | ["airflow/auth/managers/fab/security_manager/override.py", "tests/www/views/test_views_custom_user_views.py"] | Password reset via flask fab reset-password raises "RuntimeError: Working outside of request context." | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Running this command to reset a password via the CLI raises an exception:
```
$ flask --app airflow.www.app fab reset-password --username myusername
Password:
Repeat for confirmation:
Traceback (most recent call last):
File "/home/airflow/.local/bin/flask", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/flask/cli.py", line 1050, in main
cli.main()
File "/home/airflow/.local/lib/python3.8/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/airflow/.local/lib/python3.8/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/airflow/.local/lib/python3.8/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/airflow/.local/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/airflow/.local/lib/python3.8/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/click/decorators.py", line 33, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/flask/cli.py", line 357, in decorator
return __ctx.invoke(f, *args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/flask_appbuilder/cli.py", line 157, in reset_password
current_app.appbuilder.sm.reset_password(user.id, password)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/auth/managers/fab/security_manager/override.py", line 245, in reset_password
self.reset_user_sessions(user)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/auth/managers/fab/security_manager/override.py", line 255, in reset_user_sessions
flash(
File "/home/airflow/.local/lib/python3.8/site-packages/flask/helpers.py", line 359, in flash
flashes = session.get("_flashes", [])
File "/home/airflow/.local/lib/python3.8/site-packages/werkzeug/local.py", line 316, in __get__
obj = instance._get_current_object()
File "/home/airflow/.local/lib/python3.8/site-packages/werkzeug/local.py", line 513, in _get_current_object
raise RuntimeError(unbound_message) from None
RuntimeError: Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
```
### What you think should happen instead
It should be possible to reset the password via the CLI. This is necessary for when you need to reset your own password without knowing your current password so you can't use the UI. I believe this means that the `reset_user_sessions` function can't unconditionally use `flash` without determining if it's running in a request context or not.
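A sketch of such a guard; the two injected callables stand in for `flask.has_request_context` and `flask.flash` so the snippet runs without a Flask app:

```python
import logging

log = logging.getLogger("security_manager")

def notify(message, has_request_context, flash):
    # flash() requires an active HTTP request; that is exactly what raises
    # the RuntimeError above when reset-password runs from the CLI, so fall
    # back to logging outside a request context.
    if has_request_context():
        flash(message)
        return "flashed"
    log.warning("%s", message)
    return "logged"
```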
### How to reproduce
Run `flask --app airflow.www.app fab reset-password` with an existing username via the CLI.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/36054 | https://github.com/apache/airflow/pull/36056 | 61fd166a4662d67bc914949f9cf07ceab7d55686 | 7ececfdb2183516d9a30195ffcd76632167119c5 | "2023-12-04T17:10:20Z" | python | "2023-12-04T23:16:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,949 | ["airflow/dag_processing/manager.py", "airflow/dag_processing/processor.py", "airflow/migrations/versions/0133_2_8_0_add_processor_subdir_import_error.py", "airflow/models/errors.py", "airflow/utils/db.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst", "tests/dag_processing/test_job_runner.py"] | dag processor deletes import errors of other dag processors thinking the files don't exist | ### Apache Airflow version
main (development)
### What happened
When a dag processor starts with a subdirectory to process, the import errors are recorded with that path. So when there is a processor for the airflow-dag-processor-0 folder, in order to remove stale import errors it lists all files under the airflow-dag-processor-0 folder and deletes the import errors whose files are not present. This becomes an issue when there is an airflow-dag-processor-1 that records import errors whose files won't be part of the airflow-dag-processor-0 folder.
### What you think should happen instead
The fix would be to store `processor_subdir` in the ImportError table so that during querying we only look at import errors relevant to the dag processor and don't delete other processors' records. A fix similar to https://github.com/apache/airflow/pull/33357 needs to be applied for import errors as well.
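A sketch of the scoped deletion using plain data structures (the real fix would be an ORM query filter, not this code):

```python
def stale_import_errors(import_errors, present_files, processor_subdir):
    # Only consider records belonging to *this* processor's subdir; files
    # owned by other processors are never "missing" from its point of view.
    return [
        e for e in import_errors
        if e["processor_subdir"] == processor_subdir
        and e["filename"] not in present_files
    ]

errors = [
    {"filename": "dags/airflow-dag-processor-0/sample_sleep.py",
     "processor_subdir": "dags/airflow-dag-processor-0"},
    {"filename": "dags/airflow-dag-processor-1/sample_sleep.py",
     "processor_subdir": "dags/airflow-dag-processor-1"},
]

# Processor 1 lists only its own files; without the subdir filter it would
# also delete processor 0's record.
to_delete = stale_import_errors(
    errors,
    present_files={"dags/airflow-dag-processor-1/sample_sleep.py"},
    processor_subdir="dags/airflow-dag-processor-1",
)
```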
### How to reproduce
1. create a dag file with import error at `~/airflow/dags/airflow-dag-processor-0/sample_sleep.py` . Start a dag processor with -S to process "~/airflow/dags/airflow-dag-processor-0/" . Import error should be present.
2. create a dag file with import error at `~/airflow/dags/airflow-dag-processor-1/sample_sleep.py` . Start a dag processor with -S to process "~/airflow/dags/airflow-dag-processor-1/". Import error for airflow-dag-processor-0 is deleted.
3.
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.decorators import task
from datetime import timedelta, invalid

with DAG(
    dag_id="task_duration",
    start_date=datetime(2023, 1, 1),
    catchup=True,
    schedule_interval="@daily",
) as dag:

    @task
    def sleeper():
        pass

    sleeper()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35949 | https://github.com/apache/airflow/pull/35956 | 9c1c9f450e289b40f94639db3f0686f592c8841e | 1a3eeab76cdb6d0584452e3065aee103ad9ab641 | "2023-11-29T11:06:51Z" | python | "2023-11-30T13:29:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,914 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | Scheduler getting crashed when downgrading from 2.8.0b1 to 2.7.3 | ### Apache Airflow version
2.8.0b1
### What happened
The scheduler crashes when downgrading from 2.8.0b1 to 2.7.3.
We had some running TIs when the downgrade happened; it looks like adopting tasks is what is failing the scheduler.
Could be due to this [PR](https://github.com/apache/airflow/pull/35096/files).
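The failure mode can be reproduced in isolation: pickled data written by the newer version references `ConfDict` by module path, and unpickling fails once the attribute is gone. The fake module below only simulates `airflow.models.dagrun`:

```python
import pickle
import sys
import types

# Stand-in for airflow.models.dagrun in the 2.8 beta, where ConfDict exists.
mod = types.ModuleType("fake_dagrun")
sys.modules["fake_dagrun"] = mod

class ConfDict(dict):
    pass

ConfDict.__module__ = "fake_dagrun"
ConfDict.__qualname__ = "ConfDict"
mod.ConfDict = ConfDict

# The newer scheduler pickles a DagRun conf as a ConfDict...
payload = pickle.dumps(ConfDict(conf_key="value"))

# ...then the downgrade removes the class from the module.
del mod.ConfDict

try:
    pickle.loads(payload)
except AttributeError as exc:
    print(exc)  # e.g. Can't get attribute 'ConfDict' on <module 'fake_dagrun'>
```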
### What you think should happen instead
_No response_
### How to reproduce
create 2.8.0b1 deployment
execute a couple of dags
downgrade to 2.7.3
scheduler goes in crash loop
**Logs:**
```
[2023-11-28T07:14:26.927+0000] {process_utils.py:131} INFO - Sending 15 to group 32. PIDs of all processes in the group: [32]
[2023-11-28T07:14:26.927+0000] {process_utils.py:86} INFO - Sending the signal 15 to group 32
[2023-11-28T07:14:27.140+0000] {process_utils.py:79} INFO - Process psutil.Process(pid=32, status='terminated', exitcode=0, started='07:14:25') (32) terminated with exit code 0
[2023-11-28T07:14:27.140+0000] {scheduler_job_runner.py:874} INFO - Exited execute loop
[2023-11-28T07:14:27.145+0000] {scheduler_command.py:49} ERROR - Exception when running scheduler job
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/airflow/cli/commands/scheduler_command.py", line 47, in _run_scheduler_job
run_job(job=job_runner.job, execute_callable=job_runner._execute)
File "/usr/local/lib/python3.11/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airflow/jobs/job.py", line 289, in run_job
return execute_job(job, execute_callable=execute_callable)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airflow/jobs/job.py", line 318, in execute_job
ret = execute_callable()
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/astronomer/airflow/version_check/plugin.py", line 30, in run_before
fn(*args, **kwargs)
File "/usr/local/lib/python3.11/site-packages/airflow/jobs/scheduler_job_runner.py", line 845, in _execute
self._run_scheduler_loop()
File "/usr/local/lib/python3.11/site-packages/airflow/jobs/scheduler_job_runner.py", line 927, in _run_scheduler_loop
self.adopt_or_reset_orphaned_tasks()
File "/usr/local/lib/python3.11/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airflow/jobs/scheduler_job_runner.py", line 1601, in adopt_or_reset_orphaned_tasks
for attempt in run_with_db_retries(logger=self.log):
File "/usr/local/lib/python3.11/site-packages/tenacity/__init__.py", line 347, in __iter__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/local/lib/python3.11/site-packages/airflow/jobs/scheduler_job_runner.py", line 1645, in adopt_or_reset_orphaned_tasks
tis_to_adopt_or_reset = session.scalars(tis_to_adopt_or_reset).all()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 1476, in all
return self._allrows()
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 401, in _allrows
rows = self._fetchall_impl()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 1389, in _fetchall_impl
return self._real_result._fetchall_impl()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 1813, in _fetchall_impl
return list(self.iterator)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/loading.py", line 147, in chunks
fetch = cursor._raw_all_rows()
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 393, in _raw_all_rows
return [make_row(row) for row in rows]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/result.py", line 393, in <listcomp>
return [make_row(row) for row in rows]
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/sql/sqltypes.py", line 1870, in process
return loads(value)
^^^^^^^^^^^^
AttributeError: Can't get attribute 'ConfDict' on <module 'airflow.models.dagrun' from '/usr/local/lib/python3.11/site-packages/airflow/models/dagrun.py'>
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35914 | https://github.com/apache/airflow/pull/35959 | ab835c20b2e9bce8311d906d223ecca5e0f85627 | 4a7c7460bf1734b76497280f5a2adc3e30a7820c | "2023-11-28T09:56:54Z" | python | "2023-11-29T18:31:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,911 | ["airflow/providers/apache/spark/hooks/spark_submit.py", "airflow/providers/apache/spark/operators/spark_submit.py", "tests/providers/apache/spark/operators/test_spark_submit.py"] | Adding Support for Yarn queue and other extras in SparkSubmit Operator and Hook | ### Description
The spark-submit `--queue thequeue` option specifies the YARN queue to which the application should be submitted.
More: https://spark.apache.org/docs/3.2.0/running-on-yarn.html
### Use case/motivation
The --queue option is particularly useful in a multi-tenant environment where different users or groups have allocated resources in specific YARN queues.
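A hedged sketch of how the flag could be threaded into the command line (function name and signature are illustrative, not the provider's actual hook code):

```python
def build_spark_submit(application, master="yarn", queue=None, conf=None):
    # Mirrors the spark-submit CLI shape; --queue only applies on YARN.
    cmd = ["spark-submit", "--master", master]
    if queue:
        cmd += ["--queue", queue]
    for key, value in (conf or {}).items():
        cmd += ["--conf", f"{key}={value}"]
    cmd.append(application)
    return cmd
```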
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35911 | https://github.com/apache/airflow/pull/36151 | 4c73d613b11107eb8ee3cc70fe6233d5ee3a0b29 | 1b4a7edc545be6d6e9b8f00c243beab215e562b7 | "2023-11-28T09:05:59Z" | python | "2023-12-13T14:54:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,889 | ["airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx"] | New logs tab is broken for tasks with high retries | ### Apache Airflow version
2.7.3
### What happened
One of our users had a high number of retries (around 600), and the operator was like a sensor that retries on failure until the retry limit is reached. The new log page renders the attempt buttons all the way down the log tab, making it unusable. The old page still displays buttons for every attempt, but scrolling is enabled. To fix this we had to change the attempt selector from buttons to a dropdown where the attempt can be selected, placing the dropdown before the element for selecting the log level. This is an edge case, but we thought to file it anyway in case someone else is facing it. We are happy to upstream one of the solutions below:
1. Using a dropdown for a high number of attempts (e.g. after 50) and falling back to buttons otherwise. But this is a UX change, using buttons in one case and a dropdown in another, that users need to be educated about.
2. Always using a dropdown regardless of the number of attempts, defaulting to the latest attempt.
Attaching sample dag code that could lead to this scenario.
Sample scenario:
![image](https://github.com/apache/airflow/assets/3972343/a46afe26-7b61-4e72-9f83-137b1cedae9c)
### What you think should happen instead
_No response_
### How to reproduce
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.decorators import task
from airflow.models.param import Param
from airflow.operators.empty import EmptyOperator

with DAG(
    dag_id="retry_ui_issue",
    start_date=datetime(2023, 1, 1),
    catchup=False,
    schedule_interval="@once",
) as dag:

    @task(retries=400, retry_delay=timedelta(seconds=1))
    def fail_always():
        raise Exception("fail")

    fail_always()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35889 | https://github.com/apache/airflow/pull/36025 | 9c168b76e8b0c518b75a6d4226489f68d7a6987f | fd0988369b3a94be01a994e46b7993e2d97b2028 | "2023-11-27T13:31:33Z" | python | "2023-12-03T01:09:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,888 | ["airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py"] | Infinite loop on scheduler when kubernetes state event is None along with state in database also None | ### Apache Airflow version
2.7.3
### What happened
We are facing an issue using the Kubernetes Executor where `process_watcher_task` gets a None state that is pushed to `result_queue`. On fetching the result from the queue in `kubernetes_executor.py`, it is passed to `_change_state`, and if the state is None the state is fetched from the database. When that is also None for some reason, `TaskInstanceState(state)` throws a `ValueError`, which is caught by the generic exception handler, and the result is added back to the queue, causing the scheduler to go into an infinite loop trying to set the state. We need to restart the scheduler to make it run again. If the state from the database query is also None then we shouldn't set the state, or we should catch `ValueError` instead of handling exceptions generically, so the same result is not retried by pushing it back onto the queue. The validation was introduced by this change https://github.com/apache/airflow/commit/9556d6d5f611428ac8a3a5891647b720d4498ace#diff-11bb8713bf2f01502e66ffa91136f939cc8445839517187f818f044233414f7eR459
https://github.com/apache/airflow/blob/5d74ffb32095d534866f029d085198bc783d82c2/airflow/providers/cncf/kubernetes/executors/kubernetes_executor_utils.py#L453-L465
https://github.com/apache/airflow/blob/f3ddefccf610833dc8d6012431f372f2af03053c/airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py#L379-L393
https://github.com/apache/airflow/blob/5d74ffb32095d534866f029d085198bc783d82c2/airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py#L478-L485
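A minimal sketch of the suggested guard; the enum is a stand-in for Airflow's `TaskInstanceState`:

```python
from enum import Enum

class TaskInstanceState(str, Enum):
    SUCCESS = "success"
    FAILED = "failed"
    QUEUED = "queued"

def resolve_state(event_state, db_state):
    # If both the watcher event and the DB row are None, there is nothing
    # valid to set; returning None avoids the ValueError that currently
    # causes the result to be re-queued forever.
    state = event_state if event_state is not None else db_state
    if state is None:
        return None
    return TaskInstanceState(state)
```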
### What you think should happen instead
scheduler should not retry infinitely
### How to reproduce
We are not sure of the exact scenario in which this is reproducible. We tried running a task for which Kubernetes returns a None state event (a rare case when the pod is deleted or killed) while also deleting the task instance so the database query returns None, but we were not able to consistently hit the case that causes this.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35888 | https://github.com/apache/airflow/pull/35891 | 9a1dceb031aa0ab44a7c996c267128bd4c61a5bf | 623f9893291daa568563ff65433d797f96abc629 | "2023-11-27T12:55:08Z" | python | "2023-11-27T15:52:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,882 | ["airflow/providers/amazon/aws/sensors/emr.py"] | EmrContainerSensor On aws Using deferrable=True get exception | ### Apache Airflow version
2.7.3
### What happened
When the trigger runs `execute_complete` and the status is not success, we get this error:
```
def execute_complete(self, context, event=None):
    if event["status"] != "success":
        raise AirflowException(f"Error while running job: {event}")
    else:
        self.log.info(event["message"])
ip-10-5-55-118.eu-central-1.compute.internal
*** Reading remote log from Cloudwatch log_group: airflow-tf-stg-euc1-airflow-analytics-engine-Task log_stream: dag_id=execute_model/run_id=manual__2023-11-26T12_53_43.787114+00_00/task_id=submit_emr_task/attempt=1.log.
[2023-11-26, 12:58:32 UTC] {{taskinstance.py:1937}} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/airflow/.local/lib/python3.11/site-packages/airflow/models/baseoperator.py", line 1606, in resume_execution
return execute_callable(context)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/airflow/.local/lib/python3.11/site-packages/airflow/providers/amazon/aws/operators/emr.py", line 618, in execute_complete
self.log.info("%s", event["message"])
~~~~~^^^^^^^^^^^
KeyError: 'message'
[2023-11-26, 12:58:32 UTC] {{standard_task_runner.py:104}} ERROR - Failed to execute job 2858894 for task submit_emr_task ('message'; 2194)
```
### What you think should happen instead
The error should not be raised; the event message should be logged correctly instead.
### How to reproduce
run EmrContainerSensor with deferrable true on aws
### Operating System
aws
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.4.0
apache-airflow-providers-apache-kafka==1.2.0
apache-airflow-providers-common-sql==1.6.1
apache-airflow-providers-ftp==3.4.2
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.2.2
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-sqlite==3.4.3
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35882 | https://github.com/apache/airflow/pull/35892 | ae61a790a4517d0028aeba1eb02e2db89d90c084 | 0f5db49ec41b12d68c51f4409aa45edf4aba6a94 | "2023-11-27T05:35:02Z" | python | "2023-11-27T16:47:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,874 | ["airflow/providers/common/sql/doc/adr/0001-record-architecture-decisions.md", "airflow/providers/common/sql/doc/adr/0002-return-common-data-structure-from-dbapihook-derived-hooks.md", "scripts/ci/pre_commit/pre_commit_check_providers_subpackages_all_have_init.py"] | Document the purpose of having common.sql | ### Body
The Common.sql package was created in order to provide a common interface for DBApiHooks to return the data that will be universally used in a number of cases:
* CommonSQL Operators and Sensors
* (future) lineage data where returned hook results can follow the returned data for column lineage information
It should be better documented in common.sql that this is the goal that common.sql achieves.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35874 | https://github.com/apache/airflow/pull/36015 | ef5eebdb26ca9ddb49c529625660b72b6c9b55b4 | 3bb5978e63f3be21a5bb7ae89e7e3ce9d06a4ab8 | "2023-11-26T23:11:48Z" | python | "2023-12-06T20:36:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,815 | ["chart/templates/_helpers.yaml"] | git-sync-init resources is too indented | ### Official Helm Chart version
1.10.0
### Apache Airflow version
2.6.3
### Kubernetes Version
1.27.7
### Helm Chart configuration
values.yaml
```yaml
# Git sync
dags:
  gitSync:
    enabled: true
    repo: git@git/path/to/dag.git
    branch: main
    depth: 1
    # subpath within the repo where dags are located
    subPath: "dags"
    # the number of consecutive failures allowed before aborting
    maxFailures: 3
    # credentialsSecret: airflow-github-credentials
    sshKeySecret: airflow-ssh-secret
    knownHosts: |
      my-known-host
    # interval between git sync attempts in seconds
    # high values are more likely to cause DAGs to become out of sync between different components
    # low values cause more traffic to the remote git repository
    wait: 60
    resources:
      limits:
        memory: 100Mi
      requests:
        cpu: 50m
        memory: 100Mi
```
### Docker Image customizations
_No response_
### What happened
The `resources` block ends up indented one level too deep. This is due to this line
### What you think should happen instead
A simple change should be made to reduce the indent by one level:
```yaml
resources: {{ toYaml .Values.dags.gitSync.resources | nindent 4 }} # not 6
```
### How to reproduce
Inflate helm chart with given values.yaml and notice the extra indent everywhere gitsync is templated (e.g. scheduler)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: airflow-scheduler
spec:
  template:
    spec:
      initContainers:
        - name: git-sync-init
          resources:
              limits:
                memory: 100Mi
              requests:
                cpu: 50m
                memory: 100Mi
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35815 | https://github.com/apache/airflow/pull/35824 | c068089c65dff0723432536d00019c119cf54a88 | 39107dfeb4bdddde6de7f71029de10860844a2be | "2023-11-23T12:31:40Z" | python | "2023-11-24T22:37:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,812 | ["docs/apache-airflow/howto/docker-compose/docker-compose.yaml", "docs/apache-airflow/howto/docker-compose/index.rst"] | Add path to airflow.cfg in docker-compose.yml | ### Description
Adding a commented line in the compose file like `- ${AIRFLOW_PROJ_DIR:-.}/airflow.cfg:/opt/airflow/airflow.cfg` would save new users tons of time when customizing the configuration file. Also, the current default bind `- ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config` is misleading about where the airflow.cfg file should be stored in the container.
Another solution is to simply add similar information here https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html
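For illustration, the suggested commented-out bind mount could sit next to the existing ones in the `volumes` section of the compose file. The surrounding entries shown here mirror the defaults of the reference compose file, and the `/opt/airflow/airflow.cfg` target assumes the default `AIRFLOW_HOME`:

```yaml
volumes:
  - ${AIRFLOW_PROJ_DIR:-.}/dags:/opt/airflow/dags
  - ${AIRFLOW_PROJ_DIR:-.}/logs:/opt/airflow/logs
  - ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config
  # Uncomment to mount a customized airflow.cfg into the container:
  # - ${AIRFLOW_PROJ_DIR:-.}/airflow.cfg:/opt/airflow/airflow.cfg
```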
### Use case/motivation
I was setting up email notifications and didn’t understand why SMTP server configuration from airflow.cfg didn’t work
### Related issues
https://github.com/puckel/docker-airflow/issues/338
https://forum.astronomer.io/t/airflow-up-and-running-but-airflow-cfg-file-not-found/1931
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35812 | https://github.com/apache/airflow/pull/36289 | aed3c922402121c64264654f8dd77dbfc0168cbb | 36cb20af218919bcd821688e91245ffbe3fcfc16 | "2023-11-23T10:03:05Z" | python | "2023-12-19T12:49:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,805 | ["airflow/providers/amazon/aws/hooks/redshift_sql.py", "docs/apache-airflow-providers-amazon/connections/redshift.rst", "tests/providers/amazon/aws/hooks/test_redshift_sql.py"] | `RedshiftSQLHook` does not work with `iam=True` | ### Apache Airflow version
2.7.3
### What happened
When RedshiftSQLHook attempts to auto-fetch credentials when `iam=True`, it uses a cluster-specific approach to obtaining credentials, which fails for Redshift Serverless.
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/airflow/providers/common/sql/operators/sql.py", line 280, in execute
    output = hook.run(
             ^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/airflow/providers/common/sql/hooks/sql.py", line 385, in run
    with closing(self.get_conn()) as conn:
                 ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/airflow/providers/amazon/aws/hooks/redshift_sql.py", line 173, in get_conn
    conn_params = self._get_conn_params()
                  ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/airflow/providers/amazon/aws/hooks/redshift_sql.py", line 84, in _get_conn_params
    conn.login, conn.password, conn.port = self.get_iam_token(conn)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/airflow/providers/amazon/aws/hooks/redshift_sql.py", line 115, in get_iam_token
    cluster_creds = redshift_client.get_cluster_credentials(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/botocore/client.py", line 535, in _api_call
    return self._make_api_call(operation_name, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/botocore/client.py", line 980, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ClusterNotFoundFault: An error occurred (ClusterNotFound) when calling the GetClusterCredentials operation: Cluster *** not found.
```
### What you think should happen instead
The operator should establish a connection to the serverless workgroup using IAM-obtained credentials using `redshift_connector`.
### How to reproduce
Create a direct SQL connection to Redshift using IAM authentication, something like:
```
{"conn_type":"redshift","extra":"{\"db_user\":\"USER\",\"iam\":true,\"user\":\"USER\"}","host":"WORKGROUP_NAME.ACCOUNT.REGION.redshift-serverless.amazonaws.com","login":"USER","port":5439,"schema":"DATABASE"}
```
Then use this connection for any `SQLExecuteQueryOperator`. The crash should occur when establishing the connection.
### Operating System
Docker, `amazonlinux:2023` base
### Versions of Apache Airflow Providers
This report applies to apache-airflow-providers-amazon==8.7.1, and the relevant code appears unchanged in the master branch. The code I'm using worked for Airflow 2.5.2 and version 7.1.0 of the provider.
### Deployment
Amazon (AWS) MWAA
### Deployment details
Local MWAA runner
### Anything else
The break seems to occur because the RedshiftSQLHook integrates the IAM -> credential conversion, which used to occur inside `redshift_connector.connect`. The logic is not as robust and assumes that the connection refers to a Redshift cluster rather than a serverless workgroup. It's not clear to me why this logic was pulled up and out of `redshift_connector`, but it seems like the easiest solution is just to let `redshift_connector` handle IAM authentication and not attempt to duplicate that logic in the airflow provider.
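The suggested fix, letting `redshift_connector` handle IAM itself, roughly amounts to passing the IAM-related connection extras straight through to `connect()` instead of pre-fetching cluster credentials. Here is a minimal sketch of that idea; `build_connect_kwargs` is a hypothetical helper, and apart from `iam` and `db_user` (documented driver parameters) any further serverless-specific parameters are assumptions to verify against the driver docs:

```python
def build_connect_kwargs(host: str, database: str, login: str, extra: dict) -> dict:
    """Assemble kwargs for redshift_connector.connect, letting the driver
    resolve IAM credentials itself (works for clusters and serverless)."""
    kwargs = {"host": host, "database": database, "user": login}
    if extra.get("iam"):
        # hand IAM off to the driver instead of calling
        # redshift.get_cluster_credentials ourselves
        kwargs["iam"] = True
        kwargs["db_user"] = extra.get("db_user", login)
    return kwargs
```

The resulting dict would then be passed to `redshift_connector.connect(**kwargs)`, which knows how to resolve serverless hosts on its own.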
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35805 | https://github.com/apache/airflow/pull/35897 | 3385113e277f86b5f163a3509ba61590cfe7d8cc | f6962a929b839215613d1b6f99f43511759c1e5b | "2023-11-22T20:41:47Z" | python | "2023-11-28T17:31:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,766 | ["airflow/providers/amazon/aws/hooks/s3.py"] | wildcard_match documented incorrectly in check_key_async method | ### What do you see as an issue?
The parameter is a boolean but is described as a string
https://github.com/apache/airflow/blob/1e95b069483f5f26a82946d2facc8f642f5ea389/airflow/providers/amazon/aws/hooks/s3.py#L526C1-L527C1
### Solving the problem
Update the docstring with a matching description as the `_check_key_async` operator
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35766 | https://github.com/apache/airflow/pull/35799 | 5588a956c02130b73a23ae85afdc433d737f5efd | bcb5eebd6247d4eec15bf5cce98ccedaad629661 | "2023-11-20T23:03:00Z" | python | "2023-11-22T16:54:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,757 | ["airflow/providers/google/cloud/hooks/pubsub.py", "airflow/providers/google/cloud/operators/pubsub.py", "tests/providers/google/cloud/hooks/test_pubsub.py", "tests/providers/google/cloud/operators/test_pubsub.py"] | Google Cloud PubSubCreateTopicOperator should support schema and message retention params | ### Description
Re: Google Cloud Provider -- PubSub
The PubSub [Topics Create API](https://cloud.google.com/pubsub/docs/reference/rest/v1/projects.topics/create#request-body) supports the SchemaSettings and messageRetentionDuration parameters. These should also be supported via the [PubSubCreateTopicOperator](https://github.com/apache/airflow/blob/9207e7d5e5c183d2e63c3030216b14709257668e/airflow/providers/google/cloud/operators/pubsub.py#L50). Thanks!
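To make the request concrete, the REST body for `projects.topics.create` with these two fields could be assembled as sketched below. `build_topic_body` is a hypothetical helper (the operator-side parameter names are an open design choice); the field names `schemaSettings` and `messageRetentionDuration` come from the API reference linked above:

```python
def build_topic_body(topic_path, schema=None, encoding="JSON", retention_seconds=None):
    """Build a projects.topics.create request body, including the optional
    schemaSettings and messageRetentionDuration fields."""
    body = {"name": topic_path}
    if schema is not None:
        body["schemaSettings"] = {"schema": schema, "encoding": encoding}
    if retention_seconds is not None:
        # the API expects a Duration string such as "86400s"
        body["messageRetentionDuration"] = f"{retention_seconds}s"
    return body
```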
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35757 | https://github.com/apache/airflow/pull/35767 | f4e55713840fed023c03a4e4462789aa35c892db | 72ba63e0b97110a47c9882fd0a644cb0d74dcc20 | "2023-11-20T17:39:07Z" | python | "2023-11-22T09:21:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,705 | ["airflow/providers/google/cloud/transfers/mssql_to_gcs.py"] | Documentation Operator name diverges from real name | ### What do you see as an issue?
As can be seen in https://github.com/apache/airflow/blob/ce16963e9d69849309aa0a7cf978ed85ab741439/airflow/providers/google/cloud/transfers/mssql_to_gcs.py#L44
The name `MsSqlToGoogleCloudStorageOperator` should be the same as the class `MSSQLToGCSOperator`
### Solving the problem
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35705 | https://github.com/apache/airflow/pull/35715 | 429ca47b02fac6953520308f819bd9f8dba28d45 | ed6fe240c307bfadbd9856c9e435469ec9a409d8 | "2023-11-17T16:23:07Z" | python | "2023-11-18T06:56:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,703 | ["airflow/providers/amazon/aws/operators/ec2.py", "docs/apache-airflow-providers-amazon/operators/ec2.rst", "tests/providers/amazon/aws/operators/test_ec2.py", "tests/system/providers/amazon/aws/example_ec2.py"] | Add EC2RebootInstanceOperator and EC2HibernateInstanceOperator to Amazon Provider | ### Description
The Amazon Airflow Provider lacks operators for "Reboot Instance" and "Hibernate Instance," two states available in the AWS UI. Achieving feature parity would provide a seamless experience, aligning Airflow with AWS capabilities.
I'd like to see the EC2RebootInstanceOperator and EC2HibernateInstanceOperator added to Amazon Provider.
### Use case/motivation
This enhancement ensures users can manage EC2 instances in Airflow the same way they do in the AWS UI.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35703 | https://github.com/apache/airflow/pull/35790 | ca97feed1883dc8134404b017d7f725a4f1010f6 | ca1202fd31f0ea8c25833cf11a5f7aa97c1db87b | "2023-11-17T14:23:00Z" | python | "2023-11-23T17:58:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,699 | ["tests/conftest.py", "tests/providers/cncf/kubernetes/executors/test_kubernetes_executor.py", "tests/providers/openlineage/extractors/test_bash.py", "tests/providers/openlineage/extractors/test_python.py", "tests/serialization/test_serde.py", "tests/utils/log/test_secrets_masker.py"] | Flaky TestSerializers.test_params test | ### Body
Recently we started to have a flaky TestSerializers.test_params
This seems to be a problem in either the tests or implementation of `serde` - seems like discovery of classes that are serializable in some cases is not working well while the import of serde happens.
It happens rarely and it's not easy to reproduce locallly, by a quick look it might be a side effect from another test - I have a feeling that when tests are run, some other test might leave behind a thread that cleans the list of classes that have been registered with serde and that cleanup happens somewhat randomly.
cc: @bolkedebruin - maybe you can take a look or have an idea where it can come from - might be fastest for you as you know the discovery mechanism best and you wrote most of the tests there ? Maybe there are some specially crafted test cases somewhere that do a setup/teardown or just cleanup of the serde-registered classes that could cause such an effect?
Example error: https://github.com/apache/airflow/actions/runs/6898122803/job/18767848684?pr=35693#step:5:754
Error:
```
_________________________ TestSerializers.test_params __________________________
[gw3] linux -- Python 3.8.18 /usr/local/bin/python
self = <tests.serialization.serializers.test_serializers.TestSerializers object at 0x7fb113165550>
    def test_params(self):
        i = ParamsDict({"x": Param(default="value", description="there is a value", key="test")})
        e = serialize(i)
>       d = deserialize(e)

tests/serialization/serializers/test_serializers.py:173:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

o = {'__classname__': 'airflow.models.param.ParamsDict', '__data__': {'x': 'value'}, '__version__': 1}
full = True, type_hint = None

    def deserialize(o: T | None, full=True, type_hint: Any = None) -> object:
        """
        Deserialize an object of primitive type and uses an allow list to determine if a class can be loaded.

        :param o: primitive to deserialize into an arbitrary object.
        :param full: if False it will return a stringified representation
            of an object and will not load any classes
        :param type_hint: if set it will be used to help determine what
            object to deserialize in. It does not override if another
            specification is found
        :return: object
        """
        if o is None:
            return o
        if isinstance(o, _primitives):
            return o
        # tuples, sets are included here for backwards compatibility
        if isinstance(o, _builtin_collections):
            col = [deserialize(d) for d in o]
            if isinstance(o, tuple):
                return tuple(col)
            if isinstance(o, set):
                return set(col)
            return col
        if not isinstance(o, dict):
            # if o is not a dict, then it's already deserialized
            # in this case we should return it as is
            return o
        o = _convert(o)
        # plain dict and no type hint
        if CLASSNAME not in o and not type_hint or VERSION not in o:
            return {str(k): deserialize(v, full) for k, v in o.items()}
        # custom deserialization starts here
        cls: Any
        version = 0
        value: Any = None
        classname = ""
        if type_hint:
            cls = type_hint
            classname = qualname(cls)
            version = 0  # type hinting always sets version to 0
            value = o
        if CLASSNAME in o and VERSION in o:
            classname, version, value = decode(o)
        if not classname:
            raise TypeError("classname cannot be empty")
        # only return string representation
        if not full:
            return _stringify(classname, version, value)
        if not _match(classname) and classname not in _extra_allowed:
>           raise ImportError(
                f"{classname} was not found in allow list for deserialization imports. "
                f"To allow it, add it to allowed_deserialization_classes in the configuration"
            )
E           ImportError: airflow.models.param.ParamsDict was not found in allow list for deserialization imports. To allow it, add it to allowed_deserialization_classes in the configuration

airflow/serialization/serde.py:246: ImportError
```
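The failing `_match(classname)` check is essentially a regex allow-list lookup against the `allowed_deserialization_classes` patterns (the default allows something like `airflow\..*`). A simplified, non-authoritative sketch of that check makes the flakiness hypothesis plausible: if another test clears the pattern list behind the scenes, even `airflow.models.param.ParamsDict` gets rejected.

```python
import re

def match_allowlist(classname: str, patterns: list[str]) -> bool:
    # simplified stand-in for serde's _match(): a class may only be
    # deserialized if its qualified name matches an allowed pattern
    return any(re.fullmatch(p, classname) for p in patterns)
```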
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35699 | https://github.com/apache/airflow/pull/35746 | 4d72bf1a89d07d34d29b7899a1f3c61abc717486 | 7e7ac10947554f2b993aa1947f8e2ca5bc35f23e | "2023-11-17T11:14:33Z" | python | "2023-11-20T08:24:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,698 | ["airflow/jobs/scheduler_job_runner.py", "docs/apache-airflow/core-concepts/tasks.rst"] | Enhance the docs on zombie tasks to elaborate on how they are detected | ### What do you see as an issue?
The documentation for zombie tasks https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/tasks.html#zombie-undead-tasks is a bit abstract at the moment. There is a scope for enhancing the document by explaining how Airflow detects tasks as zombies
### Solving the problem
We can enhance the documentation by translating this query https://github.com/astronomer/airflow/blob/main/airflow/jobs/scheduler_job_runner.py#L1721 into layman-readable text in our documentation.
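In plain terms, the linked query flags running task instances whose supervising job has not sent a heartbeat within `scheduler_zombie_task_threshold` (300 seconds by default). A deliberately simplified sketch of that condition, leaving out the job-type and DAG filters of the real query:

```python
from datetime import datetime, timedelta, timezone

# default for [scheduler] scheduler_zombie_task_threshold
ZOMBIE_THRESHOLD = timedelta(seconds=300)

def is_zombie(ti_state, latest_heartbeat, now=None):
    """A running task instance whose supervising job stopped heartbeating
    longer ago than the threshold is treated as a zombie (simplified)."""
    now = now or datetime.now(timezone.utc)
    return ti_state == "running" and latest_heartbeat < now - ZOMBIE_THRESHOLD
```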
### Anything else
It might also help to add developer contribution steps to reproduce zombies locally.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35698 | https://github.com/apache/airflow/pull/35825 | 7c2885d21ef3ee7684b391cb2e7a553ca6821c3d | 177da9016bbedcfa49c08256fdaf2fb537b97d6c | "2023-11-17T09:29:46Z" | python | "2023-11-25T17:46:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,678 | ["chart/values.yaml"] | Airflow metrics config on the Helm Chart mismatch when `fullnameOveride` is provided | ### Official Helm Chart version
1.11.0 (latest released)
### Apache Airflow version
2.6.1
### Kubernetes Version
1.25.X
### Helm Chart configuration
The default values assume the release name for statsd host, but if one uses the `fullnameOveride`, there's a mismatch of airflow metrics configuration ([here](https://github.com/apache/airflow/blob/5983506df370325f7b23a182798341d17d091a32/chart/values.yaml#L2312))
```
config:
metrics:
statsd_host: '{{ printf "%s-statsd" .Release.Name }}'
```
### Docker Image customizations
_No response_
### What happened
Statsd doesn't have any airflow metrics available.
### What you think should happen instead
Statsd should have airflow metrics available.
### How to reproduce
Set the [`fullnameOverride`](https://github.com/apache/airflow/blob/5983506df370325f7b23a182798341d17d091a32/chart/values.yaml#L23) to be different from the helm installation.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35678 | https://github.com/apache/airflow/pull/35679 | 3d77149cac688429e598dd3e8a80c4da65edad01 | 9a6094c0d74093eff63b42ac1d313d77ebee3e60 | "2023-11-16T11:07:12Z" | python | "2023-11-17T14:44:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,599 | ["airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "tests/providers/cncf/kubernetes/executors/test_kubernetes_executor.py"] | Kubernetes Executor List Pods Performance Improvement | ### Apache Airflow version
main (development)
### What happened
The `_list_pods` function uses the Kubernetes client's `list_namespaced_pod` and `list_pod_for_all_namespaces` functions. Right now, these calls fetch the entire pod spec even though we are interested in the pod metadata alone. `_list_pods` is referenced in the `clear_not_launched_queued_tasks`, `try_adopt_task_instances` and `_adopt_completed_pods` functions.

When we run Airflow at large scale (with more than 500 worker pods), the `_list_pods` function takes a significant amount of time (up to 15-30 seconds with 500 worker pods) due to unnecessary data transfer (a `V1PodList` of up to a few tens of MBs) and JSON deserialization overhead. This is blocking us from scaling Airflow to run at large scale.
### What you think should happen instead
Request only the pod metadata instead of the entire pod payload. This will significantly reduce network data transfer and JSON deserialization overhead.
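For context, the Kubernetes API can serve a metadata-only listing when the client asks for a `PartialObjectMetadataList` via the `Accept` header (a standard API feature; how to thread that header through the Python client's `list_namespaced_pod` call is left as an assumption here). The size reduction is easy to see even with a client-side illustration:

```python
# Accept header that asks the API server to return only object metadata
PARTIAL_METADATA_ACCEPT = (
    "application/json;as=PartialObjectMetadataList;v=v1;g=meta.k8s.io"
)

def strip_to_metadata(pod: dict) -> dict:
    """Client-side illustration: drop the bulky spec/status, keep metadata.
    The real win is having the API server do this before serialization."""
    return {"metadata": pod.get("metadata", {})}
```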
### How to reproduce
I have reproduced the performance issue while running 500 concurrent jobs. Monitor the `kubernetes_executor.clear_not_launched_queued_tasks.duration` and `kubernetes_executor.adopt_task_instances.duration` metrics.
### Operating System
CentOS 6
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes
### Deployment
Other Docker-based deployment
### Deployment details
Terraform based airflow deployment
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35599 | https://github.com/apache/airflow/pull/36092 | 8d0c5d900875ce3b9dda1a86f1de534759e9d7f6 | b9c574c61ae42481b9d2c9ce7c42c93dc44b9507 | "2023-11-13T12:06:28Z" | python | "2023-12-10T11:49:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,526 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/cli/commands/task_command.py", "airflow/jobs/local_task_job_runner.py", "airflow/models/taskinstance.py", "airflow/serialization/pydantic/dag.py", "airflow/serialization/pydantic/dag_run.py", "airflow/serialization/pydantic/taskinstance.py", "tests/serialization/test_pydantic_models.py"] | AIP-44 Migrate TaskInstance._run_task_by_local_task_job to Internal API | null | https://github.com/apache/airflow/issues/35526 | https://github.com/apache/airflow/pull/35527 | 054904bb9a68eb50070a14fe7300cb1e78e2c579 | 3c0a714cb57894b0816bf39079e29d79ea0b1d0a | "2023-11-08T12:23:53Z" | python | "2023-11-15T18:41:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,500 | ["airflow/www/static/js/dag/details/Dag.tsx"] | Numeric values in DAG details are incorrectly rendered as timestamps | ### Discussed in https://github.com/apache/airflow/discussions/35430
<div type='discussions-op-text'>
<sup>Originally posted by **Gollum999** November 3, 2023</sup>
### Apache Airflow version
2.7.2
### What happened
On the "Details" tab on a DAGs "Grid" page, all numeric DAG attributes are rendered as timestamps instead of numbers. For example:
![image](https://github.com/apache/airflow/assets/7269927/9a99ccab-2d20-4a57-9fa8-63447e14444b)
I have noticed this behavior with the following fields, though there may be more:
* Max active runs
* Max active tasks
* Concurrency
### What you think should happen instead
Numeric fields should be rendered as numbers.
### How to reproduce
Go to any DAG's Grid page. Don't select a DAG Run or Task Instance. Click the Details tab if it is not already selected.
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone + self-hosted
### Anything else
I think the bug is [here](https://github.com/apache/airflow/blob/0a257afd031289062c76e7b77678337e88e10b93/airflow/www/static/js/dag/details/Dag.tsx#L133):
```
// parse value for each key if date or not
const parseStringData = (value: string) =>
Number.isNaN(Date.parse(value)) ? value : <Time dateTime={value} />;
```
`Date.parse(1)` returns a number that is not NaN.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div>
---
Reopened after discussion, with [additional findings](https://github.com/apache/airflow/discussions/35430#discussioncomment-7473103):
- It seems to only affect WebKit-based browsers; Firefox works fine
- Not all integers are transformed into dates: at least integers in the range 13..31 keep their integer type. The default value for `Max active runs` and `Max active tasks` is 16, which might be why this bug remained unnoticed
Example DAG for reproduce
```python
import pendulum
from airflow.models.dag import DAG
from airflow.operators.empty import EmptyOperator
num_to_dt = {
    1: True,
    8: True,
    12: True,
    # 13-31 shows fine
    **{ix: False for ix in range(13, 32)},
    32: True,
    64: True,
    128: True,
    256: True,
}

for num, convert_to_dt in num_to_dt.items():
    with DAG(
        f"issue_35430_number_{num:03d}",
        start_date=pendulum.datetime(2023, 6, 1, tz="UTC"),
        schedule=None,
        catchup=False,
        max_active_runs=num,
        max_active_tasks=num,
        tags=["issue", "35430", "ui", f"int to dt: {convert_to_dt}", str(num)],
    ):
        EmptyOperator(task_id="empty", retries=num)
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35500 | https://github.com/apache/airflow/pull/35538 | 1e0a357252fb62bfc6a353df2499a35a8ca16beb | 76ceeb4e4a4c7cbb4f0ba7cfebca4c24d2f7c3e1 | "2023-11-07T10:10:44Z" | python | "2023-11-14T17:43:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,499 | ["airflow/providers/jdbc/CHANGELOG.rst", "airflow/providers/jdbc/hooks/jdbc.py", "airflow/providers/jdbc/provider.yaml", "docs/apache-airflow-providers-jdbc/configurations-ref.rst", "docs/apache-airflow-providers-jdbc/index.rst"] | Add documentation about "providers.jdbc" "allow_driver_path_in_extra" configuration to jdbc provider.yaml | ### Body
The definition and description of the configuration is missing in provider.yaml - it should be added, as for all the other provider configurations.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35499 | https://github.com/apache/airflow/pull/35580 | 1f76986b7ba19737aba77d63bbec1ce29aff55fc | 9cfb1a82f3b00fa70a7b5ce6818a9e618512de63 | "2023-11-07T08:16:28Z" | python | "2023-11-11T11:21:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,404 | ["airflow/providers/http/hooks/http.py"] | fix HttpAsyncHook PUTs with application/json | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
`HttpAsyncHook` with `method='PUT'` and a `data` payload is not supported. As far as I understood, `PUT` is not in the [list of methods](https://github.com/apache/airflow/blob/main/airflow/providers/http/hooks/http.py#L368) for which the hook passes the kwarg `json=data`.
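The change being asked for is small: include `PUT` (and arguably `PATCH`) among the methods whose payload is sent as `json=data`, mirroring the synchronous hook. A sketch of just the kwarg selection; `body_kwargs` is a hypothetical helper and the surrounding aiohttp call is unchanged:

```python
def body_kwargs(method: str, data) -> dict:
    """Decide how the async hook should attach `data` to the request."""
    if method.upper() in ("POST", "PUT", "PATCH"):
        # send the payload as a JSON request body
        return {"json": data}
    # e.g. GET/HEAD: send it as query parameters
    return {"params": data}
```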
### What you think should happen instead
_No response_
### How to reproduce
generate some PUT async hook runs with some data and await them:
```python
import asyncio

from airflow.providers.http.hooks.http import HttpAsyncHook

some_data_1 = {"id": 1, "value": "a"}  # any JSON-serializable payload
some_data_2 = {"id": 2, "value": "b"}

async def main():
    http_async_hook = HttpAsyncHook(method='PUT', http_conn_id='some_conn_id')
    hook_run_1 = http_async_hook.run(
        endpoint=f'/some/endpoint/{some_data_1["id"]}',
        data=some_data_1,
    )
    hook_run_2 = http_async_hook.run(
        endpoint=f'/some/endpoint/{some_data_2["id"]}',
        data=some_data_2,
    )
    tasks = [hook_run_1, hook_run_2]
    return await asyncio.gather(*tasks)

responses = asyncio.run(main())
```
### Operating System
NAME="Linux Mint" VERSION="21.2 (Victoria)" ID_LIKE="ubuntu debian"VERSION_ID="21.2" UBUNTU_CODENAME=jammy
### Versions of Apache Airflow Providers
apache-airflow==2.7.0
apache-airflow-providers-http==4.6.0
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35404 | https://github.com/apache/airflow/pull/35405 | 61a9ab7600a856bb2b1031419561823e227331da | fd789080971a49496da0a79f3c8489cc0c1424f0 | "2023-11-03T11:27:12Z" | python | "2023-11-03T18:45:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,341 | ["airflow/providers/amazon/aws/operators/emr.py", "tests/providers/amazon/aws/operators/test_emr_serverless.py"] | Would it be possible to add 'name' to the list of template fields for EmrServerlessStartJobOperator? | ### Description
We have a use case where we would like to run a job runs in EMR Serverless where the job name should contain the start date.
For example: `name="[{{ ds }}] testing"`.
The solution presented in [31711](https://github.com/apache/airflow/issues/31711) does not work, because the statement
[self.name = name or self.config.pop("name", f"emr_serverless_job_airflow_{uuid4()}")](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/emr.py#L1229)
removes the `name` parameter from the `config` when initializing the `EmrServerlessStartJobOperator` operator
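The root cause is ordering: Jinja templating runs after `__init__`, so a value popped out of a templated field during `__init__` can never be rendered. A toy model (not the real operator) makes this concrete:

```python
class ToyOperator:
    template_fields = ("config",)  # note: "name" itself is not a template field

    def __init__(self, name=None, config=None):
        self.config = dict(config or {})
        # popping here removes "name" from the templated dict *before*
        # any rendering happens, so "{{ ds }}" survives unrendered
        self.name = name or self.config.pop("name", "default_name")

    def render_templates(self, context):
        # stand-in for Airflow's rendering pass over template_fields
        self.config = {
            k: v.replace("{{ ds }}", context["ds"]) if isinstance(v, str) else v
            for k, v in self.config.items()
        }
```

Adding `name` to `template_fields` (and deferring the pop until after rendering) would avoid this, which is what the feature request amounts to.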
### Use case/motivation
_No response_
### Related issues
https://github.com/apache/airflow/issues/31711
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35341 | https://github.com/apache/airflow/pull/35648 | 46c0f85ba6dd654501fc429ddd831461ebfefd3c | 03a0b7267215ea2ac1bce6c60eca1a41f747e84b | "2023-11-01T13:35:40Z" | python | "2023-11-17T09:38:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,335 | ["airflow/www/extensions/init_security.py", "tests/www/views/test_session.py"] | Infinite UI redirection loop when user is changed to "inactive" while having a session opened | ### Body
When user is logged in with a valid session and deactivated, refreshing the browser/reusing the session leads to an infinite redirection loop (which is stopped quickly by browser detecting the situation).
## Steps to Produce:
Make sure you are using two different browsers.
In Browser A:
Login as the normal user.
In Browser B:
1. Login as admin.
2. Go to Security > List Users
3. Disable a user by unchecking this box:
![image](https://github.com/apache/airflow/assets/595491/0c1af0c2-2203-466f-8329-cf03aa138695)
4. Now in browser A, refresh the page.
You'll see a message like this:
![image](https://github.com/apache/airflow/assets/595491/5d9292c7-1364-46c0-8d26-5426a095112e)
In the server logs, you'll see that a lot of requests have been made to the server.
![image](https://github.com/apache/airflow/assets/595491/0599869a-a2bd-48a4-9f68-5789f24bfa16)
# Expected behaviour
There should be no infinite redirection, but the request for the inactive user should be rejected and the user should be redirected to the login page.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35335 | https://github.com/apache/airflow/pull/35486 | 3fbd9d6b18021faa08550532241515d75fbf3b83 | e512a72c334708ff5d839e16ba8dc5906c744570 | "2023-11-01T09:31:27Z" | python | "2023-11-07T19:59:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,288 | ["airflow/www/static/js/dag/details/gantt/GanttTooltip.tsx", "airflow/www/static/js/dag/details/gantt/Row.tsx"] | Incorrect queued duration for deferred tasks in gantt view | ### Apache Airflow version
main (development)
### What happened
The Gantt view calculates the difference between the start date and queued-at values to show the queued duration. In the case of deferred tasks, the task gets re-queued when the triggerer returns an event, causing queued-at to be greater than the start date. This causes incorrect values to be shown in the UI. I am not sure how to fix this. Maybe the queued duration could simply not be shown in the tooltip when the queued time is greater than the start time.
![Screenshot from 2023-10-31 09-15-54](https://github.com/apache/airflow/assets/3972343/c65e2f56-0a68-4080-9fcd-7785ca23e882)
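To make the suggested guard concrete, here is a tiny Python sketch of the idea (the real tooltip is TypeScript in the web UI; the function name and behaviour below are hypothetical):

```python
from datetime import datetime


def queued_duration(queued_at, start_date):
    """Return seconds queued, or None when the task was re-queued after starting."""
    if queued_at is None or start_date is None or queued_at > start_date:
        return None  # deferred task re-queued by the triggerer - hide the value
    return (start_date - queued_at).total_seconds()


# normal task: queued 4s before it started
print(queued_duration(datetime(2023, 1, 1, 0, 0, 0), datetime(2023, 1, 1, 0, 0, 4)))  # 4.0
# deferred task: queued_at was reset to after start_date
print(queued_duration(datetime(2023, 1, 1, 0, 0, 9), datetime(2023, 1, 1, 0, 0, 4)))  # None
```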
### What you think should happen instead
_No response_
### How to reproduce
1. Trigger the below dag
2. `touch /tmp/a` to ensure triggerer returns an event.
3. Check for queued duration value in gantt view.
```python
from __future__ import annotations
from datetime import datetime
from airflow import DAG
from airflow.models.baseoperator import BaseOperator
from airflow.triggers.file import FileTrigger
class FileCheckOperator(BaseOperator):
def __init__(self, filepath, **kwargs):
self.filepath = filepath
super().__init__(**kwargs)
def execute(self, context):
self.defer(
trigger=FileTrigger(filepath=self.filepath),
method_name="execute_complete",
)
def execute_complete(self, context, event=None):
pass
with DAG(
dag_id="file_trigger",
start_date=datetime(2021, 1, 1),
catchup=False,
schedule_interval=None,
) as dag:
t1 = FileCheckOperator(task_id="t1", filepath="/tmp/a")
t2 = FileCheckOperator(task_id="t2", filepath="/tmp/b")
t1
t2
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35288 | https://github.com/apache/airflow/pull/35984 | 1264316fe7ab15eba3be6c985a28bb573c85c92b | 0376e9324af7dfdafd246e31827780e855078d68 | "2023-10-31T03:52:56Z" | python | "2023-12-05T14:03:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,261 | ["airflow/providers/atlassian/jira/notifications/__init__.py", "airflow/providers/atlassian/jira/notifications/jira.py", "airflow/providers/atlassian/jira/provider.yaml", "docs/apache-airflow-providers-atlassian-jira/index.rst", "docs/apache-airflow-providers-atlassian-jira/notifications/index.rst", "docs/apache-airflow-providers-atlassian-jira/notifications/jira-notifier-howto-guide.rst", "tests/providers/atlassian/jira/notifications/__init__.py", "tests/providers/atlassian/jira/notifications/test_jira.py"] | Add `JiraNotifier` | ### Body
Similar to the [notifiers we already have](https://airflow.apache.org/docs/apache-airflow-providers/core-extensions/notifications.html) we want to add `JiraNotifier` to cut a Jira ticket.
This is very useful to be set with `on_failure_callback`.
You can view other PRs that added similar functionality: [ChimeNotifier](https://github.com/apache/airflow/pull/32222), [SmtpNotifier](https://github.com/apache/airflow/pull/31359)
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35261 | https://github.com/apache/airflow/pull/35397 | ce16963e9d69849309aa0a7cf978ed85ab741439 | 110bb0e74451e3106c4a5567a00453e564926c50 | "2023-10-30T06:53:23Z" | python | "2023-11-17T16:22:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,254 | ["tests/jobs/test_triggerer_job.py"] | Quarantined test_trigger_logging_sensitive_info test | ### Body
The test `tests/jobs/test_triggerer_job.py::test_trigger_logging_sensitive_info` has a weird and racy behaviour that got exposed when implementing #83221. As a result it has been quarantined until we diagnose/fix it.
It's very easy to reproduce the racy behaviour, but the root cause is not yet certain:
1) Enter Breeze (might be any Python version and DB but it has been confirmed with Python 3.8 and Sqlite, Postgres)
2) Run `pytest tests/jobs/test_triggerer_job.py::test_trigger_logging_sensitive_info`
3) The test fails because the logs that the test gets are empty
4) Run it again `tests/jobs/test_triggerer_job.py::test_trigger_logging_sensitive_info`
5) it succeeds (and continues doing so until you restart breeze or delete `/root/airflow/.airflow_db_initialised`)
6) When you delete the `/root/airflow/.airflow_db_initialised` the test fails again
The presence of `/root/airflow/.airflow_db_initialised` means that the airflow DB has been "reset" at least once by the tests. We have a pytest fixture (in `tests/conftest.py`) that checks whether the file has ever been created, and if it has not, it runs `initial_db_init` while setting up the tests. This avoids the problem that if someone never initialized the DB, they will get strange DB errors (missing columns/indexes), which is very confusing for first-time users or after you delete the local test DB.
This is done in this code:
```python
@pytest.fixture(autouse=True, scope="session")
def initialize_airflow_tests(request):
...
,,,
elif not os.path.exists(lock_file):
print(
"Initializing the DB - first time after entering the container.\n"
"You can force re-initialization the database by adding --with-db-init switch to run-tests."
)
initial_db_init()
# Create pid file
with open(lock_file, "w+"):
pass
else:
```
On some machines, just commenting out `db.resetdb()` (which is run inside `initial_db_init`) causes the test to succeed even the first time, but this behaviour is inconsistent - sometimes it does not help - which suggests that this is some kind of "startup" race of the triggerer log handler, where simply adding more initialization/CPU/disk usage at the startup of tests triggers the handler to either miss or lose the connection.
The error is
```
FAILED tests/jobs/test_triggerer_job.py::test_trigger_logging_sensitive_info - AssertionError: assert 'test_dag/test_run/sensitive_arg_task/-1/1 (ID 1) starting' in ''
```
And it is caused - likely - by the fact that either the log is printed too early (?) for capsys to capture it or (more likely) it is not propagated through the `handler -> in-memory -> log -> stdout` due to some race condition.
Also there is a mysterious stacktrace printed (it is printed in both cases - when the test works and when it does not), which again suggests that the problem is connected with some race in the in-memory handler for logs, either not catching or dropping logs because of some race at startup. I tried to debug it but did not have much luck so far - except being able to narrow it down and produce a very easily reproducible scenario.
```python
tests/jobs/test_triggerer_job.py::test_trigger_logging_sensitive_info
/usr/local/lib/python3.8/site-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-3
Traceback (most recent call last):
File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.8/logging/handlers.py", line 1487, in _monitor
self.handle(record)
File "/usr/local/lib/python3.8/logging/handlers.py", line 1468, in handle
handler.handle(record)
File "/opt/airflow/airflow/utils/log/trigger_handler.py", line 104, in handle
self.emit(record)
File "/opt/airflow/airflow/utils/log/trigger_handler.py", line 93, in emit
h = self._get_or_create_handler(record.trigger_id, record.task_instance)
File "/opt/airflow/airflow/utils/log/trigger_handler.py", line 89, in _get_or_create_handler
self.handlers[trigger_id] = self._make_handler(ti)
File "/opt/airflow/airflow/utils/log/trigger_handler.py", line 84, in _make_handler
h.set_context(ti=ti)
File "/opt/airflow/airflow/utils/log/file_task_handler.py", line 185, in set_context
local_loc = self._init_file(ti)
File "/opt/airflow/airflow/utils/log/file_task_handler.py", line 478, in _init_file
full_path = self.add_triggerer_suffix(full_path=full_path, job_id=ti.triggerer_job.id)
AttributeError: 'NoneType' object has no attribute 'id'
```
cc: @dstandish @hussein-awala
Also see https://github.com/apache/airflow/pull/35160#discussion_r1375463230
NIT: The test also fails when you run pytest with the `-s` flag, because in that case logs are printed to the terminal and get coloured with ANSI colors, so the assert fails regardless of whether the message is empty or good:
```
FAILED tests/jobs/test_triggerer_job.py::test_trigger_logging_sensitive_info - AssertionError: assert 'test_dag/test_run/sensitive_arg_task/-1/1 (ID 1) starting' in
'[\x1b[34m2023-10-29T17:17:12.086+0000\x1b[0m] {\x1b[34mtriggerer_job_runner.py:\x1b[0m171} INFO\x1b[0m - Setting up TriggererHandlerWrapper with handler \x1b[01m<FileTaskHandler
(NOTSET)>\x1b[22m\x1b[0m\n[\x1b[34m2023-10-29T17:17:12.087+0000\x1b[0m] {\x1b[34mtriggerer_job_runner.py:\x1b[0m227} INFO\x1b[0m
- Setting up logging queue listener with handlers \x1b[01m[<RedirectStdHandler (NOTSET)>,
- <TriggererHandlerWrapper (NOTSET)>]\x1b[22m\x1b[0m\n[\x1b[34m2023-10-29T17:17:12.102+0000\x1b[0m] {\x1b[34mtriggerer_job_runner.py:\x1b[0m595} INFO\x1b[0m -
- trigger \x1b[01mtest_dag/test_run/sensitive_arg_task/-1/1 (ID 1)\x1b[22m starting\x1b[0m\n'
========================================================= 1 failed, 1 warning in 2.40s ==========================================================
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35254 | https://github.com/apache/airflow/pull/35427 | b30e7aef91737c6bab40dd8f35784160b56650f4 | d67e8e83fa543e9cfae6b096f3e9e6b6bd8ca025 | "2023-10-29T17:13:44Z" | python | "2023-11-03T23:03:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,204 | ["airflow/jobs/scheduler_job_runner.py", "tests/jobs/test_scheduler_job.py"] | Mysterious hanging of the test_retry_handling_job for sqlite on self-hosted/local env | ### Body
## Problem
The test in question:
```
tests/jobs/test_scheduler_job.py::TestSchedulerJob::test_retry_handling_job
```
Started to timeout - mysteriously - on October 18, 2023:
- only for self-hosted instances od ours (not for Public runners)
- only for sqlite not for Postgres / MySQL
- for local execution on Llinux it can be reproduced as well only with sqlite
- for local execution on MacOS it can be reproduced as well only with sqlite
## Successes in the (recent past)
The last time it is known to have succeeded was
https://github.com/apache/airflow/actions/runs/6638965943/job/18039945807
This test took just 2.77s:
```
2.77s call tests/jobs/test_scheduler_job.py::TestSchedulerJob::test_retry_handling_job
```
Since then it has been consistently hanging in all runs on our self-hosted runners, while it consistently succeeds on public runners.
## Reproducing locally
Reproducing is super easy with breeze:
```
pytest tests/jobs/test_scheduler_job.py::TestSchedulerJob::test_retry_handling_job -s --with-db-init
```
Pressing Ctrl-C (so sending INT to all processes in the group) "unhangs" the test and it succeeds quickly (????)
## What's so strange
It is super-mysterious:
* There does not seem to be any significant difference in the dependencies. There are a few dependencies being upgraded in main, but going back to the versions they came from does not change anything:
```diff
--- /files/constraints-3.8/original-constraints-3.8.txt 2023-10-26 11:32:47.167610348 +0000
+++ /files/constraints-3.8/constraints-3.8.txt 2023-10-26 11:32:48.763610466 +0000
@@ -184 +184 @@
-asttokens==2.4.0
+asttokens==2.4.1
@@ -249 +249 @@
-confluent-kafka==2.2.0
+confluent-kafka==2.3.0
@@ -352 +352 @@
-greenlet==3.0.0
+greenlet==3.0.1
@@ -510 +510 @@
-pyOpenSSL==23.2.0
+pyOpenSSL==23.3.0
@@ -619 +619 @@
-spython==0.3.0
+spython==0.3.1
@@ -687 +687 @@
-yandexcloud==0.238.0
+yandexcloud==0.240.0
```
* Even going back to the very same image that was used in the job that succeeded does not fix the problem. It still hangs.
Do this (020691f5cc0935af91a09b052de6122073518b4e is the image used in that job):
```
docker pull ghcr.io/apache/airflow/main/ci/python3.8:020691f5cc0935af91a09b052de6122073518b4e
docker tag ghcr.io/apache/airflow/main/ci/python3.8:020691f5cc0935af91a09b052de6122073518b4e ghcr.io/apache/airflow/main/ci/python3.8:latest
breeze
pytest tests/jobs/test_scheduler_job.py::TestSchedulerJob::test_retry_handling_job -s --with-db-init
```
Looks like there is something very strange going on with the environment of the test - something is apparently triggering a very nasty race condition (kernel version? - this is the only idea I have) that is not yet present on public runners.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35204 | https://github.com/apache/airflow/pull/35221 | 98e7f4cc538c11871c58547a2233bfda691184e1 | 6f3d294645153db914be69cd2b2a49f12a18280c | "2023-10-26T17:45:53Z" | python | "2023-10-27T19:31:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,199 | ["airflow/models/dag.py", "airflow/models/dagrun.py", "tests/models/test_dag.py", "tests/providers/google/cloud/sensors/test_gcs.py"] | Relax mandatory requirement for `start_date` when `schedule=None` | ### Body
Currently `start_date` is mandatory parameter.
For DAGs with `schedule=None` we can relax this requirement as no scheduling calculation needed so the `start_date` parameter isn't used.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35199 | https://github.com/apache/airflow/pull/35356 | 16585b178fab53b7c6d063426105664e55b14bfe | 930f165db11e611887275dce17f10eab102f0910 | "2023-10-26T15:04:53Z" | python | "2023-11-28T06:14:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,186 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts"] | Airflow REST API `Get tasks for DAG` returns error if DAG task has trigger rule `one_done` | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Experienced in version 2.6.1 but appears to be an issue in the latest version too.
When using the Airflow REST API `/api/v1/dags/{dag name}/tasks` to query the tasks of a DAG that contains a task with the trigger rule 'one_done' an error is returned:
```
{
"detail": "'one_done' is not one of ['all_success', 'all_failed', 'all_done', 'one_success', 'one_failed', 'none_failed', 'none_skipped', 'none_failed_or_skipped', 'none_failed_min_one_success', 'dummy']\n\nFailed validating 'enum' in schema['properties']['tasks']['items']['properties']['trigger_rule']:\n {'description': 'Trigger rule.\\n'\n '\\n'\n '*Changed in version 2.2.0*: '\n \"'none_failed_min_one_success' is added as a possible \"\n 'value.\\n',\n 'enum': ['all_success',\n 'all_failed',\n 'all_done',\n 'one_success',\n 'one_failed',\n 'none_failed',\n 'none_skipped',\n 'none_failed_or_skipped',\n 'none_failed_min_one_success',\n 'dummy'],\n 'type': 'string'}\n\nOn instance['tasks'][6]['trigger_rule']:\n 'one_done'",
"status": 500,
"title": "Response body does not conform to specification",
"type": "https://airflow.apache.org/docs/apache-airflow/2.6.1/stable-rest-api-ref.html#section/Errors/Unknown"
}
```
This appears to be an issue with the openapi spec, specifically the `trigger_rules` enum which is missing some valid trigger rules:
https://github.com/apache/airflow/blob/0bb56315e664875cd764486bb2090e0a2ef747d8/airflow/api_connexion/openapi/v1.yaml#L4756
https://github.com/apache/airflow/blob/8531396c7c8bf1e016db10c7d32e5e19298d67e5/airflow/utils/trigger_rule.py#L23
I believe the openapi spec needs to include `one_done`. It should also be updated to include `all_done_setup_success`, `always`, and `all_skipped`.
### What you think should happen instead
DAG tasks should be returned with the trigger rule `one_done`
### How to reproduce
Create a DAG, add a task with a trigger rule of `one_done`. Call the Get tasks API: https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/get_tasks
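The enum gap can also be shown with a couple of literal sets (the second set below is copied from `TriggerRule` as of the time of this report; treat it as a snapshot, not an authoritative list):

```python
# enum currently declared in the OpenAPI spec (v1.yaml)
spec_enum = {
    "all_success", "all_failed", "all_done", "one_success", "one_failed",
    "none_failed", "none_skipped", "none_failed_or_skipped",
    "none_failed_min_one_success", "dummy",
}
# values present in airflow.utils.trigger_rule.TriggerRule but not in the spec
runtime_rules = spec_enum | {"one_done", "always", "all_skipped", "all_done_setup_success"}

missing_from_spec = sorted(runtime_rules - spec_enum)
print(missing_from_spec)  # ['all_done_setup_success', 'all_skipped', 'always', 'one_done']
```

Any DAG whose tasks use one of the missing values will make the response fail spec validation and return the 500 above.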
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35186 | https://github.com/apache/airflow/pull/35194 | 8e268940739154c21aaf40441d91dac806d21a60 | e3b3d786787597e417f3625c6e9e617e4b3e5073 | "2023-10-25T20:46:30Z" | python | "2023-10-26T10:55:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,166 | ["airflow/api_connexion/schemas/task_instance_schema.py", "tests/api_connexion/schemas/test_task_instance_schema.py"] | dry_run not optional in api for set task instances state | ### Apache Airflow version
2.7.2
### What happened
Sent request without dry_run parameter to dags/{dag_id}/updateTaskInstancesState and got a 500 error.
### What you think should happen instead
I should be able to send a request to update task instances state and get a valid response.
### How to reproduce
You can see this by commenting out line 215 in tests/api_connexion/schemas/test_task_instance_schema.py and running tests. This is a similar error to #34563.
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35166 | https://github.com/apache/airflow/pull/35167 | 9059f72668fb85253b8b4e3e9fb5350d621b639d | 196a235358de21c62aedca1347b2527600e8ae87 | "2023-10-24T19:42:19Z" | python | "2023-11-25T08:15:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,137 | ["airflow/providers/amazon/aws/transfers/http_to_s3.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/transfer/http_to_s3.rst", "tests/providers/amazon/aws/transfers/test_http_to_s3.py", "tests/system/providers/amazon/aws/example_http_to_s3.py"] | Add HttpToS3Operator | ### Description
This operator allows users to effortlessly transfer data from HTTP sources to Amazon S3, with minimal coding effort. Whether you need to ingest web-based content, receive data from external APIs, or simply move data from a web resource to an S3 bucket, the HttpToS3Operator simplifies the process, enabling efficient data flow and integration in a wide range of use cases.
### Use case/motivation
The motivation for introducing the HttpToS3Operator stems from the need to streamline data transfer and integration between HTTP sources and Amazon S3. While the SimpleHttpOperator has proven to be a valuable tool for executing HTTP requests, it has certain limitations, particularly in scenarios where users require data to be efficiently stored in an Amazon S3 bucket.
### Related issues
Only issue that mentions this operator is [here](https://github.com/apache/airflow/pull/22758#discussion_r849820953)
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35137 | https://github.com/apache/airflow/pull/35176 | e3b3d786787597e417f3625c6e9e617e4b3e5073 | 86640d166c8d5b3c840bf98e5c6db0d91392fde3 | "2023-10-23T16:44:15Z" | python | "2023-10-26T10:56:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,131 | ["docs/apache-airflow/security/webserver.rst"] | Support for general OIDC providers or making it clear in document | ### Description
I tried hard to configure airflow with authentik OIDC but airflow kept complaining about empty userinfo. There are very limited tutorials online. After reading some source code of authlib, flask-appbuilder and airflow, I found in [airflow/airflow/auth/managers/fab/security_manager/override.py](https://github.com/apache/airflow/blob/ef497bc3412273c3a45f43f40e69c9520c7cc74c/airflow/auth/managers/fab/security_manager/override.py) that only a selection of providers are supported (github twitter linkedin google azure openshift okta keycloak). If the provider name is not within this list, it will always return an empty userinfo at [line 1475](https://github.com/apache/airflow/blob/ef497bc3412273c3a45f43f40e69c9520c7cc74c/airflow/auth/managers/fab/security_manager/override.py#L1475C22-L1475C22).
For others who try to integrate openid connect, I would recommend read the code in [airflow/airflow/auth/managers/fab/security_manager/override.py starting from line 1398](https://github.com/apache/airflow/blob/ef497bc3412273c3a45f43f40e69c9520c7cc74c/airflow/auth/managers/fab/security_manager/override.py#L1398)
### Use case/motivation
This behaviour should be documented. Otherwise, there should be a way to configure other OIDC providers like other projects that support OIDC in a general manner.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35131 | https://github.com/apache/airflow/pull/35237 | 554e3c9c27d76280d131d1ddbfa807d7b8006943 | 283fb9fd317862e5b375dbcc126a660fe8a22e11 | "2023-10-23T14:32:49Z" | python | "2023-11-01T23:35:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,095 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | Assigning not json serializable value to dagrun.conf cause an error in UI | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Greetings!
Recently I’ve faced a problem. It seems that assigning object, which can’t be serialized to JSON, to the dag_run.conf dict cause critical errors with UI.
After executing code example in "How to reproduce":
Grid representation of the DAG breaks with following result:
<img width="800" alt="Pasted Graphic" src="https://github.com/apache/airflow/assets/79107237/872ccde5-500f-4484-a36c-dce6b7112286">
Browse -> DAG Runs also becomes unavailable.
<img width="800" alt="Pasted Graphic 1" src="https://github.com/apache/airflow/assets/79107237/b4e3df0c-5324-41dd-96f3-032e706ab7a9">
The DAG itself continues to work correctly; this affects only the UI graph and dagrun/list/.
I suggest using a custom dictionary that restricts setting values that are not JSON-serializable.
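A minimal sketch of such a restricted dict (the class name and error message are made up for illustration; this is not an existing Airflow API, and a plain `set` stands in for `np.int64` as a non-serializable value):

```python
import json


class JsonSafeConf(dict):
    """Rejects values that json.dumps cannot serialize, at assignment time."""

    def __setitem__(self, key, value):
        try:
            json.dumps(value)
        except TypeError as exc:
            raise TypeError(f"conf[{key!r}] must be JSON-serializable: {exc}") from None
        super().__setitem__(key, value)


conf = JsonSafeConf()
conf["ok"] = 1234           # a plain int is accepted
try:
    conf["bad"] = {1, 2}    # a set (like np.int64) is not JSON-serializable
except TypeError as exc:
    print("rejected:", exc)
print(dict(conf))  # {'ok': 1234}
```

With something like this, the bad assignment would raise inside the task instead of silently breaking the Grid view and `dagrun/list/` later.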
### What you think should happen instead
Raise an error
### How to reproduce
Execute the following task.
Composer 2 version 2.4.2
Airflow version 2.5.3
```python
import numpy as np

from airflow.decorators import task


@task
def test_task(**context):
    # np.int64 is not JSON-serializable, but the assignment is not rejected
    context['dag_run'].conf["test"] = np.int64(1234)
```
### Operating System
Ubuntu 20.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Google Cloud Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35095 | https://github.com/apache/airflow/pull/35096 | 9ae57d023b84907c6c6ec62a7d43f2d41cb2ebca | 84c40a7877e5ea9dbee03b707065cb590f872111 | "2023-10-20T23:09:11Z" | python | "2023-11-14T20:46:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,074 | ["airflow/www/static/js/dag/grid/index.tsx"] | Grid UI Scrollbar / Cell Click Issue | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
This is in 2.5.2, we're in the midst of upgrading to to 2.7 but haven't tested thoroughly if this happens there as we don't have the volume of historic runs.
On the DAG main landing page, if you have a long DAG with multiple sub-groups, and a number of runs recorded, the UI in it's default listing of `25` previous runs, causes the scrollbar for the grid to overlay the right-most column, making it impossible to click on the cells for the right-most DAG run.
![image](https://github.com/apache/airflow/assets/120225/8d33d3a2-630c-4210-88b6-7a52be0e45df)
This is specifically in Firefox.
It's not world-ending, just a bit annoying at times.
### What you think should happen instead
The Grid area should have enough pad to the right of the rightmost column to clear the scrollbar area.
You can get around this by altering the number of runs down to 5 in the dropdown above the grid, this seems to fix the issue in order access the cells.
### How to reproduce
This seems to be the scenario it happens under:
- DAG with long list of tasks, including sub-groups, and moderately long labels
- Show 25 runs on the dag screen
![image](https://github.com/apache/airflow/assets/120225/118f3044-2e18-4678-9ee3-d249cb2c39c7)
### Operating System
Ubuntu 22.04 / macOS
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Custom deployment.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35074 | https://github.com/apache/airflow/pull/35346 | bcb5eebd6247d4eec15bf5cce98ccedaad629661 | b06c4b0f04122b5f7d30db275a20f7f254c02bef | "2023-10-20T08:39:40Z" | python | "2023-11-22T16:58:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,062 | ["docs/apache-airflow/core-concepts/dags.rst"] | Task dependency upstream/downstream setting error | ### What do you see as an issue?
I'm using Airflow 2.7.2 and following the [documentation](https://airflow.apache.org/docs/apache-airflow/2.7.2/core-concepts/dags.html#task-dependencies) to define the dependency relationship between tasks. I tried the explicit way suggested by the doc but it failed.
```
first_task.set_downstream(second_task, third_task)
third_task.set_upstream(fourth_task)
```
### Solving the problem
It seems it doesn't work if we want to attach multiple tasks downstream of one task in a one-line manner. So I suggest that for now we break it down, or resolve it.
```
first_task.set_downstream(second_task)
first_task.set_downstream(third_task)
```
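For what it's worth, `set_downstream` accepts a *list* as its first argument, so a single call can still attach both tasks when they are wrapped in one list. The toy class below only mimics that list-accepting signature to demonstrate the call shape; it is not the real `BaseOperator`:

```python
class Task:
    """Toy stand-in whose set_downstream takes a task or a list of tasks."""

    def __init__(self, name):
        self.name = name
        self.downstream = []

    def set_downstream(self, task_or_task_list):
        tasks = task_or_task_list if isinstance(task_or_task_list, list) else [task_or_task_list]
        self.downstream.extend(t.name for t in tasks)


first_task, second_task, third_task = Task("first"), Task("second"), Task("third")
first_task.set_downstream([second_task, third_task])  # one call, one list argument
print(first_task.downstream)  # ['second', 'third']
```

The original one-liner failed because the second positional argument of the real `set_downstream` is not another task, so passing two tasks separated by a comma does not do what it looks like.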
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35062 | https://github.com/apache/airflow/pull/35075 | 551886eb263c8df0b2eee847dd6725de78bc25fc | a4ab95abf91aaff0aaf8f0e393a2346f5529a6d2 | "2023-10-19T16:19:26Z" | python | "2023-10-20T09:27:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,015 | ["airflow/providers/ftp/hooks/ftp.py", "tests/providers/ftp/hooks/test_ftp.py"] | `FTPSHook.store_file()` change directory | ### Apache Airflow version
2.7.2
### What happened
`FTPSHook.store_file()` change current directory. And second call with same directory will raise `no such file or directory` error:
```
[2023-10-18, 14:40:08 MSK] {logging_mixin.py:149} INFO - content hash uploading to `test/daily/20230601_transactions.csv.MD5` ...
[2023-10-18, 14:40:08 MSK] {logging_mixin.py:149} INFO - content uploading to `test/daily/20230601_transactions.csv` ...
[2023-10-18, 14:40:08 MSK] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/decorators/base.py", line 220, in execute
return_value = super().execute(context)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 181, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 198, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/dags/repo/dags/integrations_alpharm_reporting_dag.py", line 59, in upload_external_shops_report
upload_report_to_ftp(task_id, f'test/daily/{logical_date:YYYYMMDD}_transactions.csv')
File "/opt/airflow/dags/repo/common/integrations_alpharm/utils.py", line 36, in upload_report_to_ftp
from_drive2_to_ftp(get_report_drive2_path(task_id), ftp_path)
File "/opt/airflow/dags/repo/common/integrations_alpharm/utils.py", line 32, in from_drive2_to_ftp
ftp_hook.store_file(ftp_path, BytesIO(content))
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/providers/ftp/hooks/ftp.py", line 220, in store_file
conn.cwd(remote_path)
File "/usr/local/lib/python3.10/ftplib.py", line 625, in cwd
return self.voidcmd(cmd)
File "/usr/local/lib/python3.10/ftplib.py", line 286, in voidcmd
return self.voidresp()
File "/usr/local/lib/python3.10/ftplib.py", line 259, in voidresp
resp = self.getresp()
File "/usr/local/lib/python3.10/ftplib.py", line 254, in getresp
raise error_perm(resp)
ftplib.error_perm: 550 test/daily: Нет такого файла или каталога
```
This happens because of these lines in the `store_file()` implementation:
```
conn.cwd(remote_path)
conn.storbinary(f'STOR {remote_file_name}', input_handle)
```
To get around this, you have to recreate the `FTPSHook` for each upload. It would be more convenient to simply restore the directory in the `FTPSHook.store_file()` method after the `storbinary` call.
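A sketch of that restore-the-directory idea (the helper name and the fake connection below are invented for illustration; the real fix would live inside the hook's `store_file`, using the actual `ftplib.FTP` connection):

```python
import posixpath


def store_file_preserving_cwd(conn, remote_full_path, input_handle):
    """Upload, then restore whatever working directory the connection had."""
    remote_dir, remote_file = posixpath.split(remote_full_path)
    original_cwd = conn.pwd()
    try:
        conn.cwd(remote_dir)
        conn.storbinary(f"STOR {remote_file}", input_handle)
    finally:
        conn.cwd(original_cwd)


class FakeConn:
    """Stands in for ftplib.FTP just to observe cwd changes."""

    def __init__(self):
        self._cwd = "/"

    def pwd(self):
        return self._cwd

    def cwd(self, path):
        self._cwd = path

    def storbinary(self, cmd, handle):
        pass


conn = FakeConn()
store_file_preserving_cwd(conn, "test/daily/20230601_transactions.csv", None)
print(conn.pwd())  # '/' - a second call with the same relative path now works
```

Because the working directory is restored in a `finally` block, repeated uploads with the same relative path no longer fail.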
### What you think should happen instead
_No response_
### How to reproduce
```
ftp_hook = FTPSHook()
ftp_hook.get_conn().prot_p() # https://stackoverflow.com/questions/65473257/ftpshook-airflow-522-ssl-tls-required-on-the-data-channel
ftp_hook.store_file(ftp_path, BytesIO(bytes)) # OK
ftp_hook.store_file(ftp_path, BytesIO(bytes)) # Raises "ftplib.error_perm: 550 test/daily: no such file or directory"
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35015 | https://github.com/apache/airflow/pull/35105 | 5f2999eed59fb61e32aa50ef042b9cc74c07f1bf | ff30dcc1e18abf267e4381bcc64a247da3c9af35 | "2023-10-18T11:57:34Z" | python | "2023-10-30T23:02:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,959 | ["chart/templates/_helpers.yaml", "chart/values.schema.json", "chart/values.yaml", "helm_tests/airflow_aux/test_airflow_common.py", "helm_tests/airflow_aux/test_configmap.py"] | Chart: Allow mounting DAGs to a custom path in airflow containers. | ### Description
Hi!
I think it would be useful to have a helm chart parameter that (if set) will allow overriding the DAGs mount path in airflow containers. The mount path is already defined in `airflow_dags_mount` [here](https://github.com/apache/airflow/blob/main/chart/templates/_helpers.yaml#L475), but currently mountPath is hardcoded to `{{ printf "%s/dags" .Values.airflowHome }}`.
### Use case/motivation
Setting this new mount path to a subfolder `/dags_folder/dags_recieved_from_git` will make it possible to:
* Add some rarely changing DAGs during image building to your `dags_folder` instead of receiving them from git.
* Mount custom configmaps into your DAGs folder (for example, at `/dags_folder/my_custom_configmap`).
Let's say your `dags_folder` is `/opt/airflow/dags`. In this case overall it will look like:
```
/opt/airflow/
└── dags
├── dags_recieved_from_git
│ ├── my_frequently_changing_dag_1.py # Synced from git repo
│ └── my_frequently_changing_dag_2.py # Synced from git repo
├── my_custom_configmap
│ └── configmap_data.txt # Mounted from K8s config map
├── my_rarely_changing_dag_1.py # Added during image build process
└── my_rarely_changing_dag_2.py # Added during image build process
```
This was just an example; I think there might be other use cases for this parameter.
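A hypothetical values-file sketch of what the override could look like (the `mountPath` key below does not exist in the chart today; its name and placement are illustrative only):

```yaml
dags:
  # Hypothetical override key; today the chart hardcodes "{airflowHome}/dags".
  mountPath: /opt/airflow/dags/dags_recieved_from_git
  gitSync:
    enabled: true
```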
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34959 | https://github.com/apache/airflow/pull/35083 | 95980a9bc50c1accd34166ba608bbe2b4ebd6d52 | ac53a9aaaba8d4250c8dfdf5e0b65b38a8a635b7 | "2023-10-16T05:33:05Z" | python | "2023-10-25T16:35:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,956 | ["airflow/example_dags/example_python_operator.py"] | What are these functions and variables declared for | ### What do you see as an issue?
https://github.com/apache/airflow/blob/main/airflow/example_dags/example_python_operator.py#L40C1-L44C9
<img width="257" alt="Screenshot 2023-10-16 at 10 23 26 AM" src="https://github.com/apache/airflow/assets/81360154/53030408-4e33-440e-8a39-4cf6f706700a">
These two functions/variables are not used.
### Solving the problem
I think it's OK to delete those statements.
### Anything else
I might be wrong, so let me know the purpose of those statements.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34956 | https://github.com/apache/airflow/pull/35046 | ec6d945aa31af30726d8affaa8b30af330da1085 | 12f4d51ce50c16605bede57d79998409e4a3ac4a | "2023-10-16T01:32:16Z" | python | "2023-10-19T18:08:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,953 | ["airflow/operators/trigger_dagrun.py", "tests/operators/test_trigger_dagrun.py"] | TriggerDagRunOperator does not trigger dag on subsequent run even with reset_dag_run=True | ### Discussed in https://github.com/apache/airflow/discussions/24548
<div type='discussions-op-text'>
<sup>Originally posted by **don1uppa** June 16, 2022</sup>
### Apache Airflow version
2.2.5
### What happened
I have a dag that starts another dag with a conf.
I am attempting to start the initiating dag a second time with different configuration parameters. I want it to start the other dag with the new parameters.
### What you think should happen instead
It should use the new conf when starting the called dag
### How to reproduce
See code in subsequent message
### Operating System
windows and ubuntu
### Versions of Apache Airflow Providers
N/A
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
**Work around for now is to delete the previous "child" dag runs.**
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/34953 | https://github.com/apache/airflow/pull/35429 | ea8eabc1e7fc3c5f602a42d567772567b4be05ac | 94f9d798a88e76bce3e42da9d2da7844ecf7c017 | "2023-10-15T14:49:00Z" | python | "2023-11-04T15:44:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,909 | ["airflow/providers/google/cloud/hooks/gcs.py"] | apache-airflow-providers-google 10.9.0 fails to list GCS objects | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
This affects Airflow 2.7.2. It appears that the 10.9.0 version of apache-airflow-providers-google fails to list objects in gcs.
Example to recreate:
```shell
pipenv --python 3.8
pipenv shell
pip install apache-airflow==2.7.2 apache-airflow-providers-google==10.9.0
export AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT='google-cloud-platform://'
```
Then create the following python test file:
```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook
result = GCSHook().list(
bucket_name='a-test-bucket',
prefix="a/test/prefix",
delimiter='.csv'
)
result = list(result)
print(result)
```
The output of this is:
```
[]
```
In a different pipenv environment, this works when using Airflow 2.7.1 and the 10.7.0 version of the provider:
```shell
pipenv --python 3.8
pipenv shell
pip install apache-airflow==2.7.1 apache-airflow-providers-google==10.7.0
export AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT='google-cloud-platform://'
```
Use the same python test file as above. The output of this is a list of files as expected.
[this](https://github.com/apache/airflow/commit/3fa9d46ec74ef8453fcf17fbd49280cb6fb37cef#diff-82854006b5553665046db26d43a9dfa90bec78d4ba93e2d2ca7ff5bf632fa624R832) appears to be the commit which may have broken things.
The `hooks/gcs.py` file can be patched in the following way which appears to force the lazy loading to kick in:
```python
print("Forcing loading....")
all_blobs = list(blobs)
for blob in all_blobs:
print(blob.name)
if blobs.prefixes:
ids.extend(blobs.prefixes)
else:
ids.extend(blob.name for blob in all_blobs)
page_token = blobs.next_page_token
if page_token is None:
# empty next page token
break
```
Example patch file:
```
+++ gcs.py 2023-10-12 11:34:00.774206013 +0000
@@ -829,12 +829,19 @@
versions=versions,
)
+ print("Forcing loading....")
+ all_blobs = list(blobs)
+
+ for blob in all_blobs:
+ print(blob.name)
+
if blobs.prefixes:
ids.extend(blobs.prefixes)
else:
- ids.extend(blob.name for blob in blobs)
+ ids.extend(blob.name for blob in all_blobs)
page_token = blobs.next_page_token
+
if page_token is None:
# empty next page token
break
```
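The pitfall can be illustrated with a toy stand-in for the lazy page iterator (a simplified model, not the real `google-cloud-storage` API): attributes such as `prefixes` are only populated while the results are being consumed, so checking them before forcing iteration sees empty values.

```python
class LazyListing:
    """Toy model: page metadata is filled in only during iteration."""

    def __init__(self, names, prefixes):
        self._names = names
        self._pending_prefixes = prefixes
        self.prefixes = set()

    def __iter__(self):
        yield from self._names
        # In the real iterator this happens as pages are fetched.
        self.prefixes = set(self._pending_prefixes)


listing = LazyListing(["a/test/prefix/file.csv"], {"a/test/prefix/sub/"})
before = set(listing.prefixes)   # checked before consuming: empty
names = list(listing)            # forcing iteration populates metadata
after = set(listing.prefixes)
```

This is why `all_blobs = list(blobs)` before the `prefixes` check makes the listing work again.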
### What you think should happen instead
The provider should be able to list files in gcs.
### How to reproduce
Please see above for the steps to reproduce.
### Operating System
n/a
### Versions of Apache Airflow Providers
10.9.0 of the google provider.
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34909 | https://github.com/apache/airflow/pull/34919 | f3ddefccf610833dc8d6012431f372f2af03053c | 5d74ffb32095d534866f029d085198bc783d82c2 | "2023-10-13T08:17:50Z" | python | "2023-11-27T12:44:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,906 | ["airflow/utils/db_cleanup.py", "tests/utils/test_db_cleanup.py"] | Clear old Triggers when Triggerer is not running | ### Apache Airflow version
main (development)
### What happened
When a deferrable operator or sensor is run without a Triggerer process, the task gets stuck in the deferred state, and will eventually fail. A banner will show up in the home page saying that the Triggerer is not running. There is no way to remove this message. In the Triggers menu, the trigger that activated is listed, and there is no way to remove that Trigger from the list.
### What you think should happen instead
If the trigger fails, the trigger should be removed from the Trigger menu, and the message should go away.
### How to reproduce
The bug occurs when no Triggerer is running.
In order to reproduce,
1) Run any DAG that uses a deferrable operator or sensor.
2) Allow the task to reach the deferred state.
3) Allow the task to fail on its own (i.e. timeout), or mark it as success or failure.
A message will show up on the DAGs page that the Triggerer is dead. This message does not go away:
```
The triggerer does not appear to be running. Last heartbeat was received 6 minutes ago.
Triggers will not run, and any deferred operator will remain deferred until it times out and fails.
```
A Trigger will show up in the Triggers menu.
### Operating System
ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Using breeze for testing
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34906 | https://github.com/apache/airflow/pull/34908 | ebcb16201af08f9815124f27e2fba841c2b9cd9f | d07e66a5624faa28287ba01aad7e41c0f91cc1e8 | "2023-10-13T04:46:24Z" | python | "2023-10-30T17:09:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,889 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | EcsRunTaskOperator -`date value out of range` on deferrable execution - default waiter_max_attempts | ### Apache Airflow version
2.7.1
### What happened
Trying to test **EcsRunTaskOperator** in deferrable mode resulted in an unexpected error at the `_start_task()` step of the Operator's `execute` method. The returned error log was:
`{standard_task_runner.py:104} ERROR - Failed to execute job 28 for task hello-world-defer (date value out of range; 77)`
After a lot of research into the specific `date value out of range` error, I found [this PR](https://github.com/apache/airflow/pull/33712), whose [change log](https://github.com/apache/airflow/pull/33712/files#diff-4dba25d07d7d8c4cb47ef85e814f123c9171072b240d605fffd59b29ee3b31eb) shows that `waiter_max_attempts` was switched to `1000000 * 365 * 24 * 60 * 10` (which amounts to one million years' worth of attempts). This default cannot work with an internal Airflow date calculation related to the waiter's retries.
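The failure mode can be reproduced stand-alone; the 6-second delay below is an assumption about the waiter's polling interval, but any positive delay gives the same result:

```python
from datetime import datetime, timedelta

# Default from the linked PR: one million years' worth of 6-second attempts.
waiter_max_attempts = 1_000_000 * 365 * 24 * 60 * 10
waiter_delay = 6  # seconds; assumed polling interval

try:
    datetime.utcnow() + timedelta(seconds=waiter_max_attempts * waiter_delay)
except OverflowError as err:
    # datetime cannot represent years beyond 9999, hence the
    # "date value out of range" seen in the task log.
    print(err)
```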
### What you think should happen instead
Unfortunately, I haven't been able to track the error further, but after changing **waiter_max_attempts** to a lower limit of 100000 it worked as expected.
My suggestion would be to decrease the default value of **waiter_max_attempts**; perhaps 1000000 (1M) retries is a reasonable number. That would set the default expected maximum running time to 1000000 * 6 s ≈ 70 days.
### How to reproduce
By keeping the default values of **EcsRunTaskOperator** while trying to use it in deferrable mode.
### Operating System
Debian
### Versions of Apache Airflow Providers
apache-airflow-providers-airbyte==3.3.2
apache-airflow-providers-amazon==8.7.1
apache-airflow-providers-celery==3.3.4
apache-airflow-providers-common-sql==1.7.2
apache-airflow-providers-docker==3.7.5
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.5.2
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-postgres==5.6.1
apache-airflow-providers-redis==3.3.2
apache-airflow-providers-snowflake==4.4.2
apache-airflow-providers-sqlite==3.2.1
### Deployment
Other Docker-based deployment
### Deployment details
- Custom Deploy using ECS and Task Definition Services on EC2 for running AIrflow services.
- Extending Base Airflow Image to run on each Container Service (_apache/airflow:latest-python3.11_)
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34889 | https://github.com/apache/airflow/pull/34928 | b1196460db1a21b2c6c3ef2e841fc6d0c22afe97 | b392f66c424fc3b8cbc957e02c67847409551cab | "2023-10-12T12:29:50Z" | python | "2023-10-16T20:27:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,878 | ["chart/templates/redis/redis-statefulset.yaml", "chart/values.schema.json", "chart/values.yaml", "helm_tests/other/test_redis.py"] | [helm] Redis does not include priorityClassName key | ### Official Helm Chart version
1.11.0 (latest released)
### Apache Airflow version
2.x
### Kubernetes Version
1.25+
### What happened
There is no way to configure the priorityClassName for Redis via parent values, even though Redis is a workload with PV constraints that usually needs increased priority to be scheduled wherever its PV lives.
### What you think should happen instead
Able to include priorityClassName
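A hypothetical values sketch of the requested key (analogous to the priorityClassName fields other chart components expose; the key below is the proposal, not an existing option):

```yaml
redis:
  # Hypothetical key, mirroring priorityClassName on other workloads.
  priorityClassName: high-priority-stateful
```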
### How to reproduce
Install via Helm Chart
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34878 | https://github.com/apache/airflow/pull/34879 | 6f3d294645153db914be69cd2b2a49f12a18280c | 14341ff6ea176f2325ebfd3f9b734a3635078cf4 | "2023-10-12T07:06:43Z" | python | "2023-10-28T07:51:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,877 | ["airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | Scheduler is spending most of its time in clear_not_launched_queued_tasks function | ### Apache Airflow version
2.7.1
### What happened
Airflow runs the clear_not_launched_queued_tasks function at a certain frequency (default 30 seconds). This becomes a problem when we run Airflow on a large Kube cluster (more than 5K pods). Internally, the clear_not_launched_queued_tasks function loops through each queued task and checks for the existence of the corresponding worker pod in the Kube cluster. Currently, this existence check uses the list-pods Kube API, and each call takes more than 1 s. If there are 120 queued tasks, the check takes ~120 seconds (1 s * 120). This leads the scheduler to spend most of its time in this function rather than scheduling tasks, which results in no jobs being scheduled, or degraded scheduler performance.
### What you think should happen instead
It would be nice to get all the airflow worker pods in one (or a few) batch calls rather than one call per task. Batching these calls would speed up the processing of the clear_not_launched_queued_tasks function considerably.
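A sketch of the batching idea in plain Python (illustrative only; the real implementation would list pods once via the Kubernetes API, e.g. with a label selector, instead of once per task):

```python
def find_not_launched(queued_task_keys, worker_pod_task_keys):
    """One bulk pod listing plus set-membership checks, instead of one
    list-pods API call per queued task."""
    existing = set(worker_pod_task_keys)
    return [key for key in queued_task_keys if key not in existing]
```

With N queued tasks this costs one (paginated) list call plus O(N) lookups, instead of N list calls.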
### How to reproduce
Run Airflow on a large Kube cluster (> 5K pods). Simulate Airflow running 100 parallel DAG runs every minute.
### Operating System
Cent OS 7
### Versions of Apache Airflow Providers
2.3.3, 2.7.1
### Deployment
Other Docker-based deployment
### Deployment details
Terraform based airflow deployment
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34877 | https://github.com/apache/airflow/pull/35579 | 5a6dcfd8655c9622f3838a0e66948dc3091afccb | cd296d2068b005ebeb5cdc4509e670901bf5b9f3 | "2023-10-12T06:03:28Z" | python | "2023-11-12T17:41:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,838 | ["airflow/providers/apache/spark/hooks/spark_submit.py", "airflow/providers/apache/spark/operators/spark_submit.py", "tests/providers/apache/spark/operators/test_spark_submit.py"] | Adding property files option in the Spark Submit command | ### Description
The spark-submit command has an option to pass a properties file as an argument. Instead of loading multiple key/value pairs via the --conf option, this helps load extra properties from a file path. While SparkSubmitOperator supports most of the arguments of the spark-submit command, `--properties-file` is missing. Could that be included?
```
[root@airflow ~]# spark-submit --help
Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]
Options:
  --conf, -c PROP=VALUE       Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.
```
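A hedged sketch of how the operator's command construction could incorporate the flag (the function name and shape are hypothetical, not the provider's actual code):

```python
def build_spark_submit_command(application, conf=None, properties_file=None):
    # Mirrors the existing --conf handling, then appends --properties-file
    # when a path is given, and the application last.
    cmd = ["spark-submit"]
    for key, value in (conf or {}).items():
        cmd += ["--conf", f"{key}={value}"]
    if properties_file:
        cmd += ["--properties-file", properties_file]
    cmd.append(application)
    return cmd
```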
### Use case/motivation
Add properties-file as one of the options that can be passed to the SparkSubmitOperator, to load the extra config properties from a file.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34838 | https://github.com/apache/airflow/pull/36164 | 3dddfb4a4ae112544fd02e09a5633961fa725a36 | 195abf8f7116c9e37fd3dc69bfee8cbf546c5a3f | "2023-10-09T16:21:55Z" | python | "2023-12-11T16:32:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,816 | ["airflow/cli/commands/triggerer_command.py"] | Airflow 2.7.1 can not start Scheduler & trigger | ### Apache Airflow version
2.7.1
### What happened
After upgrading from 2.6.0 to 2.7.1 (I tried `pip uninstall apache-airflow` and clearing the airflow directory, removing airflow.cfg), I cannot start the scheduler & triggerer as daemons.
When I start them directly from the command line they run, but they are killed when I log out of the console.
I tried `airflow scheduler` and `airflow triggerer`: they start, but are killed on console logout.
`airflow scheduler --daemon && airflow triggerer --daemon`: fails, the scheduler & triggerer do not start (2.6.0 ran OK). But starting the webserver & celery worker as daemons works fine.
Help me
### What you think should happen instead
_No response_
### How to reproduce
1. Run Airflow 2.6.0 fine on Ubuntu Server 22.04.3 LTS
2. Install Airflow 2.7.1
3. The triggerer & scheduler can no longer be started as daemons
### Operating System
ubuntu server 22.04.3 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34816 | https://github.com/apache/airflow/pull/34931 | b067051d3bcec36187c159073ecebc0fc048c99b | 9c1e8c28307cc808739a3535e0d7901d0699dcf4 | "2023-10-07T17:18:30Z" | python | "2023-10-14T15:56:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,799 | ["airflow/providers/postgres/hooks/postgres.py", "docs/apache-airflow-providers-postgres/connections/postgres.rst"] | Airflow postgres connection field schema points to database name | ### Apache Airflow version
2.7.1
### What happened
Airflow's postgres connection configuration form has a field called 'schema', which is misleading, as the value entered there is used to refer to the database name instead of the schema name. It should be renamed to 'database' or 'dbname'.
### What you think should happen instead
_No response_
### How to reproduce
Create a connection in the web UI and choose postgres as the connection type.
Have a DAG connect to a postgres server with multiple databases.
Provide the database name in the 'schema' field of the connection form; this works if nothing else is incorrect in the ETL.
Now change the value in the schema field of the connection form to refer to an actual schema; this fails unexpectedly, as the schema field actually points to the database name.
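The mapping can be demonstrated without a database: in an Airflow postgres connection URI, the path component is what ends up in the connection's "schema" field, and the provider hands that value to the driver as the database name (dbname), not as a PostgreSQL schema. The host/user names below are made up for illustration:

```python
from urllib.parse import urlparse

# Illustrative connection URI.
uri = "postgresql://user:secret@db.example.com:5432/sales_db"

# What Airflow stores in the (misleadingly named) "schema" field,
# and what the postgres provider uses as the database name:
schema_field = urlparse(uri).path.lstrip("/")
```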
### Operating System
Windows and Linux
### Versions of Apache Airflow Providers
2.7.1
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34799 | https://github.com/apache/airflow/pull/34811 | 530ebb58b6b85444b62618684b5741b9d6dd715e | 39cbd6b231c75ec432924d8508f15a4fe3c68757 | "2023-10-06T09:24:48Z" | python | "2023-10-08T19:24:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,795 | ["airflow/www/jest-setup.js", "airflow/www/static/js/api/index.ts", "airflow/www/static/js/dag/nav/FilterBar.tsx", "airflow/www/static/js/dag/useFilters.test.tsx", "airflow/www/static/js/dag/useFilters.tsx", "airflow/www/views.py", "tests/www/views/test_views_grid.py"] | Support multi-select state filtering on grid view | ### Description
Replace the existing selects with multi-selects that allow you to filter multiple DAG run states and types at the same time, somewhat similar to my prototype:
![image](https://github.com/apache/airflow/assets/1842905/c4ec0793-1ccb-417d-989c-781997416f97)
### Use case/motivation
I'm not sure if it is just me, but sometimes I wish I was able to show multiple DAG run states, especially `running` and `failed`, at the same time.
This would be especially helpful for busy DAGs on which I want to clear a few failed runs. Without the multi-select, if I switch from `failed` to `running` DAG runs, I need to orient myself again to find the run I just cleared (assuming there are lots of other running DAG runs). _With_ the multi-select, the DAG run I just cleared stays in the same spot and I can check the logs, while clearing some other failed runs.
I'm not sure we need a multi-select for DAG run types as well. I'd tend to say no, but maybe someone else has a use-case for that.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34795 | https://github.com/apache/airflow/pull/35403 | 1c6bbe2841fe846957f7a1ce68eb978c30669896 | 9e28475402a3fc6cbd0fedbcb3253ebff1b244e3 | "2023-10-06T03:34:31Z" | python | "2023-12-01T17:38:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,767 | ["airflow/providers/google/cloud/hooks/dataflow.py", "tests/providers/google/cloud/hooks/test_dataflow.py"] | Dataflow job is failed when wait_until_finished=True although the state is JOB_STATE_DONE | ### Apache Airflow version
2.7.1
### What happened
We currently use the DataflowHook in tasks of an Airflow DAG. After upgrading apache-airflow-providers-google to 10.9.0, we get the following error even though the Dataflow job is completed.
```
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1384, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1531, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1586, in _execute_task
result = execute_callable(context=context)
File "xxx", line 65, in execute
hook.wait_for_done(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 475, in inner_wrapper
return func(self, *args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/dataflow.py", line 1203, in wait_for_done
job_controller.wait_for_done()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/dataflow.py", line 439, in wait_for_done
while self._jobs and not all(self._check_dataflow_job_state(job) for job in self._jobs):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/dataflow.py", line 439, in <genexpr>
while self._jobs and not all(self._check_dataflow_job_state(job) for job in self._jobs):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/dataflow.py", line 430, in _check_dataflow_job_state
raise Exception(
Exception: Google Cloud Dataflow job <xxx> is in an unexpected terminal state: JOB_STATE_DONE, expected terminal state: JOB_STATE_DONE
```
### What you think should happen instead
The error message "an unexpected terminal state: JOB_STATE_DONE, expected terminal state: JOB_STATE_DONE" is strange, since the "unexpected" state equals the expected one. If the Dataflow job is completed, I think it should not be failed, even if `expected_terminal_state` is not set as a DataflowHook parameter.
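The expected semantics can be expressed as a toy check (illustrative only, not the provider's `_check_dataflow_job_state` code): reaching the expected terminal state means success, reaching any other terminal state is an error, and anything else means keep waiting.

```python
TERMINAL_STATES = {"JOB_STATE_DONE", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"}

def check_job_state(state, expected_terminal_state="JOB_STATE_DONE"):
    if state in TERMINAL_STATES:
        if state != expected_terminal_state:
            raise RuntimeError(f"unexpected terminal state: {state}")
        return True   # finished as expected
    return False      # still running, keep polling
```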
### How to reproduce
Install Airflow with apache-airflow-providers-google 10.9.0.
Pass wait_until_finished=True to DataflowHook and execute start_template_dataflow.
### Operating System
Ubuntu 20.04.6 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
apache-airflow-providers-google==10.9.0
### Deployment
Google Cloud Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34767 | https://github.com/apache/airflow/pull/34785 | 3cb0870685bed221e711855cf5458c4580ec5199 | 8fd5ac6530df5ffd90577d3bd624ac16cdb15335 | "2023-10-04T18:09:25Z" | python | "2023-11-10T18:04:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,756 | ["airflow/cli/cli_config.py", "airflow/cli/commands/variable_command.py", "tests/cli/commands/test_variable_command.py"] | CLI: Variables set should allow to set description | ### Body
The CLI:
`airflow variables set [-h] [-j] [-v] key VALUE`
https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#set_repeat1
It doesn't support adding a description, even though the column exists and we support it in the REST API:
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/post_variables
**The Task:**
Allow setting the description from the CLI command
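A sketch of what the extended signature could look like, using plain `argparse` (the `--description` flag is the proposal, not an existing option):

```python
import argparse

# Proposed shape: airflow variables set [--description TEXT] key VALUE
parser = argparse.ArgumentParser(prog="airflow variables set")
parser.add_argument("key")
parser.add_argument("value")
parser.add_argument(
    "--description",
    default=None,
    help="Free-text description stored alongside the variable",
)

args = parser.parse_args(["my_key", "my_value", "--description", "what this is for"])
```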
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34756 | https://github.com/apache/airflow/pull/34791 | c70f298ec3ae65f510ea5b48c6568b1734b58c2d | 77ae1defd9282f7dd71a9a61cf7627162a25feb6 | "2023-10-04T14:13:52Z" | python | "2023-10-29T18:42:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,751 | ["airflow/www/templates/airflow/pool_list.html", "airflow/www/views.py", "tests/www/views/test_views_pool.py"] | Expose description columns of Pools in the UI | ### Body
In the Pools UI we don't show the description column though we do have it:
https://github.com/apache/airflow/blob/99eeb84c820c8a380721e5d40f5917a01616b943/airflow/models/pool.py#L56
and we have it in the API:
https://github.com/apache/airflow/pull/19841
The task:
Expose the column in the UI
![Screenshot 2023-10-04 at 17 08 05](https://github.com/apache/airflow/assets/45845474/4ed7ed5b-8d42-4a72-b271-07d07066f914)
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34751 | https://github.com/apache/airflow/pull/34862 | 8d067129d5ba20a9847d5d70b368b3dffc42fe6e | 0583150aaca9452e02b8d15b613bfb2451b8e062 | "2023-10-04T11:00:17Z" | python | "2023-10-20T04:14:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,748 | ["airflow/providers/snowflake/provider.yaml", "docs/apache-airflow-providers-snowflake/index.rst", "generated/provider_dependencies.json"] | Upgrade the Snowflake Python Connector to version 2.7.8 or later | ### Description
As per the change made by Snowflake (affecting customers on GCP), kindly update the Snowflake Python Connector to version 2.7.8 or later.
Please note that all recent versions of the Snowflake SQLAlchemy connector support this change, as they use a Python Connector more recent than the version above.
Here is the complete information on the change reasons and recommendations - https://community.snowflake.com/s/article/faq-2023-client-driver-deprecation-for-GCP-customers
### Use case/motivation
If this change is not made, Airflow customers on GCP will not be able to perform PUT operations against their Snowflake account.
The soft cutover enforced by Snowflake is Oct 30, 2023.
The hard cutover enforced by Google is Jan 15, 2024.
https://community.snowflake.com/s/article/faq-2023-client-driver-deprecation-for-GCP-customers
### Related issues
https://community.snowflake.com/s/article/faq-2023-client-driver-deprecation-for-GCP-customers
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34748 | https://github.com/apache/airflow/pull/35440 | 7352839e851cdbee0d15f0f8ff7ee26ed821b8e3 | a6a717385416a3468b09577dfe1d7e0702b5a0df | "2023-10-04T07:22:29Z" | python | "2023-11-04T18:43:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,740 | ["chart/templates/secrets/metadata-connection-secret.yaml", "helm_tests/other/test_keda.py"] | not using pgbouncer connection values still use pgbouncer ports/names for keda in helm chart | ### Apache Airflow version
2.7.1
### What happened
I missed the rc test window last week, sorry was out of town. When you use:
`values.keda.usePgbouncer: false`
The settings re-use the pgbouncer port instead of port 5432 for postgres. You can work around this by overriding the
`values.ports.pgbouncer: 5432` setting, but the database name is also incorrect: it references the name of the database in the pgbouncer.ini file, which has an additional `-metadata` appended to the database connection name.
### What you think should happen instead
The non-pgbouncer connection should not use the pgbouncer-manipulated values.
### How to reproduce
deploy the chart using the indicated values
### Operating System
gke
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34740 | https://github.com/apache/airflow/pull/34741 | 1f3525fd93554e66f6c3f2d965a0dbf6dcd82724 | 38e6607cc855f55666b817177103585f080d6173 | "2023-10-03T19:59:46Z" | python | "2023-10-07T17:53:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,726 | ["airflow/www/templates/airflow/trigger.html", "airflow/www/templates/appbuilder/dag_docs.html", "docs/apache-airflow/img/trigger-dag-tutorial-form.png"] | Hiding Run Id and Logical date from trigger DAG UI | ### Description
With the new, more user-friendly Trigger UI (`/dags/mydag/trigger`), one can direct more users to the Airflow UI directly.
However, the first two questions a user has to answer are **Logical date** and **Run ID**, which are very confusing and in most cases make no sense to override; even for administrators, these should be rare edge cases to override.
![image](https://github.com/apache/airflow/assets/89977373/201327a1-2a18-4cc3-8129-55057fa5e852)
**Would it be possible to:**
* Make these inputs opt-in on a per-DAG level?
* Add a global config to hide them from all DAGs?
* Hide them under an "advanced and optional" section?
Airflow version 2.7.1
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34726 | https://github.com/apache/airflow/pull/35284 | 9990443fa154e3e1e5576b68c14fe375f0f76645 | 62bdf11fdc49c501ccf5571ab765c51363fa1cc7 | "2023-10-03T07:43:51Z" | python | "2023-11-08T22:29:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,720 | ["docs/apache-airflow/security/webserver.rst"] | AUTH_REMOTE_USER is gone | ### Apache Airflow version
2.7.1
### What happened
We upgraded from 2.6.3 to 2.7.1 and the webserver stopped working due to our config with error:
```
ImportError: cannot import name 'AUTH_REMOTE_USER' from 'airflow.www.fab_security.manager'
```
There's a commit called [Fix inheritance chain in security manager (https://github.com/apache/airflow/pull/33901)](https://github.com/apache/airflow/commit/d3ce44236895e9e1779ea39d7681b59a25af0509) which sounds suspicious around magic security imports.
### What you think should happen instead
This is [still documented](https://airflow.apache.org/docs/apache-airflow/stable/security/webserver.html#other-methods) and it wasn't noted in the changelog as removed, so it shouldn't have broken our upgrade.
### How to reproduce
it's not there, so try to import it and... it's not there.
For now I just switched to importing it directly: `from flask_appbuilder.const import AUTH_DB, AUTH_LDAP, AUTH_OAUTH, AUTH_OID, AUTH_REMOTE_USER`
### Operating System
all of them
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34720 | https://github.com/apache/airflow/pull/34721 | 08bfa08273822ce18e01f70f9929130735022583 | feaa5087e6a6b89d9d3ac7eaf9872d5b626bf1ce | "2023-10-02T20:58:31Z" | python | "2023-10-04T09:36:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,623 | ["airflow/www/static/js/api/useExtraLinks.ts", "airflow/www/static/js/dag/details/taskInstance/ExtraLinks.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx"] | Extra Links not refresh by the "Auto-refresh" | ### Apache Airflow version
2.7.1
### What happened
The Extra Links buttons are not refreshed by the "auto-refresh" feature.
That means if you clear a task and the second run is in a running state, the buttons under Extra Links still link to the first run of the task.
### What you think should happen instead
_No response_
### How to reproduce
Run a task from an operator with Extra Links, such as the `GlueJobOperator`, and wait for it to finish.
Clear the task, wait for it to be running, then click the Extra Link again: it opens a new tab for the first run and not the new run.
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34623 | https://github.com/apache/airflow/pull/35317 | 9877f36cc0dc25cffae34322a19275acf5c83662 | be6e2cd0d42abc1b3099910c91982f31a98f4c3d | "2023-09-26T08:48:41Z" | python | "2023-11-16T15:30:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,612 | ["airflow/providers/celery/executors/celery_executor_utils.py"] | BROKER_URL_SECRET Not working as of Airflow 2.7 | ### Apache Airflow version
2.7.1
### What happened
Hi team,
In the past you could use `AIRFLOW__CELERY__BROKER_URL_SECRET` as a way to retrieve the credentials from a `SecretBackend` at runtime. However, as of Airflow 2.7, this technique appears to be broken. I believe this is related to the discussion [34030 - Celery configuration elements not shown to be fetched with _CMD pattern](https://github.com/apache/airflow/discussions/34030). The irony is that the pattern works when using the `config get-value` command, but does not work when using the actual `airflow celery` command. I suspect this has something to do with when the wrapper calls `ProvidersManager().initialize_providers_configuration()`.
```cmd
unset AIRFLOW__CELERY__BROKER_URL
AIRFLOW__CELERY__BROKER_URL_SECRET=broker_url_east airflow config get-value celery broker_url
```
This correctly prints the secret from the backend!
```
redis://:<long password>@<my url>:6379/1
```
However, actually executing Celery with the same methodology results in the default Redis:
```cmd
AIRFLOW__CELERY__BROKER_URL_SECRET=broker_url_east airflow celery worker
```
Relevant output
```
- ** ---------- [config]
- ** ---------- .> app: airflow.providers.celery.executors.celery_executor:0x7f4110be1e50
- ** ---------- .> transport: redis://redis:6379/0
```
Notice the redis/redis and default port with no password.
### What you think should happen instead
I would expect the `airflow celery` command to be able to leverage the `_secret` API, similar to the `config` command.
### How to reproduce
You must use a secret back end to reproduce as described above. You can also do
```cmd
AIRFLOW__CELERY__BROKER_URL_CMD='/usr/bin/env bash -c "echo -n ZZZ"' airflow celery worker
```
And you will see the ZZZ is disregarded
```
- ** ---------- .> app: airflow.providers.celery.executors.celery_executor:0x7f0506d49e20
- ** ---------- .> transport: redis://redis:6379/0
```
It appears neither the historical `_CMD` nor `_SECRET` APIs work after the refactor that moved Celery to the providers.
### Operating System
ubuntu20.04
### Versions of Apache Airflow Providers
Relevant ones
apache-airflow-providers-celery 3.3.3
celery 5.3.4
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
I know this has something to do with when `ProvidersManager().initialize_providers_configuration()` is executed but I don't know the right place to put the fix.
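To make the intent concrete, here is a minimal, self-contained sketch of the resolution order in plain Python (this is not Airflow's actual `airflow.configuration` code; the `_SECRET` branch and the built-in default shown are assumptions for illustration):

```python
import os
import subprocess


def get_broker_url() -> str:
    # Illustrative resolution order: plain env var, then _CMD, then (in real
    # Airflow) the _SECRET secrets backend, then the built-in default. The
    # point of this report is that `airflow celery worker` must perform this
    # lookup after ProvidersManager().initialize_providers_configuration()
    # instead of falling back to the default transport.
    plain = os.environ.get("AIRFLOW__CELERY__BROKER_URL")
    if plain is not None:
        return plain
    cmd = os.environ.get("AIRFLOW__CELERY__BROKER_URL_CMD")
    if cmd is not None:
        return subprocess.check_output(cmd, shell=True, text=True)
    return "redis://redis:6379/0"  # assumed built-in default


os.environ.pop("AIRFLOW__CELERY__BROKER_URL", None)
os.environ["AIRFLOW__CELERY__BROKER_URL_CMD"] = "printf ZZZ"
resolved = get_broker_url()  # the _CMD output, not the default transport
```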
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34612 | https://github.com/apache/airflow/pull/34782 | d72131f952836a3134c90805ef7c3bcf82ea93e9 | 1ae9279346315d99e7f7c546fbcd335aa5a871cd | "2023-09-25T20:56:20Z" | python | "2023-10-17T17:58:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,595 | ["chart/templates/dag-processor/dag-processor-deployment.yaml", "chart/values.yaml", "helm_tests/airflow_core/test_dag_processor.py"] | helm chart doesn't support securityContext for airflow-run-airflow-migration and dag-processor init container | ### Official Helm Chart version
1.10.0 (latest released)
### Apache Airflow version
2.6.3
### Kubernetes Version
1.27
### Helm Chart configuration
The Helm chart doesn't support `securityContext` for the airflow-run-airflow-migration job or the dag-processor init container.
### Docker Image customizations
_No response_
### What happened
_No response_
### What you think should happen instead
_No response_
### How to reproduce
The Helm chart doesn't support `securityContext` for the airflow-run-airflow-migration job or the dag-processor init container.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34595 | https://github.com/apache/airflow/pull/35593 | 1a5a272312f31ff8481b647ea1f4616af7e5b4fe | 0a93e2e28baa282e20e2a68dcb718e3708048a47 | "2023-09-25T10:47:28Z" | python | "2023-11-14T00:21:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,574 | ["docs/apache-airflow/start.rst"] | Quick start still says Python 3.11 is not supported | ### What do you see as an issue?
The quick start page https://airflow.apache.org/docs/apache-airflow/stable/start.html still says that Python 3.11 is not supported, even though README.md says it is.
### Solving the problem
docs/apache-airflow/start.rst should be updated: line 27 should include 3.11 as supported, and line 28 should be removed.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34574 | https://github.com/apache/airflow/pull/34575 | 6a03870d1c1c5871dc9bcb8ea48039ec47676484 | 9b96f76ac820b3dc020286b685a236da842e407c | "2023-09-23T08:03:37Z" | python | "2023-09-24T19:26:57Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,563 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | DryRun is not optional for patch task instance | ## Summary
According to the [REST API docs](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/patch_mapped_task_instance),
you can patch a task instance's state. When you hit the API without sending a
"dry_run" variable, you get a KeyError (this is from a server running version 2.5.3):
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 2528, in wsgi_app
response = self.full_dispatch_request()
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1825, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/home/airflow/.local/lib/python3.10/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/airflow/.local/lib/python3.10/site-packages/connexion/decorators/decorator.py", line 68, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.10/site-packages/connexion/decorators/uri_parsing.py", line 149, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.10/site-packages/connexion/decorators/validation.py", line 196, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.10/site-packages/connexion/decorators/validation.py", line 399, in wrapper
return function(request)
File "/home/airflow/.local/lib/python3.10/site-packages/connexion/decorators/response.py", line 112, in wrapper
response = function(request)
File "/home/airflow/.local/lib/python3.10/site-packages/connexion/decorators/parameter.py", line 120, in wrapper
return function(**kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/api_connexion/security.py", line 51, in decorated
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/api_connexion/endpoints/task_instance_endpoint.py", line 594, in patch_task_instance
if not data["dry_run"]:
KeyError: 'dry_run'
```
The API docs state that `dry_run` is not required and that it defaults to false.
This can be reproduced in `main` with the tests by commenting out line 1699 in
[test_task_instance_endpoint.py](https://github.com/apache/airflow/blob/5b0ce3db4d36e2a7f20a78903daf538bbde5e38a/tests/api_connexion/endpoints/test_task_instance_endpoint.py#L1695-L1709)
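A hedged sketch of the fix, using a simplified stand-in for the handler (not the actual endpoint code): treat the missing key as the documented default instead of indexing it directly.

```python
def patch_task_instance(data: dict) -> dict:
    # data["dry_run"] raises KeyError when the caller omits the field;
    # .get() applies the documented default of False instead.
    dry_run = data.get("dry_run", False)
    if not dry_run:
        # the real handler would apply the state change here
        return {"dry_run": False, "new_state": data["new_state"]}
    return {"dry_run": True, "new_state": data["new_state"]}


# A request body without "dry_run" no longer raises KeyError.
result = patch_task_instance({"new_state": "failed"})
```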
| https://github.com/apache/airflow/issues/34563 | https://github.com/apache/airflow/pull/34568 | f497b72fc021e33a4b8543bb0750bffbb8fe0055 | a4357ca25cc3d014e50968bac7858f533e6421e4 | "2023-09-22T19:00:01Z" | python | "2023-09-30T18:46:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,535 | ["airflow/www/views.py"] | Unable to retrieve logs for nested task group when parent is mapped | ### Apache Airflow version
2.7.1
### What happened
Unable to retrieve logs for task inside task group inside mapped task group.
Got `404 "TaskInstance not found"` in network requests
### What you think should happen instead
_No response_
### How to reproduce
```
from datetime import datetime
from airflow import DAG
from airflow.decorators import task, task_group
from airflow.operators.bash import BashOperator
from airflow.utils.task_group import TaskGroup
with DAG("mapped_task_group_bug", schedule=None, start_date=datetime(1970, 1, 1)):
@task
def foo():
return ["a", "b", "c"]
@task_group
def bar(x):
with TaskGroup("baz"):
# If child task group exists, logs 404
# "TaskInstance not found"
# http://localhost:8080/api/v1/dags/mapped_task_group_bug/dagRuns/manual__2023-09-21T22:31:56.863704+00:00/taskInstances/bar.baz.bop/logs/2?full_content=false
# if it is removed, logs appear
BashOperator(task_id="bop", bash_command="echo hi $x", env={"x": x})
bar.partial().expand(x=foo())
```
### Operating System
debian 11 / `astro dev start`
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34535 | https://github.com/apache/airflow/pull/34587 | 556791b13d4e4c10f95f3cb4c6079f548447e1b8 | 97916ba45ccf73185a5fbf50270a493369da0344 | "2023-09-21T22:48:50Z" | python | "2023-09-25T16:26:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,516 | ["airflow/providers/microsoft/azure/hooks/container_instance.py", "airflow/providers/microsoft/azure/operators/container_instances.py"] | Jobs in Azure Containers restart infinitely if logger crashes, despite retries being set to off. | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We are using Airflow 2.5.2 to deploy python scripts in Azure containers.
In cases where the logger breaks (in our case because someone used tqdm progress bars, which are known to break it), Airflow, failing to find the log, keeps re-provisioning the container and restarting the job infinitely, even though everything is set to not retry in Airflow. This incurs costs on API calls for us and thus is an impactful problem.
The issue could be because of the handling in `_monitor_logging()` in [Azure container_instances.py line 298](https://github.com/apache/airflow/blob/main/airflow/providers/microsoft/azure/operators/container_instances.py#L298), where it changes the state to provisioning but then doesn't do anything with it when it continues to fail to get instance_view. Maybe some form of check like `if state == "Provisioning" and last_state == "Running": return 1` when retries are off could help handle it?
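The check proposed above could look roughly like this (a hypothetical guard with assumed names, not the provider's actual code):

```python
def should_fail_instead_of_reprovision(state: str, last_state: str, retries_enabled: bool) -> bool:
    # If the container group flips back to "Provisioning" after it was
    # already "Running", and the task is configured without retries, give
    # up and fail the task instead of waiting on a re-provisioned container.
    return state == "Provisioning" and last_state == "Running" and not retries_enabled


# The restart loop described above would be caught on the first flip:
caught = should_fail_instead_of_reprovision("Provisioning", "Running", retries_enabled=False)
```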
Any insight would be appreciated. I am happy to help write a fix, if you can help me understand this flow a bit better.
### What you think should happen instead
The job should fail/exit code 1 instead of reprovisioning/retrying.
### How to reproduce
Run an airflow job in which a script is run in an Azure container, which employs tqdm progress bars, or otherwise overwhelms the logger and makes it fail.
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==2.1.1
apache-airflow-providers-http==2.1.1
apache-airflow-providers-imap==2.2.2
apache-airflow-providers-microsoft-azure==3.7.2
apache-airflow-providers-postgres==4.0.1
apache-airflow-providers-sqlite==2.1.2
apache-airflow-providers-ssh==3.1.0
### Deployment
Virtualenv installation
### Deployment details
Airflow running on a VM hosted in Azure.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34516 | https://github.com/apache/airflow/pull/34627 | d27d0bb60b08ed8550491d4801ba5bf3c0e3da9b | 546c850a43d8b00fafc11e02e63fa5caa56b4c07 | "2023-09-21T09:56:08Z" | python | "2023-10-13T12:05:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,498 | ["chart/templates/dag-processor/dag-processor-deployment.yaml", "chart/values.yaml", "helm_tests/security/test_security_context.py"] | Add securityContexts in dagProcessor.logGroomerSidecar | ### Official Helm Chart version
1.10.0 (latest released)
### Apache Airflow version
2.7.1
### Kubernetes Version
1.26.7
### Helm Chart configuration
_No response_
### Docker Image customizations
_No response_
### What happened
When enabling `dagProcessor.logGroomerSidecar`, our OPA gatekeeper flags the `dag-processor-log-groomer` container for lacking the appropriate non-root permissions. There is no way to set the `securityContexts` for this sidecar, as the option is not even exposed.
### What you think should happen instead
The `securityContexts` setting for the `dag-processor-log-groomer` container should be configurable.
### How to reproduce
In the Helm values, set `dagProcessor.logGroomerSidecar` to `true`.
### Anything else
This problem occurs when there are OPA policies in place pertaining to strict `securityContexts` settings.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34498 | https://github.com/apache/airflow/pull/34499 | 6393b3515fbb7aabb1613f61204686e89479a5a0 | 92cc2ffd863b8925ed785d5e8b02ac38488e835e | "2023-09-20T09:42:39Z" | python | "2023-11-29T03:00:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,483 | ["airflow/serialization/serializers/datetime.py", "tests/serialization/serializers/test_serializers.py"] | `pendulum.DateTime` objects now being serialized as python objects with non-standard timezones | ### Apache Airflow version
2.7.0
### What happened
We recently updated our Airflow server to 2.7.0 (from 2.6.0), moved from a local Postgres instance to one located in AWS RDS, and switched from x86_64 to ARM64 (Amazon Graviton2 processor). We had some DAGs that pass `pendulum.DateTime`s as XCOMs which used to work on the old server but now fail with the following error:
```
[2023-09-19, 15:28:23 UTC] {abstractoperator.py:696} ERROR - Exception rendering Jinja template for task 'apply_bonuses_to_new_shifts', field 'op_args'. Template: (XComArg(<Task(_PythonDecoratedOperator): update_early_bird_eligible_shifts>),)
Traceback (most recent call last):
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/zoneinfo/reader.py", line 50, in read_for
file_path = pytzdata.tz_path(timezone)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pytzdata/__init__.py", line 74, in tz_path
raise TimezoneNotFound('Timezone {} not found at {}'.format(name, filepath))
pytzdata.exceptions.TimezoneNotFound: Timezone EDT not found at /home/airflow/dagger/venv/lib64/python3.9/site-packages/pytzdata/zoneinfo/EDT
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/abstractoperator.py", line 688, in _do_render_template_fields
rendered_content = self.render_template(
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/template/templater.py", line 162, in render_template
return tuple(self.render_template(element, context, jinja_env, oids) for element in value)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/template/templater.py", line 162, in <genexpr>
return tuple(self.render_template(element, context, jinja_env, oids) for element in value)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/template/templater.py", line 158, in render_template
return value.resolve(context)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/xcom_arg.py", line 413, in resolve
result = ti.xcom_pull(
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/utils/session.py", line 74, in wrapper
return func(*args, **kwargs)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/taskinstance.py", line 2562, in xcom_pull
return XCom.deserialize_value(first)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/xcom.py", line 693, in deserialize_value
return BaseXCom._deserialize_value(result, False)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/xcom.py", line 686, in _deserialize_value
return json.loads(result.value.decode("UTF-8"), cls=XComDecoder, object_hook=object_hook)
File "/usr/lib64/python3.9/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/lib64/python3.9/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python3.9/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/utils/json.py", line 117, in object_hook
return deserialize(dct)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/serialization/serde.py", line 253, in deserialize
return _deserializers[classname].deserialize(classname, version, deserialize(value))
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/serialization/serializers/datetime.py", line 70, in deserialize
return DateTime.fromtimestamp(float(data[TIMESTAMP]), tz=timezone(data[TIMEZONE]))
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/__init__.py", line 37, in timezone
tz = _Timezone(name, extended=extended)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/timezone.py", line 40, in __init__
tz = read(name, extend=extended)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/zoneinfo/__init__.py", line 9, in read
return Reader(extend=extend).read_for(name)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/zoneinfo/reader.py", line 52, in read_for
raise InvalidTimezone(timezone)
pendulum.tz.zoneinfo.exceptions.InvalidTimezone: Invalid timezone "EDT"
[2023-09-19, 15:28:23 UTC] {taskinstance.py:1943} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/zoneinfo/reader.py", line 50, in read_for
file_path = pytzdata.tz_path(timezone)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pytzdata/__init__.py", line 74, in tz_path
raise TimezoneNotFound('Timezone {} not found at {}'.format(name, filepath))
pytzdata.exceptions.TimezoneNotFound: Timezone EDT not found at /home/airflow/dagger/venv/lib64/python3.9/site-packages/pytzdata/zoneinfo/EDT
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/taskinstance.py", line 1518, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode, session=session)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/taskinstance.py", line 1646, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/taskinstance.py", line 2291, in render_templates
original_task.render_template_fields(context)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/baseoperator.py", line 1244, in render_template_fields
self._do_render_template_fields(self, self.template_fields, context, jinja_env, set())
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/abstractoperator.py", line 688, in _do_render_template_fields
rendered_content = self.render_template(
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/template/templater.py", line 162, in render_template
return tuple(self.render_template(element, context, jinja_env, oids) for element in value)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/template/templater.py", line 162, in <genexpr>
return tuple(self.render_template(element, context, jinja_env, oids) for element in value)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/template/templater.py", line 158, in render_template
return value.resolve(context)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/xcom_arg.py", line 413, in resolve
result = ti.xcom_pull(
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/utils/session.py", line 74, in wrapper
return func(*args, **kwargs)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/taskinstance.py", line 2562, in xcom_pull
return XCom.deserialize_value(first)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/xcom.py", line 693, in deserialize_value
return BaseXCom._deserialize_value(result, False)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/models/xcom.py", line 686, in _deserialize_value
return json.loads(result.value.decode("UTF-8"), cls=XComDecoder, object_hook=object_hook)
File "/usr/lib64/python3.9/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/lib64/python3.9/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python3.9/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/utils/json.py", line 117, in object_hook
return deserialize(dct)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/serialization/serde.py", line 253, in deserialize
return _deserializers[classname].deserialize(classname, version, deserialize(value))
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/airflow/serialization/serializers/datetime.py", line 70, in deserialize
return DateTime.fromtimestamp(float(data[TIMESTAMP]), tz=timezone(data[TIMEZONE]))
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/__init__.py", line 37, in timezone
tz = _Timezone(name, extended=extended)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/timezone.py", line 40, in __init__
tz = read(name, extend=extended)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/zoneinfo/__init__.py", line 9, in read
return Reader(extend=extend).read_for(name)
File "/home/airflow/dagger/venv/lib64/python3.9/site-packages/pendulum/tz/zoneinfo/reader.py", line 52, in read_for
raise InvalidTimezone(timezone)
pendulum.tz.zoneinfo.exceptions.InvalidTimezone: Invalid timezone "EDT"
```
However, the DAG itself makes no mention of EDT - it does the following:
```
last_datetime.in_tz("America/New_York")
```
I've identified the issue as how these XCOMs are being serialized - on our old server, they were being serialized as ISO timestamps:
![image](https://github.com/apache/airflow/assets/29555644/f0027b3a-2245-4529-aa52-7e9cff8cd009)
Now, however, they're being serialized like this:
![image](https://github.com/apache/airflow/assets/29555644/485ac5a8-eb29-4304-96f3-f6612b36d60c)
This is unexpected, and also causes problems because `EDT` is not an IANA timezone, which prevents `pendulum` from deserializing it in the task that accepts this XCOM.
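The distinction can be demonstrated with the standard library alone (using `zoneinfo` here as a stand-in for pendulum; the same split between abbreviation and IANA name applies): `tzname()` yields a display abbreviation like `EDT`, which is not a valid IANA key, while the zone's IANA name round-trips safely, which is presumably what the serializer should persist.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")
dt = datetime(2023, 9, 19, 12, 0, tzinfo=timezone.utc).astimezone(ny)

abbreviation = dt.tzname()  # "EDT" - a display abbreviation, not an IANA key
iana_name = dt.tzinfo.key   # "America/New_York" - safe to serialize and reload

# Reconstructing from the IANA name works; ZoneInfo("EDT") would raise.
roundtrip = dt.astimezone(ZoneInfo(iana_name))
```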
### What you think should happen instead
_No response_
### How to reproduce
I think this is reproducible by creating a DAG that has a task that converts a `pendulum.DateTime` to `America/New_York` and passes it as an XCOM to another task.
```python
from datetime import datetime
from typing import Optional
from airflow.decorators import dag, task
from pendulum.datetime import DateTime
@task()
def task_one(
data_interval_end: Optional[DateTime] = None,
) -> DateTime:
return data_interval_end.in_tz("America/New_York")
# this task will error out
@task()
def task_two(last_added_date: DateTime) -> None:
pass
@dag(
schedule="*/5 * * * *",
start_date=datetime(2023, 7, 25, 18, 0, 0),
)
def dag() -> None:
last_added_datetime = task_one()
task_two(last_added_datetime)
dag()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34483 | https://github.com/apache/airflow/pull/34492 | 19450e03f534f63399bf5db2df7690fdd47b09c8 | a3c06c02e31cc77b2c19554892b72ed91b8387de | "2023-09-19T15:36:06Z" | python | "2023-09-28T07:31:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,482 | ["airflow/providers/cncf/kubernetes/operators/pod.py", "tests/providers/cncf/kubernetes/operators/test_pod.py"] | KubernetesPodOperator shutting down istio sidecar but not deleting pod upon failure | ### Apache Airflow version
2.7.1
### What happened
I start a simple pod (one container and that's it) with the KubernetesPodOperator from the cncf.kubernetes provider.
The task in my container fails (exit code non-zero).
The istio sidecar is indeed terminated but the pod itself remain in an `Error` status even if the `on_finish_action` parameter is set at `"delete_pod"` and the pod is never terminated.
It is expected that the pod is fully deleted, istio or not.
I found that there is a difference in treatment [upon deleting the pod](https://github.com/apache/airflow/blob/5b85442fdc19947e125dcb0591bd59a53626a27b/airflow/providers/cncf/kubernetes/operators/pod.py#L824) so it might be some specific case I'm not aware of.
I'll be happy to help either with documentation or with fixing this small issue, but I would need confirmation on what the expected behavior is.
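To make the reported behavior concrete, here is a hypothetical sketch of the branching described above; the function and its return values are illustrative only, not the provider's actual code:

```python
def cleanup_pod(pod_has_istio_sidecar: bool, on_finish_action: str) -> str:
    """Model of the observed behavior: istio pods take a different exit path."""
    if pod_has_istio_sidecar:
        # Reported behavior: only the sidecar is stopped, so a failed pod
        # lingers in Error state even with on_finish_action="delete_pod".
        return "kill_istio_sidecar_only"
    if on_finish_action == "delete_pod":
        return "delete_pod"
    return "keep_pod"

print(cleanup_pod(pod_has_istio_sidecar=True, on_finish_action="delete_pod"))
print(cleanup_pod(pod_has_istio_sidecar=False, on_finish_action="delete_pod"))
```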
### What you think should happen instead
I think the `on_finish_action="delete_pod"` should terminate the pod, not let it hanging the `Error` state with both containers stopped.
### How to reproduce
here is the simplest dag on how to reproduce on my end, note that istio is not visible here since managed cluster-wide.
```python3
from airflow import DAG
from airflow.utils.dates import days_ago
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
KubernetesPodOperator,
)
with DAG(
dag_id="issue_reproduction",
schedule_interval=None,
start_date=days_ago(0),
is_paused_upon_creation=False,
max_active_runs=1,
) as dag:
pod = KubernetesPodOperator(
task_id="issue_reproduction",
image="ubuntu",
cmds=["bash", "-cx"],
arguments=["exit 1"],
dag=dag,
security_context={
"runAsNonRoot": True,
"runAsUser": 65534,
"seccompProfile": {"type": "RuntimeDefault"},
},
)
```
### Operating System
local=macos, deployments=k8s
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==7.5.0
(Didn't see any fix that should change anything between 7.5.0 and 7.6.0 and the code I pointed to has not changed)
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
k8s version: v1.26.3
### Anything else
The issue should be 100% reproducible provided I didn't miss any specifics.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34482 | https://github.com/apache/airflow/pull/34500 | e81bb487796780705f6df984fbfed04f555943d7 | fb92ff8486f21b61a840ddc4414429c3a9adfc88 | "2023-09-19T14:59:50Z" | python | "2023-09-27T16:28:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,455 | ["airflow/www/static/js/dag/details/graph/Node.tsx", "docs/apache-airflow/img/demo_graph_view.png"] | Graph view task name & status visibility | ### Description
I have had complaints from coworkers that it is harder to see the status of Airflow tasks at a glance in the new graph view. They miss the colored border from the old graph view that made the status of a task very clear. They have also mentioned that the names of the tasks feel a lot smaller and are harder to read without zooming in.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34455 | https://github.com/apache/airflow/pull/34486 | 404666ded04d60de050c0984d113b594aee50c71 | d0ae60f77e1472585d62a3eb44d64d9da974a199 | "2023-09-18T13:40:04Z" | python | "2023-09-25T18:06:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,450 | ["airflow/providers/databricks/hooks/databricks_base.py"] | DatabricksRunNowDeferrableOperator not working with OAuth | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hi,
I noticed an issue with `DatabricksRunNowDeferrableOperator` and a Databricks OAuth connection.
I am using Airflow 2.7.0 and the Databricks provider 4.5.0 (latest).
I created a [Databricks connection](https://airflow.apache.org/docs/apache-airflow-providers-databricks/stable/connections/databricks.html) using [Databricks OAuth](https://docs.databricks.com/en/dev-tools/authentication-oauth.html) (so with a username, password + extra `service_principal_oauth: true`).
I ran a DAG with a `DatabricksRunNowDeferrableOperator`. My Databricks job is started by Airflow without any issue, but the task immediately fails with the following stacktrace:
```
[2023-09-18, 09:33:51 UTC] {taskinstance.py:1720} ERROR - Trigger failed:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/jobs/triggerer_job_runner.py", line 527, in cleanup_finished_triggers
result = details["task"].result()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/jobs/triggerer_job_runner.py", line 599, in run_trigger
async for event in trigger.run():
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/databricks/triggers/databricks.py", line 83, in run
run_state = await self.hook.a_get_run_state(self.run_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/databricks/hooks/databricks.py", line 341, in a_get_run_state
response = await self._a_do_api_call(GET_RUN_ENDPOINT, json)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 632, in _a_do_api_call
token = await self._a_get_token()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 540, in _a_get_token
return await self._a_get_sp_token(OIDC_TOKEN_SERVICE_URL.format(self.databricks_conn.host))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 260, in _a_get_sp_token
async for attempt in self._a_get_retry_object():
File "/home/airflow/.local/lib/python3.11/site-packages/tenacity/_asyncio.py", line 71, in __anext__
do = self.iter(retry_state=self._retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 262, in _a_get_sp_token
async with self._session.post(
File "/home/airflow/.local/lib/python3.11/site-packages/aiohttp/client.py", line 1141, in __aenter__
self._resp = await self._coro
^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/aiohttp/client.py", line 508, in _request
req = self._request_class(
^^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 310, in __init__
self.update_auth(auth)
File "/home/airflow/.local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 495, in update_auth
raise TypeError("BasicAuth() tuple is required instead")
TypeError: BasicAuth() tuple is required instead
[2023-09-18, 09:33:51 UTC] {taskinstance.py:1943} ERROR - Task failed with exception
airflow.exceptions.TaskDeferralError: Trigger failure
```
I tried the exact same DAG with `DatabricksRunNowOperator` and got no errors. It seems the triggerer has an issue creating a connection with OAuth.
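The last frames of the traceback are aiohttp rejecting the credentials object: unlike `requests`, aiohttp's `auth=` parameter only accepts an `aiohttp.BasicAuth` instance, not a plain `(user, password)` tuple. The token request itself is ordinary HTTP Basic authentication over the service principal's credentials; a stdlib sketch of what that header contains, with placeholder values:

```python
import base64

def basic_auth_header(client_id: str, client_secret: str) -> str:
    # HTTP Basic credentials are "id:secret", base64-encoded (RFC 7617).
    token = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return f"Basic {token}"

header = basic_auth_header("my-sp-client-id", "my-sp-secret")  # placeholder values
# Round-trip check: decoding recovers the original pair.
assert base64.b64decode(header.split()[1]) == b"my-sp-client-id:my-sp-secret"
print(header)
```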
### What you think should happen instead
The `DatabricksRunNowDeferrableOperator` should work with a Databricks connection using OAuth, exactly like the `DatabricksRunNowOperator`.
### How to reproduce
* Create a databricks service principal and create Client ID and client Secret
* Create a databricks connection with those ID/Secret + extra `service_principal_oauth: true`
* Create a DAG with a `DatabricksRunNowDeferrableOperator`
* Run the DAG and you should see the error
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-celery==3.3.2
apache-airflow-providers-cncf-kubernetes==7.4.2
apache-airflow-providers-common-sql==1.7.0
apache-airflow-providers-daskexecutor==1.0.0
apache-airflow-providers-databricks==4.5.0
apache-airflow-providers-docker==3.7.3
apache-airflow-providers-elasticsearch==5.0.0
apache-airflow-providers-ftp==3.5.0
apache-airflow-providers-google==10.6.0
apache-airflow-providers-grpc==3.2.1
apache-airflow-providers-hashicorp==3.4.2
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-microsoft-azure==6.2.4
apache-airflow-providers-mysql==5.2.1
apache-airflow-providers-odbc==4.0.0
apache-airflow-providers-openlineage==1.0.1
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-redis==3.3.1
apache-airflow-providers-sendgrid==3.2.1
apache-airflow-providers-sftp==4.5.0
apache-airflow-providers-slack==7.3.2
apache-airflow-providers-snowflake==4.4.2
apache-airflow-providers-sqlite==3.4.3
apache-airflow-providers-ssh==3.7.1
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR! Why not, **but I currently have no idea of the cause**
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34450 | https://github.com/apache/airflow/pull/34590 | 966ce1ee47696c1725e854896601089bcc37818f | a1ef2322304ea6ff9bc9744668c011ad13fad056 | "2023-09-18T09:54:56Z" | python | "2023-09-25T07:47:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,425 | ["airflow/providers/amazon/aws/operators/emr.py"] | EMR operators don't use `AirflowProviderDeprecationWarning` | ### Body
Use `AirflowProviderDeprecationWarning` as warning source in EMR operators and change stacklevel to 2:
https://github.com/apache/airflow/blob/05036e619c0c6dafded1451daac4e07e20aee33f/airflow/providers/amazon/aws/operators/emr.py#L380-L384
https://github.com/apache/airflow/blob/05036e619c0c6dafded1451daac4e07e20aee33f/airflow/providers/amazon/aws/operators/emr.py#L373-L377
https://github.com/apache/airflow/blob/05036e619c0c6dafded1451daac4e07e20aee33f/airflow/providers/amazon/aws/operators/emr.py#L264-L268
https://github.com/apache/airflow/blob/05036e619c0c6dafded1451daac4e07e20aee33f/airflow/providers/amazon/aws/operators/emr.py#L257-L261
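For context on why `stacklevel=2` matters: with the default `stacklevel=1` the warning is attributed to the `warnings.warn()` line inside the operator, while `stacklevel=2` attributes it to the user code calling the deprecated API. A minimal demonstration:

```python
import warnings

def deprecated_default():
    warnings.warn("deprecated", DeprecationWarning)            # stacklevel=1

def deprecated_fixed():
    warnings.warn("deprecated", DeprecationWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    deprecated_default()   # attributed to the warn() line inside the helper
    deprecated_fixed()     # attributed to this call site, which users need to see

inside, at_call_site = caught
print(inside.lineno, at_call_site.lineno)
```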
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34425 | https://github.com/apache/airflow/pull/34453 | c55fd77f76aafc76463e3dd2a6ecaa29e56bd967 | 7de7149bc6d2d649b91cf902801b92300618db4a | "2023-09-16T20:54:58Z" | python | "2023-09-19T11:21:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,424 | ["airflow/providers/cncf/kubernetes/triggers/pod.py"] | Consolidate the warning stacklevel in KubernetesPodTrigger | ### Body
Use stacklevel 2 instead of the default 1:
https://github.com/apache/airflow/blob/1b122c15030e99cef9d4ff26d3781a7a9d6949bc/airflow/providers/cncf/kubernetes/triggers/pod.py#L103-L106
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34424 | https://github.com/apache/airflow/pull/35079 | bc4a22c6bd8096e7b62147031035cb14896fe934 | 4c8c85ccc2e52436276f692964abff4a3dc8495d | "2023-09-16T20:52:04Z" | python | "2023-10-23T09:01:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,423 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py"] | Consolidate the warning stacklevel in K8S pod_manager | ### Body
Use stacklevel 2 instead of the default 1:
https://github.com/apache/airflow/blob/b5057e0e1fc6b7a47e38037a97cac862706747f0/airflow/providers/cncf/kubernetes/utils/pod_manager.py#L361-L365
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34423 | https://github.com/apache/airflow/pull/34530 | 06965e604cef4d6a932258a5cd357d164a809730 | 08729eddbd7414b932a654763bf62c6221a0e397 | "2023-09-16T20:50:19Z" | python | "2023-09-21T18:31:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,422 | ["airflow/providers/ssh/hooks/ssh.py"] | Consolidate stacklevel in ssh hooks warning | ### Body
SSHHook uses the default stacklevel (1) in its deprecation warning, it should be 2:
https://github.com/apache/airflow/blob/b11525702c72cb53034aa29ccd6d0e1161ac475c/airflow/providers/ssh/hooks/ssh.py#L433-L439
https://github.com/apache/airflow/blob/b11525702c72cb53034aa29ccd6d0e1161ac475c/airflow/providers/ssh/hooks/ssh.py#L369-L374
https://github.com/apache/airflow/blob/b11525702c72cb53034aa29ccd6d0e1161ac475c/airflow/providers/ssh/hooks/ssh.py#L232-L238
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34422 | https://github.com/apache/airflow/pull/34527 | a1bd8719581f2ef1fb25aeaa89e3520e8bc81172 | 06965e604cef4d6a932258a5cd357d164a809730 | "2023-09-16T20:48:03Z" | python | "2023-09-21T17:56:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,421 | ["airflow/providers/ssh/operators/ssh.py"] | Consolidate stacklevel in ssh operator warning | ### Body
SSHOperator uses the default `stacklevel` (1) in its deprecation warning, it should be 2:
https://github.com/apache/airflow/blob/a59076eaeed03dd46e749ad58160193b4ef3660c/airflow/providers/ssh/operators/ssh.py#L139-L143
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34421 | https://github.com/apache/airflow/pull/35151 | 4767f48a3b4537092e62fc2f91ec832dd560db72 | 543db7004ee593605e250265b0722917cef296d3 | "2023-09-16T20:45:48Z" | python | "2023-10-24T23:09:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,394 | ["docs/apache-airflow/templates-ref.rst"] | var.json.get not working correctly with nested JSON objects | ### Apache Airflow version
2.7.1
### What happened
According to [Airflow Documentation on Airflow Variables in Templates](https://airflow.apache.org/docs/apache-airflow/stable/templates-ref.html#airflow-variables-in-templates) there are two ways of accessing the JSON variables in templates:
- using the direct `{{ var.json.my_dict_var.key1 }}`
- using a getter with a default fallback value `{{ var.json.get('my.dict.var', {'key1': 'val1'}) }}`
However, only the first approach works for nested variables, as demonstrated in the example.
Or is it me not understanding this documentation? Alternatively it could get updated to make it clearer.
### What you think should happen instead
_No response_
### How to reproduce
The following test demonstrates the issue:
```
from datetime import datetime, timezone
import pytest
from airflow import DAG
from airflow.models.baseoperator import BaseOperator
from airflow.models.dag import DAG
from airflow.utils import db
from airflow.utils.state import DagRunState
from airflow.utils.types import DagRunType
from pytest import MonkeyPatch
from airflow.models import Variable
class TemplatedBaseOperator(BaseOperator):
template_fields = BaseOperator.template_fields + (
"templated_direct", "templated_getter")
def __init__(
self,
*args,
**kwargs,
):
self.templated_direct = "{{ var.json.somekey.somecontent }}"
self.templated_getter = "{{ var.json.get('somekey.somecontent', false) }}"
super().__init__(
*args,
**kwargs,
)
@pytest.fixture()
def reset_db():
db.resetdb()
yield
@pytest.fixture
def dag() -> DAG:
with DAG(
dag_id="templating_dag",
schedule="@daily",
start_date=datetime(2023, 1, 1, tzinfo=timezone.utc),
render_template_as_native_obj=True,
) as dag:
TemplatedBaseOperator(
task_id="templating_task"
)
return dag
def test_templating(dag: DAG, reset_db: None, monkeypatch: MonkeyPatch):
"""Test if the templated values get intialized from environment variables when rendered"""
# setting env variables
monkeypatch.setenv(
"AIRFLOW_VAR_SOMEKEY",
'{"somecontent": true}',
)
dagrun = dag.create_dagrun(
state=DagRunState.RUNNING,
execution_date=datetime(2023, 1, 1, tzinfo=timezone.utc),
data_interval=(datetime(2023, 1, 1, tzinfo=timezone.utc), datetime(2023, 1, 7, tzinfo=timezone.utc)),
start_date=datetime(2023, 1, 7, tzinfo=timezone.utc),
run_type=DagRunType.MANUAL,
)
ti = dagrun.get_task_instance(task_id="templating_task")
ti.task = dag.get_task(task_id="templating_task")
rendered_template = ti.render_templates()
assert {'somecontent': True} == Variable.get("somekey", deserialize_json=True)
assert getattr(rendered_template, "templated_direct") == True
# the following test is failing, getting default "False" instead of Variable 'True'
assert getattr(rendered_template, "templated_getter") == True
```
### Operating System
Ubuntu 22.04.3 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34394 | https://github.com/apache/airflow/pull/34411 | b23d3f964b2699d4c7f579e22d50fabc9049d1b6 | 03db0f6b785a4983c09d6eec7433cf28f7759610 | "2023-09-15T14:02:52Z" | python | "2023-09-16T18:24:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,388 | ["airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | Kubernetes provider - Error parsing timestamp in logs | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We are experiencing an issue where an error log message is shown for every task that uses KubernetesPodOperator. We get
```
{pod_manager.py:558} ERROR - Error parsing timestamp (no timestamp in message ''). Will continue execution but won't update timestamp
```
after execution is finished in every task.
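For context, when `get_logs=True` the provider reads pod logs with timestamps enabled and splits each line into a timestamp prefix and a message; an empty trailing line therefore has no timestamp to parse, which is the situation the error message describes. A simplified stdlib sketch of that split (an assumption about the mechanism, not the provider's exact code):

```python
from datetime import datetime

def split_log_line(line: str):
    ts_raw, _, message = line.partition(" ")
    try:
        ts = datetime.fromisoformat(ts_raw.replace("Z", "+00:00"))
    except ValueError:
        # No parseable timestamp (e.g. an empty final line) -> the reported
        # "Error parsing timestamp" path; execution continues anyway.
        return None, line
    return ts, message

print(split_log_line("2023-09-15T09:17:12.000000Z container started"))
print(split_log_line(""))  # (None, '')
```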
### What you think should happen instead
No such error should be shown
### How to reproduce
```python
'''doc'''
from kubernetes.client import models as k8s
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
from airflow.models import DAG
import pendulum
import datetime
DAG_ID = 'test-dag'
default_args = {
'owner': 'owner',
'depends_on_past': False,
'start_date': datetime.datetime(2023, 9, 15),
'sla': datetime.timedelta(hours = 1),
'execution_timeout': datetime.timedelta(hours = 1, minutes = 30),
'email_on_failure': False,
'email_on_retry': False,
'retries': 3,
'retry_delay': datetime.timedelta(minutes = 1),
'retry_exponential_backoff': True,
'max_retry_delay': datetime.timedelta(minutes = 10),
'max_active_tis_per_dag': 8,
'pool': 'SomePool',
'get_logs': True,
}
dag = DAG(
dag_id = DAG_ID,
doc_md = __doc__,
tags = ['tag'],
    # AfterWorkdayTimetable is a custom timetable from the reporter's setup (definition not shown)
    schedule = AfterWorkdayTimetable(pendulum.Time(2, 0)),
default_args = default_args,
catchup = True,
dagrun_timeout = datetime.timedelta(hours = 6),
max_active_runs = 5
)
task = KubernetesPodOperator(
affinity = {
'nodeAffinity': {
'requiredDuringSchedulingIgnoredDuringExecution': {
'nodeSelectorTerms': [
{'matchExpressions': [{'key': 'dataDriveSizeClass', 'operator': 'In', 'values': ['large', 'medium']}]}
]
}
}
},
priority_class_name = 'l3',
namespace = 'some--namespace',
in_cluster = True,
image_pull_policy = 'IfNotPresent',
container_resources = k8s.V1ResourceRequirements(
requests = {'memory': '150Mi', 'cpu': '1m'},
limits = {'memory': '512Mi', 'cpu': '500m'},
),
cmds = [
'sleep',
'10'
],
dag = dag,
task_id = 'kubernetes_task',
labels = {
'runner': 'airflow',
},
annotations = {'version': '1.0.0'},
image = 'ubuntu',
env_vars = [k8s.V1EnvVar(name = name, value = value) for name, value in {'name':'value'}.items()]
)
task
```
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==8.4.0
apache-airflow-providers-cncf-kubernetes==7.5.1
apache-airflow-providers-common-sql==1.6.1
apache-airflow-providers-ftp==3.4.2
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.2.2
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-slack==7.3.2
apache-airflow-providers-sqlite==3.4.3
```
### Deployment
Other Docker-based deployment
### Deployment details
Kubernetes 1.26.5
airflow 2.6.3
python 3.10.12
LocalExecutor
Database: Postgres 14.6
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34388 | https://github.com/apache/airflow/pull/34412 | 90628e480a61e7b4991d42a536b79e634dee7c12 | 4234d8db7e4a51683f8236270c87375cf80ba3f4 | "2023-09-15T09:17:12Z" | python | "2023-10-03T23:42:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,332 | ["airflow/migrations/versions/0129_2_8_0_add_clear_number_to_dag_run.py", "airflow/models/dagrun.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg"] | Default value for clear_number column not applied via sqlalchemy | ### Apache Airflow version
main (development)
### What happened
The SQL query generated does not have a default integer as 0 in the migration causing failure.
```
INFO [alembic.runtime.migration] Running upgrade 405de8318b3a -> 375a816bbbf4, add new field 'clear_number' to dagrun
Traceback (most recent call last):
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
self.dialect.do_execute(
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.NotNullViolation: column "clear_number" of relation "dag_run" contains null values
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/bin/airflow", line 8, in <module>
sys.exit(main())
^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/__main__.py", line 59, in main
args.func(args)
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/cli/cli_config.py", line 49, in command
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/utils/cli.py", line 114, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/utils/providers_configuration_loader.py", line 55, in wrapped_function
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/cli/commands/db_command.py", line 129, in migratedb
db.upgradedb(
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/utils/session.py", line 79, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/utils/db.py", line 1627, in upgradedb
command.upgrade(config, revision=to_revision or "heads")
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/command.py", line 382, in upgrade
script.run_env()
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/script/base.py", line 578, in run_env
util.load_python_file(self.dir, "env.py")
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 109, in load_module_py
spec.loader.exec_module(module) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/migrations/env.py", line 117, in <module>
run_migrations_online()
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/migrations/env.py", line 111, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/runtime/environment.py", line 922, in run_migrations
self.get_context().run_migrations(**kw)
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/runtime/migration.py", line 624, in run_migrations
step.migration_fn(**kw)
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/airflow/migrations/versions/0129_2_8_0_add_clear_number_to_dag_run.py", line 41, in upgrade
with op.batch_alter_table("dag_run") as batch_op:
File "/opt/homebrew/Cellar/[email protected]/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/operations/base.py", line 375, in batch_alter_table
impl.flush()
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/operations/batch.py", line 113, in flush
fn(*arg, **kw)
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/ddl/impl.py", line 322, in add_column
self._exec(base.AddColumn(table_name, column, schema=schema))
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/alembic/ddl/impl.py", line 193, in _exec
return conn.execute( # type: ignore[call-overload]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/future/engine.py", line 280, in execute
return self._execute_20(
^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1710, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/sql/ddl.py", line 80, in _execute_on_connection
return connection._execute_ddl(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1477, in _execute_ddl
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1953, in _execute_context
self._handle_dbapi_exception(
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2134, in _handle_dbapi_exception
util.raise_(
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
self.dialect.do_execute(
File "/Users/abhishekbhakat/Codes/Turbine/my_local_airflow/airflowenv/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.NotNullViolation) column "clear_number" of relation "dag_run" contains null values
[SQL: ALTER TABLE dag_run ADD COLUMN clear_number INTEGER NOT NULL]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
```
SQLAlchemy version was 1.4.49.
### What you think should happen instead
The query generated should be `ALTER TABLE dag_run ADD COLUMN clear_number INTEGER NOT NULL DEFAULT 0;`
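The failure reproduces in miniature on any engine that has to backfill existing rows; in Alembic terms the fix corresponds to giving the new column a `server_default`. A SQLite sketch of both the failing and the corrected statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dag_run (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO dag_run (id) VALUES (1)")  # a pre-existing row

try:
    conn.execute("ALTER TABLE dag_run ADD COLUMN clear_number INTEGER NOT NULL")
    rejected = False
except sqlite3.OperationalError:
    rejected = True  # NOT NULL with no default cannot backfill the existing row

conn.execute("ALTER TABLE dag_run ADD COLUMN clear_number INTEGER NOT NULL DEFAULT 0")
print(rejected, conn.execute("SELECT clear_number FROM dag_run").fetchone())
```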
### How to reproduce
Upgraded from airflow 2.7.1 to main-branch and ran `airflow db migrate`
### Operating System
MacOS 13
### Versions of Apache Airflow Providers
Not applicable
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34332 | https://github.com/apache/airflow/pull/34344 | 43852d4efe32d9c56fff11749fa5a8eacd8d1a53 | 840c0d79e09533f7e786a5c496a9008363284da7 | "2023-09-13T09:19:41Z" | python | "2023-09-20T18:17:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,327 | ["airflow/datasets/manager.py", "airflow/listeners/listener.py", "airflow/listeners/spec/dataset.py", "airflow/models/dag.py", "docs/apache-airflow/administration-and-deployment/listeners.rst", "tests/datasets/test_manager.py", "tests/listeners/dataset_listener.py", "tests/listeners/test_dataset_listener.py"] | Listeners for Datasets | ### Description
Add listeners for Datasets (events)
### Use case/motivation
As Airflow administrators, we would like to trigger some external processes based on all datasets being created/updated by our users. We came across the listeners for the dag runs and task instances (which are also useful), but are still missing listeners for datasets.
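The existing dag-run and task-instance listeners are hook callbacks dispatched on lifecycle events; the request is for the same shape around dataset events. A stdlib-only sketch of that shape, with hypothetical hook names (`on_dataset_created`/`on_dataset_changed` are illustrative, not a real Airflow API at the time of this report):

```python
from typing import Callable, Dict, List

class DatasetEventDispatcher:
    """Hypothetical sketch of dataset-event hooks, not Airflow's implementation."""

    def __init__(self) -> None:
        self._hooks: Dict[str, List[Callable[[str], None]]] = {
            "created": [], "changed": []
        }

    def on_dataset_created(self, fn: Callable[[str], None]) -> Callable[[str], None]:
        self._hooks["created"].append(fn)
        return fn

    def on_dataset_changed(self, fn: Callable[[str], None]) -> Callable[[str], None]:
        self._hooks["changed"].append(fn)
        return fn

    def emit(self, event: str, uri: str) -> None:
        for fn in self._hooks[event]:
            fn(uri)

dispatcher = DatasetEventDispatcher()
seen: List[str] = []
dispatcher.on_dataset_changed(seen.append)   # e.g. notify an external process
dispatcher.emit("changed", "s3://bucket/table")
print(seen)  # ['s3://bucket/table']
```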
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34327 | https://github.com/apache/airflow/pull/34418 | 31450bbe3c91246f3eedd6a808e60d5355d81171 | 9439111e739e24f0e3751350186b0e2130d2c821 | "2023-09-13T07:16:27Z" | python | "2023-11-13T14:15:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,298 | ["chart/templates/cleanup/cleanup-cronjob.yaml", "chart/values.yaml", "helm_tests/security/test_security_context.py"] | securityContext value for the cleanup container is not set properly | ### Official Helm Chart version
1.10.0 (latest released)
### Apache Airflow version
2.6.3
### Kubernetes Version
1.24.10
### Helm Chart configuration
```yaml
cleanup:
enabled: true
securityContexts:
container:
allowPrivilegeEscalation: false
```
### Docker Image customizations
No
### What happened
Although the securityContext value for the cleanup container was set in the values.yaml file, the value was not set on the container.
### What you think should happen instead
_No response_
### How to reproduce
```bash
helm repo add apache-airflow https://airflow.apache.org
helm pull apache-airflow/airflow --version 1.10.0
tar -zxvf ./airflow-1.10.0.tgz
cd airflow
helm template airflow . \
--set cleanup.enabled=true \
--set cleanup.securityContexts.container.allowPrivilegeEscalation=false \
| grep "# Source: airflow/templates/cleanup/cleanup-cronjob.yaml" -A99
```
### Anything else
There is no <code>containerSecurityContext</code> in cleanup-cronjob.yaml.
https://github.com/apache/airflow/blob/eed2901e877b32a211e0e74bc9d69fc11e552f2a/chart/templates/cleanup/cleanup-cronjob.yaml#L23-L28
But we can see <code>containerSecurityContext</code> in webserver-deployment.yaml
https://github.com/apache/airflow/blob/eed2901e877b32a211e0e74bc9d69fc11e552f2a/chart/templates/webserver/webserver-deployment.yaml#L29
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34298 | https://github.com/apache/airflow/pull/34351 | de92a81f002e6c1b3e74ad9d074438b65acb87b6 | 847d2c3b37210113dce3cf5da0344a2fcfcd9d12 | "2023-09-12T08:35:13Z" | python | "2023-09-13T19:53:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,283 | ["airflow/providers/docker/operators/docker.py", "docs/spelling_wordlist.txt", "tests/providers/docker/operators/test_docker.py"] | DockerOperator Expose `ulimits` create host config parameter | ### Description
This issue should be resolved by simply adding `ulimits` parameter to `DockerOperator` that is directly passed to [`create_host_config`](https://docker-py.readthedocs.io/en/stable/api.html#docker.api.container.ContainerApiMixin.create_host_config).
### Use case/motivation
Currently applying custom `ulimits` is not possible with `DockerOperator`, making it necessary to do some hacky entrypoint workarounds instead.
By implementing this feature, one should be able to set Ulimits directly by giving a list of [`Ulimit`](https://docker-py.readthedocs.io/en/stable/api.html#docker.types.Ulimit)-instances to `DockerOperator`.
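A sketch of how such a passthrough could look: a parameter stored on the operator and forwarded verbatim into the host-config call. The class and method names below (`FakeDockerOperator`, `create_host_config` stand-in) are assumptions for illustration, not the provider's actual code; in docker-py the ulimit entries would be `docker.types.Ulimit` instances rather than plain dicts:

```python
class FakeAPIClient:
    def create_host_config(self, **kwargs):
        # docker-py would turn these kwargs into the HostConfig payload.
        return kwargs

class FakeDockerOperator:
    def __init__(self, image, ulimits=None):
        self.image = image
        self.ulimits = ulimits or []  # e.g. a list of docker.types.Ulimit

    def build_host_config(self, client):
        # The proposed change: forward self.ulimits straight through.
        return client.create_host_config(auto_remove=True, ulimits=self.ulimits)

op = FakeDockerOperator(
    image="python:3.11",
    ulimits=[{"Name": "nofile", "Soft": 1024, "Hard": 2048}],
)
config = op.build_host_config(FakeAPIClient())
print(config["ulimits"])  # -> [{'Name': 'nofile', 'Soft': 1024, 'Hard': 2048}]
```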
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34283 | https://github.com/apache/airflow/pull/34284 | 3fa9d46ec74ef8453fcf17fbd49280cb6fb37cef | c668245b5740279c08cbd2bda1588acd44388eb3 | "2023-09-11T21:10:57Z" | python | "2023-09-12T21:25:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,265 | ["airflow/www/security.py"] | access control logs | ### Apache Airflow version
2.7.1
### What happened
After upgrading to 2.7.1 from 2.5.3 we encountered those logs in the scheduler logs:
```[2023-09-11T05:52:34.865+0000] {logging_mixin.py:151} INFO - [2023-09-11T05:52:34.865+0000] {security.py:708} INFO - Not syncing DAG-level permissions for DAG 'DAG:<dag_name>' as access control is unset.```
Since we have multiple DAGs in the same DAG file (>40), and the DAG processor runs frequently, we get a lot of redundant log lines, which creates unnecessary data to store.
Adding the setting `access_control={}` did not change the logs.
We don't want to manage the access control for each DAG, as we have a high-level approach, but we also don't want these logs.
### What you think should happen instead
Add a setting that allows suppressing these info logs about syncing permissions.
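Until such a setting exists, one workaround is a standard `logging.Filter` attached via a custom log config that drops this specific message. A self-contained sketch; the message substring is taken from the log line quoted above:

```python
import logging

class DropPermissionSyncNoise(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Return False to drop the record.
        return "Not syncing DAG-level permissions" not in record.getMessage()

logger = logging.getLogger("demo.security")
handler = logging.Handler()
captured = []
handler.emit = lambda record: captured.append(record.getMessage())
handler.addFilter(DropPermissionSyncNoise())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Not syncing DAG-level permissions for DAG 'DAG:x' as access control is unset.")
logger.info("some other useful message")
print(captured)  # -> ['some other useful message']
```

In a real deployment the filter would be referenced from `logging_config_class` so it applies to the scheduler's handlers.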
### How to reproduce
Airflow Version - 2.7.1
Python - 3.11.5
Create a DAg without setting access_control and watch logs. e.g. `XXX/_data/scheduler/2023-09-11/dag_name.py.log`
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34265 | https://github.com/apache/airflow/pull/34268 | f0467c9fd65e7146b44fc8f9fccb9ad750592371 | 8035aee8da3c4bb7b9c01cfdc7236ca01d658bae | "2023-09-11T07:02:25Z" | python | "2023-09-11T11:19:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,232 | ["airflow/cli/commands/connection_command.py"] | Standardise output of "airflow connections export" to those of "airflow variables export" and "airflow users export" | ### Description
The three commands:
```
airflow users export /opt/airflow/dags/users.json &&
airflow variables export /opt/airflow/dags/variables.json &&
airflow connections export /opt/airflow/dags/connections.json
```
give back:
```
7 users successfully exported to /opt/airflow/dags/users.json
36 variables successfully exported to /opt/airflow/dags/variables.json
Connections successfully exported to /opt/airflow/dags/connections.json.
```
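The change requested is essentially to count the exported connections before printing, mirroring the other two commands. A rough sketch of the message construction; the variable names are assumptions:

```python
def export_message(connections, file_path):
    # Mirrors "N variables successfully exported to <path>"
    return f"{len(connections)} connections successfully exported to {file_path}"

print(export_message(["conn_a", "conn_b"], "/opt/airflow/dags/connections.json"))
# -> 2 connections successfully exported to /opt/airflow/dags/connections.json
```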
### Use case/motivation
Standardise output of "airflow connections export" to those of "airflow variables export" and "airflow users export"
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34232 | https://github.com/apache/airflow/pull/34640 | 3189ebee181beecd5a1243a4998bc355b648dc6b | a5f5e2fc7f7b7f461458645c8826f015c1fa8d78 | "2023-09-09T07:33:50Z" | python | "2023-09-27T09:15:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,227 | ["airflow/models/dag.py"] | 100+ DAGs fail to import after update to 2.7.1 due to "params without default values" | ### Apache Airflow version
2.7.1
### What happened
120 DAGs broke when I updated to Airflow 2.7.1. I had to revert to 2.7.0 again, so the tests below are from a local deployment.
The error message is the same for all DAGs:
```python
Broken DAG: [/opt/airflow/dags/dag-name.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/models/dag.py", line 640, in __init__
self.validate_schedule_and_params()
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/models/dag.py", line 3289, in validate_schedule_and_params
raise AirflowException(
airflow.exceptions.AirflowException: DAG Schedule must be None, if there are any required params without default values
```
After testing, I found that if I change the blob_prefix param to "" instead of None, the DAG imports successfully. I have various other params which are None that I will probably need to assign default values for (in reality, the function which Airflow runs handles cases where the value is None, but I guess I need to add a default in the DAG). All 120 DAGs with errors had a field like this in the `params` dictionary passed to the DAG:
```python
params = {
"exclude_pattern": "*errors*",
"blob_prefix": None,
```
### What you think should happen instead
These are legacy DAGs which have run for a long time without changes (old Airflow instance), so I did not expect an upgrade to break them.
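The check that now fails can be approximated as follows. This is a simplified stdlib sketch of the validation behavior observed in 2.7.1, not Airflow's actual code: a param whose value is `None` behaves like a required param without a usable default, which is rejected on any scheduled DAG:

```python
class ValidationError(Exception):
    pass

def validate_schedule_and_params(schedule, params):
    # Simplified: in 2.7.1 a None value behaves like a required param
    # with no usable default.
    has_required_without_default = any(value is None for value in params.values())
    if schedule is not None and has_required_without_default:
        raise ValidationError(
            "DAG Schedule must be None, if there are any required params without default values"
        )

try:
    validate_schedule_and_params("41 2 * * *", {"blob_prefix": None})
except ValidationError as err:
    print(err)

# Workaround from the report: give the param a concrete default instead of None.
validate_schedule_and_params("41 2 * * *", {"blob_prefix": ""})  # passes
```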
### How to reproduce
I believe the root cause is passing a param with the value None to the DAG params:
```python
params = {
    "subfolder": None,
    "conn_id": "gcs_conn_id",
}
@dag(
dag_id="dag-example",
start_date=datetime(2023, 2, 2, tz="America/New_York"),
schedule="41 2 * * *",
params=params,
)
```
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
Providers info
apache-airflow-providers-celery | 3.3.3
apache-airflow-providers-common-sql | 1.7.1
apache-airflow-providers-docker | 3.7.4
apache-airflow-providers-ftp | 3.5.1
apache-airflow-providers-google | 10.7.0
apache-airflow-providers-http | 4.5.1
apache-airflow-providers-imap | 3.3.1
apache-airflow-providers-microsoft-azure | 6.3.0
apache-airflow-providers-mysql | 5.3.0
apache-airflow-providers-odbc | 4.0.0
apache-airflow-providers-openlineage | 1.0.2
apache-airflow-providers-postgres | 5.6.0
apache-airflow-providers-redis | 3.3.1
apache-airflow-providers-sftp | 4.6.0
apache-airflow-providers-sqlite | 3.4.3
apache-airflow-providers-ssh | 3.7.2
### Deployment
Other Docker-based deployment
### Deployment details
Docker Swarm using docker stack deploy
### Anything else
DAGs fail to import when deployed on 2.7.1
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34227 | https://github.com/apache/airflow/pull/34248 | 13d2f4a7f1e347607122b65d5b45ef0504a8640b | 3e340797ab98a06b51b2930610b0abb0ad20a750 | "2023-09-08T23:56:06Z" | python | "2023-09-10T07:03:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,210 | ["airflow/api_connexion/endpoints/event_log_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_event_log_endpoint.py"] | Add filters to get event logs | ### Description
In the REST API, there is an endpoint to get event (audit) logs. It has sorting and pagination but not filtering. It would be useful to filter by `when` (before and after), `dag_id`, `task_id`, `owner`, `event`
<img width="891" alt="Screenshot 2023-09-08 at 1 30 35 PM" src="https://github.com/apache/airflow/assets/4600967/4280b9ed-2c73-4a9c-962c-4ede5cb140fe">
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34210 | https://github.com/apache/airflow/pull/34417 | db89a33b60f46975850e3f696a7e05e61839befc | 3189ebee181beecd5a1243a4998bc355b648dc6b | "2023-09-08T12:31:57Z" | python | "2023-09-27T07:58:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,177 | ["airflow/providers/amazon/aws/operators/emr.py", "airflow/providers/amazon/aws/triggers/emr.py", "airflow/providers/amazon/aws/waiters/emr.json", "tests/providers/amazon/aws/hooks/test_emr.py", "tests/providers/amazon/aws/triggers/test_emr.py", "tests/providers/amazon/aws/triggers/test_emr_trigger.py", "tests/providers/amazon/aws/waiters/test_custom_waiters.py"] | EmrAddStepsOperator does not work with `deferrable=True` | ### Apache Airflow version
2.7.0
### What happened
When using the `EmrAddStepsOperator`, trying to use `deferrable=True` causes an exception from the Amazon provider:
```
[2023-09-07, 16:02:56 UTC] {taskinstance.py:1720} ERROR - Trigger failed:
Traceback (most recent call last):
File "/home/***/.local/lib/python3.11/site-packages/***/jobs/triggerer_job_runner.py", line 527, in cleanup_finished_triggers
result = details["task"].result()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/***/.local/lib/python3.11/site-packages/***/jobs/triggerer_job_runner.py", line 599, in run_trigger
async for event in trigger.run():
File "/home/***/.local/lib/python3.11/site-packages/***/providers/amazon/aws/triggers/emr.py", line 74, in run
for attempt in range(1, 1 + self.max_attempts):
~~^~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for +: 'int' and 'str'
```
The problem seems to stem from the `max_attempts` value being stored as an `int` when the `EmrAddStepsTrigger` class is instantiated, but when `run` is called later, it becomes a `str`. The simple fix is to wrap the access with `int()`, but I'm not sure if that's the _correct_ fix. For some reason, `EmrAddStepsTrigger` extends `BaseTrigger` from Airflow instead of `AwsBaseWaiterTrigger` like the rest of the EMR triggers even though it uses an AWS waiter. Perhaps the more appropriate fix would be to rewrite this trigger to use the AWS-specific base class.
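The failure mode is easy to reproduce without Airflow: the value presumably changes type when trigger kwargs are serialized to the database and read back, and once `max_attempts` is a string, `1 + self.max_attempts` raises exactly this TypeError. A stdlib sketch, including the `int()` guard mentioned above:

```python
def run_waiter_loop(max_attempts):
    attempts = []
    for attempt in range(1, 1 + max_attempts):  # fails if max_attempts is a str
        attempts.append(attempt)
    return attempts

serialized_kwargs = {"max_attempts": "3"}  # value came back as a string

try:
    run_waiter_loop(serialized_kwargs["max_attempts"])
except TypeError as err:
    print(err)  # unsupported operand type(s) for +: 'int' and 'str'

# The minimal guard: coerce on access.
print(run_waiter_loop(int(serialized_kwargs["max_attempts"])))  # -> [1, 2, 3]
```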
### What you think should happen instead
The `EmrAddStepOperator` should work when deferred
### How to reproduce
Minimal DAG, run using the `apache/airflow:2.7.0-python3.11` image with the provided docker-compose.yaml file:
```python
from datetime import datetime
from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
with DAG(
"emr_defer_test",
start_date=datetime(2021, 1, 1),
catchup=False,
schedule_interval=None,
) as dag:
# this will work on any EMR cluster, put your cluster ID here
job_flow_id = "j-1M0574ZKPRD9H"
EmrAddStepsOperator(
task_id="run_deferred",
job_flow_id=job_flow_id,
steps=[
{
"Name": "hello-world",
"ActionOnFailure": "CONTINUE",
"HadoopJarStep": {
"Jar": "command-runner.jar",
"Args": "sleep 60".split(),
}
}
],
deferrable=True,
)
```
### Operating System
Docker on MacOS 13.5.1 via colima
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-celery==3.3.2
apache-airflow-providers-common-sql==1.7.0
apache-airflow-providers-ftp==3.5.0
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-sqlite==3.4.3
```
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34177 | https://github.com/apache/airflow/pull/34216 | 050a47add822cde6d9abcd609df59c98caae13b0 | f0467c9fd65e7146b44fc8f9fccb9ad750592371 | "2023-09-07T16:16:41Z" | python | "2023-09-11T11:00:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,170 | ["airflow/providers/google/cloud/operators/compute.py", "tests/providers/google/cloud/operators/test_compute.py"] | ComputeEngineInsertInstanceOperator doesn't respect jinja-templated instance name when given in body argument | ### Apache Airflow version
2.7.0
### What happened
when initializing the operator with a `body` that has a jinja-templated `name` field and without a `resource_id`, the creation fails because the instance name doesn't fit the required name format - but only because the value doesn't go through jinja parsing/rendering.
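One common way this class of bug appears (whether or not it is the exact cause here) is reading a templated field before template rendering runs, for example in `__init__`, so the raw `{{ ... }}` string leaks into an API call. A minimal stdlib sketch with a toy renderer standing in for Jinja:

```python
class ToyOperator:
    template_fields = ("body",)

    def __init__(self, body, resource_id=None):
        self.body = body
        # Bug pattern: capturing the name before rendering happens.
        self.resource_id = resource_id or body["name"]

    def render(self, context):
        # Stand-in for Jinja rendering of template_fields.
        self.body = {
            k: (context.get(v[2:-2].strip(), v) if isinstance(v, str) and v.startswith("{{") else v)
            for k, v in self.body.items()
        }

    def execute_buggy(self):
        return self.resource_id          # still "{{ dataset }}"

    def execute_fixed(self):
        return self.body["name"]         # read after rendering

op = ToyOperator(body={"name": "{{ dataset }}"})
op.render({"dataset": "test"})
print(op.execute_buggy())  # -> {{ dataset }}
print(op.execute_fixed())  # -> test
```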
### What you think should happen instead
full error log:
```
[2023-09-06, 09:22:41 UTC] {taskinstance.py:1768} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/sigal/.local/share/virtualenvs/ingest-airflow-rHPaIjXZ/lib/python3.8/site-packages/airflow/providers/google/cloud/operators/compute.py", line 210, in execute
existing_instance = hook.get_instance(
File "/home/sigal/.local/share/virtualenvs/ingest-airflow-rHPaIjXZ/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 468, in inner_wrapper
return func(self, *args, **kwargs)
File "/home/sigal/.local/share/virtualenvs/ingest-airflow-rHPaIjXZ/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/compute.py", line 350, in get_instance
instance = client.get(
File "/home/sigal/.local/share/virtualenvs/ingest-airflow-rHPaIjXZ/lib/python3.8/site-packages/google/cloud/compute_v1/services/instances/client.py", line 2812, in get
response = rpc(
File "/home/sigal/.local/share/virtualenvs/ingest-airflow-rHPaIjXZ/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
File "/home/sigal/.local/share/virtualenvs/ingest-airflow-rHPaIjXZ/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
File "/home/sigal/.local/share/virtualenvs/ingest-airflow-rHPaIjXZ/lib/python3.8/site-packages/google/cloud/compute_v1/services/instances/transports/rest.py", line 2485, in __call__
raise core_exceptions.from_http_response(response)
google.api_core.exceptions.BadRequest: 400 GET https://compute.googleapis.com/compute/v1/projects/ai2-israel/zones/us-west1-b/instances/%7B%7B%20params.get('dataset')%20%7D%7D: Invalid value for field 'instance': '{{ params.get('dataset') }}'. Must be a match of regex '[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?|[1-9][0-9]{0,19}'
```
### How to reproduce
```python
with DAG(
dag_id="blob",
schedule_interval=None,
start_date=datetime(2023, 1, 1),
catchup=False,
params={
"dataset": Param(default="test"),
},
render_template_as_native_obj=True,
) as dag:
dataset = "{{ params.get('dataset') }}"
create_vm_instance = CustomComputeEngineInsertInstanceOperator(
task_id="create_vm_instance",
project_id=OUR_PROJECT_ID,
zone=OUR_LOCATION,
body={
"name": dataset,
"machine_type": f"zones/{OUR_LOCATION}/machineTypes/n1-standard-96",
"source_machine_image": f"global/machineImages/{OUR_IMAGE}",
"disks": [
{
"boot": True,
"device_name": dataset,
"initialize_params": {"disk_size_gb": "10", "disk_type": f"zones/{OUR_LOCATION}/diskTypes/pd-ssd"},
}
],
"network_interfaces": [
{
"access_configs": [{"name": "External NAT", "network_tier": "PREMIUM"}],
"stack_type": "IPV4_ONLY",
"subnetwork": f"regions/{OUR_REGION}/subnetworks/default",
}
],
}
)
```
### Operating System
Ubuntu 20.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34170 | https://github.com/apache/airflow/pull/34171 | 8035aee8da3c4bb7b9c01cfdc7236ca01d658bae | 0110b22a603f86fbc6f1311ef1c9a23505ca6f87 | "2023-09-07T11:17:30Z" | python | "2023-09-11T13:12:50Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,162 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"] | `BigQueryInsertJobOperator` doesn't handle job errors caused by running tasks in deferrable mode | ### Apache Airflow version
2.7.0
### What happened
When a BigQuery job running in deferrable mode ends before completion - i.e. cancelled via the UI or terminated due to a lack of resources - the precise error is not returned in the logs. Instead, a KeyError from within the `execute_complete` function is returned.
```
[2023-09-07, 09:17:52 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 2874, in execute_complete
raise AirflowException(event["message"])
KeyError: 'message'
```
https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py#L2874-L2887
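Independently of why the trigger event lacks a "message" key, indexing the event directly turns a job failure into an opaque KeyError. A hedged sketch of a more defensive pattern (not the provider's actual code) that surfaces the whole event when the key is missing:

```python
class AirflowException(Exception):
    pass

def execute_complete(event):
    if event.get("status") == "success":
        return event.get("records")
    # Defensive: fall back to the whole event so the real failure stays visible.
    raise AirflowException(event.get("message", f"Trigger returned error event: {event}"))

try:
    execute_complete({"status": "error"})  # no "message" key, as in the report
except AirflowException as err:
    print(err)  # -> Trigger returned error event: {'status': 'error'}
```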
### What you think should happen instead
When not running in deferrable mode and the job is cancelled or fails during execution, `_handle_job_error` is called, resulting in the following output.
```
[2023-09-07, 09:16:35 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 2848, in execute
job.result(timeout=self.result_timeout, retry=self.result_retry)
File "/usr/local/lib/python3.10/site-packages/google/cloud/bigquery/job/query.py", line 1499, in result
do_get_result()
File "/usr/local/lib/python3.10/site-packages/google/cloud/bigquery/job/query.py", line 1489, in do_get_result
super(QueryJob, self).result(retry=retry, timeout=timeout)
File "/usr/local/lib/python3.10/site-packages/google/cloud/bigquery/job/base.py", line 728, in result
return super(_AsyncJob, self).result(timeout=timeout, **kwargs)
File "/usr/local/lib/python3.10/site-packages/google/api_core/future/polling.py", line 261, in result
raise self._exception
google.api_core.exceptions.GoogleAPICallError: 200 Job execution was cancelled: User requested cancellation
```
It would appear that, even though `_handle_job_error` should get called when in deferrable mode, this doesn't happen if the job was still running when it was terminated.
https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/bigquery.py#L2860-L2872
### How to reproduce
The code below will replicate the issue; however, you need to provide a query that runs long enough for you to cancel the job in the UI.
Queries that have syntax errors never get started and are not affected by this.
```
from datetime import datetime
from airflow import models
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
with models.DAG(
dag_id='bq_insert_job',
start_date=datetime(2023, 8, 31),
catchup=False,
schedule='0 0 * * *',
) as dag:
test1 = BigQueryInsertJobOperator(
task_id=f'test1',
sql='<INSERT LONG RUNNING QUERY>',
deferrable=False,
)
test2 = BigQueryInsertJobOperator(
task_id=f'test2',
sql=f'<INSERT LONG RUNNING QUERY>',
deferrable=True,
)
```
### Operating System
n/a
### Versions of Apache Airflow Providers
apache-airflow-providers-google==10.6.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34162 | https://github.com/apache/airflow/pull/34208 | f5c2748c3346bdebf445afd615657af8849345dd | 774125ae253611627229509e672518ce0a58cf2e | "2023-09-07T09:58:02Z" | python | "2023-09-09T06:56:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,147 | ["docs/apache-airflow/administration-and-deployment/cluster-policies.rst"] | Update docs to mark Pluggy interface as stable rather than experimental | ### What do you see as an issue?
[Pluggy interface has been listed as experimental](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/cluster-policies.html#:~:text=By%20using%20a,experimental%20feature.) since Airflow 2.6. Requesting it be listed as stable
### Solving the problem
Change the documentation to say it is now stable
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34147 | https://github.com/apache/airflow/pull/34174 | d29c3485609d28e14a032ae5256998fb75e8d49f | 88623acae867c2a9d34f5030809102379080641a | "2023-09-06T22:37:12Z" | python | "2023-09-15T06:49:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,141 | ["airflow/config_templates/config.yml"] | Description needed for default resource configurations | ### What do you see as an issue?
Several of the [operator configuration options](https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#operators) don't have descriptions which is confusing.
### Solving the problem
This can be solved by adding a description to each of these fields, [for example default_ram](https://github.com/apache/airflow/blob/2.7.0/airflow/config_templates/config.yml#L1290)
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34141 | https://github.com/apache/airflow/pull/34438 | 03db0f6b785a4983c09d6eec7433cf28f7759610 | 3a4712e306e123c4e78251968b7f2975b73bdbaf | "2023-09-06T18:07:38Z" | python | "2023-09-17T12:27:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,132 | ["airflow/www/static/js/dag/Main.tsx", "airflow/www/static/js/dag/details/index.tsx"] | Always show details panel when user navigates to Graph tab | ### Description
When a user navigates to `/grid?tab=graph` we should show the details panel with the graph tab selected, even when the user hid the details panel previously.
### Use case/motivation
People already complained asking me where the graph view went after the latest Airflow upgrade (2.7.0). I can understand that this may be confusing, especially when they navigate to the graph from a DAG run link.
![Screen Capture on 2023-09-06 at 11-52-09](https://github.com/apache/airflow/assets/1842905/db935269-20de-4f77-8899-3ca993bf464f)
It would be more user friendly if we could auto-open the details panel when a user navigates to the graph tab, so they immediately get what they expect.
Happy to create the PR for this, I believe it should be just a tiny change. However, I'm opening this feature request first, just to make sure I'm not conflicting with any other plans ([AIP-39?](https://github.com/apache/airflow/projects/9)).
### Related issues
https://github.com/apache/airflow/issues/29852
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34132 | https://github.com/apache/airflow/pull/34136 | 5eea4e632c8ae50812e07b1d844ea4f52e0d6fe1 | 0b319e79ec97f6f4a8f8ce55119b6539138481cd | "2023-09-06T10:15:30Z" | python | "2023-09-07T09:58:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,099 | ["airflow/providers/elasticsearch/log/es_task_handler.py", "tests/providers/elasticsearch/log/test_es_task_handler.py"] | Elasticsearch.__init__() got an unexpected keyword argument 'use_ssl' | ### Apache Airflow version
2.7.0
### What happened
When upgrading apache-airflow-providers-elasticsearch to the newest provider (5.0.1), Airflow is unable to spin up. The scheduler and airflow-migrations both crash on the following error:
```console
....................
ERROR! Maximum number of retries (20) reached.
Last check result:
$ airflow db check
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/local/lib/python3.11/logging/config.py", line 573, in configure
handler = self.configure_handler(handlers[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/logging/config.py", line 758, in configure_handler
result = factory(**kwargs)
^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/elasticsearch/log/es_task_handler.py", line 110, in __init__
self.client = elasticsearch.Elasticsearch(host, **es_kwargs) # type: ignore[attr-defined]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Elasticsearch.__init__() got an unexpected keyword argument 'use_ssl'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 5, in <module>
from airflow.__main__ import main
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/__init__.py", line 68, in <module>
settings.initialize()
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/settings.py", line 524, in initialize
LOGGING_CLASS_PATH = configure_logging()
^^^^^^^^^^^^^^^^^^^
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/logging_config.py", line 74, in configure_logging
raise e
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/logging_config.py", line 69, in configure_logging
dictConfig(logging_config)
File "/usr/local/lib/python3.11/logging/config.py", line 823, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/lib/python3.11/logging/config.py", line 580, in configure
raise ValueError('Unable to configure handler '
ValueError: Unable to configure handler 'task'
Exception ignored in atexit callback: <function shutdown at 0x7fb5255f22a0>
Traceback (most recent call last):
File "/usr/local/lib/python3.11/logging/__init__.py", line 2193, in shutdown
h.close()
File "/home/airflow/.local/lib/python3.11/site-packages/airflow/providers/elasticsearch/log/es_task_handler.py", line 396, in close
if not self.mark_end_on_close or getattr(self, "ctx_task_deferred", None):
^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'ElasticsearchTaskHandler' object has no attribute 'mark_end_on_close'
2023-09-05T06:04:58.154163010Z
```
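The traceback is the standard Python behavior for passing a keyword that a function signature no longer accepts: elasticsearch-py 8.x removed `use_ssl` from `Elasticsearch.__init__` (the scheme is carried in the URL instead). A stdlib sketch of both the failure and a signature-based kwargs-filtering mitigation; the client class here is a stand-in, not the real library:

```python
import inspect

class FakeElasticsearch8:
    def __init__(self, host, *, verify_certs=True, timeout=10):
        self.host, self.verify_certs, self.timeout = host, verify_certs, timeout

es_kwargs = {"use_ssl": True, "timeout": 30}  # legacy 7.x-style kwargs

try:
    FakeElasticsearch8("http://localhost:9200", **es_kwargs)
except TypeError as err:
    print(err)  # ... got an unexpected keyword argument 'use_ssl'

# Mitigation pattern: keep only the kwargs the installed client accepts.
accepted = inspect.signature(FakeElasticsearch8.__init__).parameters
filtered = {k: v for k, v in es_kwargs.items() if k in accepted}
client = FakeElasticsearch8("http://localhost:9200", **filtered)
print(filtered)  # -> {'timeout': 30}
```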
### What you think should happen instead
Deploy Airflow without any crashing schedulers
### How to reproduce
I'm currently running Airflow 2.7.0 on AKS and am trying to upgrade the elasticsearch provider to 5.0.1.
The following config is present for elasticsearch:
```
[elasticsearch]
frontend = https://*redacted*.kb.westeurope.azure.elastic-cloud.com/app/discover#/?_a=(columns:!(message),filters:!(),index:'6551970b-aa72-4e2a-b255-d296a6fcdc09',interval:auto,query:(language:kuery,query:'log_id:"{log_id}"'),sort:!(log.offset,asc))&_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-1m,to:now))
json_format = True
log_id_template = {dag_id}_{task_id}_{execution_date}_{try_number}
[elasticsearch_configs]
max_retries = 3
retry_timeout = True
timeout = 30
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==7.4.2
apache-airflow-providers-docker==3.7.4
apache-airflow-providers-microsoft-azure==4.3.0
apache-airflow-providers-elasticsearch==5.0.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34099 | https://github.com/apache/airflow/pull/34119 | 9babb065d077e89f779aaee07f0b673835f5700c | f7f3b675ecd40e32e458b71b5066864f866a60c8 | "2023-09-05T06:17:39Z" | python | "2023-09-07T00:43:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,092 | ["airflow/www/views.py"] | Can Delete on Task Instances permission is required to clear task | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
I was changing the roles to remove all delete access. But as soon as I remove the "Can Delete on Task Instances" permission, clearing a task takes the user to a blank page.
Shouldn't clear task access work without Delete access?
### What you think should happen instead
Can Delete Task Instance should not affect clear task access.
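The underlying issue is just which permission the clear endpoint is mapped to. A toy sketch of that mapping; the permission names mirror the UI, but the mapping itself is illustrative, not Airflow's access-control code:

```python
ROLE_PERMS = {"Op-no-delete": {("can_edit", "Task Instances"), ("can_read", "Task Instances")}}

def can_access(role, required):
    return required in ROLE_PERMS[role]

# If "clear" is mapped to the delete permission, the trimmed role is blocked:
print(can_access("Op-no-delete", ("can_delete", "Task Instances")))  # -> False
# Mapping "clear" to edit instead keeps it usable without delete access:
print(can_access("Op-no-delete", ("can_edit", "Task Instances")))   # -> True
```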
### How to reproduce
Copy the Op role, remove all Can Delete permissions, and assign that role to a user.
That user will now not be able to clear task instances.
Add the "Can Delete on Task Instances" permission back to the role, and the user will regain access to the clear task instances page.
### Operating System
Redhat 8
### Versions of Apache Airflow Providers
2.6.1
### Deployment
Docker-Compose
### Deployment details
airflow 2.6.1 running on Docker with postgres 13 and pgbouncer.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34092 | https://github.com/apache/airflow/pull/34123 | ba524ce4b2c80976fdb7ff710750416bc01d250d | 2f5777c082189e6495f0fea44bb9050549c0056b | "2023-09-05T00:48:13Z" | python | "2023-09-05T23:36:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,086 | [".github/workflows/ci.yml", "dev/breeze/SELECTIVE_CHECKS.md", "dev/breeze/src/airflow_breeze/utils/selective_checks.py", "dev/breeze/tests/test_selective_checks.py"] | Replace the usage of `--package-filter` with short package ids in CI | ### Body
Our CI generates a `--package-filter` list in selective checks in order to pass it to the `build-docs` and `publish-docs` commands. However, after implementing #33876 and #34085 we could switch both commands to use the short package ids we already have (almost - there should most likely be a slight modification for `apache-airflow`, `apache-airflow-providers`, and `helm-chart`). This should also simplify some of the unit tests we have.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34086 | https://github.com/apache/airflow/pull/35068 | d87ca903742cc0f75df165377e5f8d343df6dc03 | 8d067129d5ba20a9847d5d70b368b3dffc42fe6e | "2023-09-04T19:31:19Z" | python | "2023-10-20T00:01:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,085 | ["BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/release_management_commands.py", "dev/breeze/src/airflow_breeze/params/doc_build_params.py", "dev/breeze/src/airflow_breeze/utils/general_utils.py", "dev/breeze/src/airflow_breeze/utils/publish_docs_helpers.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_release-management.svg", "images/breeze/output_release-management_publish-docs.svg"] | Replace --package-filter usage for `publish-docs` breeze command with short package names | ### Body
Same as https://github.com/apache/airflow/issues/33876 but for `publish-docs` command.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34085 | https://github.com/apache/airflow/pull/34253 | 58cce7d88b1d57e815e04e5460eaaa71173af14a | 18eed9191dbcb84cd6099a97d5ad778ac561cd4d | "2023-09-04T19:25:13Z" | python | "2023-09-10T11:45:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,069 | ["airflow/www/views.py"] | Delete record of dag run is not being logged in audit logs when clicking on delete button in list dag runs | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
When listing DAG runs, selecting runs with the checkboxes, and deleting them via the Actions menu, the audit logs record an event with "action_muldelete".
But if, instead of using the checkboxes and Actions menu, we click the delete button directly, there is no audit log entry for the deleted DAG run.
### What you think should happen instead
There should be at least an identifiable audit event in both scenarios. For one it is present, but for the other there is no audit entry.
### How to reproduce
Go to List Dag Runs.
1. Select one DAG run using its checkbox, go to Actions, choose Delete, and confirm. Check the audit logs; there will be an "action_muldelete" event.
2. Go to List Dag Runs and click the delete button shown beside the checkbox, adjacent to the edit record button. Check the audit logs; there will be no event for this action.
### Operating System
Redhat 8
### Versions of Apache Airflow Providers
airflow 2.6.1 running on Docker with postgres 13 and pgbouncer.
### Deployment
Docker-Compose
### Deployment details
airflow 2.6.1 running on Docker with postgres 13 and pgbouncer.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34069 | https://github.com/apache/airflow/pull/34090 | d81fe093c266ef63d4e3f0189eb8c867bff865f4 | 988632fd67abc10375ad9fe2cbd8c9edccc609a5 | "2023-09-04T08:07:37Z" | python | "2023-09-05T07:49:56Z" |