---
language:
- en
tags:
- AI
- assistants
- domain
pretty_name: Agentic data access benchmark
configs:
- config_name: default
  data_files:
  - split: test
    path: "questions.tsv"
---

# Agentic Data Access Benchmark (ADAB)

The Agentic Data Access Benchmark is a set of real-world questions over a few "closed domains", designed to illustrate the evaluation of closed-domain AI assistants/agents. Closed domains are domains whose data is not implicitly available in the LLM because it resides in secure or private systems, e.g. enterprise databases or SaaS applications, so AI solutions require mechanisms to connect an LLM to such data.

If you are evaluating an AI product or building your own AI architecture over closed domains, you can use these questions (and the nature of these questions) to understand the capabilities of your system and qualitatively measure the performance of your assistants/agents.

ADAB was created because of severe shortcomings found in closed-domain assistants in the wild. We found that, apart from a few basic canned questions or workflows, these assistants struggled to do anything new. This turned out to be because the assistant is not connected to sufficient data and is unable to perform complex or sequential operations over that data. We call the ability of an AI system, given a description of data, to agentically use and operate on that data "agentic data access".
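## Usage

Since the card's config exposes `questions.tsv` as the `test` split, the questions can be loaded directly with the `datasets` library. A minimal sketch; the Hub repo id below is assumed from the GitHub URL and may differ.

```python
from datasets import load_dataset

# Assumed Hub repo id -- replace with the actual dataset id for ADAB.
ds = load_dataset("hasura/agentic-data-access-benchmark", split="test")

# Each row is one benchmark question parsed from questions.tsv;
# inspect the first example to see the available columns.
print(ds[0])
```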
## Learn more

Learn more about agentic data access and the benchmark here: https://github.com/hasura/agentic-data-access-benchmark