ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery
Abstract
The advancement of large language models (LLMs) has sparked growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, generating both excitement and skepticism about the true capabilities of such agents. In this work, we argue that for an agent to fully automate scientific discovery, it must be able to complete all essential tasks in the workflow. Thus, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims about end-to-end automation. To this end, we present ScienceAgentBench, a new benchmark for evaluating language agents for data-driven scientific discovery. To ensure the scientific authenticity and real-world relevance of our benchmark, we extract 102 tasks from 44 peer-reviewed publications in four disciplines and engage nine subject matter experts to validate them. We unify the target output for every task to a self-contained Python program file and employ an array of evaluation metrics to examine the generated programs, execution results, and costs. Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its annotation quality and scientific plausibility. We also propose two effective strategies to mitigate data contamination concerns. Using our benchmark, we evaluate five open-weight and proprietary LLMs, each with three frameworks: direct prompting, OpenHands, and self-debug. Given three attempts for each task, the best-performing agent can only solve 32.4% of the tasks independently and 34.3% with expert-provided knowledge. These results underscore the limited capabilities of current language agents in generating code for data-driven discovery, let alone end-to-end automation of scientific research.
Community
AI agents will not replace human scientists, but they will become a powerful automation tool to assist scientists. I am proud to introduce ScienceAgentBench, a new benchmark carefully co-designed with subject matter experts to drive and track the progress of coding agents that directly assist scientists in their existing workflows!
Several highlights:
🌟 Scientific authenticity through co-design with subject matter experts
We ensure the authenticity of tasks in our benchmark by directly extracting them from peer-reviewed publications and engaging nine subject matter experts (incl. senior Ph.D. students and professors) from the respective disciplines to validate them. This approach also minimizes the sim-to-real gap between agents developed on our benchmark and real-world scenarios.
🌟 Rigorous graded evaluation
Reliable evaluation of language agents is notably difficult due to the open-endedness and complexity of data-driven discovery tasks. We first unify the target output for every task as a self-contained Python program, and then employ an array of evaluation metrics that examine the generated programs, execution results (e.g., rendered figures or test set predictions), and costs. We also provide step-by-step rubrics specific to each task to enable graded evaluation.
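To make the evaluation flow concrete, here is a minimal sketch of how one might check an agent-generated program: run it as a standalone script in a fresh working directory and verify both that it executes successfully and that it saves the expected output artifact (e.g., a predictions file or figure). This is an illustrative simplification, not ScienceAgentBench's actual harness; `evaluate_program` and the file names are hypothetical.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def evaluate_program(program_text: str, expected_output: str, timeout: int = 60) -> dict:
    """Run a candidate Python program in an isolated working directory.

    Returns whether the program executed without error and whether it
    produced the expected output file (two of the success criteria a
    graded evaluation could score).
    """
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "solution.py"
        script.write_text(program_text)
        try:
            proc = subprocess.run(
                [sys.executable, script.name],
                cwd=workdir,
                capture_output=True,
                text=True,
                timeout=timeout,
            )
            executed = proc.returncode == 0
        except subprocess.TimeoutExpired:
            executed = False
        produced = (Path(workdir) / expected_output).exists()
        return {"executed": executed, "output_saved": executed and produced}

# Example: a trivial "agent-generated" program that writes predictions to disk.
candidate = "open('pred.csv', 'w').write('id,label\\n1,0\\n')"
print(evaluate_program(candidate, "pred.csv"))  # → {'executed': True, 'output_saved': True}
```

A real harness would additionally score the saved artifact against a reference (e.g., comparing predictions or rendered figures) and apply the per-task rubric; the subprocess isolation shown here is the part that keeps execution results reproducible.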
🌟 Careful multi-stage quality control
Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its quality and scientific plausibility. We also propose two effective strategies to mitigate data contamination concerns due to LLM pre-training.