DEVAI-benchmark: commit a35b69e (parent: 15168a1), committed by tjpxiaoming

Update README.md (#1)


- Update README.md (e3f39e0ef2557fec8b60e1f49bbe57caa8df6df0)


Co-authored-by: mingchen zhuge <[email protected]>

Files changed (1):
  1. README.md (+13, -0)
README.md CHANGED
@@ -6,6 +6,19 @@ configs:
  - split: main
    path: "instances/*.json"
  ---
+
+ **GITHUB:** https://github.com/metauto-ai/agent-as-a-judge
+
+ > [!NOTE]
+ > Current evaluation techniques are often inadequate for advanced **agentic systems** due to their focus on final outcomes and labor-intensive manual reviews. To overcome this limitation, we introduce the **Agent-as-a-Judge** framework.
+
+
+ > [!IMPORTANT]
+ > As a **proof-of-concept**, we applied **Agent-as-a-Judge** to code generation tasks using **DevAI**, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. The results demonstrate that **Agent-as-a-Judge** significantly outperforms traditional evaluation methods, delivering reliable reward signals for scalable self-improvement in agentic systems.
+ >
+ > Check out the dataset on [Hugging Face 🤗](https://huggingface.co/DEVAI-benchmark).
+ > See how to use this dataset in the [guidelines](benchmark/devai/README.md).
+
  # DEVAI dataset
  <p align="center" width="100%">
  <img src="dataset_stats.png" align="center" width="84%"/>
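The `main` split and `instances/*.json` path in the frontmatter context above suggest the dataset can be pulled directly with the `datasets` library. Below is a minimal loading sketch; the repo id `DEVAI-benchmark/DEVAI` is an assumption, since this commit only links the organization page, and the expected counts come from the README text added in the diff.

```python
from datasets import load_dataset

# Assumed repo id; the README only links https://huggingface.co/DEVAI-benchmark,
# so adjust the id if the dataset lives under a different name.
devai = load_dataset("DEVAI-benchmark/DEVAI", split="main")

# Per the README above, DevAI contains 55 realistic AI development tasks
# with 365 hierarchical user requirements in total.
print(len(devai))   # number of task instances (expected: 55)
print(devai[0])     # one task record, loaded from instances/*.json
```

Because the config maps the split straight onto `instances/*.json`, the same records can also be read with a plain JSON parser after cloning the repository; the linked guidelines (`benchmark/devai/README.md`) cover usage in more detail.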