Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


CodeActAgent-Llama-2-7b - GGUF
- Model creator: https://huggingface.co/xingyaoww/
- Original model: https://huggingface.co/xingyaoww/CodeActAgent-Llama-2-7b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CodeActAgent-Llama-2-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q2_K.gguf) | Q2_K | 2.36GB |
| [CodeActAgent-Llama-2-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [CodeActAgent-Llama-2-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [CodeActAgent-Llama-2-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [CodeActAgent-Llama-2-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [CodeActAgent-Llama-2-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q3_K.gguf) | Q3_K | 3.07GB |
| [CodeActAgent-Llama-2-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [CodeActAgent-Llama-2-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [CodeActAgent-Llama-2-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [CodeActAgent-Llama-2-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q4_0.gguf) | Q4_0 | 3.56GB |
| [CodeActAgent-Llama-2-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [CodeActAgent-Llama-2-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [CodeActAgent-Llama-2-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q4_K.gguf) | Q4_K | 3.8GB |
| [CodeActAgent-Llama-2-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [CodeActAgent-Llama-2-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q4_1.gguf) | Q4_1 | 3.95GB |
| [CodeActAgent-Llama-2-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q5_0.gguf) | Q5_0 | 4.33GB |
| [CodeActAgent-Llama-2-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [CodeActAgent-Llama-2-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q5_K.gguf) | Q5_K | 4.45GB |
| [CodeActAgent-Llama-2-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [CodeActAgent-Llama-2-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q5_1.gguf) | Q5_1 | 4.72GB |
| [CodeActAgent-Llama-2-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q6_K.gguf) | Q6_K | 5.15GB |
| [CodeActAgent-Llama-2-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/xingyaoww_-_CodeActAgent-Llama-2-7b-gguf/blob/main/CodeActAgent-Llama-2-7b.Q8_0.gguf) | Q8_0 | 6.67GB |

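As a rough rule of thumb, pick the largest quant that fits your available memory with some headroom for the KV cache and runtime. A toy helper illustrating that choice, using the file sizes from the table above (the quant names and sizes are from this card; the function itself and the 1 GB headroom default are illustrative assumptions, not a recommendation from the quantizer):

```python
# Illustrative helper: pick the largest quant of CodeActAgent-Llama-2-7b
# that fits a given memory budget. Sizes (GB) come from the table above.
QUANT_SIZES_GB = {
    "Q2_K": 2.36, "IQ3_XS": 2.6, "IQ3_S": 2.75, "Q3_K_S": 2.75,
    "IQ3_M": 2.9, "Q3_K": 3.07, "Q3_K_M": 3.07, "Q3_K_L": 3.35,
    "IQ4_XS": 3.4, "Q4_0": 3.56, "IQ4_NL": 3.58, "Q4_K_S": 3.59,
    "Q4_K": 3.8, "Q4_K_M": 3.8, "Q4_1": 3.95, "Q5_0": 4.33,
    "Q5_K_S": 4.33, "Q5_K": 4.45, "Q5_K_M": 4.45, "Q5_1": 4.72,
    "Q6_K": 5.15, "Q8_0": 6.67,
}

def pick_quant(budget_gb, headroom_gb=1.0):
    """Return the largest quant whose file fits within budget_gb minus headroom."""
    usable = budget_gb - headroom_gb  # reserve room for KV cache / runtime
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= usable]
    return max(fitting)[1] if fitting else None

print(pick_quant(8.0))  # -> Q8_0 (everything fits on an 8 GB budget)
print(pick_quant(4.0))  # -> IQ3_M (largest file at or under 3.0 GB)
```

Larger quants generally preserve more of the original model's quality; Q4_K_M and Q5_K_M are common middle-ground choices.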



Original model description:
---
license: llama2
datasets:
- xingyaoww/code-act
language:
- en
tags:
- llm-agent
pipeline_tag: text-generation
---

<h1 align="center"> Executable Code Actions Elicit Better LLM Agents </h1>

<p align="center">
<a href="https://github.com/xingyaoww/code-act">💻 Code</a>
•
<a href="https://arxiv.org/abs/2402.01030">📃 Paper</a>
•
<a href="https://huggingface.co/datasets/xingyaoww/code-act" >🤗 Data (CodeActInstruct)</a>
•
<a href="https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1" >🤗 Model (CodeActAgent-Mistral-7b-v0.1)</a>
•
<a href="https://chat.xwang.dev/">🤖 Chat with CodeActAgent!</a>
</p>

We propose to use executable Python **code** to consolidate LLM agents' **act**ions into a unified action space (**CodeAct**).
Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations (e.g., code execution results) through multi-turn interactions.
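To make the loop concrete, here is a minimal sketch of a CodeAct-style interaction: each turn the agent emits Python code as its action, an interpreter executes it in a persistent namespace, and the captured output (or error) is returned as the observation for the next turn. This is not the authors' implementation; the scripted `actions` list is a stand-in for what the LLM would generate:

```python
import contextlib
import io

def execute_action(code, namespace):
    """Run one code action in a persistent namespace; return the observation."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, namespace)
    except Exception as e:
        return f"Error: {e!r}"  # errors become observations the agent can react to
    return buf.getvalue()

# Scripted stand-in for the LLM: the second action revises the first
# after observing its error, as in a real multi-turn CodeAct episode.
actions = [
    "nums = [3, 1, 4, 1, 5]\nprint(totl(nums))",  # typo -> NameError observation
    "print(sum(nums))",                            # revised action; nums persists
]

namespace = {}
for turn, action in enumerate(actions, 1):
    observation = execute_action(action, namespace)
    print(f"Turn {turn} observation: {observation.strip()}")
```

Because the namespace persists across turns, later actions can reuse variables and functions defined by earlier ones, which is what gives code actions their composability over single-shot tool calls.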

![Overview](https://github.com/xingyaoww/code-act/blob/main/figures/overview.png?raw=true)

## Why CodeAct?

Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark [M<sup>3</sup>ToolEval](docs/EVALUATION.md) shows that CodeAct outperforms widely used alternatives like Text and JSON (up to 20% higher success rate). Please check our paper for a more detailed analysis!

![Comparison between CodeAct and Text/JSON](https://github.com/xingyaoww/code-act/blob/main/figures/codeact-comparison-table.png?raw=true)
*Comparison between CodeAct and Text/JSON as action formats.*

![Comparison between CodeAct and Text/JSON](https://github.com/xingyaoww/code-act/blob/main/figures/codeact-comparison-perf.png?raw=true)
*Quantitative results comparing CodeAct and {Text, JSON} on M<sup>3</sup>ToolEval.*



## 📝 CodeActInstruct

We collect CodeActInstruct, an instruction-tuning dataset consisting of 7k multi-turn interactions using CodeAct. The dataset is released at [huggingface dataset 🤗](https://huggingface.co/datasets/xingyaoww/code-act). Please refer to the paper and [this section](#-data-generation-optional) for details on data collection.


![Data Statistics](https://github.com/xingyaoww/code-act/blob/main/figures/data-stats.png?raw=true)
*Dataset statistics. Token statistics are computed with the Llama-2 tokenizer.*

## 🪄 CodeActAgent

Trained on **CodeActInstruct** and general conversations, **CodeActAgent** excels at out-of-domain agent tasks compared to open-source models of the same size, without sacrificing generic performance (e.g., knowledge, dialog). We release two variants of CodeActAgent:
- **CodeActAgent-Mistral-7b-v0.1** (recommended, [model link](https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1)): uses Mistral-7b-v0.1 as the base model, with a 32k context window.
- **CodeActAgent-Llama-7b** ([model link](https://huggingface.co/xingyaoww/CodeActAgent-Llama-2-7b)): uses Llama-2-7b as the base model, with a 4k context window.

![Model Performance](https://github.com/xingyaoww/code-act/blob/main/figures/model-performance.png?raw=true)
*Evaluation results for CodeActAgent. ID and OD stand for in-domain and out-of-domain evaluation, respectively. The overall averaged performance normalizes the MT-Bench score to be consistent with other tasks and excludes in-domain tasks for a fair comparison.*


Please check out [our paper](https://arxiv.org/abs/2402.01030) and [code](https://github.com/xingyaoww/code-act) for more details about data collection, model training, and evaluation.

## 📚 Citation

```bibtex
@misc{wang2024executable,
      title={Executable Code Actions Elicit Better LLM Agents},
      author={Xingyao Wang and Yangyi Chen and Lifan Yuan and Yizhe Zhang and Yunzhu Li and Hao Peng and Heng Ji},
      year={2024},
      eprint={2402.01030},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```