facat committed on
Commit
45bd89b
1 Parent(s): e903c71

Upload README.md with huggingface_hub
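The commit message indicates the file was pushed programmatically rather than through the web editor. As a minimal sketch of how such an upload is typically done with the huggingface_hub Python client — assuming authentication is already set up and that the target repo id is `SUSTech/SUS-Chat-34B`, which is inferred from the model name and not confirmed by this page:

```python
# Sketch: push a local README.md to a Hub repo with huggingface_hub.
# Assumptions: you are logged in (e.g. via `huggingface-cli login`) and
# the repo id SUSTech/SUS-Chat-34B is correct (inferred, not confirmed).
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",     # local file to upload
    path_in_repo="README.md",        # destination path inside the repo
    repo_id="SUSTech/SUS-Chat-34B",  # assumed repo id
    commit_message="Upload README.md with huggingface_hub",
)
```

Each call to `upload_file` creates a single commit on the repo, which matches the one-file, one-commit shape of this change.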

Files changed (1)
  1. README.md +16 -13
README.md CHANGED
@@ -114,19 +114,22 @@ multilingual tasks. Compared to larger models, SUS-Chat-34B remains
  highly competitive and has achieved state-of-the-art performance in our
  comprehensive evaluations.
 
- SUS-Chat-34B model has the following highlights: 1. Large-scale complex
- instruction following data: Trained with 1.4 billion tokens of
- high-quality complex instruction data, covering Chinese and English,
- multi-turn dialogues, mathematics, reasoning, and various other types of
- instruction data; 2. Strong performance in general tasks: The
- SUS-Chat-34B model excels in numerous mainstream Chinese and English
- tasks, surpassing other open-source instruction fine-tuned models of the
- same parameter scale. It also competes well against models with larger
- parameter scales; 3. Longer context window and excellent multi-turn
- dialogue capabilities: Currently, SUS-Chat-34B supports an 8K context
- window, and is trained with a large amount of multi-turn instruction and
- single-multi-turn mixed data, demonstrating remarkable capabilities in
- long-text dialogue information focus and instruction follow-up.
+ SUS-Chat-34B model has the following highlights:
+
+ 1. Large-scale complex instruction following data: Trained with 1.4
+    billion tokens of high-quality complex instruction data, covering
+    Chinese and English, multi-turn dialogues, mathematics, reasoning,
+    and various other types of instruction data;
+ 2. Strong performance in general tasks: The SUS-Chat-34B model excels
+    in numerous mainstream Chinese and English tasks, surpassing other
+    open-source instruction fine-tuned models of the same parameter
+    scale. It also competes well against models with larger parameter
+    scales;
+ 3. Longer context window and excellent multi-turn dialogue
+    capabilities: Currently, SUS-Chat-34B supports an 8K context window,
+    and is trained with a large amount of multi-turn instruction and
+    single-multi-turn mixed data, demonstrating remarkable capabilities
+    in long-text dialogue information focus and instruction follow-up.
 
  SUS-Chat powerfully demonstrates that through the right instruction
  fine-tuning, academic institutions can achieve better performance