liuyongq committed
Commit 5de5b0c
1 parent: b630d13

Update README_en.md

Files changed (1)
  1. README_en.md +5 -5
README_en.md CHANGED
@@ -1,6 +1,6 @@
 <!-- markdownlint-disable first-line-h1 -->
 <!-- markdownlint-disable html -->
-![](./assets/imgs/orion_start.png)
+![](./assets/imgs/orion_start.PNG)
 
 <div align="center">
 <h1>
@@ -14,7 +14,7 @@
 <h4 align="center">
 <p>
 <b>🌐English</b> |
-<a href="https://huggingface.co/OrionStarAI/Orion-14B-Base">🇨🇳中文</a><br><br>
+<a href="http://git.ainirobot.com/llm/Orion/blob/master/README.MD">🇨🇳中文</a><br><br>
 🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a> | 🎬 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B/summary" target="_blank">Online Demo</a>
 <p>
 </h4>
@@ -42,8 +42,8 @@
 - **Orion-14B-Base:** A multilingual large language foundational model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens.
 - **Orion-14B-Chat:** A chat-model fine-tuned on a high-quality corpus aims to provide an excellence interactive experience for users in the large model community.
 - **Orion-14B-LongChat:** This model is optimized for long context lengths more than 200k tokens and demonstrates performance comparable to proprietary models on long context evaluation sets.
-- **Orion-14B-RAG:** A chat-model fine-tuned on a custom retrieval augmented generation dataset, achieving superior performance in retrieval augmented generation tasks.
-- **Orion-14B-PlugIn:** A chat-model specifically tailored for plugin and function calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function call system.
+- **Orion-14B-Chat-RAG:** A chat-model fine-tuned on a custom retrieval augmented generation dataset, achieving superior performance in retrieval augmented generation tasks.
+- **Orion-14B-Chat-Plugin:** A chat-model specifically tailored for plugin and function calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function call system.
 - **Orion-14B-Base-Int4:** A quantized base model utilizing 4-bit integer weights. It significantly reduces the model size by 70% and increases the inference speed by 30% while incurring a minimal performance loss of only 1%.
 - **Orion-14B-Chat-Int4:** A quantized chat model utilizing 4-bit integer weights.
@@ -274,4 +274,4 @@ the [Apache 2.0](https://github.com/OrionStarAI/Orion-14B/blob/main/LICENSE).
 
 Email: [email protected]
 
-WhatsApp Group: https://chat.whatsapp.com/J30ig8Dx4ja5jc0cfx2nVs
+![](./assets/imgs/wechat_group.jpg)
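
The Int4 model descriptions in the diff above claim roughly a 70% size reduction from 4-bit weight quantization. A back-of-envelope check (a sketch, not official numbers: it assumes an fp16 baseline at 2 bytes per weight and ignores per-group scales, embeddings kept in higher precision, and other checkpoint overhead, which is why real checkpoints land nearer 70% than the raw 75%):

```python
# Rough weight-storage estimate for a 14B-parameter model,
# comparing fp16 (16 bits/weight) against int4 (4 bits/weight).
PARAMS = 14e9  # 14 billion parameters, per the model card

def model_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate weight storage in gigabytes (decimal GB)."""
    return params * bits_per_weight / 8 / 1e9

fp16_gb = model_size_gb(16)            # 28.0 GB baseline
int4_gb = model_size_gb(4)             # 7.0 GB quantized
reduction = 1 - int4_gb / fp16_gb      # 0.75, i.e. 75% on raw weights

print(f"fp16: {fp16_gb:.0f} GB, int4: {int4_gb:.0f} GB, "
      f"reduction: {reduction:.0%}")
```

The raw-weight reduction is 75%; quantization metadata and unquantized layers eat into that, which is consistent with the ~70% figure stated in the README.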