---
license: apache-2.0
language:
- en
- zh
tags:
- rk3588
- rkllm
- Rockchip
- internlm2
---

# internlm2_rkLLM

- [书生·浦语-1.8B](#书生浦语-18b)
- [InternLM2-1.8B](#internlm2-18b)

## 书生·浦语-1.8B

### Introduction

internlm2_rkLLM is an RKLLM model converted from [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) and optimized specifically for Rockchip devices. The model runs on the NPU of the RK3588.

- **Model name**: internlm2-chat-1_8b
- **Architecture**: identical to internlm2-chat-1_8b
- **Publisher**: FydeOS
- **Date**: 2024-06-03

### Model Details

书生·浦语-1.8B (InternLM2-1.8B) is the 1.8-billion-parameter version of the second-generation InternLM model series. It was further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B shows better instruction following, chat experience, and function calling, and is recommended for downstream applications.

The InternLM2 models have the following technical features:

- Effective support for ultra-long contexts of up to 200,000 characters: the model retrieves a "needle in a haystack" almost perfectly from 200,000-character inputs, and its results on long-text tasks such as LongBench and L-Eval are among the best of open-source models.
- Comprehensive performance improvements: every capability dimension advances over the previous generation, with especially notable gains in reasoning, mathematics, and coding.

### User Guide

> This model is only supported on devices with a Rockchip RK3588/s chip. Please confirm your device information and make sure the NPU is available.
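
If you are not sure whether the NPU is usable, a quick check is to look at the RKNPU kernel driver. The snippet below is a minimal sketch, assuming the driver reports its version under the debugfs path commonly used by Rockchip kernels; the exact location, and whether root is required to read it, can differ between kernel builds.

```python
from pathlib import Path
from typing import Optional

# Assumed location where the RKNPU kernel driver reports its version on
# typical Rockchip kernels; adjust if your kernel exposes it elsewhere.
VERSION_PATH = Path("/sys/kernel/debug/rknpu/version")

def rknpu_driver_version() -> Optional[str]:
    """Return the RKNPU driver version string, or None if it cannot be read."""
    try:
        return VERSION_PATH.read_text().strip()
    except OSError:
        # A missing file usually means no driver is loaded; a permission error
        # usually means debugfs needs root, so re-run with elevated privileges.
        return None

if __name__ == "__main__":
    version = rknpu_driver_version()
    if version:
        print(f"RKNPU driver detected: {version}")
    else:
        print("Could not read the RKNPU driver version - check that the NPU kernel module is enabled.")
```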

#### openFyde System

> Make sure you have upgraded the system to the latest version.

1. Download the model file `XXX.rkllm` (a scripted alternative is sketched after this list).
2. Create a `model/` folder and place the model file inside it.
3. Launch FydeOS AI and complete the relevant configuration on the settings page.
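
Steps 1 and 2 can also be scripted. The sketch below uses the `huggingface_hub` client (a recent version that supports `local_dir`) to fetch the model file and drop it into a local `model/` folder; the repository ID and the `.rkllm` filename are placeholders that you should replace with the values published on this model page.

```python
from pathlib import Path

from huggingface_hub import hf_hub_download

# Placeholders: substitute the actual repository ID and .rkllm filename
# listed in the "Files" tab of this model page.
REPO_ID = "<namespace>/<repo-name>"
FILENAME = "XXX.rkllm"

def fetch_model(target_dir: str = "model") -> Path:
    """Download the .rkllm file and place it inside the local model/ folder."""
    Path(target_dir).mkdir(parents=True, exist_ok=True)
    local_path = hf_hub_download(
        repo_id=REPO_ID,
        filename=FILENAME,
        local_dir=target_dir,  # write the file directly under model/
    )
    return Path(local_path)

if __name__ == "__main__":
    print(f"Model saved to: {fetch_model()}")
```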

#### Other Systems

> Make sure the NPU kernel update required by RKLLM has been applied.

1. Download the model file `XXX.rkllm`.
2. Configure it by following the [official documentation](https://github.com/airockchip/rknn-llm).

### FAQ

If you run into problems, please check the issue section first; if the problem remains unresolved, open a new issue.

### Limitations and Notes

- The model may have performance limitations in some scenarios.
- Please comply with relevant laws and regulations when using it.
- Appropriate parameter tuning may be needed for the best results.

### License

This model is released under the same license as [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b).

### Contact

For more information, please contact:

- **Email**: [email protected]
- **Homepage**: [FydeOS AI](https://fydeos.ai/zh/)

## InternLM2-1.8B

### Introduction

internlm2_rkLLM is an RKLLM model derived from [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b), specifically optimized for Rockchip devices. This model operates on the NPU of the RK3588 chip.
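
For readers who want to reproduce the conversion instead of downloading the prebuilt file, the outline below is a minimal sketch of how a Hugging Face checkpoint is typically exported with the `rkllm-toolkit` from the [rknn-llm](https://github.com/airockchip/rknn-llm) repository. The method names and quantization options follow the toolkit's published examples and may differ between releases, so treat this as an illustration rather than the exact recipe used to produce this model.

```python
# Conversion sketch using rkllm-toolkit (run on an x86 Linux host, not on the board).
# The calls below follow the examples in airockchip/rknn-llm; check the toolkit
# version you have installed, as argument names have changed across releases.
from rkllm.api import RKLLM

llm = RKLLM()

# Load the original Hugging Face checkpoint (cloned or downloaded locally beforehand).
ret = llm.load_huggingface(model="./internlm2-chat-1_8b")
assert ret == 0, "failed to load the Hugging Face model"

# Quantize and build for the RK3588 NPU (w8a8 is a common choice for this target).
ret = llm.build(
    do_quantization=True,
    quantized_dtype="w8a8",
    target_platform="rk3588",
)
assert ret == 0, "model build failed"

# Export the .rkllm file consumed by the on-device runtime; the output name is up to you.
ret = llm.export_rkllm("./XXX.rkllm")
assert ret == 0, "export failed"
```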

- **Model Name**: internlm2-chat-1_8b
- **Architecture**: Identical to internlm2-chat-1_8b
- **Publisher**: FydeOS
- **Release Date**: 3 June 2024

### Model Details

InternLM2-1.8B is the 1.8-billion-parameter version of the second-generation InternLM series. It was further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B exhibits better instruction following, chat experience, and function calling, and is recommended for downstream applications.

InternLM2 has the following technical features:

- Effective support for ultra-long contexts of up to 200,000 characters: the model nearly perfectly achieves "finding a needle in a haystack" in inputs of 200,000 characters, and it also leads among open-source models on long-text tasks such as LongBench and L-Eval.
- Comprehensive performance enhancement: compared to the previous generation, it shows significant improvements across capabilities, including reasoning, mathematics, and coding.

### User Guide

> This model is only supported on devices with the Rockchip RK3588/s chip. Please verify your device's chip information and ensure the NPU is operational.

#### openFyde System

> Ensure you have upgraded to the latest version of openFyde.

1. Download the model file `XXX.rkllm`.
2. Create a folder named `model/` and place the model file inside it.
3. Launch FydeOS AI and configure the settings on the settings page.

#### Other Systems

> Ensure you have updated the NPU kernel related to RKLLM (a version-check sketch follows the steps below).

1. Download the model file `XXX.rkllm`.
2. Follow the configuration guidelines provided in the [official documentation](https://github.com/airockchip/rknn-llm).
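
To confirm that the kernel update took effect, you can compare the RKNPU driver version reported by the kernel against the minimum that your RKLLM runtime release requires. This is a small sketch under two assumptions: the driver reports its version at the debugfs path below (typical of Rockchip kernels), and the minimum version shown is only a placeholder, with the actual value taken from the rknn-llm documentation for the runtime you deploy.

```python
import re
from pathlib import Path

# Assumed debugfs location where Rockchip kernels report the NPU driver version.
VERSION_PATH = Path("/sys/kernel/debug/rknpu/version")

# Placeholder threshold: look up the real minimum in the rknn-llm documentation
# for the RKLLM runtime release you are using.
MIN_REQUIRED = (0, 9, 6)

def parse_version(text: str) -> tuple:
    """Extract a numeric (major, minor, patch) tuple from a driver version string."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", text)
    if not match:
        raise ValueError(f"could not parse a version from: {text!r}")
    return tuple(int(part) for part in match.groups())

if __name__ == "__main__":
    reported = parse_version(VERSION_PATH.read_text())
    if reported >= MIN_REQUIRED:
        print(f"RKNPU driver {reported} meets the assumed minimum {MIN_REQUIRED}.")
    else:
        print(f"RKNPU driver {reported} is older than {MIN_REQUIRED}; update the kernel first.")
```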

### FAQ

If you encounter issues, please refer to the issue section first. If your problem remains unresolved, submit a new issue.

### Limitations and Considerations

- The model may have performance limitations in certain scenarios.
- Ensure compliance with relevant laws and regulations during usage.
- Parameter tuning might be necessary to achieve optimal performance.

### License

This model is licensed under the same terms as [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b).

### Contact Information

For more information, please contact:

- **Email**: [email protected]
- **Homepage**: [FydeOS AI](https://fydeos.ai/en/)