pankajmathur committed
Commit ac5e24a
1 Parent(s): fec86e3

Update README.md

Files changed (1):
  1. README.md +13 -6
README.md CHANGED
@@ -114,10 +114,19 @@ model-index:
---
# orca_mini_7b

- An [OpenLLaMa-7B](https://github.com/openlm-research/open_llama) model trained on explain-tuned datasets, created using instructions and input from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca Research Paper's dataset-construction approaches.
+ <img src="https://huggingface.co/pankajmathur/orca_mini_v5_8b/resolve/main/orca_minis_small.jpeg" width="auto" />

+ <strong>
+ Passionate about Generative AI? I help companies privately train and deploy custom LLMs/MLLMs affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat!

- # Dataset
+ <a href="https://www.linkedin.com/in/pankajam" target="_blank">https://www.linkedin.com/in/pankajam</a> Looking forward to connecting!
+ </strong>
+
+ <br>
+
+ **An [OpenLLaMa-7B](https://github.com/openlm-research/open_llama) model trained on explain-tuned datasets, created using instructions and input from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca Research Paper's dataset-construction approaches.**
+
+ ### Dataset

We built the explain-tuned [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly) using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).
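That construction follows the Orca recipe: existing instruction records are re-answered by a teacher model under an explanation-eliciting system prompt, so the student learns the thought process rather than only the final answer. A minimal sketch of the idea; the field names, system text, and `teacher` callable are illustrative assumptions, not the actual pipeline:

```python
# Illustrative sketch of Orca-style explain-tuning; the system text,
# field names, and teacher callable are assumptions, not the real pipeline.
from typing import Callable

EXPLAIN_SYSTEM = (
    "You are an AI assistant. Provide a detailed answer and "
    "explain your reasoning step by step."
)

def explain_tune(record: dict, teacher: Callable[[str], str]) -> dict:
    """Turn an Alpaca/WizardLM/Dolly-style record into an explain-tuned pair."""
    prompt = f"### System:\n{EXPLAIN_SYSTEM}\n\n### User:\n{record['instruction']}\n\n"
    if record.get("input"):
        prompt += f"### Input:\n{record['input']}\n\n"
    prompt += "### Response:\n"
    # The teacher (a stronger model, per the Orca paper) supplies the
    # explained answer that the student model is then trained to imitate.
    return {"prompt": prompt, "response": teacher(prompt)}
```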
@@ -127,7 +136,7 @@ This helps student model aka this model to learn ***thought*** process from teac

Please see the example usage below for how the **System** prompt is added before each **instruction**.

- # Training
+ ### Training

The training configurations are provided in the table below.
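Concretely, the rendered prompt places the system text before the user instruction. A minimal sketch, assuming the `### System:` / `### User:` / `### Response:` markers the orca_mini cards use elsewhere; the system text here is illustrative:

```python
# Minimal prompt assembly; marker strings are assumed from the orca_mini
# prompt format, and the system text is illustrative.
system = "You are an AI assistant that follows instruction extremely well. Help as much as you can."
instruction = "Tell me about Orcas."

# System prompt first, then the instruction, then the response slot.
prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"
print(prompt)
```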
@@ -149,7 +158,7 @@ Here are some of params used during training:



- # Example Usage
+ ### Example Usage

Below is an example of how to use this model.
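The full example sits outside this hunk, further down the README. As a compact reference, a hedged sketch with the Hugging Face `transformers` API; the repo id and generation settings are assumptions:

```python
# Hedged usage sketch; the repo id and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pankajmathur/orca_mini_7b"  # assumed from this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # use float32 if running on CPU
    device_map="auto",
)

# System prompt first, then the instruction (see the prompt sketch above).
system = "You are an AI assistant that follows instruction extremely well. Help as much as you can."
prompt = f"### System:\n{system}\n\n### User:\nTell me about Orcas.\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```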
@@ -222,8 +231,6 @@ Sincerely,

```

- **P.S. I am #opentowork and #collaboration, if you can help, please reach out to me at www.linkedin.com/in/pankajam**
-
**

Next Goals: