tags:
- 7b
- llm
---

## Try it

### C#
Code for [using the model from .NET C# on CPU](https://github.com/NethermindEth/Mpt-Instruct-DotNet-S).

### Python
```python
import torch
import transformers
from transformers import AutoTokenizer

# MPT models reuse the GPT-NeoX tokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer.pad_token = tokenizer.eos_token

device = torch.device("cuda")
model_name = "Nethermind/Mpt-Instruct-DotNet-S"
config = transformers.AutoConfig.from_pretrained(model_name, trust_remote_code=True)
config.init_device = device
config.max_seq_len = 1024
config.attn_config['attn_impl'] = 'torch'
config.use_cache = False

model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    ignore_mismatched_sizes=True,
    # load_in_8bit=True  # when low on GPU memory
)
model.eval()

INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"
PROMPT_FOR_GENERATION_FORMAT = """{system}
{instruction_key}
{instruction}
{response_key}
""".format(
    system="{system}",
    instruction_key=INSTRUCTION_KEY,
    instruction="{instruction}",
    response_key=RESPONSE_KEY,
)

def give_answer(
    instruction="Create a loop over [0, 6, 7, 77] that prints its contents",
    system="You are an experienced .Net C# developer. Below is an instruction that describes a task. Write a response that completes the request providing detailed explanations with code examples.",
):
    question = PROMPT_FOR_GENERATION_FORMAT.format(system=system, instruction=instruction)
    input_tokens = tokenizer.encode(question, return_tensors='pt')
    # Greedy decoding, capped so that prompt + output fits the 1024-token context
    outputs = model.generate(
        input_tokens.to(device),
        max_new_tokens=min(512, 1024 - input_tokens.shape[1]),
        do_sample=False,
        top_k=1,
        top_p=0.95,
    )
    answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    print(answer[0])
```
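
A quick smoke test, reusing `give_answer` from the snippet above:

```python
# Greedy generation; prints the full prompt followed by the model's C# answer.
give_answer("Write a C# method that checks whether a string is a palindrome.")
```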

## Training
Fine-tuned for C# from [mosaicml/mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct). Max context length is restricted to 1024 tokens.

- Loss: 0.256045166015625 on 300k C#-related records
- Loss: 0.095714599609375 on 50k specific short prompts

## Sources
The training data contained the following (most records were around 500 tokens, under 1000, except large code files):
- codeparrot/github-code C# ("mit", "Apache-2.0", "Bsd-3-clause", "Bsd-2-clause", "Cc0-1.0", "Unlicense", "isc")
- Raw plain .cs files, randomly cut at 60-80%: the first part goes into the instruction, and the network is asked to continue with the last 40-20% (76k; a sketch of this cut follows the list)
- Documented static functions (72k)
- SO 5q_5answer + 5q_5best (CC BY-SA 4.0) (70k)
- Dotnet wiki (30k, rendered out from the [github repo](https://github.com/microsoft/dotnet), "see also" sections removed; GPT-4 generated a short question for each file)
- All NM static functions and tests from the [nethermind client repo](https://github.com/NethermindEth/nethermind), documented and described via GPT-4 (4k)
- GPT-4 questions with GPT-3.5 answers for C#: Short Q -> Code, Explain Code X -> Step-By-Step (35k)
- GPT-4 questions with GPT-3.5 answers for the nethermind client interface `IEthRpcModule`: Short Q -> Code, Explain Code X -> Step-By-Step (7k)
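
A minimal sketch of how such a continuation pair could be produced, assuming a token-level cut with the tokenizer from the snippet above (the function name and exact mechanics are illustrative, not the project's actual preprocessing code):

```python
import random

def split_for_continuation(source: str, tokenizer, lo: float = 0.6, hi: float = 0.8):
    """Cut a .cs file at a random 60-80% point: the head goes into the
    instruction, and the model is trained to continue with the tail."""
    tokens = tokenizer.encode(source)
    cut = int(len(tokens) * random.uniform(lo, hi))
    return tokenizer.decode(tokens[:cut]), tokenizer.decode(tokens[cut:])
```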

## Contents
- HF compatible model
- GGML compatible quantisations (f16, q8, q5)
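
The GGML files target CPU inference with a GGML-capable runtime. A minimal sketch using the `ctransformers` library (the quantised file name below is an assumption; check the repository's file listing for the actual f16/q8/q5 names):

```python
from ctransformers import AutoModelForCausalLM  # pip install ctransformers

llm = AutoModelForCausalLM.from_pretrained(
    "Nethermind/Mpt-Instruct-DotNet-S",
    model_type="mpt",                           # MPT architecture
    model_file="mpt-instruct-dotnet-s-q5.bin",  # assumed name -- pick the real q5 file
)
print(llm("### Instruction:\nWrite a C# hello world program.\n### Response:\n",
          max_new_tokens=128))
```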