Update README.md
README.md CHANGED
@@ -9,6 +9,7 @@ Now, let's start
2. Some students may not have a foundation in machine learning, but there is no need to be nervous.
If you just want to know how to use large models, it is still easy.
3. Follow the steps, and you will have a basic understanding of how to use large models.<br>

"Tool": ||-python-||-pytorch-||-cuda-||-anaconda(miniconda)-||-pycharm(vscode)-||. I think these are easy for you, and there are many courses on bilibili.<br>
"usage":<br>
>>first --- download the "Transformers library", "Tokenizer", and "Pretrained Model"; you can use the Tsinghua source (清华源) and hf-mirror to download them.<br>
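A minimal sketch of this download step (the pip index URL and `HF_ENDPOINT` value are the public Tsinghua / hf-mirror addresses; the model name `gpt2` is only a placeholder, substitute whatever pretrained model you actually need):

```python
# Install the library through the Tsinghua PyPI mirror first, e.g.:
#   pip install -i https://pypi.tuna.tsinghua.edu.cn/simple transformers torch
import os

# Route Hugging Face downloads through hf-mirror; set this BEFORE importing
# transformers/huggingface_hub so the files are fetched via the mirror.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # placeholder; pick the pretrained model you need
tokenizer = AutoTokenizer.from_pretrained(model_name)        # downloads tokenizer files
model = AutoModelForCausalLM.from_pretrained(model_name)     # downloads model weights
```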
@@ -20,6 +21,6 @@ Now, let's start
>>>>||----text = "Replace me by any text you'd like."------------------||<br>
>>>>||----encoded_input = tokenizer(text, return_tensors='pt')---||<br>
>>>>||----output = model.generate(**encoded_input)----------------||<br>
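Put together, a small end-to-end sketch of these lines (again with `gpt2` as a placeholder model and an arbitrary `max_new_tokens` limit):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
# Unpack input_ids / attention_mask into generate()
output = model.generate(**encoded_input, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```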
"customized": It's not an easy job, but I can give you a tip: start with LoRA. LoRA, as a PEFT method, is friendly for students. There are other ways to fine-tune the model, such as prefix-tuning, P-tuning, RLHF, etc. You can also try data mounting.<br>
}
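A minimal LoRA sketch with the Hugging Face `peft` library (here `gpt2` is again a placeholder base model, and `c_attn` is GPT-2's attention projection; other architectures use different `target_modules` names):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection; varies by model
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
# From here, train `model` with the usual Trainer / PyTorch training loop.
```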
Nothing is difficult to the man who will try!