RonanMcGovern committed b3591eb (parent: 055776a): add tips on best models

README.md CHANGED
Other Models:

- Llama-13B-chat with function calling ([Base Model](https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling-v2)), ([PEFT Adapters](https://huggingface.co/Trelis/Llama-2-13b-chat-hf-function-calling-adapters-v2)) - Paid, [purchase here](https://buy.stripe.com/9AQ7te3lHdmbdZ68wz)

## Which model is best for what?

1. Larger models are better at handling function calling. The cross-entropy training losses are approximately 0.5 for 7B, 0.4 for 13B, and 0.3 for 70B. The absolute numbers don't mean much on their own, but the relative values give a sense of relative performance.
2. Provide very clear function descriptions, including whether each argument is required and what its default value should be.
3. Post-process the language model's response to check that the user has provided all necessary information. If not, prompt the user to supply the missing details (e.g. their name, order number, etc.).
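Tips 2 and 3 above can be sketched in code. This is an illustrative example, not this repo's actual format: the function name, argument names, and schema shape are hypothetical, but the pattern (mark required arguments and defaults in the description, then check the parsed call and re-prompt the user for anything missing) is what the tips describe.

```python
# Hypothetical function description: each argument records whether it is
# required and, if optional, what its default value is (per tip 2).
FUNCTION_DESCRIPTION = {
    "name": "get_order_status",  # illustrative name, not from this repo
    "description": "Look up the status of a customer's order.",
    "arguments": {
        "order_number": {"type": "string", "required": True},
        "customer_name": {"type": "string", "required": True},
        "include_history": {"type": "boolean", "required": False, "default": False},
    },
}

def check_function_call(call: dict, description: dict):
    """Post-process a parsed function call (per tip 3): fill in defaults
    and collect any missing required arguments for a follow-up prompt."""
    args = dict(call.get("arguments", {}))
    missing = []
    for name, spec in description["arguments"].items():
        if name in args:
            continue
        if spec.get("required"):
            missing.append(name)
        elif "default" in spec:
            args[name] = spec["default"]
    return args, missing

# Example: the model produced a call but the user never gave their name.
args, missing = check_function_call(
    {"name": "get_order_status", "arguments": {"order_number": "A123"}},
    FUNCTION_DESCRIPTION,
)
if missing:
    followup = "Could you provide: " + ", ".join(missing) + "?"
```

Here `missing` comes back as `["customer_name"]`, so the app can ask the user for it before executing the function, and `include_history` is filled with its default.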

Check out this video overview of performance [here](https://www.loom.com/share/8d7467de95e04af29ff428c46286946c?sid=683c970e-6063-4f1e-b184-894cc1d96115).

Some short tips based on models as of November 2023:

- DeepSeek Coder (all sizes) = best coding model.
- Yi 34B = best for long context.
- Llama 70B = strongest overall model (4k context).
- Mistral 7B = best model if you have only 8 GB of VRAM (run with quantization).

Zephyr is better than Mistral 7B but is not openly licensed for commercial use.

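The 8 GB VRAM tip can be checked with back-of-envelope arithmetic. This is a rough lower-bound estimate of weight memory only (it ignores activations, KV cache, and quantization overhead), assuming roughly 7.2B parameters for Mistral 7B:

```python
# Rough weight-memory estimate: parameters * bits-per-weight / 8 bytes.
# Lower bound only: activations, KV cache, and quantization overhead
# are not included.
def weight_gb(n_params: float, bits: int) -> float:
    return n_params * bits / 8 / 1e9

mistral_7b = 7.2e9  # approximate parameter count (assumption)

fp16_gb = weight_gb(mistral_7b, 16)  # ~14.4 GB: too big for an 8 GB card
int4_gb = weight_gb(mistral_7b, 4)   # ~3.6 GB: fits with headroom
```

At 16 bits per weight the model alone exceeds 8 GB, while 4-bit quantization brings the weights to roughly 3.6 GB, which is why the quantized run is recommended for 8 GB cards.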
## Licensing

Llama-7B with function calling is licensed according to the Meta Community license.