Update README.md

README.md CHANGED
@@ -75,9 +75,7 @@ The attention mechanism in a transformer model is designed to capture global dep
 
 ## Finetuning details
 The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning)
-## Evaluation
 
-<B>TODO</B>
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VMware__open-llama-7b-open-instruct)
 