justsomerandomdude264 committed
Commit fb4613c
1 Parent(s): 04fd021
Update README.md
README.md
CHANGED
@@ -23,7 +23,7 @@ This is a Large Language Model (LLM) fine-tuned to solve math problems with deta
 - **Fine-tuning Method**: PEFT (Parameter-Efficient Fine-Tuning) with QLoRA
 - **Quantization**: 4-bit quantization for reduced memory usage
 - **Training Framework**: Unsloth, optimized for efficient fine-tuning of large language models
-- **Training Environment**: Google Colab (free tier), NVIDIA T4 GPU (
+- **Training Environment**: Google Colab (free tier), NVIDIA T4 GPU (16GB VRAM), 12GB RAM
 - **Dataset Used**: justsomerandomdude264/ScienceQA-Dataset, 560 selected rows

 ## Capabilities
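
The README lines touched by this commit describe a QLoRA fine-tune done with Unsloth on a 4-bit quantized base model. As a rough illustration of that setup only, here is a minimal sketch of loading a model in 4-bit with Unsloth and attaching LoRA adapters; the base checkpoint name, LoRA rank, and target modules are assumptions and are not taken from the commit or the author's training script.

```python
# Minimal sketch (not the author's actual training code): load a base model in
# 4-bit with Unsloth and wrap it with LoRA adapters, mirroring the configuration
# the README lists (QLoRA via PEFT, 4-bit quantization, T4-friendly memory use).
# The checkpoint name, rank, and target modules below are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization for reduced memory usage
)

# Attach LoRA adapters to the quantized model (the PEFT/QLoRA step).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # assumed LoRA rank
    lora_alpha=16,
    lora_dropout=0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```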