---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- TIGER-Lab/MathInstruct
library_name: transformers
---

# Model Card: Math Homework Solver

This is a Large Language Model (LLM) fine-tuned to solve math problems with detailed, step-by-step explanations and accurate answers. The base model is Llama 3.1 with 8 billion parameters, quantized to 4-bit and fine-tuned with QLoRA (Quantized Low-Rank Adaptation) and PEFT (Parameter-Efficient Fine-Tuning) through the Unsloth framework.

## Model Details

- **Base Model**: Llama 3.1 (8 billion parameters)
- **Fine-tuning Method**: PEFT (Parameter-Efficient Fine-Tuning) with QLoRA
- **Quantization**: 4-bit quantization for reduced memory usage
- **Training Framework**: Unsloth, optimized for efficient fine-tuning of large language models
- **Training Environment**: Google Colab (free tier) with an NVIDIA T4 GPU (16 GB VRAM) and 12 GB RAM
- **Dataset Used**: TIGER-Lab/MathInstruct (Yue et al., 2023, *MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning*, arXiv:2309.05653); 560 selected math problems and solutions

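As an illustration of how dataset examples could be rendered into the Question/Answer prompt format this model is prompted with, here is a minimal sketch. The `instruction`/`output` field names and the `format_example` helper are assumptions for illustration, not the actual training code:

```python
# Hypothetical sketch: rendering MathInstruct-style records into the
# Question/Answer template used at inference time. The field names
# ("instruction", "output") are assumed, not taken from the real training code.
qa_template = """Question: {}
Answer: {}"""

def format_example(record):
    """Render one dataset record into a single prompt string."""
    return qa_template.format(record["instruction"], record["output"])

# Example record shaped like a MathInstruct entry (made-up values).
record = {
    "instruction": "What is the derivative of x^2?",
    "output": "d/dx x^2 = 2x.",
}
print(format_example(record))
```

Keeping the training and inference prompts in the same shape is what lets the fine-tuned model pick up where the blank `Answer:` field leaves off.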
## Capabilities

The Math Homework Solver is designed to handle a broad spectrum of mathematical problems, from basic arithmetic to advanced calculus. It provides clear, step-by-step explanations, making it a useful resource for students, educators, and anyone looking to deepen their understanding of mathematical concepts.

Because the Llama 3.1 base model is fine-tuned with PEFT and QLoRA, the model delivers solid performance while keeping a small computational footprint, making it usable even on limited hardware.

## Getting Started

To start using the Math Homework Solver model, follow these steps:

1. **Clone the repo**

```bash
git clone https://huggingface.co/justsomerandomdude264/Math_Homework_Solver-Llama3.18B
```

2. **Run inference**

Create a new file named `main.py` and run the following code in it:

```python
from unsloth import FastLanguageModel
from transformers import TextStreamer
import torch

# Define your question
question = "Verify that the function y = a cos x + b sin x, where a, b ∈ R, is a solution of the differential equation d2y/dx2 + y = 0."  # Example question; replace it with your own

# Load the model (the path matches the directory created by the git clone above)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Math_Homework_Solver-Llama3.18B",
    max_seq_length = 2048,
    dtype = None,          # Auto-detect dtype
    load_in_4bit = True,   # Load the 4-bit quantized weights
)

# Put the model in inference mode
FastLanguageModel.for_inference(model)

# QA prompt template
qa_template = """Question: {}
Answer: {}"""

# Tokenize the input
inputs = tokenizer(
    [
        qa_template.format(
            question,  # Question
            "",        # Answer left blank for generation
        )
    ], return_tensors = "pt").to("cuda")

# Stream the model's answer as it is generated
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
```

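The example question above asks the model to verify a differential-equation solution. Independently of the model, that fact can be sanity-checked numerically with a few lines of standard-library Python (a quick illustration, not part of the model card's code):

```python
import math

# Numerical sanity check that y = a*cos(x) + b*sin(x) satisfies y'' + y = 0,
# using a central finite difference to approximate the second derivative.
def y(x, a=2.0, b=-3.0):
    return a * math.cos(x) + b * math.sin(x)

def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

# The residual y'' + y should be ~0 (up to finite-difference error) for any x.
for x in [0.0, 0.7, 1.5, 3.0]:
    residual = second_derivative(y, x) + y(x)
    assert abs(residual) < 1e-4, residual
print("y'' + y ≈ 0 at all sampled points")
```

This matches the analytic argument: differentiating y twice gives -a cos x - b sin x, which is exactly -y, so y'' + y = 0 for any real a and b.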
## Citation

Please use the following citation if you reference the Math Homework Solver model:

### BibTeX Citation

```bibtex
@misc{paliwal2024,
  author = {Krishna Paliwal},
  title  = {Contributions to Math_Homework_Solver},
  year   = {2024},
  note   = {Email: [email protected]}
}
```

### APA Citation

```plaintext
Paliwal, Krishna (2024). Contributions to Math_Homework_Solver. Email: [email protected].
```