import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model using PyTorch
tokenizer = AutoTokenizer.from_pretrained("MohamedMotaz/Examination-llama-8b-4bit")
model = AutoModelForCausalLM.from_pretrained("MohamedMotaz/Examination-llama-8b-4bit", torch_dtype=torch.float16).to("cuda" if torch.cuda.is_available() else "cpu")
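# Note: the checkpoint name suggests 4-bit weights. If it is stored as a bitsandbytes
# 4-bit checkpoint, quantized loading may be needed instead of plain float16 (a hedged
# sketch, not confirmed by the repo; requires the bitsandbytes and accelerate packages):
#   from transformers import BitsAndBytesConfig
#   model = AutoModelForCausalLM.from_pretrained(
#       "MohamedMotaz/Examination-llama-8b-4bit",
#       quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#       device_map="auto",
#   )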

# App Title
st.title("Exam Corrector: Automated Grading with LLama 8b Model (PyTorch)")

# Instructions
st.markdown("""
### Instructions:
- Enter both the **Model Answer** and the **Student Answer**.
- Click on the **Grade Answer** button to get the grade and explanation.
""")

# Input fields for Model Answer and Student Answer
model_answer = st.text_area("Model Answer", "The process of photosynthesis involves converting light energy into chemical energy.")
student_answer = st.text_area("Student Answer", "Photosynthesis is when plants turn light into energy.")

# Display documentation in the app
with st.expander("Click to View Documentation"):
    st.markdown("""
    ## Exam-Corrector: A Fine-tuned LLama 8b Model
    
    Exam-corrector is a fine-tuned version of the LLama 8b model, specifically adapted to function as a written question corrector. This model grades student answers by comparing them against model answers using predefined instructions.
    
    ### Model Description:
    The model ensures consistent and fair grading for written answers. Full marks are given to student answers that convey the complete meaning of the model answer, even with different wording.
    
    ### Grading Instructions:
    - Model Answer is only used as a reference and does not receive marks.
    - Full marks are awarded when student answers convey the full meaning of the model answer.
    - Marks are deducted for incomplete or irrelevant information, so such answers receive only partial marks.
    
    ### Input Format:
    - **Model Answer**: {model_answer}
    - **Student Answer**: {student_answer}
    
    ### Output Format:
    - **Grade**: {grade} 
    - **Explanation**: {explanation}
    
    ### Training Details:
    - Fine-tuned with LoRA (Low-Rank Adaptation).
    - Percentage of trainable model parameters: 3.56%.
    """)

# Button to trigger grading
if st.button("Grade Answer"):
    # Combine inputs into the required prompt format
    inputs = f"Model Answer: {model_answer}\n\nStudent Answer: {student_answer}\n\nResponse:"
    
    # Tokenize the inputs using PyTorch tensors
    input_ids = tokenizer(inputs, return_tensors="pt").input_ids.to(model.device)
    
    # Generate the response using the model (PyTorch); max_new_tokens caps only the
    # generated text, so a long prompt does not eat into the generation budget
    with torch.no_grad():
        outputs = model.generate(input_ids, max_new_tokens=200)
    
    # Decode only the newly generated tokens so the prompt is not echoed in the output
    response = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True)

    # Display the grade and explanation
    st.subheader("Grading Results")
    st.write(response)

# Footer and app creator details
st.markdown("""
---
**App created by [Engr. Hamesh Raj](https://www.linkedin.com/in/hamesh-raj)**
""")