Socratic LLM

Using Large Language Models (LLMs) in education presents unique challenges. LLMs are typically designed to provide direct answers to questions, which can hinder students' critical thinking and self-discovery skills. To address this, we fine-tune LLMs to facilitate Socratic interactions: instead of giving straightforward answers, the model guides students to explore and find the answers themselves. We achieve this through Direct Preference Optimization (DPO). We test our approach on diverse datasets, including various educational materials and Socratic dialogues, and evaluate with advanced models such as GPT-4o. Our results show that DPO successfully fine-tunes LLMs for Socratic dialogue, enhancing their educational value.
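For reference, the sketch below illustrates the general DPO recipe using the trl library's DPOTrainer. It is not the paper's actual training pipeline (see the GitHub repository below for that); the base model, hyperparameters, and example preference pair are assumptions for illustration only.

from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Assumed base model for this fine-tune; adjust as needed.
base = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)

# DPO trains on preference pairs: for each prompt, a preferred ("chosen")
# Socratic reply and a dispreferred ("rejected") direct answer.
# This single pair is made up for illustration.
train_dataset = Dataset.from_dict({
    "prompt": ["Student: Why is the sky blue?"],
    "chosen": ["Good question! What happens to sunlight when it passes through air?"],
    "rejected": ["The sky is blue because air molecules scatter blue light the most."],
})

# beta controls how strongly the policy is kept close to the reference model.
args = DPOConfig(output_dir="phi3-socratic-dpo", beta=0.1)
trainer = DPOTrainer(
    model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer
)
trainer.train()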

This repository contains the source material for the paper "Fine Tuning a Large Language Model for Socratic Interactions" (KDD 2024, AI4EDU Workshop).

Check out the training pipeline on GitHub: socratic-llm.

You can also run the model with Ollama: eurecom-ds/phi-3-mini-4k-socratic.
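As a minimal sketch, assuming the ollama Python client is installed (pip install ollama) and the model has already been pulled locally:

import ollama

# Hypothetical one-turn exchange; the model should reply with a guiding
# question rather than a direct answer.
response = ollama.chat(
    model="eurecom-ds/phi-3-mini-4k-socratic",
    messages=[{"role": "user", "content": "Student: Why is the sky blue?"}],
)
print(response["message"]["content"])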

To learn more about the project, see Fine Tuning a Large Language Model for Socratic Interactions.

Prompt Format

See the Inference template (templates/inference.txt in the project's GitHub repository).

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import urllib.request
import torch

# Fetch the Socratic inference prompt template from the project repository.
with urllib.request.urlopen("https://raw.githubusercontent.com/GiovanniGatti/socratic-llm/kdd-2024/templates/inference.txt") as f:
    inference_prompt_template = f.read().decode("utf-8")

model = AutoModelForCausalLM.from_pretrained(
  "eurecom-ds/Phi-3-mini-4k-socratic",
  torch_dtype=torch.bfloat16,
  trust_remote_code=True,
  device_map="cuda",
)

tokenizer = AutoTokenizer.from_pretrained("eurecom-ds/Phi-3-mini-4k-socratic", trust_remote_code=True)

_input = "Student: Professor, why did Einstein say that God does not play dice?"

# Fill the student's message into the Socratic template, then wrap it in the
# model's chat format with a generation prompt appended.
content = inference_prompt_template.format(input=_input)
formatted = tokenizer.apply_chat_template(
   [{"role": "user", "content": content}], tokenize=False, add_generation_prompt=True
)
encoded_inputs = tokenizer([formatted], return_tensors="pt").to("cuda")

output = model.generate(**encoded_inputs, max_new_tokens=250)

# Decode only the newly generated tokens; slicing by token count is more robust
# than stripping the prompt from the decoded string.
prompt_length = encoded_inputs["input_ids"].shape[1]
response = tokenizer.decode(output[0][prompt_length:], skip_special_tokens=True).strip()

print(response)
# That's a profound question! How do you think Einstein's perspective on determinism and quantum
# mechanics might influence his views on the nature of the universe?
Model size: 3.82B params · Tensor type: BF16 · Format: Safetensors
