---
library_name: peft
base_model: codellama/CodeLlama-7b-hf
---
# Model Card for sql-code-llama-4bits
A 4-bit general-purpose text-to-SQL model based on `codellama/CodeLlama-7b-hf`.
It uses about 5,677 MiB of GPU memory when loaded.
## Model Details
### Model Description
Provide the CREATE statement of the target table(s) as context in your prompt, then ask a question about your database. The model outputs a SQL query that answers the question.
Data used for fine-tuning: https://huggingface.co/datasets/b-mc2/sql-create-context
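Each example in this dataset pairs a natural-language question with a CREATE statement and the target query. A minimal way to inspect it (the `question`/`context`/`answer` field names are taken from the dataset card and should be verified):
```python
from datasets import load_dataset

# Each row holds a question, the CREATE-statement context, and the gold SQL.
dataset = load_dataset("b-mc2/sql-create-context", split="train")
print(dataset[0])
```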
## Uses
This model can be coupled with a chat model such as llama2-chat to turn the query result into a natural-language answer, as in the sketch below.
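A minimal sketch of such a two-stage pipeline, assuming a SQLite database; `generate_sql` and `chat_answer` are hypothetical helpers wrapping this model and the chat model:
```python
import sqlite3

def answer_in_english(question: str, create_stmt: str, db_path: str) -> str:
    # Stage 1: this model turns the question + schema into a SQL query
    # (generate_sql is a hypothetical wrapper around the code in Direct Use).
    sql = generate_sql(question, create_stmt)
    # Stage 2: run the query, then let a chat model (e.g. llama2-chat, via
    # the hypothetical chat_answer helper) phrase the rows in English.
    rows = sqlite3.connect(db_path).execute(sql).fetchall()
    return chat_answer(question, rows)
```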
### Direct Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model in 4-bit precision; the PEFT adapter is applied on top of
# the CodeLlama base model automatically.
base_model = "GTimothee/sql-code-llama-4bits"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_4bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model.eval()

# Load the tokenizer of the base model
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

eval_prompt = """You are a powerful text-to-SQL model. Your job is to answer questions about a database. You are given a question and context regarding one or more tables.
You must output the SQL query that answers the question.
### Input:
Which Class has a Frequency MHz larger than 91.5, and a City of license of hyannis, nebraska?
### Context:
CREATE TABLE table_name_12 (class VARCHAR, frequency_mhz VARCHAR, city_of_license VARCHAR)
### Response:
"""

# Tokenize the prompt and generate the SQL query
model_input = tokenizer(eval_prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    print(tokenizer.decode(model.generate(**model_input, max_new_tokens=100)[0], skip_special_tokens=True))
```
Outputs:
```
### Response:
SELECT class FROM table_name_12 WHERE frequency_mhz > 91.5 AND city_of_license = "hyannis, nebraska"
```
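Since the decoded text echoes the full prompt, a simple post-processing step is to split on the `### Response:` marker to keep only the generated query (a sketch, reusing `model_input` from above):
```python
with torch.no_grad():
    output_ids = model.generate(**model_input, max_new_tokens=100)[0]

# The decoded text repeats the prompt, so keep only what follows the marker.
generated = tokenizer.decode(output_ids, skip_special_tokens=True)
sql_query = generated.split("### Response:")[-1].strip()
print(sql_query)
```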
## Bias, Risks, and Limitations
- Potential security issues under malicious use: if you blindly execute the SQL queries generated from end-user input, you could lose data, leak information, etc.
- The model may produce incorrect queries depending on how the prompt is written.
### Recommendations
- If end users interact with the model directly, check the generated SQL before executing it; see the sketch below for one way to constrain execution.
- The model works well on simple tables and simple queries. If possible, break a complex query into multiple simpler ones.
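To act on the first recommendation, one option is a crude guard that only runs single SELECT statements over a read-only connection. A sketch for SQLite (not a complete SQL sanitizer):
```python
import sqlite3

def run_readonly(sql: str, db_path: str):
    # Reject multi-statement input and anything that is not a SELECT.
    statement = sql.strip().rstrip(";")
    if ";" in statement or not statement.lower().startswith("select"):
        raise ValueError("refusing to run non-SELECT or multi-statement SQL")
    # Open the database read-only so writes fail even if the check is bypassed.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(statement).fetchall()
    finally:
        conn.close()
```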