---
license: llama2
inference:
  parameters:
    do_sample: false
    max_length: 200
widget:
  - text: >-
      CREATE TABLE stadium (
          stadium_id number,
          location text,
          name text,
          capacity number,
      )


      -- Using valid SQLite, answer the following questions for the tables
      provided above.


      -- how many stadiums in total?


      SELECT
    example_title: Number stadiums
  - text: >-
      CREATE TABLE work_orders ( ID NUMBER, CREATED_AT TEXT, COST FLOAT,
      INVOICE_AMOUNT FLOAT, IS_DUE BOOLEAN, IS_OPEN BOOLEAN, IS_OVERDUE BOOLEAN,
      COUNTRY_NAME TEXT, )


      -- Using valid SQLite, answer the following questions for the tables
      provided above.


      -- how many work orders are open?


      SELECT
    example_title: Open work orders
  - text: >-
      CREATE TABLE stadium ( stadium_id number, location text, name text,
      capacity number, highest number, lowest number, average number )


      CREATE TABLE singer ( singer_id number, name text, country text, song_name
      text, song_release_year text, age number, is_male others )


      CREATE TABLE concert ( concert_id number, concert_name text, theme text,
      stadium_id text, year text )


      CREATE TABLE singer_in_concert ( concert_id number, singer_id text )


      -- Using valid SQLite, answer the following questions for the tables
      provided above.


      -- What is the maximum, the average, and the minimum capacity of stadiums
      ?


      SELECT
    example_title: Stadium capacity
---

# NSQL-Llama-2-70B

## Model Description

NSQL is a family of autoregressive open-source large foundation models (FMs) designed specifically for SQL generation tasks.

In this repository we introduce a new member of the NSQL family, NSQL-Llama-2-70B. It is based on Meta's Llama-2 70B model, further pre-trained on a dataset of general SQL queries, and then fine-tuned on a dataset of text-to-SQL pairs.

## Basic Information

### Training Data

The general SQL queries are the SQL subset from The Stack, containing 1M training samples. The labeled text-to-SQL pairs come from the NSText2SQL dataset (https://huggingface.co/datasets/NumbersStation/NSText2SQL).

### Evaluation Data

We evaluate our models on three text-to-SQL benchmarks: Spider, Bird, and text2sql.

### Training Procedure

NSQL was trained using a cross-entropy loss to maximize the likelihood of sequential inputs. For fine-tuning on text-to-SQL pairs, we compute the loss only over the SQL portion of each pair. The model was trained on SambaNova's in-house Reconfigurable Dataflow Unit (RDU), leveraging data and model parallelism. We pre-trained for 2 epochs and fine-tuned for 10 epochs.
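The prompt-masking step described above can be sketched in plain PyTorch: labels for the prompt tokens are set to `-100` so `cross_entropy` ignores them and only the SQL completion contributes to the loss. The function name and the `prompt_len` argument are illustrative, not taken from the released training code.

```python
import torch
import torch.nn.functional as F

def masked_sql_loss(logits, input_ids, prompt_len):
    # Mask the prompt (schema + question) so only the SQL
    # portion of the sequence contributes to the loss.
    labels = input_ids.clone()
    labels[:, :prompt_len] = -100  # -100 is ignored by cross_entropy

    # Standard causal-LM shift: position t predicts token t+1.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-100,
    )

# Toy check: batch of 1, sequence of 6 tokens, vocabulary of 10.
logits = torch.randn(1, 6, 10)
input_ids = torch.randint(0, 10, (1, 6))
loss = masked_sql_loss(logits, input_ids, prompt_len=4)
```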

## Intended Use and Limitations

The model was designed for text-to-SQL generation from a given table schema and a natural-language question. It works best with the prompt format defined below and when generating SELECT queries.
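The prompt format can be assembled with a small helper that mirrors the widget examples above: the schema, the fixed SQLite instruction, the question as a SQL comment, and a trailing `SELECT` for the model to complete. The helper name is ours, for illustration only.

```python
def build_prompt(schema: str, question: str) -> str:
    """Assemble a prompt in the format shown in the widget examples."""
    return (
        f"{schema}\n\n"
        "-- Using valid SQLite, answer the following questions "
        "for the tables provided above.\n\n"
        f"-- {question}\n\n"
        "SELECT"
    )

prompt = build_prompt(
    "CREATE TABLE stadium (\n"
    "    stadium_id number,\n"
    "    location text,\n"
    "    name text,\n"
    "    capacity number,\n"
    ")",
    "how many stadiums in total?",
)
```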

## How to Use

Example 1:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/nsql-Llama-2-70B")
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/nsql-Llama-2-70B", torch_dtype=torch.bfloat16)
```