
Saul-Instruct-v1

Description

This repo contains GGUF format model files for Saul-Instruct-v1.
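
GGUF is the file format used by llama.cpp and compatible runtimes; each file bundles the quantized weights together with the tokenizer and model metadata. As a quick hedged illustration (not part of the original card), the sketch below inspects that metadata with the `gguf` Python package from the llama.cpp repository; the local file name is an assumption, matching one of the files listed below.

```python
# A minimal sketch, assuming the `gguf` package is installed (pip install gguf)
# and that one of the files from the table below has been downloaded locally.
from gguf import GGUFReader

reader = GGUFReader("saul-instruct-v1.Q4_K_M.gguf")  # hypothetical local path

# List the metadata keys stored in the file (architecture, context length, ...)
for key in reader.fields:
    print(key)

# Show the first few tensors with their shapes and quantization types
for tensor in reader.tensors[:5]:
    print(tensor.name, tensor.shape, tensor.tensor_type)
```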

Files Provided

| Name                         | Quant  | Bits | File Size | Remark                           |
| ---------------------------- | ------ | ---- | --------- | -------------------------------- |
| saul-instruct-v1.IQ3_S.gguf  | IQ3_S  | 3    | 3.18 GB   | 3.44 bpw quantization            |
| saul-instruct-v1.IQ3_M.gguf  | IQ3_M  | 3    | 3.28 GB   | 3.66 bpw quantization mix        |
| saul-instruct-v1.Q4_0.gguf   | Q4_0   | 4    | 4.11 GB   | 3.56G, +0.2166 ppl               |
| saul-instruct-v1.IQ4_NL.gguf | IQ4_NL | 4    | 4.16 GB   | 4.25 bpw non-linear quantization |
| saul-instruct-v1.Q4_K_M.gguf | Q4_K_M | 4    | 4.37 GB   | 3.80G, +0.0532 ppl               |
| saul-instruct-v1.Q5_K_M.gguf | Q5_K_M | 5    | 5.13 GB   | 4.45G, +0.0122 ppl               |
| saul-instruct-v1.Q6_K.gguf   | Q6_K   | 6    | 5.94 GB   | 5.15G, +0.0008 ppl               |
| saul-instruct-v1.Q8_0.gguf   | Q8_0   | 8    | 7.70 GB   | 6.70G, +0.0004 ppl               |
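
These files run on any GGUF-compatible runtime. As one hedged example (not from the original card), the sketch below fetches the Q4_K_M file from this repo with `huggingface_hub` and runs a chat completion via `llama-cpp-python`; the context size and GPU layer count are assumptions to tune for your hardware, and the query is illustrative.

```python
# A minimal sketch, assuming these packages are installed:
# pip install huggingface-hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantization from this repo (Q4_K_M is a common quality/size trade-off)
model_path = hf_hub_download(
    repo_id="koesn/Saul-Instruct-v1-GGUF",
    filename="saul-instruct-v1.Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # assumed context window; the model supports up to 32768
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

# llama-cpp-python applies the chat template embedded in the GGUF metadata
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the doctrine of consideration."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```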

Parameters

| path                    | type    | architecture       | rope_theta | sliding_window | max_pos_embed |
| ----------------------- | ------- | ------------------ | ---------- | -------------- | ------------- |
| Equall/Saul-Instruct-v1 | mistral | MistralForCausalLM | 10000      | 4096           | 32768         |
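
These values mirror what 🤗 Transformers reports for the source checkpoint. As a quick hedged check, you can read them back from the model config without downloading any weights:

```python
# A minimal sketch using transformers' AutoConfig (fetches only the config JSON)
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Equall/Saul-Instruct-v1")
print(config.model_type)               # "mistral"
print(config.rope_theta)               # 10000
print(config.sliding_window)           # 4096
print(config.max_position_embeddings)  # 32768
print(config.architectures)            # ["MistralForCausalLM"]
```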

Benchmarks

See original model card.

Original Model Card

Equall/Saul-Instruct-v1

This is the instruct model for Equall/Saul-Instruct-v1, a large instruct language model tailored for the legal domain. It was obtained by continued pretraining of Mistral-7B.

Check out our website and register: https://equall.ai/


Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: Equall.ai in collaboration with CentraleSupelec, Sorbonne Université, Instituto Superior Técnico and NOVA School of Law
  • Model type: 7B
  • Language(s) (NLP): English
  • License: MIT

Uses

You can use it for legal use cases that involve generation.

Here's how you can run the model using the pipeline() function from 🤗 Transformers:


```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="Equall/Saul-Instruct-v1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "[YOUR QUERY GOES HERE]"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
```
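
The call above uses greedy decoding (`do_sample=False`). If you prefer sampled outputs, the same pipeline accepts the standard generation parameters; the hyperparameter values below are illustrative assumptions, not from the original card.

```python
# Hedged variant: sampled decoding with illustrative hyperparameters
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # assumed; lower values give more deterministic output
    top_p=0.95,       # assumed nucleus-sampling cutoff
)
print(outputs[0]["generated_text"])
```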

Bias, Risks, and Limitations

This model is built on large language model (LLM) technology, which comes with inherent limitations. It may occasionally generate inaccurate or nonsensical outputs. Furthermore, as a 7B model, it is expected to be less robust than larger models, such as a 70B variant.

Citation

BibTeX:

```bibtex
@misc{colombo2024saullm7b,
      title={SaulLM-7B: A pioneering Large Language Model for Law},
      author={Pierre Colombo and Telmo Pessoa Pires and Malik Boudiaf and Dominic Culver and Rui Melo and Caio Corro and Andre F. T. Martins and Fabrizio Esposito and Vera Lúcia Raposo and Sofia Morgado and Michael Desa},
      year={2024},
      eprint={2403.03883},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```