|
--- |
|
license: mit |
|
language: |
|
- en |
|
library_name: transformers |
|
inference: false |
|
--- |
|
# dolly-v2-6.9b Model Card |
|
## Summary |
|
|
|
Databricks’ `dolly-v2-6.9b` is an instruction-following large language model trained on the Databricks machine learning platform

and licensed for commercial use. Based on `pythia-6.9b`, Dolly is fine-tuned on ~15k instruction/response records in

[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data), generated

by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,

information extraction, open QA and summarization. `dolly-v2-6.9b` is not a state-of-the-art model, but it exhibits surprisingly

high-quality instruction-following behavior not characteristic of the foundation model on which it is based.
|
|
|
**Owner**: Databricks, Inc. |
|
|
|
## Model Overview |
|
`dolly-v2-6.9b` is a 6.9 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from |
|
[EleutherAI’s](https://www.eleuther.ai/) [Pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) and fine-tuned |
|
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).
|
|
|
## Usage |
|
|
|
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. |
|
In a Databricks notebook you could run: |
|
|
|
``` |
|
%pip install accelerate>=0.12.0 transformers[torch]==4.25.1 |
|
``` |
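

Outside a Databricks notebook, the same dependencies can be installed from a shell; quoting each requirement keeps the shell from interpreting `>=` as a redirect:

```
pip install "accelerate>=0.12.0" "transformers[torch]==4.25.1"
```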
|
|
|
The instruction-following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`

found in the model repo [here](https://huggingface.co/databricks/dolly-v2-6.9b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.

Including `torch_dtype=torch.bfloat16` is generally recommended when the hardware supports this type: it reduces memory usage and does not appear to impact output quality.

It is also fine to omit it if there is sufficient memory.
|
|
|
``` |
|
import torch |
|
from transformers import pipeline |
|
|
|
generate_text = pipeline(model="databricks/dolly-v2-6.9b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") |
|
``` |
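

As a rough sizing guide, the weights alone occupy about 14 GB at bfloat16 precision (6.9 billion parameters × 2 bytes per parameter), so a single modern GPU with 24 GB of memory is generally sufficient for inference.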
|
|
|
You can then use the pipeline to answer instructions: |
|
|
|
``` |
|
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
|
``` |
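

The custom pipeline passes additional keyword arguments through to the underlying `generate` call, so decoding behavior can be tuned per request. A minimal sketch, assuming the pass-through behavior implemented in [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-6.9b/blob/main/instruct_pipeline.py); the specific values below are illustrative, not tuned recommendations:

```
# Generation kwargs are forwarded to model.generate() by the custom pipeline.
res = generate_text(
    "Write a short poem about data engineering.",
    max_new_tokens=256,
    do_sample=True,
    top_p=0.92,
)
print(res[0]["generated_text"])
```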
|
|
|
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-6.9b/blob/main/instruct_pipeline.py),

store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
|
|
|
``` |
|
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
|
|
|
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-6.9b", padding_side="left") |
|
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-6.9b", device_map="auto", torch_dtype=torch.bfloat16)
|
|
|
generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer) |
|
``` |
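

The manually constructed pipeline is then used exactly like the `trust_remote_code=True` version above:

```
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```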
|
|
|
|
|
## Known Limitations |
|
|
|
### Performance Limitations |
|
**`dolly-v2-6.9b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform |
|
competitively with more modern model architectures or models trained on larger pretraining corpuses.
|
|
|
The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. |
|
In particular, `dolly-v2-6.9b` struggles with syntactically complex prompts, programming problems, mathematical operations, factual errors,

dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.

Moreover, we find that `dolly-v2-6.9b` does not have some capabilities, such as well-formatted letter writing, present in the original model.
|
|
|
### Dataset Limitations |
|
Like all language models, `dolly-v2-6.9b` reflects the content and limitations of its training corpuses. |
|
|
|
- **The Pile**: Pythia’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
|
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly |
|
in the case it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit |
|
associations. |
|
|
|
- **`databricks-dolly-15k`**: The training data on which `dolly-v2-6.9b` is instruction tuned represents natural language instructions generated |
|
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages
|
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or |
|
personally identifying information about non-public figures, but it may contain typos and factual errors. |
|
The dataset may also reflect biases found in Wikipedia. Finally, the dataset likely reflects |
|
the interests and semantic choices of Databricks employees, a demographic which is not representative of the global population at large. |
|
|
|
Databricks is committed to ongoing research and development efforts to develop helpful, honest and harmless AI technologies that |
|
maximize the potential of all individuals and organizations. |
|
|
|
### Benchmark Metrics |
|
|
|
Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
|
model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-6.9b` is not state of the art, |
|
and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets,
|
but a robust statement as to the sources of these variations requires further study. |
|
|
|
TODO benchmarking |
|
|
|
# Happy Hacking! |