---
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
library_name: transformers
tags:
- dolly
- bloomz
- Spanish
datasets:
- dvilasuero/databricks-dolly-15k-es-deepl
inference: false
widget:
- text: >-
    Below is an instruction that describes a task, paired with an input that
    provides further context.

    Write a response that appropriately completes the request.

    ### Instruction:
    Tell me about alpacas

    ### Response:
language:
- es
---

<div style="text-align:center;width:250px;height:250px;">
<img src="https://huggingface.co/mrm8488/dolloom/resolve/main/dolloom_logo.png" alt="DOLLOOM logo">
</div>

# DOLLOOM: Dolly 🐑 + BLOOMz 🌸

## Adapter Description

This adapter was created with the [PEFT](https://github.com/huggingface/peft) library. It allows the base model **BigScience/BLOOMz 7B1** to be fine-tuned on the **Dolly dataset (translated into Spanish)** using the **LoRA** method.
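
For reference, a minimal sketch of how such a LoRA adapter is wired up with PEFT; the rank, scaling, dropout, and target modules below are illustrative assumptions, not the values used to train dolloom:

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Base model the adapter plugs into
base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-7b1-mt")

# Illustrative LoRA configuration (hyperparameters assumed, not confirmed)
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # low-rank update dimension
    lora_alpha=16,                       # scaling factor for the update
    lora_dropout=0.05,                   # dropout inside the LoRA layers
    target_modules=["query_key_value"],  # BLOOM's fused attention projection
)

# Only the small LoRA matrices are trainable; the 7B base stays frozen
model = get_peft_model(base, config)
model.print_trainable_parameters()
```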

## Model Description

An instruction-tuned version of BLOOMz, the BigScience Large Open-science Open-access Multilingual language model.

Base model: [BLOOMz 7B1 MT](https://huggingface.co/bigscience/bloomz-7b1-mt)

## Training data

The adapter was fine-tuned on [databricks-dolly-15k-es-deepl](https://huggingface.co/datasets/dvilasuero/databricks-dolly-15k-es-deepl), a Spanish translation (produced with DeepL) of Databricks' databricks-dolly-15k instruction-following dataset.
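
The dataset can be loaded directly from the Hub. A minimal sketch, assuming a `train` split and the column schema of the original databricks-dolly-15k release:

```python
from datasets import load_dataset

# Spanish (DeepL-translated) version of databricks-dolly-15k
ds = load_dataset("dvilasuero/databricks-dolly-15k-es-deepl", split="train")

# Columns are assumed to follow the original Dolly schema:
# instruction / context / response / category
print(ds.column_names)
print(ds[0])
```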

### Supported Tasks and Leaderboards

Instruction-following text generation in Spanish (the `text-generation` pipeline). Leaderboard results are TBA.

### Training procedure

The adapter was trained with the **LoRA** method via the [PEFT](https://github.com/huggingface/peft) library: the BLOOMz 7B1 weights stay frozen while small low-rank update matrices are learned on top of them. Exact hyperparameters are TBA.
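
A sketch of what that fine-tuning loop could look like with the 🤗 `Trainer`; the prompt template, split name, and every hyperparameter below are assumptions rather than the published recipe:

```python
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_id = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Wrap the frozen base model with an (assumed) LoRA configuration
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(base_id),
    LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
               target_modules=["query_key_value"]),
)

# Assumed Alpaca-style prompt formatting; the real training template is TBA
def tokenize(example):
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['response']}")
    return tokenizer(text, truncation=True, max_length=512)

ds = load_dataset("dvilasuero/databricks-dolly-15k-es-deepl", split="train")
ds = ds.map(tokenize, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dolloom-lora",
        per_device_train_batch_size=4,  # assumed
        num_train_epochs=3,             # assumed
        learning_rate=2e-4,             # common LoRA starting point, assumed
        fp16=True,
    ),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```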

## How to use

A full usage guide is TBA.
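
In the meantime, a minimal sketch of running the adapter for inference, assuming the standard PEFT loading flow and the Alpaca-style prompt shown in the widget above:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigscience/bloomz-7b1-mt"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Stack the dolloom LoRA weights on top of the frozen base model
model = PeftModel.from_pretrained(model, "mrm8488/dolloom")
model.eval()

# Alpaca-style prompt, as in the widget above (the exact template is TBA)
prompt = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context.\n\n"
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about alpacas\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Loading in `float16` with `device_map="auto"` requires the `accelerate` package; drop those two arguments to run on CPU.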

## Citation

```
@misc {manuel_romero_2023,
    author       = { {Manuel Romero} },
    title        = { dolloom (Revision 599b95a) },
    year         = 2023,
    url          = { https://huggingface.co/mrm8488/dolloom },
    doi          = { 10.57967/hf/0540 },
    publisher    = { Hugging Face }
}
```