---
datasets:
  - oscar-corpus/OSCAR-2301
  - wikipedia
  - bjoernp/tagesschau-2018-2023
language:
  - en
  - de
library_name: transformers
pipeline_tag: text-generation
---

# LAION LeoLM: Linguistically Enhanced Open Language Model

Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2. Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text. Thanks to a compute grant at HessianAI's new supercomputer 42, we release two foundation models trained with an 8k context length, LeoLM/leo-hessianai-7b and LeoLM/leo-hessianai-13b, under the Llama-2 community license (70b also coming soon! 👀). With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption. Read our blog post or our paper (preprint coming soon) for more details!

A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.

## Model Details

## Use in 🤗Transformers

First install direct dependencies:

```bash
pip install transformers torch sentencepiece
```

If you want faster inference using flash-attention2, you need to install these dependencies:

```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary
```
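As a quick sanity check (a minimal sketch, not from the original card), you can verify that the compiled extension imports cleanly before loading the model:

```python
# Hedged sanity check: confirm the flash-attn extension built and imports correctly.
import flash_attn

print(flash_attn.__version__)  # expected to print 2.1.1 for the pin above
```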

Then load the model in transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-hessianai-13b")
model = AutoModelForCausalLM.from_pretrained(
    "LeoLM/leo-hessianai-13b",  # the model ID is the first positional argument, not `model=`
    device_map="auto",          # place layers on available devices automatically
    torch_dtype=torch.float16,  # half precision to fit the 13B weights in GPU memory
    trust_remote_code=True      # True for flash-attn2 else False
)
```
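With the model and tokenizer loaded, here is a minimal generation sketch (the prompt and sampling settings are illustrative, not from the original card):

```python
# Hypothetical usage example: sample a German continuation from the base model.
prompt = "Die Hauptstadt von Deutschland ist"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```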

## Training parameters

*(training parameters figure; image not preserved)*

## Benchmarks

*(benchmarks figure; image not preserved)*

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|-------|
| Avg.                | 45.97 |
| ARC (25-shot)       | 57.25 |
| HellaSwag (10-shot) | 81.94 |
| MMLU (5-shot)       | 53.65 |
| TruthfulQA (0-shot) | 38.03 |
| Winogrande (5-shot) | 76.09 |
| GSM8K (5-shot)      | 8.95  |
| DROP (3-shot)       | 5.91  |