
Specify RLHF data for the Instruct and Chat versions in model card

#11
by markding - opened

The model card doesn't seem to offer details on how the Instruct and Chat versions were RLHF'd or instruction-tuned. This is what the release blog post says:

RedPajama-INCITE-Chat-7B-v0.1 is its chat counterpart trained over Dolly 2.0 and Open Assistant
RedPajama-INCITE-Instruct-7B-v0.1 is instruction tuned for few-shot applications. We follow the recipe for GPT-JT but eliminate all datasets that overlap with the HELM benchmark.

Perhaps add this to the model card? It would also be useful to specify exactly which datasets were included and excluded, to spare interested users the trouble of figuring out what the HELM benchmark includes and how it does or does not overlap with GPT-JT.