---
license: apache-2.0
datasets:
- Anthropic/hh-rlhf
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---

# Model Card for OpenBezoar-HH-RLHF-SFT

OpenBezoar-HH-RLHF-SFT is an LLM obtained by further instruction fine-tuning the [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT) model on a subset of [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).

## Model Details

- Base Model: [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT)
- Dataset used for SFT: the first 100K examples of the [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset (see the loading sketch at the end of this card)
- Epochs: 1

### Model Description

The primary purpose of performing SFT on [OpenBezoar-SFT](https://huggingface.co/SurgeGlobal/OpenBezoar-SFT) is to minimize the distribution shift before applying Direct Preference Optimization (DPO) for human-preference alignment. For more information, please refer to our paper.

### Model Sources

- **Repository:** [More Information Needed]
- **Paper:** [More Information Needed]

## Instruction Format

We follow the typical format for instruction-based prompt templates: a system prompt followed by the user prompt. Each begins with a prefix and ends with two newline characters, as shown below. It is important to use this template to obtain the best responses on instruction-following tasks. A minimal generation sketch using this template is given at the end of this card.

```
### System: {system}

### Instruction: {instruction}

### Response:
```

Note that **no** end-of-sequence (EOS) token is appended.

## Limitations

- The model may not consistently follow instructions, and it can respond inappropriately or get stuck in loops.
- This model is not aligned to human preferences, so it may generate harmful or uncensored content.
- Caution is urged against relying on this model for production or adjacent use cases.

## Citation

If you find our work useful, please cite our paper as follows:

```
[More Information Needed]
```
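
For reference, the 100K-example subset mentioned under Model Details can be obtained with the 🤗 `datasets` library as in the sketch below. This only shows how to load the raw subset; any preprocessing applied before SFT is described in the paper, not here, and is an assumption this sketch does not cover.

```python
# Minimal sketch: load the first 100K examples of Anthropic's HH-RLHF train split.
# Any further preprocessing used for SFT is not covered by this card.
from datasets import load_dataset

subset = load_dataset("Anthropic/hh-rlhf", split="train[:100000]")
print(subset)                      # ~100K rows with "chosen" and "rejected" columns
print(subset[0]["chosen"][:200])   # each field is a multi-turn Human/Assistant transcript
```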
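The following is a minimal generation sketch illustrating the instruction format above. It assumes the model is published at `SurgeGlobal/OpenBezoar-HH-RLHF-SFT` and loadable with 🤗 Transformers; the example system prompt, instruction, and generation settings are illustrative, not values taken from this card or the paper.

```python
# Illustrative sketch only: the repository id and generation settings below are
# assumptions, not specified by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SurgeGlobal/OpenBezoar-HH-RLHF-SFT"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt exactly as described above: each segment starts with a
# prefix and ends with two newline characters; no EOS token is appended.
prompt = (
    "### System: {system}\n\n"
    "### Instruction: {instruction}\n\n"
    "### Response:"
).format(
    system="You are a helpful assistant.",
    instruction="Explain supervised fine-tuning (SFT) in one sentence.",
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```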