---
license: wtfpl
language:
- en
- zh
- ja
- de
datasets:
- JosephusCheung/GuanacoDataset
- meta-math/MetaMathQA
- jondurbin/airoboros-3.1
- WizardLM/WizardLM_evol_instruct_V2_196k
- RyokoAI/ShareGPT52K
- RyokoAI/Fandom23K
- milashkaarshif/MoeGirlPedia_wikitext_raw_archive
- wikipedia
- wiki_lingua
- garage-bAInd/Open-Platypus
- LDJnr/Puffin
- BAAI/COIG
- TigerResearch/tigerbot-zhihu-zh-10k
- liwu/MNBVC
- teknium/openhermes
- CausalLM/Refined-Anime-Text
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
---
|
|
|
**Sorry, it's no longer available on Hugging Face. Please reach out to those who have already downloaded it. If you have a copy, please refrain from re-uploading it to Hugging Face.** |
|
|
|
**Due to repeated conflicts with HF and what we perceive as their repeated misuse of the "Contributor Covenant Code of Conduct," we have lost confidence in the platform and decided to temporarily suspend all new download access requests. It appears to us that HF's original intention has been abandoned in pursuit of commercialization, and they no longer prioritize the well-being of the community.** |
|
|
|
|
|
Demo: [![](https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg)](https://huggingface.co/spaces/JosephusCheung/CausalLM-35B-long-Q6K-GGUF) |
|
|
|
# 35b-beta-long |
|
|
|
This release, CausalLM/35b-beta-long, represents the culmination of our experience and accumulated training data in fine-tuning large language models. We are open-sourcing these weights to foster development within the open-source community. |
|
|
|
We chose Cohere's multilingual, long-context, 35B-parameter MHA model [CohereForAI/c4ai-command-r-v01] as our base. In our evaluation, it proved the most responsive to training-data quality throughout the Supervised Fine-Tuning process, outperforming other open-source LLMs. Although its initial SFT/RL focused on specific tasks and its weights carry a non-commercial license, we believe it is currently the best foundation for personal and internal use cases.
|
|
|
Utilizing extensive factual content from web crawls, we synthesized over 30 million multi-turn dialogue entries, each grounded in multiple web pages or documents. This process involved substantial human oversight and a data pipeline designed to ensure high quality. The model was then trained on this data at the full 128K context length in BF16 precision. We also incorporated widely used open-source dialogue datasets to enhance general conversational fluency.
|
|
|
Our data synthesis approach addressed crucial limitations in typical LLM training corpora. LLMs often struggle to extract thematic summaries, key information, or perform comparisons at the paragraph or document level. Therefore, we focused on generating fact-based data using multiple documents within a long context setting. This involved leveraging existing SOTA LLMs with human guidance to synthesize information through thematic summarization, information extraction, and comparison of source materials. |
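To make the shape of this pipeline concrete, here is a minimal, hypothetical sketch. The actual prompts, teacher models, and human-review tooling are not published; every name below (`build_prompt`, `synthesize`, `teacher.complete`) is an illustrative assumption, not our released code.

```python
from typing import List, Dict

# The three synthesis tasks named above.
TASKS = [
    "thematic summarization",
    "key information extraction",
    "comparison of the source materials",
]

def build_prompt(documents: List[str], task: str) -> str:
    """Pack several source documents into a single long-context prompt."""
    joined = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        f"{joined}\n\n"
        f"Using only the documents above, write a multi-turn dialogue that "
        f"performs {task}. Ground every statement in the source text."
    )

def synthesize(documents: List[str], teacher) -> List[Dict]:
    """One grounded dialogue per task; a human-review pass would follow.

    `teacher.complete` stands in for whatever SOTA LLM API is used; it is a
    hypothetical method, not a real library call.
    """
    return [
        {
            "task": task,
            "sources": documents,
            "dialogue": teacher.complete(build_prompt(documents, task)),
        }
        for task in TASKS
    ]
```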
|
|
|
This approach yielded significant improvements in model performance during fine-tuning. We observed reductions in hallucinations, enhanced long-context capabilities, and improvements in general abilities such as math, coding, and knowledge recall. The training process incorporated both the original source material and the synthesized outputs, further reinforcing the model's ability to recall and utilize abstract concepts embedded within the pre-training data. Our analysis revealed that this combination of original and synthesized data was crucial for achieving a more balanced performance profile. Intermediate checkpoints and models trained solely on synthesized data are also released for research purposes. |
|
|
|
Compared to the original task-specific model, our further fine-tuned model demonstrates more robust recall in long-context scenarios without requiring specific document formatting or prompt engineering. This fine-tuned model also exhibits performance comparable to models twice its size in quantifiable benchmarks. |
|
|
|
As this model has only undergone SFT, it may still exhibit biases or generate undesirable content. We implemented basic safety measures using open-source refusal datasets to mitigate outputs related to illegal activities, NSFW content, and violence. However, further Reinforcement Learning is necessary for robust alignment with human values. |
|
|
|
## Please note |
|
|
|
The tokenizer differs from Cohere's, and the chat template is **ChatML**.
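A minimal usage sketch with Hugging Face `transformers`, for those who already have a copy of the weights. The repository ID and generation settings below are illustrative assumptions; the bundled tokenizer supplies the ChatML template, so `apply_chat_template` emits `<|im_start|>`/`<|im_end|>` formatting rather than Cohere's native format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/35b-beta-long"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the key differences between the two reports above."},
]
# The ChatML template is applied here by the tokenizer.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```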
|
|
|
Pressure testing from: https://github.com/LeonEricsson/llmcontext
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/2XbONpyTeMH1qWCtE9ziH.png) |