
qCammel-70

qCammel-70 is a fine-tuned version of the Llama-2 70B model, trained on a distilled dataset of 15,000 instructions using QLoRA. The model is optimized for academic medical knowledge and instruction-following capabilities.
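
The following is a minimal sketch of loading the model for text generation with the Hugging Face transformers library; the prompt, dtype, and generation settings are illustrative assumptions rather than recommendations from the original card.

```python
# Minimal sketch: load augtoma/qCammel-70-x and generate text.
# Prompt and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "augtoma/qCammel-70-x"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 70B weights require multiple GPUs or offloading
    device_map="auto",
)

prompt = "Summarize the first-line treatment options for type 2 diabetes."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```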

Model Details

Note: Use of this model is governed by the Meta license. To download the model weights and tokenizer, please visit Meta's website and accept their license before downloading this model.

qCammel-70 was fine-tuned on a distilled dataset of 15,000 instructions using QLoRA, a parameter-efficient technique that trains low-rank adapters on top of a 4-bit quantized base model.
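
As a rough illustration, the snippet below sketches a typical QLoRA setup (4-bit quantized base model plus trainable low-rank adapters) using transformers, bitsandbytes, and peft; all hyperparameters shown are assumptions, not the values used to train qCammel-70.

```python
# Sketch of a QLoRA configuration in the general style used for models like
# this one. Hyperparameters are illustrative assumptions only.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights (QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",             # requires accepting Meta's license
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=16,                                    # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)    # only the LoRA adapters are trained
model.print_trainable_parameters()
```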

Variations The original Llama 2 comes in parameter sizes of 7B, 13B, and 70B; qCammel-70 is a fine-tuned version of the 70B model.

Input Models input text only.

Output Models generate text only.

Model Architecture qCammel-70 is based on the Llama 2 architecture, an auto-regressive language model that uses a decoder-only transformer architecture.
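
To illustrate what auto-regressive, decoder-only generation means in practice, the sketch below decodes one token at a time, feeding each new token back into the model; it assumes a model and tokenizer loaded as in the earlier example.

```python
# Greedy auto-regressive decoding: predict one token at a time, conditioning
# on everything generated so far. Assumes model/tokenizer are already loaded.
import torch

def greedy_decode(model, tokenizer, prompt, max_new_tokens=32):
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits              # [batch, seq_len, vocab]
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)     # append and re-feed
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```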

License A custom commercial license is available at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
