---
base_model:
- Locutusque/Hercules-3.1-Mistral-7B
- LeroyDyer/Mixtral_BaseModel
library_name: transformers
tags:
- mergekit
- merge
license: mit
language:
- en
metrics:
- bleu
- accuracy
pipeline_tag: text-generation
---

# Mixtral_instruct_7b

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method, which averages the weights of the source models.

### Models Merged

The following models were included in the merge:

* [Locutusque/Hercules-3.1-Mistral-7B](https://huggingface.co/Locutusque/Hercules-3.1-Mistral-7B)
* [LeroyDyer/Mixtral_BaseModel](https://huggingface.co/LeroyDyer/Mixtral_BaseModel)

### Configuration

The following YAML configuration was used to produce this model; to reproduce the merge, save it to a file and run it through mergekit's `mergekit-yaml` CLI:

```yaml
models:
  - model: LeroyDyer/Mixtral_BaseModel
    parameters:
      weight: 1.0
  - model: Locutusque/Hercules-3.1-Mistral-7B
    parameters:
      weight: 0.6
merge_method: linear
dtype: float16
```

### Usage (llama-index + llama.cpp)

The quantized GGUF build can be run locally through llama.cpp via llama-index:

```python
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-llama-cpp
%pip install llama-index

from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
    messages_to_prompt,
    completion_to_prompt,
)

model_url = "https://huggingface.co/LeroyDyer/Mixtral_BaseModel-gguf/resolve/main/mixtral_instruct_7b.q8_0.gguf"

llm = LlamaCPP(
    # pass the URL of a GGUF model to download it automatically
    model_url=model_url,
    # or set the path to a pre-downloaded model instead of model_url
    model_path=None,
    temperature=0.1,
    max_new_tokens=256,
    # set the context window below the model's maximum to leave
    # wiggle room for the prompt wrapper
    context_window=3900,
    # kwargs passed to __call__()
    generate_kwargs={},
    # kwargs passed to __init__();
    # set n_gpu_layers to at least 1 to offload to the GPU
    model_kwargs={"n_gpu_layers": 1},
    # transform inputs into the instruct prompt format
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=True,
)

prompt = input("Enter your prompt: ")
response = llm.complete(prompt)
print(response.text)
```

Works well!
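
### What the linear merge computes

For intuition, the linear method is just a weighted average of corresponding parameter tensors across the source checkpoints (a "model soup"). Below is a minimal sketch of that idea, assuming state dicts with identical keys and shapes; mergekit's real implementation additionally handles sharded weights, dtype casting, and tokenizer alignment, and `linear_merge` here is an illustrative helper, not a mergekit API.

```python
# Hedged sketch of a linear merge over plain PyTorch state dicts.
# This mirrors the idea behind mergekit's `linear` method, not its code.
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Weighted average of corresponding tensors across checkpoints."""
    if normalize:
        # mergekit's linear method normalizes weights by default
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(
            w * sd[key].to(torch.float32)  # accumulate in fp32 for stability
            for w, sd in zip(weights, state_dicts)
        ).to(torch.float16)  # matches `dtype: float16` in the config above
    return merged
```

With normalization, the configured weights 1.0 and 0.6 become 0.625 and 0.375, so each merged parameter is roughly 62.5% base model and 37.5% Hercules.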
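
### Loading with transformers

Since the card declares `library_name: transformers`, the full-precision merge should also load with the standard transformers API. A minimal sketch follows; the repo id `LeroyDyer/Mixtral_instruct_7b` is an assumption based on this card's title, so substitute the actual repository path, and note that `device_map="auto"` requires the `accelerate` package.

```python
# Hedged sketch: loading the merged model with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LeroyDyer/Mixtral_instruct_7b"  # assumed repo id; replace with the real one

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # keep the checkpoint's float16 dtype
    device_map="auto",    # requires `accelerate`
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```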