---
license: apache-2.0
inference: false
tags: [green, p7, llmware-chat, ov]
---

# intel-neural-chat-7b-v3-2-ov

**intel-neural-chat-7b-v3-2-ov** is an OpenVINO int4 quantized version of [**intel-neural-chat-7b-v3-2**](https://huggingface.co/intel/Neural-Chat-7b-v3-2), a leading chat-finetuned version of Mistral-7B from Intel, providing a very fast, very small inference implementation optimized for AI PCs using Intel GPU, CPU and NPU.

### Model Description

- **Developed by:** Intel
- **Quantized by:** llmware
- **Model type:** mistral-7b
- **Parameters:** 7 billion
- **Model Parent:** Intel/Neural-Chat-7B-V3-2
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Uses:** General purpose chat
- **RAG Benchmark Accuracy Score:** NA
- **Quantization:** int4

## Model Card Contact

[llmware on github](https://www.github.com/llmware-ai/llmware)

[llmware on hf](https://www.huggingface.co/llmware)

[llmware website](https://www.llmware.ai)
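
## Usage

A minimal sketch of loading this package for local chat inference through llmware's `ModelCatalog`. It assumes `pip install llmware` and that the model is published under the repo id `llmware/intel-neural-chat-7b-v3-2-ov` (verify the exact id on the Hugging Face page before use); the prompt text is illustrative.

```python
# Hedged sketch: local int4 OpenVINO inference via llmware's ModelCatalog.
# Assumes `pip install llmware` and an Intel CPU/GPU/NPU target (AI PC).

# Assumed repo id -- confirm against the Hugging Face model page.
MODEL_ID = "llmware/intel-neural-chat-7b-v3-2-ov"


def run_chat(prompt: str) -> str:
    """Load the quantized model and return a single chat completion."""
    # Deferred import so the sketch can be read without llmware installed.
    from llmware.models import ModelCatalog

    model = ModelCatalog().load_model(MODEL_ID)  # downloads on first use
    response = model.inference(prompt)
    return response["llm_response"]


if __name__ == "__main__":
    print(run_chat("What are the benefits of int4 quantization?"))
```

First load pulls the OpenVINO weights from the Hub; subsequent calls run fully locally on the selected Intel device.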