
Model Card for Sepsistral-7B-v1.0

Sepsistral-7B-v1.0 is a medical Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1, a pretrained generative text model with 7 billion parameters. Sepsistral was trained on more than 10,000 PubMed articles about sepsis. In our tests, the model outperforms Mistral-7B-v0.1 on medical data.

Advisory Notice: While Sepsistral is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against deploying Sepsistral in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings.

Uses

Sepsistral-7B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and to broaden access to an LLM for healthcare use focused on sepsis. Potential use cases include, but are not limited to:
- Medical exam question answering
- Supporting differential diagnosis
- Disease information queries (symptoms, causes, treatment)
- General health information queries about sepsis

Direct Use

This model can be used for question answering about sepsis, which is useful for experimentation and for understanding its capabilities. It should not be used directly in production or for work that may impact people.
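As a minimal usage sketch with the `transformers` library: the repository ID and the instruction-style prompt template below are assumptions for illustration, not the published interface, so check the model repository for the actual values.

```python
def build_prompt(question: str) -> str:
    # Instruction-style template; the exact format Sepsistral expects is an
    # assumption here -- check the repository's prompt/chat template.
    return f"### Question:\n{question}\n\n### Answer:\n"

def ask_sepsistral(question: str, repo_id: str = "Sepsistral-7B-v1.0") -> str:
    # repo_id is hypothetical: replace with the actual Hugging Face model ID.
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# Example (requires a GPU and downloading the 7B checkpoint):
# print(ask_sepsistral("What are the early clinical signs of sepsis?"))
```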

Training Details

Training Data

Sepsistral was trained on a question/answer/context dataset generated with GPT-3.5-turbo from more than 10,000 abstracts on sepsis retrieved from PubMed.
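To illustrate the data-generation step, the sketch below builds a chat-completion request body that asks GPT-3.5-turbo to turn one abstract into question/answer/context records. The function name, prompt wording, and output schema are illustrative assumptions, not the project's actual pipeline.

```python
def qa_generation_request(abstract: str, n_pairs: int = 3) -> dict:
    # Build a chat-completion request body for GPT-3.5-turbo asking it to
    # produce question/answer pairs grounded in the abstract (the "context").
    # Prompt wording and the JSON schema are illustrative assumptions.
    system = (
        "You are a medical data annotator. From the sepsis abstract provided, "
        "write question/answer pairs answerable from the text. Return JSON: "
        '[{"question": ..., "answer": ..., "context": ...}].'
    )
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user",
             "content": f"Generate {n_pairs} pairs.\n\nAbstract:\n{abstract}"},
        ],
        "temperature": 0.3,
    }

request = qa_generation_request("Sepsis is a life-threatening organ dysfunction ...")
```

The request body can then be sent with the OpenAI client (e.g. `client.chat.completions.create(**request)`) and the replies collected into the fine-tuning dataset.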

Training Procedure

We used the Axolotl project (https://github.com/OpenAccess-AI-Collective/axolotl) to train our model on an NVIDIA A100 (40 GB) GPU on the Modal serverless platform (https://modal.com).
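An Axolotl run is driven by a YAML config file. The fragment below is an illustrative sketch for fine-tuning Mistral-7B-v0.1 on a single 40 GB A100; the dataset path, adapter choice, and hyperparameters are assumptions, not the project's actual settings.

```yaml
base_model: mistralai/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

# QLoRA keeps the memory footprint within a single 40 GB A100 (assumed choice).
adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05

datasets:
  - path: sepsis_pubmed_qa.jsonl   # hypothetical dataset file
    type: alpaca                   # question/answer/context-style records

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
output_dir: ./sepsistral-7b-v1.0
```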

Model Card Authors

This project was conducted as a tutored project by DataScale master's students from the University of Versailles - Paris-Saclay University: Nicola Ferrara, Quentin Gruchet, Souha Samoouda, Amal Boushaba in collaboration with the HephIA start-up team (Kamel Mesbahi, Anthony Coutant). It was supervised by members from the DAVID lab/UVSQ/Paris Saclay University (Mustapha Lebbah) and the LIPN/USPN (Bilal Faye, Hanane Azzag).
