---
tags:
- instance-segmentation
- Vision Transformers
- CNN
pretty_name: XAMI-model
license: mit
datasets:
- iulia-elisa/XAMI-dataset
---

# XAMI-model: XMM-Newton optical Artefact Mapping for astronomical Instance segmentation

Check the **[XAMI model](https://github.com/ESA-Datalabs/XAMI-model)** repository and the **[XAMI dataset](https://huggingface.co/datasets/iulia-elisa/XAMI-dataset)** for exploring and visualising the XAMI dataset.

## Model Checkpoints

| Model Name | Link |
| :---: | :---: |
| YOLOv8 | [yolov8_segm](https://huggingface.co/iulia-elisa/XAMI/blob/main/yolo_weights/best.pt) |
| MobileSAM | [sam_vit](https://huggingface.co/iulia-elisa/XAMI/blob/main/sam_weights/sam_0_best.pth) |
| XAMI | [xami_model](https://huggingface.co/iulia-elisa/XAMI/blob/main/yolo_sam_final.pth) |

## 💫 Introduction

The code uses images from the XAMI dataset (available on [Github](https://github.com/ESA-Datalabs/XAMI-dataset) and [HuggingFace🤗](https://huggingface.co/datasets/iulia-elisa/XAMI-dataset)). The images represent observations from XMM-Newton's Optical Monitor (XMM-OM). Information about the XMM-OM can be found here:

- XMM-OM User's Handbook: https://www.mssl.ucl.ac.uk/www_xmm/ukos/onlines/uhb/XMM_UHB/node1.html
- Technical details: https://www.cosmos.esa.int/web/xmm-newton/technical-details-om
- The article: https://ui.adsabs.harvard.edu/abs/2001A%26A...365L..36M/abstract

## 📂 Cloning the repository

```bash
git clone https://github.com/ESA-Datalabs/XAMI-model.git
cd XAMI-model

# create the environment
conda env create -f environment.yaml
conda activate xami_model_env
```

## 📊 Downloading the dataset and model checkpoints from HuggingFace🤗

Check [dataset_and_model.ipynb](https://github.com/ESA-Datalabs/XAMI-model/blob/main/dataset_and_model.ipynb) for downloading the dataset and model weights (a minimal `huggingface_hub` download sketch is also given at the end of this README).

The dataset is split into train and validation sets and contains annotated artefacts in COCO format for instance segmentation. We use multilabel stratified K-fold (k=4) to balance class distributions across splits. We work with a single split version (out of four) but also provide the means to work with all four versions.

For more details on the dataset structure, please check [Datasets-Structure.md](https://github.com/ESA-Datalabs/XAMI-dataset/blob/main/Datasets-Structure.md).

We provide the following dataset formats: COCO format for instance segmentation (commonly used by [Detectron2](https://github.com/facebookresearch/detectron2) models) and YOLOv8-Seg format used by [ultralytics](https://github.com/ultralytics/ultralytics).

## 💡 Model Inference

After cloning the repository and setting up the environment, use the following Python code for model loading and inference:

```python
import sys

from inference.xami_inference import Xami

detr_checkpoint = './train/weights/yolo_weights/yolov8_detect_300e_best.pt'
sam_checkpoint = './train/weights/sam_weights/sam_0_best.pth'

# the SAM checkpoint and model_type (vit_h, vit_t, etc.) must be compatible
detr_sam_pipeline = Xami(
    device='cuda:0',
    detr_checkpoint=detr_checkpoint,  # YOLO(detr_checkpoint)
    sam_checkpoint=sam_checkpoint,
    model_type='vit_t',
    use_detr_masks=True)

# prediction example
masks = detr_sam_pipeline.run_predict(
    './example_images/S0743200101_V.jpg',
    yolo_conf=0.2,
    show_masks=True)
```

## 🚀 Training the model

Check the training [README.md](https://github.com/ESA-Datalabs/XAMI-model/blob/main/train/README.md).

## © Licence

This project is licensed under the [MIT license](LICENSE).
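
## 📥 Quick checkpoint download (optional)

If you prefer not to run the notebook, the checkpoints and dataset can also be fetched directly with `huggingface_hub`. This is a minimal sketch, not the notebook's exact code; it assumes the repository IDs and file paths shown in the checkpoint table and dataset links above.

```python
from huggingface_hub import hf_hub_download, snapshot_download

# model checkpoints (file paths taken from the checkpoint table above)
yolo_ckpt = hf_hub_download(repo_id="iulia-elisa/XAMI", filename="yolo_weights/best.pt")
sam_ckpt = hf_hub_download(repo_id="iulia-elisa/XAMI", filename="sam_weights/sam_0_best.pth")
xami_ckpt = hf_hub_download(repo_id="iulia-elisa/XAMI", filename="yolo_sam_final.pth")

# full dataset snapshot (COCO and YOLOv8-Seg formats)
dataset_dir = snapshot_download(repo_id="iulia-elisa/XAMI-dataset", repo_type="dataset")

print(yolo_ckpt, sam_ckpt, xami_ckpt, dataset_dir)
```

The downloaded checkpoint paths can then be passed as `detr_checkpoint` and `sam_checkpoint` in the inference example above.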