---
language: en
license: apache-2.0
---

# Model Card: LlavaOLMoBitnet1B

Multimodal Large Language Models (MM-LLMs) have seen significant advancements in the last year, demonstrating impressive performance across tasks. However, to truly democratize AI, models must exhibit strong capabilities while running efficiently on the small compute footprints accessible to most. As part of this quest, we introduce LLaVaOLMoBitnet1B - the first Ternary Multimodal LLM capable of accepting Image(s)+Text inputs and producing coherent textual responses. The model is fully open-sourced along with training scripts to encourage further research in this space. We also release a technical report highlighting the training process, evaluation details, challenges associated with ternary models, and future opportunities.
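Ternary here means weights constrained to {-1, 0, +1} plus a scale. As a rough illustration only (a sketch of BitNet b1.58-style absmean quantization, not this model's actual quantization code), the mapping from full-precision weights to ternary values looks like:

``` python
import numpy as np

def ternarize(w: np.ndarray, eps: float = 1e-5):
    """Sketch of absmean ternary quantization (BitNet b1.58 style).

    Scales the weight tensor by its mean absolute value, then rounds
    and clips every entry to the nearest value in {-1, 0, +1}.
    """
    gamma = np.abs(w).mean() + eps            # per-tensor scale
    w_ternary = np.clip(np.round(w / gamma), -1, 1)
    return w_ternary, gamma

w = np.array([[0.9, -0.04, 0.5], [-1.2, 0.02, 0.3]])
w_q, gamma = ternarize(w)
# every entry of w_q is in {-1, 0, +1}
```

Storing each weight in under two bits is what lets a 1B-parameter backbone fit a small compute footprint.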

Authors: Jainaveen Sundaram, Ravishankar Iyer 


### Training details and Evaluation
The model follows the two-step training pipeline outlined in the LLaVa1.5 paper: (1) a pre-training phase for feature alignment, followed by (2) end-to-end instruction fine-tuning.
The pre-training phase consists of 1 epoch on a filtered subset of 595K Conceptual Captions [2], with only the projection layer weights updated. Instruction fine-tuning uses 1 epoch of the LLaVa-Instruct-150K dataset, with both the projection layer and LLM weights updated.
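The two phases differ only in which parameters are trainable. A minimal PyTorch sketch (toy layer names, not the released training script) of the freezing scheme:

``` python
import torch
import torch.nn as nn

# Toy stand-ins for the real components (names are illustrative only).
vision_encoder = nn.Linear(32, 16)   # stands in for the vision tower
projector = nn.Linear(16, 8)         # maps vision features to the LLM embedding space
llm = nn.Linear(8, 8)                # stands in for the ternary LLM backbone

# Phase 1 (feature alignment): freeze everything except the projector.
for p in vision_encoder.parameters():
    p.requires_grad = False
for p in llm.parameters():
    p.requires_grad = False

trainable = [p for p in projector.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)

# Phase 2 (instruction fine-tuning): also unfreeze the LLM weights,
# while the vision encoder stays frozen throughout.
for p in llm.parameters():
    p.requires_grad = True
```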
For model evaluation, please refer to the linked technical report (coming soon!). 

### How to use
Start off by cloning the repository: 

``` shell
git clone https://huggingface.co/IntelLabs/LlavaOLMoBitnet1B
cd LlavaOLMoBitnet1B
```

Install all the requirements by following the instructions in requirements.txt.
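In a typical pip-based setup, this amounts to (check requirements.txt for any additional steps):

``` shell
pip install -r requirements.txt
```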

You are all set! Run inference by calling: 

``` shell
python llava_olmo.py 
```

To pass in your own query, modify the following lines within the llava_olmo.py file: 

``` python
# Define image and text inputs

text = "Be concise. What are the four major tournaments of the sport shown in the image?"

url = "https://farm3.staticflickr.com/2157/2439959136_d932f4e816_z.jpg"
```

## Model Sources
arXiv link for the technical report coming soon!

## Ethical Considerations

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See [Intel’s Global Human Rights Principles](https://www.intel.com/content/dam/www/central-libraries/us/en/documents/policy-human-rights.pdf). Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.

| Ethical Considerations | Description | 
| ----------- | ----------- | 
| Data | The model was trained using the LLaVA-v1.5 data mixture as described above.|
| Human life | The model is not intended to inform decisions central to human life or flourishing. | 
| Mitigations |  No additional risk mitigation strategies were considered during model development.  |
| Risks and harms | This model has not been assessed for harm or biases, and should not be used for sensitive applications where it may cause harm. |
| Use cases | - | 

## Citation

Coming soon 

## License

Apache-2.0