Update README.md
README.md
CHANGED
@@ -40,6 +40,20 @@ transformers.AutoModelForCausalLM.from_pretrained

This is the model card of a 🤗 transformers model that has been pushed on the Hub.

Previous vision models have been hit-or-miss, as a multimodal model requires a lot of memory, GPU power, and hard-drive space to create; past versions were attempts to merge the vision capabilities into the main Mistral model while still retaining its Mistral tag!

After reading many Hugging Face articles, it became clear that the backbone issue is the main obstacle when creating multimodal models:

With the advent of tiny models we can leverage the decoder's abilities as a single expert of sorts within the model, by reducing it to a fully trained tiny model. Such a decoder only produces decodings, not conversations, so it needs to respond with well-defined answers; in general it will produce captions, and as a domain-based model it may be specialized in medical imagery, art, etc.

The main LLM still needs to retain these components, hence the backbone method of instantiating a VisionEncoderDecoder model instead of a LLaVA model, which still needs wrangling to work correctly without spoiling the original transformers installation.

Previous experiments proved that the large Mistral model could be used as the decoder, but the total model then jumped to 13B; with the tiny model attached, the size is only increased by that model's 248M weights.
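
As a rough sketch of this backbone approach (not the exact build script for this repository), a VisionEncoderDecoder model can be assembled from a pretrained vision encoder and a small causal-LM decoder; the checkpoint names below are placeholders, and the decoder is assumed to accept the cross-attention wiring that `from_encoder_decoder_pretrained` adds:

```python
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

# Placeholder checkpoints -- substitute the actual encoder/decoder repos.
encoder_id = "google/vit-base-patch16-224-in21k"   # vision encoder (ViT)
decoder_id = "your-namespace/tiny-mistral"         # small causal-LM decoder (hypothetical)

# Build the backbone: the decoder is loaded with cross-attention over the
# encoder's image features (assuming the decoder architecture supports it).
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)

image_processor = AutoImageProcessor.from_pretrained(encoder_id)
tokenizer = AutoTokenizer.from_pretrained(decoder_id)

# Generation needs to know how captions start, stop, and pad.
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.pad_token_id = tokenizer.pad_token_id or tokenizer.eos_token_id
```

From there, `model.save_pretrained(...)` together with the image processor and tokenizer gives a checkpoint that can be used with the generation example already in this card.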

This is an experiment in vision - the model has been created as a mistral/VisionEncoder/Decoder

@@ -101,6 +115,17 @@ generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)

## Training Details

Currently the vision inputs are raw and untrained; they NEED to be trained, since the newly added tensors are presumably randomly initialized despite using pretrained starting blocks. The encoder and decoder modules are ready to be placed in train mode, and the main model (the LLM) will need LoRA/QLoRA/PEFT-style fine-tuning.
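
As a minimal sketch of that fine-tuning step (assuming `model` is the assembled VisionEncoderDecoder and that the decoder uses the usual Mistral/Llama projection names), LoRA adapters could be attached with the PEFT library before switching to train mode:

```python
from peft import LoraConfig, get_peft_model

# Hypothetical LoRA settings; the target module names follow the usual
# Mistral/Llama attention projections and are an assumption here.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# After wrapping, only the LoRA adapters (plus anything explicitly unfrozen,
# such as the newly initialized cross-attention weights) receive gradients.
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()

# Switch dropout etc. into training behaviour before fine-tuning.
peft_model.train()
```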

This model will stay in this state as a base training point, so later versions will be trained. The model is fully usable and still expected to score well.

The small tiny Mistral is also a great performer and a great building block for a smaller experts model (later) or any multimodal project; it is like a mini pretrained BERT/Llama (Mistral is a clone of Llama/Alpaca!).

```python