HugoLaurencon committed · Commit 8e1346e · Parent(s): 20f670a

Update README.md

README.md CHANGED
@@ -100,6 +100,6 @@ print(generated_text)
 
 # License
 
-The model is built on top of two pre-trained models: [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), which are delivered under an Apache
-The two pre-trained models are connected to each other with newly initialized parameters that we train. These are not based on any of the two base frozen models forming the composite model. We release the additional weights we trained under an Apache
+The model is built on top of two pre-trained models: [SigLIP](https://github.com/huggingface/transformers/pull/26522) and [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), which are delivered under an Apache-2.0 license. As such, users should comply with the licenses of these models.
+The two pre-trained models are connected to each other with newly initialized parameters that we train. These are not based on any of the two base frozen models forming the composite model. We release the additional weights we trained under an Apache-2.0 license.