jupyterjazz
committed on
Commit
ac0d180
Parent(s):
a0162c8
readme: minor changes
README.md
CHANGED
@@ -120,7 +120,7 @@ library_name: transformers
 
 ## Quick Start
 
-The easiest way to start using `jina-embeddings-v3` is
+The easiest way to start using `jina-embeddings-v3` is with the [Jina Embedding API](https://jina.ai/embeddings/).
 
 
 ## Intended Usage & Model Info
@@ -128,7 +128,7 @@ The easiest way to start using `jina-embeddings-v3` is Jina AI's [Embedding API]
 
 `jina-embeddings-v3` is a **multilingual multi-task text embedding model** designed for a variety of NLP applications.
 Based on the [XLM-RoBERTa architecture](https://huggingface.co/jinaai/xlm-roberta-flash-implementation),
-this model supports [Rotary Position Embeddings (RoPE)](https://arxiv.org/abs/2104.09864) to handle long sequences up to **8192 tokens**.
+this model supports [Rotary Position Embeddings (RoPE)](https://arxiv.org/abs/2104.09864) to handle long input sequences up to **8192 tokens**.
 Additionally, it features [LoRA](https://arxiv.org/abs/2106.09685) adapters to generate task-specific embeddings efficiently.
 
 ### Key Features:
@@ -201,7 +201,7 @@ embeddings = F.normalize(embeddings, p=2, dim=1)
 </p>
 </details>
 
-The easiest way to start using `jina-embeddings-v3` is
+The easiest way to start using `jina-embeddings-v3` is with the [Jina Embedding API](https://jina.ai/embeddings/).
 
 Alternatively, you can use `jina-embeddings-v3` directly via Transformers package:
 ```python
@@ -254,12 +254,7 @@ The latest version (#todo: specify version) of SentenceTransformers also support
 from sentence_transformers import SentenceTransformer
 
 model = SentenceTransformer(
-    "jinaai/jina-embeddings-v3",
-    prompts={
-        "retrieval.query": "Represent the query for retrieving evidence documents: ",
-        "retrieval.passage": "Represent the document for retrieval: ",
-    },
-    trust_remote_code=True
+    "jinaai/jina-embeddings-v3", trust_remote_code=True
 )
 
 embeddings = model.encode(['What is the weather like in Berlin today?'], task_type='retrieval.query')