path | content
---|---
README.md | # Running locally
## Prerequisites
### Required
- [Hugo](https://gohugo.io/getting-started/installing/)
### Needed only for development
- [Node.js](https://nodejs.org/en/download/)
- [npm](https://www.npmjs.com/get-npm)
- [sass](https://sass-lang.com/install)
## Run
```bash
cd qdrant-landing
hugo serve
```
Open http://localhost:1313/ in your browser.
### Run with drafts
If your changes are not shown on the site, check if your markdown file has `draft: true` in the header.
Drafts are not shown by default. To see drafts, run the following command:
```bash
cd qdrant-landing
hugo serve -D
```
## Build css from scss
If you are **going to change scss files**, you need to run the following commands in a separate terminal window.
Install sass if you don't have it:
```bash
npm install -g sass
```
Install dependencies and run sass watcher:
``` bash
cd qdrant-landing
npm install
sass --watch --style=compressed ./themes/qdrant/static/css/main.scss ./themes/qdrant/static/css/main.css
```
# Content Management
To add new content to the site, you need to add a markdown file to the corresponding directory. The file should have a header with metadata. See examples below.
Do not push changes to the `master` branch directly. Create a new branch and make a pull request.
If you want to make your changes live, you need to merge your pull request to the `master` branch. After that, the changes will be automatically deployed to the site.
## Main Page
### Customers/Partners Logos
To add a customer logo to the marquee on the main page:
1. Add a logo to the `/qdrant-landing/static/content/images/logos` directory. The logo should be in PNG format, have a transparent background, and be 200px wide. The color of the logo should be `#B6C0E4`.
2. Add a markdown file to the `content/stack` directory using the following command (replace `customer-name` with the name of the customer):
``` bash
cd qdrant-landing
hugo new --kind customer-logo stack/customer-name.md
```
Edit the file if needed.
3. If the total number of slides changed, update the `static/css/main.scss` file. Find the line:
```scss
@include marquee.base(80px, 200px, 13, 6, 20px, false, 50s);
```
and change 13 to the number of logos.
Rebuild css from scss (see instructions [above](#build-css-from-scss)).
4. To change the order of the logos, add or change the `weight` parameter in the markdown files in the `/qdrant-landing/content/stack` directory.
## Articles
### Metadata
Articles are written in markdown and stored in `content/articles` directory. Each article has a header with metadata:
```yaml
---
title: Here goes the title of the article #required
short_description: Short description of the article
description: This is a longer description of the article, you can get a little bit more wordy here. Try to keep it under 140 characters. #required
social_preview_image: /articles_data/cars-recognition/social_preview.jpg # This image will be used in social media previews, should be 1200x630px. Required.
small_preview_image: /articles_data/cars-recognition/icon.svg # This image will be used in the list of articles at the footer, should be 40x40px
preview_dir: /articles_data/cars-recognition/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required.
weight: 10 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list.
author: Yusuf Sarıgöz # Author of the article. Required.
author_link: https://medium.com/@yusufsarigoz # Link to the author's page. Required.
date: 2022-06-28T13:00:00+03:00 # Date of the article. Required.
draft: false # If true, the article will not be published
keywords: # Keywords for SEO
- vector databases comparative benchmark
- benchmark
- performance
- latency
---
```
### Preview image mechanism
The preview image for each page is selected from the following sources, in this order:
- If the page front matter sets the `social_preview_image` param, that image is used.
- Otherwise, if a file `static/<path-to-section>/<file-name>-social-preview.png` exists, it is used.
- Otherwise, the global `preview_image = "/images/social_preview.png"` is used.
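For illustration only, here is a minimal Python sketch of that fallback order; the real selection happens in the Hugo templates, and the function and parameter names below are hypothetical.
```python
import os

def resolve_preview_image(front_matter: dict, section_path: str, file_name: str) -> str:
    """Mirror the selection order described above (illustrative only)."""
    # 1. An explicit per-page social_preview_image param wins
    if front_matter.get("social_preview_image"):
        return front_matter["social_preview_image"]
    # 2. Otherwise, a page-specific file generated under static/
    candidate = f"static/{section_path}/{file_name}-social-preview.png"
    if os.path.exists(candidate):
        return candidate
    # 3. Fall back to the global default
    return "/images/social_preview.png"
```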
### Article preview
The article preview is a set of images used on the article card and page. They can be generated from a single source image. To generate preview images, you need [ImageMagick](https://imagemagick.org/index.php) and [cwebp](https://developers.google.com/speed/webp/download) installed.
You can install `cwebp` with the following command:
```bash
curl -s https://raw.githubusercontent.com/Intervox/node-webp/latest/bin/install_webp | sudo bash
```
#### Prepare preview image
For the preview, use an image with a 3:1 aspect ratio in JPG or PNG format, with a resolution of at least 1200x630px. The image should illustrate the article's core idea in some way. Feel free to get creative, but make sure the most important part of the image is in the center.
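If you need to bring a source image to the required aspect ratio first, a quick center crop can help. The following is a minimal sketch assuming [Pillow](https://python-pillow.org/) is installed; the filenames are placeholders.
```python
from PIL import Image  # pip install pillow

src = Image.open("my_preview_source.jpg")  # placeholder filename
width, height = src.size
target_height = width // 3  # 3:1 aspect ratio

if height >= target_height:
    # Image is too tall: keep the horizontal center band
    top = (height - target_height) // 2
    cropped = src.crop((0, top, width, top + target_height))
else:
    # Image is too wide: keep the vertical center band
    target_width = height * 3
    left = (width - target_width) // 2
    cropped = src.crop((left, 0, left + target_width, height))

cropped.save("my_preview.jpg", quality=95)
```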
#### Generating preview images
To generate preview images, run the following command from the root of project:
```bash
bash -x automation/process-article-img.sh <path-to-image> <alias-for-the-article>
```
For example:
```bash
bash -x automation/process-article-img.sh ~/Pictures/my_preview.jpg filtrable-hnsw
```
This command will create a `preview` directory in `static/article_data/filtrable-hnsw` and generate preview images in it. If the directory `static/article_data/filtrable-hnsw` doesn't exist, it will be created. If it exists, only files in its child `preview` directory will be affected; in that case, existing preview images will be overwritten. Your original image will not be affected.
#### Preview images set
The preview image set consists of the following images:
- `preview.jpg` - 530x145px (used on the article preview card **for browsers that do not support webp**)
- `preview.webp` - 530x145px (used on the article preview card **for browsers that support webp**)
- `title.jpg` - 898x300px (used on the article's page as the main image before the article title **for browsers that do not support webp**)
- `title.webp` - 898x300px (used on the article's page as the main image before the article title **for browsers that support webp**)
- `social_preview.jpg` - 1200x630px (used in social media previews)
## Documentation
### Metadata
Documentation pages are written in markdown and stored in `content/documentation` directory. Each page has a header with metadata:
```yaml
---
title: Here goes the title of the page #required
weight: 10 # This is the order of the page in the sidebar. The lower the number, the higher the page will be in the sidebar.
canonicalUrl: https://qdrant.io/documentation/ # Optional. This is the canonical url of the page.
hideInSidebar: true # Optional. If true, the page will not be shown in the sidebar. It can be used in regular documentation pages and in documentation section pages (_index.md).
---
```
### Preview images for documentation pages
Branded preview images for individual documentation pages can be auto-generated with the following command (run from the root of the project):
```bash
bash -x automation/generate-all-docs-preview.sh
```
It will automatically insert the documentation section name and page title into the preview.
If there is a custom background for the image, place it at `static/documentation/<section-name>/<page>-bg.png`.
<!-- (Use midjourney and one of the styles https://www.notion.so/qdrant/Midjourney-styles-a8dbc94761a74bb287a8a8ad05d593d1 to generate the background) -->
If there is no custom background, a random default background will be used.
Generated images are placed at `static/documentation/<section-name>/<page>-social-preview.png`.
To re-generate a preview image, remove the previously generated one and run the command again.
### Documentation sidebar
#### Delimiter
To create a delimiter in the sidebar, use the following command:
``` bash
cd qdrant-landing
hugo new --kind delimiter documentation/<delimiter-title>.md
```
It will create a file `content/documentation/<delimiter-title>.md`.
To put a delimiter to desired place in the sidebar, set the `weight` parameter to the desired value. The lower the value, the higher the delimiter will be in the sidebar.
#### External link
To create an external link in the sidebar, use the following command:
``` bash
cd qdrant-landing
hugo new --kind external-link documentation/<link-title>.md
```
It will create a file `content/documentation/<link-title>.md`. Open it and set the `external_link` parameter to the desired value.
#### Params
In addition to the standard Hugo front matter params, we have the following params:
```yaml
hideInSidebar: true
```
If `true`, the page will not be shown in the sidebar. It can be used in regular documentation pages and in documentation section pages (_index.md).
## Blog
To add a new blog post, run the following commands:
``` bash
cd qdrant-landing
hugo new --kind blog-post blog/<post-title>.md
```
You'll see a file named `content/blog/<post-title>.md`. Open it and edit the front matter.
### Images
Store images for blog posts in the following subdirectory: `static/blog/<post-title>`. You can add nested directories if needed. For social media previews, use images of at least 1200x600px.
In the blog post file, you'll see:
- `preview_image`: The image that appears with the blog post. If you want different images for social media, the blog post title, or the preview, use the following properties:
- `social_preview_image`
- `title_preview_image`
- `small_preview_image`
### Important notes
- Add tags. While they're not shown on the blog post page, they are used to display related posts.
- If a post has the `featured: true` property in its front matter, it will appear in the "Features and News" blog section. Only the last 4 featured posts are displayed in this section. Featured posts do not appear in the regular post list.
- If there are more than 4 `featured: true` posts (where `draft: false`), the oldest featured post disappears from /blog.
## Marketing Landing Pages
### Build styles
From the root of the project:
```bash
sass --watch --style=compressed ./qdrant-landing/themes/qdrant/static/css/pages/marketing-landing.scss ./qdrant-landing/themes/qdrant/static/css/marketing-landing.css
```
## SEO
### Structured data (Schema.org, JSON-LD)
Structured data is a standardized format for providing information about a page and classifying the page content. It is used by search engines to understand the content of the page and to display rich snippets in search results.
We use JSON-LD format for structured data. Data is stored in JSON files in the `/assets/schema` directory. If no specific schema is provided for a page, the default schema is used based on the page type as defined in the `qdrant-landing/themes/qdrant/layouts/partials/seo_schema.html` file.
To add specific schema to a specific page, use the `seo_schema` or `seo_schema_json` parameter in the front matter of content markdown files (directory `content`).
To add JSON directly to the page, use the `seo_schema` parameter. The value should be a JSON object.
Example:
```yaml
seo_schema: {
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Qdrant",
  "url": "https://qdrant.io",
  "logo": "https://qdrant.io/images/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/qdrant",
    "https://twitter.com/qdrant"
  ]
}
```
To add paths to JSON files with schema data, use the `seo_schema_json` parameter. This parameter should contain a list of paths to JSON files.
The path should be relative to the `qdrant-landing/assets` directory.
Example:
```yaml
seo_schema_json:
- schema/schema-organization.json
- schema/product-schema.json
```
If you want to add a new schema, create a new JSON file in the `qdrant-landing/assets/schema` directory and add the path to the `seo_schema_json` parameter.
When `seo_schema` and `seo_schema_json` are used together, `seo_schema` is applied in addition to `seo_schema_json`, adding a second `<script>` tag with the `seo_schema` value.
Use `seo_schema_json` if you want to reuse the same schema for multiple pages to avoid duplication and make it easier to maintain. |
qdrant-landing/GrammarLinter.md | # English grammar linter (Vale)
This repository includes `beta` rules based on the [Vale grammar linter](https://vale.sh). While the [installation instructions](https://vale.sh/docs/vale-cli/installation/#package-managers) cover Mac and Windows, I've installed Vale on Ubuntu Linux. Vale includes
installation binaries in one of their [Git repositories](https://github.com/errata-ai/vale/releases).
You can integrate [Vale as a plugin](https://vale.sh/docs/integrations/guide/) with
several different IDEs. This README illustrates integration between Vale and VSCode.
Vale pulls rules from YAML files in the `styles/` subdirectory. These include grammar rules in the following subdirectories:
- Modified rules from GitLab in the `styles/Qdrant/` subdirectory
- [Google Developer Style Guide](https://github.com/errata-ai/Google) rules, customized for Vale, in the `styles/Google` subdirectory
- Rules associated with the [write-good](https://github.com/btford/write-good) grammar linter
These rules are a "Work in Progress"; we may overrule/modify them as we use them to review Qdrant content. For example, if you find a common word / acronym that we use, you're
welcome to add it (with a PR) to our `styles/cobalt/spelling-exceptions.txt` file.
For more information, see the [Vale documentation](https://vale.sh/).
## Vale configuration
The Vale configuration file is `.vale.ini`. In this file, we see:
- The `StylesPath` points to rules in the `styles/` subdirectory.
- The `BasedOnStyles` parameter specifies style subdirectories.
- The `IgnoredScopes` parameter tells Vale to ignore content such as code samples, as described in the [Vale documentation](https://vale.sh/docs/topics/config/#ignoredscopes).
Tip: If you want Vale to ignore code, surround it with code sample marks such as:
- `Vale_ignores_this`
```
Vale also ignores this
```
## Use Vale in your IDE
You can set up Vale with several different IDEs. For more information, see the
[Integrations](https://vale.sh/docs/integrations/guide/) section of the Vale documentation.
For example, you can set up a Vale plugin with the VSCode IDE, per
https://github.com/chrischinchilla/vale-vscode.
If you have problems with Vale in VSCode, you may need to:
- Restart VSCode
- Disable / re-enable the Vale plugin
- Save changes to the Markdown file that you're analyzing
If you're successful, you'll see linting messages similar to what's shown in the following screenshot:
<p align="center">
<img src="static/VSCodeDemo.png">
</p>
## Use Vale at the command line
To review your content against the given style guide rules, first navigate to
the `qdrant-landing/` directory for this repository. Then run the following
command:
```
vale /path/to/your/filename.md
```
As long as you're in the `qdrant-landing/` directory, you can use Vale at the command line to lint Markdown files in any local directory.
## Potential future options
- Include Vale in CI/CD jobs
- Set up a GitHub action
- Apply Vale to articles and blog posts
- Guess: we need different rules. Default rules for documentation suggest:
- Use "second person"
- Avoid future tense
- Don't use exclamation points
- Avoid words like "easy" and "simple"
These rules generally do not apply to articles or blogs.
|
qdrant-landing/archetypes/blog-post.md | ---
title: "{{ replace .Name "-" " " | title }}"
draft: false
slug: {{ .Name }} # Change this slug to your page slug if needed
short_description: This is a blog post # Change this
description: This is a blog post # Change this
preview_image: /blog/Article-Image.png # Change this
# social_preview_image: /blog/Article-Image.png # Optional image used for link previews
# title_preview_image: /blog/Article-Image.png # Optional image used for blog post title
# small_preview_image: /blog/Article-Image.png # Optional image used for small preview in the list of blog posts
date: {{ .Date }}
author: John Doe # Change this
featured: false # if true, this post will be featured on the blog page
tags: # Change this; posts related by tags will be shown on the blog page
- news
- blog
weight: 0 # Change this weight to change order of posts
# For more guidance, see https://github.com/qdrant/landing_page?tab=readme-ov-file#blog
---
Here is your blog post content. You can use markdown syntax here.
# Header 1
## Header 2
### Header 3
#### Header 4
##### Header 5
###### Header 6
<aside role="alert">
You can add a note to your page using this aside block.
</aside>
<aside role="status">
This is a warning message.
</aside>
> This is a blockquote following a header.
Table:
| Header 1 | Header 2 | Header 3 | Header 4 |
| -------- | -------- | -------- | -------- |
| Cell 1 | Cell 2 | Cell 3 | Cell 4 |
| Cell 3 | Cell 4 | Cell 5 | Cell 6 |
- List item 1
  - Nested list item 1
  - Nested list item 2
- List item 2
- List item 3
1. Numbered list item 1
   1. Nested numbered list item 1
   2. Nested numbered list item 2
2. Numbered list item 2
3. Numbered list item 3
|
qdrant-landing/archetypes/default.md | ---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---
|
qdrant-landing/archetypes/delimiter.md | ---
#Delimiter files are used to separate the list of documentation pages into sections.
title: "{{ replace .Name "-" " " | title }}"
type: delimiter
weight: 0 # Change this weight to change order of sections
sitemapExclude: True
--- |
qdrant-landing/archetypes/external-link.md | ---
# External link template
title: "{{ replace .Name "-" " " | title }}"
type: external-link
external_url: https://github.com/qdrant/qdrant # Change this link to your external link
sitemapExclude: True
---
|
qdrant-landing/content/about-us/_index.md | ---
title: About Us
--- |
qdrant-landing/content/advanced-search/_index.md | ---
title: advanced-search
description: advanced-search
build:
  render: always
cascade:
- build:
    list: local
    publishResources: false
    render: never
---
|
qdrant-landing/content/advanced-search/advanced-search-features.md | ---
title: Search with Qdrant
description: Qdrant enhances search, offering semantic, similarity, multimodal, and hybrid search capabilities for accurate, user-centric results, serving applications in different industries like e-commerce to healthcare.
features:
- id: 0
  icon:
    src: /icons/outline/similarity-blue.svg
    alt: Similarity
  title: Semantic Search
  description: Qdrant optimizes similarity search, identifying the closest database items to any query vector for applications like recommendation systems, RAG and image retrieval, enhancing accuracy and user experience.
  link:
    text: Learn More
    url: /documentation/concepts/search/
- id: 1
  icon:
    src: /icons/outline/search-text-blue.svg
    alt: Search text
  title: Hybrid Search for Text
  description: By combining dense vector embeddings with sparse vectors e.g. BM25, Qdrant powers semantic search to deliver context-aware results, transcending traditional keyword search by understanding the deeper meaning of data.
  link:
    text: Learn More
    url: /documentation/tutorials/hybrid-search-fastembed/
- id: 2
  icon:
    src: /icons/outline/selection-blue.svg
    alt: Selection
  title: Multimodal Search
  description: Qdrant's capability extends to multi-modal search, indexing and retrieving various data forms (text, images, audio) once vectorized, facilitating a comprehensive search experience.
  link:
    text: View Tutorial
    url: /documentation/tutorials/aleph-alpha-search/
- id: 3
  icon:
    src: /icons/outline/filter-blue.svg
    alt: Filter
  title: Single Stage filtering that Works
  description: Qdrant enhances search speeds and control and context understanding through filtering on any nested entry in our payload. Unique architecture allows Qdrant to avoid expensive pre-filtering and post-filtering stages, making search faster and accurate.
  link:
    text: Learn More
    url: /articles/filtrable-hnsw/
sitemapExclude: true
---
|
qdrant-landing/content/advanced-search/advanced-search-hero.md | ---
title: Advanced Search
description: Dive into next-gen search capabilities with Qdrant, offering a smarter way to deliver precise and tailored content to users, enhancing interaction accuracy and depth.
startFree:
  text: Get Started
  url: https://cloud.qdrant.io/
learnMore:
  text: Contact Us
  url: /contact-us/
image:
  src: /img/vectors/vector-0.svg
  alt: Advanced search
sitemapExclude: true
---
|
qdrant-landing/content/advanced-search/advanced-search-use-cases.md | ---
title: Learn how to get started with Qdrant for your search use case
features:
- id: 0
  image:
    src: /img/advanced-search-use-cases/startup-semantic-search.svg
    alt: Startup Semantic Search
  title: Startup Semantic Search Demo
  description: The demo showcases semantic search for startup descriptions through SentenceTransformer and Qdrant, comparing neural search's accuracy with traditional searches for better content discovery.
  link:
    text: View Demo
    url: https://demo.qdrant.tech/
- id: 1
  image:
    src: /img/advanced-search-use-cases/multimodal-semantic-search.svg
    alt: Multimodal Semantic Search
  title: Multimodal Semantic Search with Aleph Alpha
  description: This tutorial shows you how to run a proper multimodal semantic search system with a few lines of code, without the need to annotate the data or train your networks.
  link:
    text: View Tutorial
    url: /documentation/examples/aleph-alpha-search/
- id: 2
  image:
    src: /img/advanced-search-use-cases/simple-neural-search.svg
    alt: Simple Neural Search
  title: Create a Simple Neural Search Service
  description: This tutorial shows you how to build and deploy your own neural search service.
  link:
    text: View Tutorial
    url: /documentation/tutorials/neural-search/
- id: 3
  image:
    src: /img/advanced-search-use-cases/image-classification.svg
    alt: Image Classification
  title: Image Classification with Qdrant Vector Semantic Search
  description: In this tutorial, you will learn how a semantic search engine for images can help diagnose different types of skin conditions.
  link:
    text: View Tutorial
    url: https://www.youtube.com/watch?v=sNFmN16AM1o
- id: 4
  image:
    src: /img/advanced-search-use-cases/semantic-search-101.svg
    alt: Semantic Search 101
  title: Semantic Search 101
  description: Build a semantic search engine for science fiction books in 5 mins.
  link:
    text: View Tutorial
    url: /documentation/tutorials/search-beginners/
- id: 5
  image:
    src: /img/advanced-search-use-cases/hybrid-search-service-fastembed.svg
    alt: Create a Hybrid Search Service with Fastembed
  title: Create a Hybrid Search Service with Fastembed
  description: This tutorial guides you through building and deploying your own hybrid search service using Fastembed.
  link:
    text: View Tutorial
    url: /documentation/tutorials/hybrid-search-fastembed/
sitemapExclude: true
---
|
qdrant-landing/content/articles/_index.md | ---
title: Qdrant Articles
page_title: Articles about Vector Search
description: Articles about vector search and similarity learning related topics. Latest updates on the Qdrant vector search engine.
section_title: Check out our latest publications
subtitle: Check out our latest publications
img: /articles_data/title-img.png
---
|
qdrant-landing/content/articles/binary-quantization-openai.md | ---
title: "Optimizing OpenAI Embeddings: Enhance Efficiency with Qdrant's Binary Quantization"
draft: false
slug: binary-quantization-openai
short_description: Use Qdrant's Binary Quantization to enhance OpenAI embeddings
description: Explore how Qdrant's Binary Quantization can significantly improve the efficiency and performance of OpenAI's Ada-003 embeddings. Learn best practices for real-time search applications.
preview_dir: /articles_data/binary-quantization-openai/preview
preview_image: /articles-data/binary-quantization-openai/Article-Image.png
small_preview_image: /articles_data/binary-quantization-openai/icon.svg
social_preview_image: /articles_data/binary-quantization-openai/preview/social-preview.png
title_preview_image: /articles_data/binary-quantization-openai/preview/preview.webp
date: 2024-02-21T13:12:08-08:00
author: Nirant Kasliwal
author_link: https://nirantk.com/about/
featured: false
tags:
- OpenAI
- binary quantization
- embeddings
weight: -130
aliases: [ /blog/binary-quantization-openai/ ]
---
OpenAI Ada-003 embeddings are a powerful tool for natural language processing (NLP). However, the size of the embeddings is a challenge, especially with real-time search and retrieval. In this article, we explore how you can use Qdrant's Binary Quantization to enhance the performance and efficiency of OpenAI embeddings.
In this post, we discuss:
- The significance of OpenAI embeddings and real-world challenges.
- Qdrant's Binary Quantization, and how it can improve the performance of OpenAI embeddings
- Results of an experiment that highlights improvements in search efficiency and accuracy
- Implications of these findings for real-world applications
- Best practices for leveraging Binary Quantization to enhance OpenAI embeddings
If you're new to Binary Quantization, consider reading our article which walks you through the concept and [how to use it with Qdrant](/articles/binary-quantization/)
You can also try out these techniques as described in [Binary Quantization OpenAI](https://github.com/qdrant/examples/blob/openai-3/binary-quantization-openai/README.md), which includes Jupyter notebooks.
## New OpenAI embeddings: performance and changes
As the technology of embedding models has advanced, demand has grown. Users are increasingly looking for powerful and efficient text-embedding models. OpenAI's Ada-003 embeddings offer state-of-the-art performance on a wide range of NLP tasks, including those noted in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) and [MIRACL](https://openai.com/blog/new-embedding-models-and-api-updates).
These models include multilingual support in over 100 languages. The transition from text-embedding-ada-002 to text-embedding-3-large has led to a significant jump in performance scores (from 31.4% to 54.9% on MIRACL).
#### Matryoshka representation learning
The new OpenAI models have been trained with a novel approach called "[Matryoshka Representation Learning](https://aniketrege.github.io/blog/2024/mrl/)". Developers can set up embeddings of different sizes (number of dimensions). In this post, we use the small and large variants. Developers can select embeddings that balance accuracy and size.
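As a rough illustration (not part of the benchmark code used in this article), requesting different embedding sizes looks like this with the OpenAI Python SDK; it assumes the `openai` package is installed and `OPENAI_API_KEY` is set in the environment.
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for dims in (512, 1024, 1536):
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input="Qdrant is a vector database.",
        dimensions=dims,  # Matryoshka-style truncation to the requested size
    )
    print(dims, len(response.data[0].embedding))
```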
Here, we show how the accuracy of binary quantization is quite good across different dimensions -- for both models.
## Enhanced performance and efficiency with binary quantization
By reducing storage needs, you can scale applications with lower costs. This addresses a critical challenge posed by the original embedding sizes. Binary Quantization also speeds up the search process. It simplifies the complex distance calculations between vectors into more manageable bitwise operations, which supports potentially real-time searches across vast datasets.
The accompanying graph illustrates the promising accuracy levels achievable with binary quantization across different model sizes, showcasing its practicality without severely compromising on performance. This dual advantage of storage reduction and accelerated search capabilities underscores the transformative potential of Binary Quantization in deploying OpenAI embeddings more effectively across various real-world applications.
![](/blog/openai/Accuracy_Models.png)
The efficiency gains from Binary Quantization are as follows:
- Reduced storage footprint: It helps with large-scale datasets. It also saves on memory, and scales up to 30x at the same cost.
- Enhanced speed of data retrieval: Smaller data sizes generally lead to faster searches.
- Accelerated search process: It simplifies distance calculations between vectors into bitwise operations. This enables real-time querying even in extensive databases.
### Experiment setup: OpenAI embeddings in focus
To identify Binary Quantization's impact on search efficiency and accuracy, we designed our experiment on OpenAI text-embedding models. These models, which capture nuanced linguistic features and semantic relationships, are the backbone of our analysis. We then delve deep into the potential enhancements offered by Qdrant's Binary Quantization feature.
This approach not only leverages the high-caliber OpenAI embeddings but also provides a broad basis for evaluating the search mechanism under scrutiny.
#### Dataset
The research employs 100K random samples from the [OpenAI 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) dataset, focusing on 100 randomly selected records. These records serve as queries in the experiment, aiming to assess how Binary Quantization influences search efficiency and precision within the dataset. We then use the embeddings of the queries to search for the nearest neighbors in the dataset.
#### Parameters: oversampling, rescoring, and search limits
For each record, we run a parameter sweep over the number of oversampling, rescoring, and search limits. We can then understand the impact of these parameters on search accuracy and efficiency. Our experiment was designed to assess the impact of Binary Quantization under various conditions, based on the following parameters:
- **Oversampling**: By oversampling, we can limit the loss of information inherent in quantization. This also helps to preserve the semantic richness of your OpenAI embeddings. We experimented with different oversampling factors, and identified the impact on the accuracy and efficiency of search. Spoiler: higher oversampling factors tend to improve the accuracy of searches. However, they usually require more computational resources.
- **Rescoring**: Rescoring refines the first results of an initial binary search. This process leverages the original high-dimensional vectors to refine the search results, **always** improving accuracy. We toggled rescoring on and off to measure effectiveness, when combined with Binary Quantization. We also measured the impact on search performance.
- **Search Limits**: We specify the number of results from the search process. We experimented with various search limits to measure their impact on accuracy and efficiency. We explored the trade-offs between search depth and performance. The results provide insight for applications with different precision and speed requirements.
Through this detailed setup, our experiment sought to shed light on the nuanced interplay between Binary Quantization and the high-quality embeddings produced by OpenAI's models. By meticulously adjusting and observing the outcomes under different conditions, we aimed to uncover actionable insights that could empower users to harness the full potential of Qdrant in combination with OpenAI's embeddings, regardless of their specific application needs.
### Results: binary quantization's impact on OpenAI embeddings
To analyze the impact of rescoring (`True` or `False`), we compared results across different model configurations and search limits. Rescoring sets up a more precise search, based on results from an initial query.
#### Rescoring
![Graph that measures the impact of rescoring](/blog/openai/Rescoring_Impact.png)
Here are some key observations about the impact of rescoring (`True` or `False`):
1. **Significantly Improved Accuracy**:
- Across all models and dimension configurations, enabling rescoring (`True`) consistently results in higher accuracy scores compared to when rescoring is disabled (`False`).
- The improvement in accuracy holds across various search limits (10, 20, 50, 100).
2. **Model and Dimension Specific Observations**:
- For the `text-embedding-3-large` model with 3072 dimensions, rescoring boosts the accuracy from an average of about 76-77% without rescoring to 97-99% with rescoring, depending on the search limit and oversampling rate.
- The accuracy improvement with increased oversampling is more pronounced when rescoring is enabled, indicating a better utilization of the additional binary codes in refining search results.
- With the `text-embedding-3-small` model at 512 dimensions, accuracy increases from around 53-55% without rescoring to 71-91% with rescoring, highlighting the significant impact of rescoring, especially at lower dimensions.
In contrast, for lower dimension models (such as text-embedding-3-small with 512 dimensions), the incremental accuracy gains from increased oversampling levels are less significant, even with rescoring enabled. This suggests a diminishing return on accuracy improvement with higher oversampling in lower dimension spaces.
3. **Influence of Search Limit**:
- The performance gain from rescoring seems to be relatively stable across different search limits, suggesting that rescoring consistently enhances accuracy regardless of the number of top results considered.
In summary, enabling rescoring dramatically improves search accuracy across all tested configurations. It is a crucial feature for applications where precision is paramount. The consistent performance boost provided by rescoring underscores its value in refining search results, particularly when working with complex, high-dimensional data like OpenAI embeddings. This enhancement is critical for applications that demand high accuracy, such as semantic search, content discovery, and recommendation systems, where the quality of search results directly impacts user experience and satisfaction.
### Dataset combinations
For those exploring the integration of text embedding models with Qdrant, it's crucial to consider various model configurations for optimal performance. The dataset combinations defined below illustrate different configurations to test against Qdrant. These combinations vary by two primary attributes:
1. **Model Name**: Signifying the specific text embedding model variant, such as "text-embedding-3-large" or "text-embedding-3-small". This distinction correlates with the model's capacity, with "large" models offering more detailed embeddings at the cost of increased computational resources.
2. **Dimensions**: This refers to the size of the vector embeddings produced by the model. Options range from 512 to 3072 dimensions. Higher dimensions could lead to more precise embeddings but might also increase the search time and memory usage in Qdrant.
Optimizing these parameters is a balancing act between search accuracy and resource efficiency. Testing across these combinations allows users to identify the configuration that best meets their specific needs, considering the trade-offs between computational resources and the quality of search results.
```python
dataset_combinations = [
    {
        "model_name": "text-embedding-3-large",
        "dimensions": 3072,
    },
    {
        "model_name": "text-embedding-3-large",
        "dimensions": 1024,
    },
    {
        "model_name": "text-embedding-3-large",
        "dimensions": 1536,
    },
    {
        "model_name": "text-embedding-3-small",
        "dimensions": 512,
    },
    {
        "model_name": "text-embedding-3-small",
        "dimensions": 1024,
    },
    {
        "model_name": "text-embedding-3-small",
        "dimensions": 1536,
    },
]
```
#### Exploring dataset combinations and their impacts on model performance
The code snippet iterates through predefined dataset and model combinations. For each combination, characterized by the model name and its dimensions, the corresponding experiment's results are loaded. These results, which are stored in JSON format, include performance metrics like accuracy under different configurations: with and without oversampling, and with and without a rescore step.
Following the extraction of these metrics, the code computes the average accuracy across different settings, excluding extreme cases of very low limits (specifically, limits of 1 and 5). This computation groups the results by oversampling, rescore presence, and limit, before calculating the mean accuracy for each subgroup.
After gathering and processing this data, the average accuracies are organized into a pivot table. This table is indexed by the limit (the number of top results considered), and columns are formed based on combinations of oversampling and rescoring.
```python
import pandas as pd

for combination in dataset_combinations:
    model_name = combination["model_name"]
    dimensions = combination["dimensions"]
    print(f"Model: {model_name}, dimensions: {dimensions}")
    results = pd.read_json(f"../results/results-{model_name}-{dimensions}.json", lines=True)
    average_accuracy = results[results["limit"] != 1]
    average_accuracy = average_accuracy[average_accuracy["limit"] != 5]
    average_accuracy = average_accuracy.groupby(["oversampling", "rescore", "limit"])[
        "accuracy"
    ].mean()
    average_accuracy = average_accuracy.reset_index()
    acc = average_accuracy.pivot(
        index="limit", columns=["oversampling", "rescore"], values="accuracy"
    )
    print(acc)
```
Here is a selected slice of these results, with `rescore=True`:
|Method|Dimensionality|Test Dataset|Recall|Oversampling|
|-|-|-|-|-|
|OpenAI text-embedding-3-large (highest MTEB score from the table) |3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|
#### Impact of oversampling
You can use oversampling in machine learning to counteract imbalances in datasets.
It works well when one class significantly outnumbers others. This imbalance
can skew the performance of models, which favors the majority class at the
expense of others. By creating additional samples from the minority classes,
oversampling helps equalize the representation of classes in the training dataset, thus enabling more fair and accurate modeling of real-world scenarios.
The screenshot showcases the effect of oversampling on model performance metrics. While the actual metrics aren't shown, we expect to see improvements in measures such as precision, recall, or F1-score. These improvements illustrate the effectiveness of oversampling in creating a more balanced dataset. It allows the model to learn a better representation of all classes, not just the dominant one.
Without an explicit code snippet or output, we focus on the role of oversampling in model fairness and performance. Through graphical representation, you can set up before-and-after comparisons. These comparisons illustrate the contribution to machine learning projects.
![Measuring the impact of oversampling](/blog/openai/Oversampling_Impact.png)
### Leveraging binary quantization: best practices
We recommend the following best practices for leveraging Binary Quantization to enhance OpenAI embeddings:
1. Embedding Model: Use the text-embedding-3-large from MTEB. It is the most accurate among those tested.
2. Dimensions: Use the highest dimension available for the model to maximize accuracy. The results hold for English and other languages.
3. Oversampling: Use an oversampling factor of 3 for the best balance between accuracy and efficiency. This factor is suitable for a wide range of applications.
4. Rescoring: Enable rescoring to improve the accuracy of search results.
5. RAM: Store the full vectors and payload on disk. Limit what you load from memory to the binary quantization index. This helps reduce the memory footprint and improve the overall efficiency of the system. The incremental latency from the disk read is negligible compared to the latency savings from the binary scoring in Qdrant, which uses SIMD instructions where possible.
## What's next?
Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or by having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud hosted service.
The article gives examples of data sets and configurations you can use to get going. Our documentation covers [adding large datasets](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/).
Want to discuss these findings and learn more about Binary Quantization? [Join our Discord community.](https://discord.gg/qdrant) |
qdrant-landing/content/articles/binary-quantization.md | ---
title: "Binary Quantization - Vector Search, 40x Faster "
short_description: "Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance"
description: "Binary Quantization is a newly introduced mechanism of reducing the memory footprint and increasing performance"
social_preview_image: /articles_data/binary-quantization/social_preview.png
small_preview_image: /articles_data/binary-quantization/binary-quantization-icon.svg
preview_dir: /articles_data/binary-quantization/preview
weight: -40
author: Nirant Kasliwal
author_link: https://nirantk.com/about/
date: 2023-09-18T13:00:00+03:00
draft: false
keywords:
- vector search
- binary quantization
- memory optimization
---
# Optimizing High-Dimensional Vectors with Binary Quantization
Qdrant is built to handle typical scaling challenges: high throughput, low latency and efficient indexing. **Binary quantization (BQ)** is our latest attempt to give our customers the edge they need to scale efficiently. This feature is particularly excellent for collections with large vector lengths and a large number of points.
Our results are dramatic: Using BQ will reduce your memory consumption and improve retrieval speeds by up to 40x.
As is the case with other quantization methods, these benefits come at the cost of recall degradation. However, our implementation lets you balance the tradeoff between speed and recall accuracy at time of search, rather than time of index creation.
The rest of this article will cover:
1. The importance of binary quantization
2. Basic implementation using our Python client
3. Benchmark analysis and usage recommendations
## What is Binary Quantization?
Binary quantization (BQ) converts any vector embedding of floating point numbers into a vector of binary or boolean values. This feature is an extension of our past work on [scalar quantization](/articles/scalar-quantization/) where we convert `float32` to `uint8` and then leverage a specific SIMD CPU instruction to perform fast vector comparison.
![What is binary quantization](/articles_data/binary-quantization/bq-2.png)
**This binarization function is how we convert a range to binary values. All numbers greater than zero are marked as 1. If it's zero or less, they become 0.**
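As a toy illustration of this rule (not Qdrant's internal implementation), here is the binarization step expressed with NumPy:
```python
import numpy as np

embedding = np.array([0.12, -0.53, 0.0, 0.87, -0.01], dtype=np.float32)
binary = (embedding > 0).astype(np.uint8)  # > 0 becomes 1, <= 0 becomes 0
print(binary)  # [1 0 0 1 0]
```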
The benefit of reducing the vector embeddings to binary values is that boolean operations are very fast and need significantly less CPU instructions. In exchange for reducing our 32 bit embeddings to 1 bit embeddings we can see up to a 40x retrieval speed up gain!
One of the reasons vector search still works with such a high compression rate is that these large vectors are over-parameterized for retrieval. This is because they are designed for ranking, clustering, and similar use cases, which typically need more information encoded in the vector.
For example, the 1536-dimension OpenAI embedding is worse than open source counterparts of 384 dimensions at retrieval and ranking. Specifically, it scores 49.25 on the same [Embedding Retrieval Benchmark](https://huggingface.co/spaces/mteb/leaderboard) where the open source `bge-small` scores 51.82. This 2.57-point difference adds up quickly.
Our implementation of quantization achieves a good balance between full, large vectors at ranking time and binary vectors at search and retrieval time. It also has the ability for you to adjust this balance depending on your use case.
## Faster search and retrieval
Unlike product quantization, binary quantization does not rely on reducing the search space for each probe. Instead, we build a binary index that helps us achieve large increases in search speed.
![Speed by quantization method](/articles_data/binary-quantization/bq-3.png)
HNSW is an approximate nearest neighbor search, which means our accuracy improves up to a point of diminishing returns as we check the index for more similar candidates. In the context of binary quantization, this is referred to as the **oversampling rate**.
For example, if `oversampling=2.0` and `limit=100`, then 200 vectors will first be selected using the quantized index. For those 200 vectors, the full 32-bit vectors will then be used with the HNSW index to produce a much more accurate 100-item result set. As opposed to doing a full HNSW search, we oversample a preliminary search and then only do the full search on this much smaller set of vectors.
## Improved storage efficiency
The following diagram shows the binarization function, whereby we reduce 32-bit storage to 1-bit information.
Text embeddings can have over 1024 elements of 32-bit floating point numbers. For example, remember that OpenAI embeddings are 1536-element vectors. This means each vector needs 6 KB just to store the vector.
![Improved storage efficiency](/articles_data/binary-quantization/bq-4.png)
In addition to storing the vector, we also need to maintain an index for faster search and retrieval. Qdrant’s formula to estimate overall memory consumption is:
`memory_size = 1.5 * number_of_vectors * vector_dimension * 4 bytes`
For 100K OpenAI Embedding (`ada-002`) vectors we would need 900 Megabytes of RAM and disk space. This consumption can start to add up rapidly as you create multiple collections or add more items to the database.
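As a quick sanity check of that formula (a back-of-the-envelope calculation, not a benchmark):
```python
number_of_vectors = 100_000
vector_dimension = 1536  # OpenAI ada-002

memory_bytes = 1.5 * number_of_vectors * vector_dimension * 4
print(f"{memory_bytes / 1024 ** 2:.0f} MiB")  # ~879 MiB, in line with the ~900 MB figure above
```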
**With binary quantization, those same 100K OpenAI vectors only require 128 MB of RAM.** We benchmarked this result using methods similar to those covered in our [Scalar Quantization memory estimation](/articles/scalar-quantization/#benchmarks).
This reduction in RAM needed is achieved through the compression that happens in the binary conversion. Instead of putting the HNSW index for the full vectors into RAM, we just put the binary vectors into RAM, use them for the initial oversampled search, and then use the HNSW full index of the oversampled results for the final precise search. All of this happens under the hoods without any intervention needed on your part.
### When should you not use BQ?
Since this method exploits the over-parameterization of embedding, you can expect poorer results for small embeddings i.e. less than 1024 dimensions. With the smaller number of elements, there is not enough information maintained in the binary vector to achieve good results.
You will still get faster boolean operations and reduced RAM usage, but the accuracy degradation might be too high.
## Sample implementation
Now that we have introduced you to binary quantization, let's try out a basic implementation. In this example, we will be using OpenAI and Cohere with Qdrant.
#### Create a collection with Binary Quantization enabled
Here is what you should do at indexing time when you create the collection:
1. We store all the "full" vectors on disk.
2. Then we set the binary embeddings to be in RAM.
By default, both the full vectors and BQ get stored in RAM. We move the full vectors to disk because this saves us memory and allows us to store more vectors in RAM. Then, we explicitly keep the binary vectors in memory by setting `always_ram=True`.
```python
from qdrant_client import QdrantClient, models

# Connect to our Qdrant server
client = QdrantClient(
    url="http://localhost:6333",
    prefer_grpc=True,
)

# Create the collection to hold our embeddings
# on_disk=True and the quantization_config are the areas to focus on
collection_name = "binary-quantization"
client.recreate_collection(
    collection_name=f"{collection_name}",
    vectors_config=models.VectorParams(
        size=1536,
        distance=models.Distance.DOT,
        on_disk=True,
    ),
    optimizers_config=models.OptimizersConfigDiff(
        default_segment_number=5,
        indexing_threshold=0,
    ),
    quantization_config=models.BinaryQuantization(
        binary=models.BinaryQuantizationConfig(always_ram=True),
    ),
)
```
#### What is happening in the OptimizerConfig?
We're setting `indexing_threshold` to 0, i.e. disabling indexing. This allows faster uploads of vectors and payloads. We will turn it back on below, once all the data is loaded.
#### Next, we upload our vectors to this collection and then enable indexing:
```python
batch_size = 10000
client.upload_collection(
    collection_name=collection_name,
    ids=range(len(dataset)),
    vectors=dataset["openai"],
    payload=[
        {"text": x} for x in dataset["text"]
    ],
    batch_size=batch_size,
    parallel=10,  # based on the machine
)
```
Enable indexing again:
```python
client.update_collection(
    collection_name=f"{collection_name}",
    optimizer_config=models.OptimizersConfigDiff(
        indexing_threshold=20000
    )
)
)
```
#### Configure the search parameters:
When setting search parameters, we specify that we want to use `oversampling` and `rescore`. Here is an example snippet:
```python
client.search(
    collection_name="{collection_name}",
    query_vector=[0.2, 0.1, 0.9, 0.7, ...],
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            ignore=False,
            rescore=True,
            oversampling=2.0,
        )
    )
)
```
After Qdrant pulls the oversampled vector set, the full vectors (which will be, say, 1536 dimensions for OpenAI) are then pulled up from disk. Qdrant computes the nearest neighbors with the query vector and returns the accurate, rescored order. This method produces much more accurate results. We enabled this by setting `rescore=True`.
These two parameters are how you are going to balance speed versus accuracy. The larger the size of your oversample, the more items you need to read from disk and the more elements you have to search with the relatively slower full vector index. On the other hand, doing this will produce more accurate results.
If you have lower accuracy requirements, you can even try a small oversample without rescoring. Or, depending on your dataset and your accuracy versus speed requirements, you can just search the binary index with no rescoring, i.e. leaving those two parameters out of the search query.
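For instance, a minimal sketch of such a faster, binary-only search (reusing the placeholder style from the snippet above, with rescoring explicitly disabled) might look like this:
```python
# Faster, less accurate: search the quantized index only, no rescoring step.
client.search(
    collection_name="{collection_name}",
    query_vector=[0.2, 0.1, 0.9, 0.7, ...],
    search_params=models.SearchParams(
        quantization=models.QuantizationSearchParams(
            ignore=False,
            rescore=False,  # skip the full-vector rescoring
        )
    )
)
```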
## Benchmark results
We retrieved some early results on the relationship between limit and oversampling using the DBpedia OpenAI 1M vector dataset. We ran all these experiments on a Qdrant instance where 100K vectors were indexed and used 100 random queries.
We varied the 3 parameters that will affect query time and accuracy: limit, rescore and oversampling. We offer these as an initial exploration of this new feature. You are highly encouraged to reproduce these experiments with your data sets.
> Aside: Since this is a new innovation in vector databases, we are keen to hear feedback and results. [Join our Discord server](https://discord.gg/Qy6HCJK9Dc) for further discussion!
**Oversampling:**
In the figure below, we illustrate the relationship between recall and number of candidates:
![Correct vs candidates](/articles_data/binary-quantization/bq-5.png)
We see that "correct" results i.e. recall increases as the number of potential "candidates" increase (limit x oversampling). To highlight the impact of changing the `limit`, different limit values are broken apart into different curves. For example, we see that the lowest recall for limit 50 is around 94 correct, with 100 candidates. This also implies we used an oversampling of 2.0
As oversampling increases, we see a general improvement in results – but that does not hold in every case.
**Rescore:**
As expected, rescoring increases the time it takes to return a query.
We also repeated the experiment with oversampling except this time we looked at how rescore impacted result accuracy.
![Relationship between limit and rescore on correct](/articles_data/binary-quantization/bq-7.png)
**Limit:**
We experiment with limits from Top 1 to Top 50 and we are able to get to 100% recall at limit 50, with rescore=True, in an index with 100K vectors.
## Recommendations
Quantization gives you the option to make tradeoffs against other parameters:
- Dimension count/embedding size
- Throughput and Latency requirements
- Recall requirements
If you're working with OpenAI or Cohere embeddings, we recommend the following oversampling settings:
|Method|Dimensionality|Test Dataset|Recall|Oversampling|
|-|-|-|-|-|
|OpenAI text-embedding-3-large|3072|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-3072-1M) | 0.9966|3x|
|OpenAI text-embedding-3-small|1536|[DBpedia 100K](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-small-1536-100K)| 0.9847|3x|
|OpenAI text-embedding-3-large|1536|[DBpedia 1M](https://huggingface.co/datasets/Qdrant/dbpedia-entities-openai3-text-embedding-3-large-1536-1M)| 0.9826|3x|
|Cohere AI embed-english-v2.0|4096|[Wikipedia](https://huggingface.co/datasets/nreimers/wikipedia-22-12-large/tree/main) 1M|0.98|2x|
|OpenAI text-embedding-ada-002|1536|[DbPedia 1M](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M) |0.98|4x|
|Gemini|768|No Open Data| 0.9563|3x|
|Mistral Embed|768|No Open Data| 0.9445 |3x|
If you determine that binary quantization is appropriate for your datasets and queries then we suggest the following:
- Binary Quantization with always_ram=True
- Vectors stored on disk
- Oversampling=2.0 (or more)
- Rescore=True
## What's next?
Binary quantization is exceptional if you need to work with large volumes of data under high recall expectations. You can try this feature either by spinning up a [Qdrant container image](https://hub.docker.com/r/qdrant/qdrant) locally or by having us create one for you through a [free account](https://cloud.qdrant.io/login) in our cloud hosted service.
The article gives examples of data sets and configurations you can use to get going. Our documentation covers [adding large datasets](/documentation/tutorials/bulk-upload/) to your Qdrant instance as well as [more quantization methods](/documentation/guides/quantization/).
If you have any feedback, drop us a note on Twitter or LinkedIn to tell us about your results. [Join our lively Discord Server](https://discord.gg/Qy6HCJK9Dc) if you want to discuss BQ with like-minded people!
|
qdrant-landing/content/articles/cars-recognition.md | ---
title: Fine Tuning Similar Cars Search
short_description: "How to use similarity learning to search for similar cars"
description: Learn how to train a similarity model that can retrieve similar car images in novel categories.
social_preview_image: /articles_data/cars-recognition/preview/social_preview.jpg
small_preview_image: /articles_data/cars-recognition/icon.svg
preview_dir: /articles_data/cars-recognition/preview
weight: 10
author: Yusuf Sarıgöz
author_link: https://medium.com/@yusufsarigoz
date: 2022-06-28T13:00:00+03:00
draft: false
# aliases: [ /articles/cars-recognition/ ]
---
Supervised classification is one of the most widely used training objectives in machine learning,
but not every task can be defined as such. For example,
1. Your classes may change quickly —e.g., new classes may be added over time,
2. You may not have samples from every possible category,
3. It may be impossible to enumerate all the possible classes during the training time,
4. You may have an essentially different task, e.g., search or retrieval.
All such problems may be efficiently solved with similarity learning.
N.B.: If you are new to the similarity learning concept, check out the [awesome-metric-learning](https://github.com/qdrant/awesome-metric-learning) repo for great resources and use case examples.
However, similarity learning comes with its own difficulties such as:
1. Need for larger batch sizes usually,
2. More sophisticated loss functions,
3. Changing architectures between training and inference.
Quaterion is a fine tuning framework built to tackle such problems in similarity learning.
It uses [PyTorch Lightning](https://www.pytorchlightning.ai/)
as a backend, which is advertised with the motto, "spend more time on research, less on engineering."
This is also true for Quaterion, and it includes:
1. Trainable and servable model classes,
2. Annotated built-in loss functions, and a wrapper over [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) when you need even more,
3. Sample, dataset and data loader classes to make it easier to work with similarity learning data,
4. A caching mechanism for faster iterations and less memory footprint.
## A closer look at Quaterion
Let's break down some important modules:
- `TrainableModel`: A subclass of `pl.LightningModule` that has additional hook methods such as `configure_encoders`, `configure_head`, `configure_metrics` and others
to define objects needed for training and evaluation —see below to learn more on these.
- `SimilarityModel`: An inference-only export method to boost code transfer and lower dependencies during the inference time.
In fact, Quaterion is composed of two packages:
1. `quaterion_models`: package that you need for inference.
2. `quaterion`: package that defines objects needed for training and also depends on `quaterion_models`.
- `Encoder` and `EncoderHead`: Two objects that form a `SimilarityModel`.
In most cases, you may use a frozen pretrained encoder, e.g., ResNets from `torchvision` or language models
from `transformers`, with a trainable `EncoderHead` stacked on top of it.
`quaterion_models` offers several ready-to-use `EncoderHead` implementations,
but you may also create your own by subclassing a parent class or easily listing PyTorch modules in a `SequentialHead`.
Quaterion has other objects such as distance functions, evaluation metrics, evaluators, convenient dataset and data loader classes, but these are mostly self-explanatory.
Thus, they will not be explained in detail in this article for brevity.
However, you can always go check out the [documentation](https://quaterion.qdrant.tech) to learn more about them.
The focus of this tutorial is a step-by-step solution to a similarity learning problem with Quaterion.
This will also help us better understand how the aforementioned objects fit together in a real project.
Let's start walking through some of the important parts of the code.
If you are looking for the complete source code instead, you can find it under the [examples](https://github.com/qdrant/quaterion/tree/master/examples/cars)
directory in the Quaterion repo.
## Dataset
In this tutorial, we will use the [Stanford Cars](https://pytorch.org/vision/main/generated/torchvision.datasets.StanfordCars.html)
dataset.
{{< figure src=https://storage.googleapis.com/quaterion/docs/class_montage.jpg caption="Stanford Cars Dataset" >}}
It has 16,185 images of cars from 196 classes,
and it is split into training and testing subsets with an almost 50-50 split.
To make things even more interesting, however, we will first merge the training and testing subsets,
then we will split them into two again in such a way that half of the 196 classes are put into the training set and the other half into the testing set.
This will let us test our model with samples from novel classes that it has never seen in the training phase,
which is what supervised classification cannot achieve but similarity learning can.
In the following code borrowed from [`data.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/data.py):
- `get_datasets()` function performs the splitting task described above.
- `get_dataloaders()` function creates `GroupSimilarityDataLoader` instances from training and testing datasets.
- Datasets are regular PyTorch datasets that emit `SimilarityGroupSample` instances.
N.B.: Currently, Quaterion has two data types to represent samples in a dataset. To learn more about `SimilarityPairSample`, check out the [NLP tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html).
```python
import numpy as np
import os
import tqdm
from torch.utils.data import Dataset, Subset
from torchvision import datasets, transforms
from typing import Callable
from pytorch_lightning import seed_everything
from quaterion.dataset import (
GroupSimilarityDataLoader,
SimilarityGroupSample,
)
# set seed to deterministically sample train and test categories later on
seed_everything(seed=42)
# dataset will be downloaded to this directory under local directory
dataset_path = os.path.join(".", "torchvision", "datasets")
def get_datasets(input_size: int):
# Use Mean and std values for the ImageNet dataset as the base model was pretrained on it.
# taken from https://www.geeksforgeeks.org/how-to-normalize-images-in-pytorch/
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
# create train and test transforms
transform = transforms.Compose(
[
transforms.Resize((input_size, input_size)),
transforms.ToTensor(),
transforms.Normalize(mean, std),
]
)
# we need to merge train and test splits into a full dataset first,
# and then we will split it to two subsets again with each one composed of distinct labels.
full_dataset = datasets.StanfordCars(
root=dataset_path, split="train", download=True
) + datasets.StanfordCars(root=dataset_path, split="test", download=True)
# full_dataset contains examples from 196 categories labeled with an integer from 0 to 195
# randomly sample half of it to be used for training
train_categories = np.random.choice(a=196, size=196 // 2, replace=False)
# get a list of labels for all samples in the dataset
labels_list = np.array([label for _, label in tqdm.tqdm(full_dataset)])
# get a mask for indices where label is included in train_categories
labels_mask = np.isin(labels_list, train_categories)
# get a list of indices to be used as train samples
train_indices = np.argwhere(labels_mask).squeeze()
# others will be used as test samples
test_indices = np.argwhere(np.logical_not(labels_mask)).squeeze()
# now that we have distinct indices for train and test sets, we can use `Subset` to create new datasets
# from `full_dataset`, which contain only the samples at given indices.
# finally, we apply transformations created above.
train_dataset = CarsDataset(
Subset(full_dataset, train_indices), transform=transform
)
test_dataset = CarsDataset(
Subset(full_dataset, test_indices), transform=transform
)
return train_dataset, test_dataset
def get_dataloaders(
batch_size: int,
input_size: int,
shuffle: bool = False,
):
train_dataset, test_dataset = get_datasets(input_size)
train_dataloader = GroupSimilarityDataLoader(
train_dataset, batch_size=batch_size, shuffle=shuffle
)
test_dataloader = GroupSimilarityDataLoader(
test_dataset, batch_size=batch_size, shuffle=False
)
return train_dataloader, test_dataloader
class CarsDataset(Dataset):
def __init__(self, dataset: Dataset, transform: Callable):
self._dataset = dataset
self._transform = transform
def __len__(self) -> int:
return len(self._dataset)
def __getitem__(self, index) -> SimilarityGroupSample:
image, label = self._dataset[index]
image = self._transform(image)
return SimilarityGroupSample(obj=image, group=label)
```
## Trainable Model
Now it's time to review one of the most exciting building blocks of Quaterion: [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#module-quaterion.train.trainable_model).
It is the base class for models you would like to configure for training,
and it provides several hook methods starting with `configure_` to set up every aspect of the training phase
just like [`pl.LightningModule`](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.core.LightningModule.html), its own base class.
It is central to fine tuning with Quaterion, so we will break down this essential code in [`models.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py)
and review each method separately. Let's begin with the imports:
```python
import torch
import torchvision
from quaterion_models.encoders import Encoder
from quaterion_models.heads import EncoderHead, SkipConnectionHead
from torch import nn
from typing import Dict, Union, Optional, List
from quaterion import TrainableModel
from quaterion.eval.attached_metric import AttachedMetric
from quaterion.eval.group import RetrievalRPrecision
from quaterion.loss import SimilarityLoss, TripletLoss
from quaterion.train.cache import CacheConfig, CacheType
from .encoders import CarsEncoder
```
In the following code snippet, we subclass `TrainableModel`.
You may use `__init__()` to store some attributes to be used in various `configure_*` methods later on.
The more interesting part is, however, in the [`configure_encoders()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_encoders) method.
We need to return an instance of [`Encoder`](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder) (or a dictionary with `Encoder` instances as values) from this method.
In our case, it is an instance of `CarsEncoder`, which we will review soon.
Notice how it is created with a pretrained ResNet152 model whose classification layer is replaced by an identity function.
```python
class Model(TrainableModel):
def __init__(self, lr: float, mining: str):
self._lr = lr
self._mining = mining
super().__init__()
def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
pre_trained_encoder = torchvision.models.resnet152(pretrained=True)
pre_trained_encoder.fc = nn.Identity()
return CarsEncoder(pre_trained_encoder)
```
In Quaterion, a [`SimilarityModel`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel) is composed of one or more `Encoder`s
and an [`EncoderHead`](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead).
`quaterion_models` has [several `EncoderHead` implementations](https://quaterion-models.qdrant.tech/quaterion_models.heads.html#module-quaterion_models.heads)
with a unified API such as a configurable dropout value.
You may use one of them or create your own subclass of `EncoderHead`.
In either case, you need to return an instance of it from [`configure_head`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_head).
In this example, we will use a `SkipConnectionHead`, which is lightweight and more resistant to overfitting.
```python
def configure_head(self, input_embedding_size) -> EncoderHead:
return SkipConnectionHead(input_embedding_size, dropout=0.1)
```
Quaterion has implementations of [some popular loss functions](https://quaterion.qdrant.tech/quaterion.loss.html) for similarity learning, all of which subclass either [`GroupLoss`](https://quaterion.qdrant.tech/quaterion.loss.group_loss.html#quaterion.loss.group_loss.GroupLoss)
or [`PairwiseLoss`](https://quaterion.qdrant.tech/quaterion.loss.pairwise_loss.html#quaterion.loss.pairwise_loss.PairwiseLoss).
In this example, we will use [`TripletLoss`](https://quaterion.qdrant.tech/quaterion.loss.triplet_loss.html#quaterion.loss.triplet_loss.TripletLoss),
which is a subclass of `GroupLoss`. In general, subclasses of `GroupLoss` are used with
datasets in which samples are assigned some group (or label). In our example, the label is the make of the car.
Such datasets should emit `SimilarityGroupSample` instances.
The alternatives are implementations of `PairwiseLoss`, which consume `SimilarityPairSample` instances, i.e., pairs of objects for which similarity is specified individually.
To see an example of the latter, check out the [NLP tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html).
```python
def configure_loss(self) -> SimilarityLoss:
return TripletLoss(mining=self._mining, margin=0.5)
```
`configure_optimizers()` may be familiar to PyTorch Lightning users,
but there is a novel `self.model` used inside that method.
It is an instance of `SimilarityModel` and is automatically created by Quaterion from the return values of `configure_encoders()` and `configure_head()`.
```python
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.model.parameters(), self._lr)
return optimizer
```
Caching in Quaterion avoids recalculating the outputs of a frozen pretrained `Encoder` in every epoch.
When it is configured, outputs are computed once and cached on the preferred device for direct use later on.
It provides both a considerable speedup and a smaller memory footprint.
However, it is quite versatile and has several knobs to tune.
To get the most out of its potential, it's recommended that you check out the [cache tutorial](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html).
To keep this article self-contained: you need to return a [`CacheConfig`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig)
instance from [`configure_caches()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.configure_caches)
to specify cache-related preferences such as:
- [`CacheType`](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType), i.e., whether to store caches on CPU or GPU,
- `save_dir`, i.e., where to persist caches for subsequent runs,
- `batch_size`, i.e., the batch size to be used only when creating caches; the batch size used during the actual training might be different.
```python
def configure_caches(self) -> Optional[CacheConfig]:
return CacheConfig(
cache_type=CacheType.AUTO, save_dir="./cache_dir", batch_size=32
)
```
We have just configured the training-related settings of a `TrainableModel`.
However, evaluation is an integral part of experimentation in machine learning,
and you may configure evaluation metrics by returning one or more [`AttachedMetric`](https://quaterion.qdrant.tech/quaterion.eval.attached_metric.html#quaterion.eval.attached_metric.AttachedMetric)
instances from `configure_metrics()`. Quaterion has several built-in [group](https://quaterion.qdrant.tech/quaterion.eval.group.html)
and [pairwise](https://quaterion.qdrant.tech/quaterion.eval.pair.html)
evaluation metrics.
```python
def configure_metrics(self) -> Union[AttachedMetric, List[AttachedMetric]]:
return AttachedMetric(
"rrp",
metric=RetrievalRPrecision(),
prog_bar=True,
on_epoch=True,
on_step=False,
)
```
## Encoder
As previously stated, a `SimilarityModel` is composed of one or more `Encoder`s and an `EncoderHead`.
Even if we freeze pretrained `Encoder` instances,
`EncoderHead` is still trainable and has enough parameters to adapt to the new task at hand.
It is recommended that you set the `trainable` property to `False` whenever possible,
as it lets you benefit from the caching mechanism described above.
Another important property is `embedding_size`, which will be passed to `TrainableModel.configure_head()` as `input_embedding_size`
to let you properly initialize the head layer.
Let's see how an `Encoder` is implemented in the following code borrowed from [`encoders.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/encoders.py):
```python
import os
import torch
import torch.nn as nn
from quaterion_models.encoders import Encoder
class CarsEncoder(Encoder):
def __init__(self, encoder_model: nn.Module):
super().__init__()
self._encoder = encoder_model
self._embedding_size = 2048 # last dimension from the ResNet model
@property
def trainable(self) -> bool:
return False
@property
def embedding_size(self) -> int:
return self._embedding_size
```
An `Encoder` is a regular `torch.nn.Module` subclass,
and we need to implement the forward pass logic in the `forward` method.
Depending on how you create your submodules, this method may be more complex;
however, we simply pass the input through a pretrained ResNet152 backbone in this example:
```python
def forward(self, images):
embeddings = self._encoder.forward(images)
return embeddings
```
An important step of machine learning development is proper saving and loading of models.
Quaterion lets you save your `SimilarityModel` with [`TrainableModel.save_servable()`](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel.save_servable)
and restore it with [`SimilarityModel.load()`](https://quaterion-models.qdrant.tech/quaterion_models.model.html#quaterion_models.model.SimilarityModel.load).
To be able to use these two methods, you need to implement `save()` and `load()` methods in your `Encoder`.
Additionally, it is important that you define your subclass of `Encoder` outside the `__main__` namespace,
i.e., in a separate file from your main entry point.
It may not be restored properly otherwise.
```python
def save(self, output_path: str):
os.makedirs(output_path, exist_ok=True)
torch.save(self._encoder, os.path.join(output_path, "encoder.pth"))
@classmethod
def load(cls, input_path):
encoder_model = torch.load(os.path.join(input_path, "encoder.pth"))
return CarsEncoder(encoder_model)
```
## Training
With all essential objects implemented, it is easy to bring them all together and run a training loop with the [`Quaterion.fit()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.fit)
method. It expects:
- A `TrainableModel`,
- A [`pl.Trainer`](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html),
- A [`SimilarityDataLoader`](https://quaterion.qdrant.tech/quaterion.dataset.similarity_data_loader.html#quaterion.dataset.similarity_data_loader.SimilarityDataLoader) for training data,
- And optionally, another `SimilarityDataLoader` for evaluation data.
We need to import a few objects to prepare all of these:
```python
import os
import pytorch_lightning as pl
import torch
from pytorch_lightning.callbacks import EarlyStopping, ModelSummary
from quaterion import Quaterion
from .data import get_dataloaders
from .models import Model
```
The `train()` function in the following code snippet expects several hyperparameter values as arguments.
They can be defined in a `config.py` or passed from the command line.
However, that part of the code is omitted for brevity.
Instead, let's focus on how all the building blocks are initialized and passed to `Quaterion.fit()`,
which is responsible for running the whole loop.
When the training loop is complete, you can simply call `TrainableModel.save_servable()`
to save the current state of the `SimilarityModel` instance:
```python
def train(
lr: float,
mining: str,
batch_size: int,
epochs: int,
input_size: int,
shuffle: bool,
save_dir: str,
):
model = Model(
lr=lr,
mining=mining,
)
train_dataloader, val_dataloader = get_dataloaders(
batch_size=batch_size, input_size=input_size, shuffle=shuffle
)
early_stopping = EarlyStopping(
monitor="validation_loss",
patience=50,
)
trainer = pl.Trainer(
gpus=1 if torch.cuda.is_available() else 0,
max_epochs=epochs,
callbacks=[early_stopping, ModelSummary(max_depth=3)],
enable_checkpointing=False,
log_every_n_steps=1,
)
Quaterion.fit(
trainable_model=model,
trainer=trainer,
train_dataloader=train_dataloader,
val_dataloader=val_dataloader,
)
model.save_servable(save_dir)
```
## Evaluation
Let's see what we have achieved with these simple steps.
[`evaluate.py`](https://github.com/qdrant/quaterion/blob/master/examples/cars/evaluate.py) has two functions to evaluate both the baseline model and the tuned similarity model.
We will review only the latter for brevity.
In addition to the ease of restoring a `SimilarityModel`, this code snippet also shows
how to use [`Evaluator`](https://quaterion.qdrant.tech/quaterion.eval.evaluator.html#quaterion.eval.evaluator.Evaluator)
to evaluate the performance of a `SimilarityModel` on a given dataset
by given evaluation metrics.
{{< figure src=https://storage.googleapis.com/quaterion/docs/original_vs_tuned_cars.png caption="Comparison of original and tuned models for retrieval" >}}
Full evaluation of a dataset quickly becomes expensive as the dataset grows, since pairwise metrics scale quadratically with the number of samples,
and thus you may want to perform a partial evaluation on a sampled subset.
In this case, you may use [samplers](https://quaterion.qdrant.tech/quaterion.eval.samplers.html)
to limit the evaluation.
Similar to `Quaterion.fit()` used for training, [`Quaterion.evaluate()`](https://quaterion.qdrant.tech/quaterion.main.html#quaterion.main.Quaterion.evaluate)
runs a complete evaluation loop. It takes the following as arguments:
- An `Evaluator` instance created with given evaluation metrics and a `Sampler`,
- The `SimilarityModel` to be evaluated,
- And the evaluation dataset.
```python
def eval_tuned_encoder(dataset, device):
print("Evaluating tuned encoder...")
tuned_cars_model = SimilarityModel.load(
os.path.join(os.path.dirname(__file__), "cars_encoders")
).to(device)
tuned_cars_model.eval()
result = Quaterion.evaluate(
evaluator=Evaluator(
metrics=RetrievalRPrecision(),
sampler=GroupSampler(sample_size=1000, device=device, log_progress=True),
),
model=tuned_cars_model,
dataset=dataset,
)
print(result)
```
## Conclusion
In this tutorial, we trained a similarity model to search for similar cars from novel categories unseen in the training phase.
Then, we evaluated it on a test dataset by the Retrieval R-Precision metric.
The base model scored 0.1207,
and our tuned model hit 0.2540, more than twice the score.
These scores can be seen in the following figure:
{{< figure src=/articles_data/cars-recognition/cars_metrics.png caption="Metrics for the base and tuned models" >}}
|
qdrant-landing/content/articles/chatgpt-plugin.md | ---
title: Extending ChatGPT with a Qdrant-based knowledge base
short_description: "ChatGPT factuality might be improved with semantic search. Here is how."
description: "ChatGPT factuality might be improved with semantic search. Here is how."
social_preview_image: /articles_data/chatgpt-plugin/social_preview.jpg
small_preview_image: /articles_data/chatgpt-plugin/chatgpt-plugin-icon.svg
preview_dir: /articles_data/chatgpt-plugin/preview
weight: 7
author: Kacper Łukawski
author_link: https://medium.com/@lukawskikacper
date: 2023-03-23T18:01:00+01:00
draft: false
keywords:
- openai
- chatgpt
- chatgpt plugin
- knowledge base
- similarity search
---
In recent months, ChatGPT has revolutionised the way we communicate, learn, and interact
with technology. Our social platforms got flooded with prompts, responses to them, whole
articles, and countless other examples of using Large Language Models to generate content
indistinguishable from that written by a human.
Despite their numerous benefits, these models have flaws, as evidenced by the phenomenon
of hallucination - the generation of incorrect or nonsensical information in response to
user input. This issue, which can compromise the reliability and credibility of
AI-generated content, has become a growing concern among researchers and users alike.
Those concerns started another wave of entirely new libraries, such as Langchain, which try
to overcome those issues, for example, by combining tools like vector databases to bring
the required context into the prompts. That is, so far, the best way to incorporate
new and rapidly changing knowledge into a neural model. It works so well that OpenAI decided to
introduce a way to extend the model's capabilities with external plugins at the model level.
These plugins, designed to enhance the model's performance, serve as modular extensions
that seamlessly interface with the core system. By adding a knowledge base plugin to
ChatGPT, we can effectively provide the AI with a curated, trustworthy source of
information, ensuring that the generated content is more accurate and relevant. Qdrant
may act as a vector database where all the facts will be stored and served to the model
upon request.
If you’d like to ask ChatGPT questions about your data sources, such as files, notes, or
emails, starting with the official [ChatGPT retrieval plugin repository](https://github.com/openai/chatgpt-retrieval-plugin)
is the easiest way. Qdrant is already integrated, so that you can use it right away. In
the following sections, we will guide you through setting up the knowledge base using
Qdrant and demonstrate how this powerful combination can significantly improve ChatGPT's
performance and output quality.
## Implementing a knowledge base with Qdrant
The official ChatGPT retrieval plugin uses a vector database to build your knowledge base.
Your documents are chunked and vectorized with OpenAI's text-embedding-ada-002 model
to be stored in Qdrant. That enables semantic search capabilities. So, whenever ChatGPT
thinks it might be relevant to check the knowledge base, it forms a query and sends it
to the plugin to incorporate the results into its response. You can now modify the
knowledge base, and ChatGPT will always know the most recent facts. No model fine-tuning
is required. Let’s implement that for your documents. In our case, this will be Qdrant’s
documentation, so you can ask even technical questions about Qdrant directly in ChatGPT.
Everything starts with cloning the plugin's repository.
```bash
git clone [email protected]:openai/chatgpt-retrieval-plugin.git
```
Please use your favourite IDE to open the project once cloned.
### Prerequisites
You’ll need to ensure three things before we start:
1. Create an OpenAI API key, so you can use their embeddings model programmatically. If
you already have an account, you can generate one at https://platform.openai.com/account/api-keys.
Otherwise, registering an account might be required.
2. Run a Qdrant instance. The instance has to be reachable from the outside, so you
either need to launch it on-premise or use the [Qdrant Cloud](https://cloud.qdrant.io/)
offering. A free 1GB cluster is available, which might be enough in many cases. We’ll
use the cloud.
3. Since ChatGPT will interact with your service through the network, you must deploy it,
making it possible to connect from the Internet. Unfortunately, localhost is not an
option, but any provider, such as Heroku or fly.io, will work perfectly. We will use
[fly.io](https://fly.io/), so please register an account. You may also need to install
the flyctl tool for the deployment. The process is described on the homepage of fly.io.
### Configuration
The retrieval plugin is a FastAPI-based application, and its default functionality might
be enough in most cases. However, some configuration is required so ChatGPT knows how and
when to use it. Before that, we can start setting up Fly.io, as we need to know the service's
hostname to configure the plugin fully.
First, let’s log in to the Fly CLI:
```bash
flyctl auth login
```
That will open the browser, so you can simply provide the credentials, and all the further
commands will be executed with your account. If you have never used fly.io, you may need
to provide your credit card details before running any instance, but there is a Hobby Plan
you won’t be charged for.
Let’s launch the instance now, but without deploying it. We’ll get the hostname
assigned and have all the details needed to fill in the configuration. The retrieval plugin
uses TCP port 8080, so we need to configure fly.io to redirect all the traffic to it
as well.
```bash
flyctl launch --no-deploy --internal-port 8080
```
We’ll be prompted about the application name and the region it should be deployed to.
Please choose whatever works best for you. After that, we should see the hostname of the
newly created application:
```text
...
Hostname: your-application-name.fly.dev
...
```
Let’s note it down, as we’ll need it for the configuration of the service. But we’re going
to start with setting all the application’s secrets:
```bash
flyctl secrets set DATASTORE=qdrant \
OPENAI_API_KEY=<your-openai-api-key> \
QDRANT_URL=https://<your-qdrant-instance>.aws.cloud.qdrant.io \
QDRANT_API_KEY=<your-qdrant-api-key> \
BEARER_TOKEN=eyJhbGciOiJIUzI1NiJ9.e30.ZRrHA1JJJW8opsbCGfG_HACGpVUMN_a9IV7pAx_Zmeo
```
The secrets will be staged for the first deployment. There is an example of a minimal
Bearer token generated by https://jwt.io/. **Please adjust the token and do not expose
it publicly, but you can keep the same value for the demo.**
Right now, let’s dive into the application config files. You can optionally provide your
icon and keep it as `.well-known/logo.png` file, but there are two additional files we’re
going to modify.
The `.well-known/openapi.yaml` file describes the exposed API in the OpenAPI format.
Lines 3 to 5 might be filled with the application title and description, but the essential
part is setting the URL of the server the application will run at. Eventually, the top part of the
file should look like the following:
```yaml
openapi: 3.0.0
info:
title: Qdrant Plugin API
version: 1.0.0
description: Plugin for searching through the Qdrant doc…
servers:
- url: https://your-application-name.fly.dev
...
```
There is another file in the same directory, and that’s the most crucial piece to
configure. It contains the description of the plugin we’re implementing, and ChatGPT
uses this description to determine if it should communicate with our knowledge base.
The file is called `.well-known/ai-plugin.json`, and let’s edit it before we finally
deploy the app. There are various properties we need to fill in:
| **Property** | **Meaning** | **Example** |
|-------------------------|----------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `name_for_model` | Name of the plugin for the ChatGPT model | *qdrant* |
| `name_for_human` | Human-friendly model name, to be displayed in ChatGPT UI | *Qdrant Documentation Plugin* |
| `description_for_model` | Description of the purpose of the plugin, so ChatGPT knows in what cases it should be using it to answer a question. | *Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search* |
| `description_for_human` | Short description of the plugin, also to be displayed in the ChatGPT UI. | *Search through Qdrant docs* |
| `auth` | Authorization scheme used by the application. By default, the bearer token has to be configured. | ```{"type": "user_http", "authorization_type": "bearer"}``` |
| `api.url` | Link to the OpenAPI schema definition. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/openapi.yaml* |
| `logo_url` | Link to the application logo. Please adjust based on your application URL. | *https://your-application-name.fly.dev/.well-known/logo.png* |
A complete file may look as follows:
```json
{
"schema_version": "v1",
"name_for_model": "qdrant",
"name_for_human": "Qdrant Documentation Plugin",
"description_for_model": "Plugin for searching through the Qdrant documentation to find answers to questions and retrieve relevant information. Use it whenever a user asks something that might be related to Qdrant vector database or semantic vector search",
"description_for_human": "Search through Qdrant docs",
"auth": {
"type": "user_http",
"authorization_type": "bearer"
},
"api": {
"type": "openapi",
"url": "https://your-application-name.fly.dev/.well-known/openapi.yaml",
"has_user_authentication": false
},
"logo_url": "https://your-application-name.fly.dev/.well-known/logo.png",
"contact_email": "[email protected]",
"legal_info_url": "[email protected]"
}
```
That was the last step before running the final command. The command that will deploy
the application on the server:
```bash
flyctl deploy
```
The command will build the image using the Dockerfile and deploy the service at a given
URL. Once the command is finished, the service should be running on the hostname we got
previously:
```text
https://your-application-name.fly.dev
```
## Integration with ChatGPT
Once we have deployed the service, we can point ChatGPT to it, so the model knows how to
connect. When you open the ChatGPT UI, you should see a dropdown with a Plugins tab
included:
![](/articles_data/chatgpt-plugin/step-1.png)
Once selected, you should be able to choose one of the available plugins or check the plugin store:
![](/articles_data/chatgpt-plugin/step-2.png)
There are some premade plugins available, but there’s also a possibility to install your
own plugin by clicking on the "*Develop your own plugin*" option in the bottom right
corner:
![](/articles_data/chatgpt-plugin/step-3.png)
We need to confirm our plugin is ready, but since we relied on the official retrieval
plugin from OpenAI, this should be all fine:
![](/articles_data/chatgpt-plugin/step-4.png)
After clicking on "*My manifest is ready*", we can already point ChatGPT to our newly
created service:
![](/articles_data/chatgpt-plugin/step-5.png)
A successful plugin installation should end up with the following information:
![](/articles_data/chatgpt-plugin/step-6.png)
There is a name and a description of the plugin we provided. Let’s click on "*Done*" and
return to the "*Plugin store*" window again. There is another option we need to choose in
the bottom right corner:
![](/articles_data/chatgpt-plugin/step-7.png)
Our plugin is not officially verified, but we can, of course, use it freely. The
installation requires just the service URL:
![](/articles_data/chatgpt-plugin/step-8.png)
OpenAI cannot guarantee the plugin provides factual information, so there is a warning
we need to accept:
![](/articles_data/chatgpt-plugin/step-9.png)
Finally, we need to provide the Bearer token again:
![](/articles_data/chatgpt-plugin/step-10.png)
Our plugin is now ready to be tested. Since there is no data inside the knowledge base,
extracting any facts is impossible, but we’re going to put some data using the Swagger UI
exposed by our service at https://your-application-name.fly.dev/docs. We need to authorize
first, and then call the upsert method with some docs. For demo purposes, we can just
put a single document extracted from the Qdrant documentation to see whether the integration
works properly:
![](/articles_data/chatgpt-plugin/step-11.png)
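If you prefer to script the ingestion instead of clicking through the Swagger UI, a rough sketch with the `requests` library could look like the following; the document content is a placeholder, and the call targets the retrieval plugin's default `/upsert` endpoint with the Bearer token configured earlier.

```python
import requests

PLUGIN_URL = "https://your-application-name.fly.dev"
BEARER_TOKEN = "<the-bearer-token-you-configured>"

response = requests.post(
    f"{PLUGIN_URL}/upsert",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    json={
        "documents": [
            {
                "id": "qdrant-quick-start",  # placeholder document id
                "text": "Qdrant is a vector similarity search engine...",  # placeholder content
                "metadata": {"source": "file"},
            }
        ]
    },
)
response.raise_for_status()
print(response.json())
```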
We can come back to the ChatGPT UI and send a prompt, but we need to make sure the plugin
is selected:
![](/articles_data/chatgpt-plugin/step-12.png)
Now if our prompt seems somehow related to the plugin description provided, the model
will automatically form a query and send it to the HTTP API. The query will get vectorized
by our app, and then used to find some relevant documents that will be used as a context
to generate the response.
![](/articles_data/chatgpt-plugin/step-13.png)
We have a powerful language model that can interact with our knowledge base to return
not only grammatically correct but also factual information. And this is how your
interactions with the model may start to look:
<iframe width="560" height="315" src="https://www.youtube.com/embed/fQUGuHEYeog" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
However, a single document is not enough to enable the full power of the plugin. If you
want to add more documents that you have collected, there are already some scripts
available in the `scripts/` directory that allow converting JSON, JSON Lines, or even
zip archives.
|
qdrant-landing/content/articles/data-privacy.md | ---
title: " Data Privacy with Qdrant: Implementing Role-Based Access Control (RBAC)" #required
short_description: "Secure Your Data with Qdrant: Implementing RBAC"
description: Discover how Qdrant's Role-Based Access Control (RBAC) ensures data privacy and compliance for your AI applications. Build secure and scalable systems with ease. Read more now!
social_preview_image: /articles_data/data-privacy/preview/social_preview.jpg # This image will be used in social media previews, should be 1200x630px. Required.
preview_dir: /articles_data/data-privacy/preview # This directory contains images that will be used in the article preview. They can be generated from one image. Read more below. Required.
weight: -110 # This is the order of the article in the list of articles at the footer. The lower the number, the higher the article will be in the list.
author: Qdrant Team # Author of the article. Required.
author_link: https://qdrant.tech/ # Link to the author's page. Required.
date: 2024-06-18T08:00:00-03:00 # Date of the article. Required.
draft: false # If true, the article will not be published
keywords: # Keywords for SEO
- Role-Based Access Control (RBAC)
- Data Privacy in Vector Databases
- Secure AI Data Management
- Qdrant Data Security
- Enterprise Data Compliance
---
Data stored in vector databases is often proprietary to the enterprise and may include sensitive information like customer records, legal contracts, electronic health records (EHR), financial data, and intellectual property. As a result, strong security measures become critical to safeguarding this data. If the data stored in a vector database is not secured, it may open a vulnerability known as an "[embedding inversion attack](https://arxiv.org/abs/2004.00053)," where malicious actors could potentially [reconstruct the original data from the embeddings](https://arxiv.org/pdf/2305.03010) themselves.
Strict compliance regulations govern data stored in vector databases across various industries. For instance, healthcare must comply with HIPAA, which dictates how protected health information (PHI) is stored, transmitted, and secured. Similarly, the financial services industry follows PCI DSS to safeguard sensitive financial data. These regulations require developers to ensure data storage and transmission comply with industry-specific legal frameworks across different regions. **As a result, features that enable data privacy, security and sovereignty are deciding factors when choosing the right vector database.**
This article explores various strategies to ensure the security of your critical data while leveraging the benefits of vector search. Implementing some of these security approaches can help you build privacy-enhanced similarity search algorithms and integrate them into your AI applications.
Additionally, you will learn how to build a fully data-sovereign architecture, allowing you to retain control over your data and comply with relevant data laws and regulations.
> To skip right to the code implementation, [click here](/articles/data-privacy/#jwt-on-qdrant).
## Vector Database Security: An Overview
Vector databases are often unsecured by default to facilitate rapid prototyping and experimentation. This approach allows developers to quickly ingest data, build vector representations, and test similarity search algorithms without initial security concerns. However, in production environments, unsecured databases pose significant data breach risks.
For production use, robust security systems are essential. Authentication, particularly using static API keys, is a common approach to control access and prevent unauthorized modifications. Yet, simple API authentication is insufficient for enterprise data, which requires granular control.
The primary challenge with static API keys is their all-or-nothing access, inadequate for role-based data segregation in enterprise applications. Additionally, a compromised key could grant attackers full access to manipulate or steal data. To strengthen the security of the vector database, developers typically need the following:
1. **Encryption**: This ensures that sensitive data is scrambled as it travels between the application and the vector database. This safeguards against Man-in-the-Middle ([MitM](https://en.wikipedia.org/wiki/Man-in-the-middle_attack)) attacks, where malicious actors can attempt to intercept and steal data during transmission.
2. **Role-Based Access Control**: As mentioned before, traditional static API keys grant all-or-nothing access, which is a significant security risk in enterprise environments. RBAC offers a more granular approach by defining user roles and assigning specific data access permissions based on those roles. For example, an analyst might have read-only access to specific datasets, while an administrator might have full CRUD (Create, Read, Update, Delete) permissions across the database.
3. **Deployment Flexibility**: Data residency regulations like GDPR (General Data Protection Regulation) and industry-specific compliance requirements dictate where data can be stored, processed, and accessed. Developers would need to choose a database solution which offers deployment options that comply with these regulations. This might include on-premise deployments within a company's private cloud or geographically distributed cloud deployments that adhere to data residency laws.
## How Qdrant Handles Data Privacy and Security
One of the cornerstones of our design choices at Qdrant has been the focus on security features. We have built in a range of features keeping the enterprise user in mind, which allow building of granular access control on a fully data sovereign architecture.
A Qdrant instance is unsecured by default. However, when you are ready to deploy in production, Qdrant offers a range of security features that allow you to control access to your data, protect it from breaches, and adhere to regulatory requirements. Using Qdrant, you can build granular access control, segregate roles and privileges, and create a fully data sovereign architecture.
### API Keys and TLS Encryption
For simpler use cases, Qdrant offers API key-based authentication. This includes both regular API keys and read-only API keys. Regular API keys grant full access to read, write, and delete operations, while read-only keys restrict access to data retrieval operations only, preventing write actions.
On Qdrant Cloud, you can create API keys using the [Cloud Dashboard](https://qdrant.to/cloud). This allows you to generate API keys that give you access to a single node or cluster, or multiple clusters. You can read the steps to do so [here](/documentation/cloud/authentication/).
![web-ui](/articles_data/data-privacy/web-ui.png)
For on-premise or local deployments, you'll need to configure API key authentication. This involves specifying a key in either the Qdrant configuration file or as an environment variable. This ensures that all requests to the server must include a valid API key sent in the header.
When using the simple API key-based authentication, you should also turn on TLS encryption. Otherwise, you are exposing the connection to sniffing and MitM attacks. To secure your connection using TLS, you would need to create a certificate and private key, and then [enable TLS](/documentation/guides/security/#tls) in the configuration.
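As a quick sketch, connecting to a TLS-enabled instance with the Python client and an API key could look like the following; the hostname and key are placeholders.

```python
from qdrant_client import QdrantClient

# An https URL assumes TLS is enabled on the server side;
# the API key is sent with every request.
client = QdrantClient(
    url="https://your-qdrant-host:6333",
    api_key="your_secret_api_key",
)

print(client.get_collections())
```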
API authentication, coupled with TLS encryption, offers a first layer of security for your Qdrant instance. However, to enable more granular access control, the recommended approach is to leverage JSON Web Tokens (JWTs).
### JWT on Qdrant
JSON Web Tokens (JWTs) are a compact, URL-safe, and stateless means of representing _claims_ to be transferred between two parties. These claims are encoded as a JSON object and are cryptographically signed.
JWT is composed of three parts: a header, a payload, and a signature, which are concatenated with dots (.) to form a single string. The header contains the type of token and algorithm being used. The payload contains the claims (explained in detail later). The signature is a cryptographic hash and ensures the token’s integrity.
In Qdrant, JWT forms the foundation through which powerful access controls can be built. Let’s understand how.
JWT is enabled on the Qdrant instance by specifying the API key and turning on the **jwt_rbac** feature in the configuration (alternatively, they can be set as environment variables). For any subsequent request, the API key is used to encode or decode the token.
With JWT, the API key alone is enough to generate a token; no communication with the Qdrant instance or server is required. There are several libraries that help generate tokens by encoding a payload, such as [PyJWT](https://pyjwt.readthedocs.io/en/stable/) (for Python), [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken) (for JavaScript), and [jsonwebtoken](https://crates.io/crates/jsonwebtoken) (for Rust). Qdrant uses the HS256 algorithm to encode or decode the tokens.
We will look at the payload structure shortly, but here’s how you can generate a token using PyJWT.
```python
import jwt
import datetime
# Define your API key and other payload data
api_key = "your_api_key"
payload = { ...
}
token = jwt.encode(payload, api_key, algorithm="HS256")
print(token)
```
Once you have generated the token, you should include it in the subsequent requests. You can do so by providing it as a bearer token in the Authorization header, or in the API Key header of your requests.
Below is an example of how to do so using QdrantClient in Python:
```python
from qdrant_client import QdrantClient
qdrant_client = QdrantClient(
"http://localhost:6333",
api_key="<JWT>", # the token goes here
)
# Example search vector
search_vector = [0.1, 0.2, 0.3, 0.4]
# Example similarity search request
response = qdrant_client.search(
collection_name="demo_collection",
query_vector=search_vector,
limit=5 # Number of results to retrieve
)
```
For convenience, we have added a JWT generation tool in the Qdrant Web UI, which is present under the 🔑 tab. For your local deployments, you will find it at [http://localhost:6333/dashboard#/jwt](http://localhost:6333/dashboard#/jwt).
### Payload Configuration
There are several different options (claims) you can use in the JWT payload that help control access and functionality. Let’s look at them one by one.
**exp**: This claim is the expiration time of the token, expressed as a Unix timestamp in seconds. After the expiration time, the token will be invalid.
**value_exists**: This claim validates the token against a specific key-value stored in a collection. By using this claim, you can revoke access by simply changing a value without having to invalidate the API key.
**access**: This claim defines the access level of the token. The access level can be global read (r) or manage (m). It can also be specific to a collection, or even a subset of a collection, using read (r) and read-write (rw).
Let’s look at a few example JWT payload configurations.
**Scenario 1: 1-hour expiry time, and read-only access to a collection**
```json
{
"exp": 1690995200, // Set to 1 hour from the current time (Unix timestamp)
"access": [
{
"collection": "demo_collection",
"access": "r" // Read-only access
}
]
}
```
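Hard-coding the timestamp is rarely practical. Building on the PyJWT example above, a short sketch like the one below could generate the same payload with a dynamic one-hour expiry.

```python
import datetime

import jwt  # PyJWT

api_key = "your_api_key"

payload = {
    # expire one hour from now; PyJWT converts datetime objects
    # in the "exp" claim to a Unix timestamp in seconds
    "exp": datetime.datetime.now(tz=datetime.timezone.utc) + datetime.timedelta(hours=1),
    "access": [
        {
            "collection": "demo_collection",
            "access": "r",  # read-only
        }
    ],
}

token = jwt.encode(payload, api_key, algorithm="HS256")
print(token)
```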
**Scenario 2: 1-hour expiry time, and access to user with a specific role**
Suppose you have a ‘users’ collection and have defined specific roles for each user, such as ‘developer’, ‘manager’, ‘admin’, ‘analyst’, and ‘revoked’. In such a scenario, you can use a combination of **exp** and **value_exists**.
```json
{
"exp": 1690995200,
"value_exists": {
"collection": "users",
"matches": [
{ "key": "username", "value": "john" },
{ "key": "role", "value": "developer" }
],
},
}
```
Now, if you ever want to revoke access for a user, simply change the value of their role. All future requests made with a token carrying the above payload will then be rejected.
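As a sketch of how such a revocation could look with the Python client, assume the user "john" is stored under the hypothetical point ID 123 in the `users` collection:

```python
from qdrant_client import QdrantClient

client = QdrantClient("http://localhost:6333", api_key="your_api_key")

# Changing the stored role breaks the `value_exists` check of every token
# that still claims role == "developer" for this user.
client.set_payload(
    collection_name="users",
    payload={"role": "revoked"},
    points=[123],  # hypothetical point ID of the user "john"
)
```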
**Scenario 3: 1-hour expiry time, and read-write access to a subset of a collection**
You can even specify access levels specific to subsets of a collection. This can be especially useful when you are leveraging [multitenancy](/documentation/guides/multiple-partitions/), and want to segregate access.
```json
{
"exp": 1690995200,
"access": [
{
"collection": "demo_collection",
"access": "r",
"payload": {
"user_id": "user_123456"
}
}
]
}
```
By combining the claims, you can fully customize the access level that a user or a role has within the vector store.
### Creating Role-Based Access Control (RBAC) Using JWT
As we saw above, JWT claims create powerful levers through which you can create granular access control on Qdrant. Let’s bring it all together and understand how it helps you create Role-Based Access Control (RBAC).
In a typical enterprise application, you will have a segregation of users based on their roles and permissions. These could be:
1. **Admin or Owner:** with full access, and can generate API keys.
2. **Editor:** with read-write access levels to specific collections.
3. **Viewer:** with read-only access to specific collections.
4. **Data Scientist or Analyst:** with read-only access to specific collections.
5. **Developer:** with read-write access to development- or testing-specific collections, but limited access to production data.
6. **Guest:** with limited read-only access to publicly available collections.
In addition, you can create access levels within sections of a collection. In a multi-tenant application, where you have used payload-based partitioning, you can create read-only access for specific user roles for a subset of the collection that belongs to that user.
Your application requirements will eventually help you decide the roles and access levels you should create. For example, in an application managing customer data, you could create additional roles such as:
**Customer Support Representative**: read-write access to customer service-related data but no access to billing information.
**Billing Department**: read-only access to billing data and read-write access to payment records.
**Marketing Analyst**: read-only access to anonymized customer data for analytics.
Each role can be assigned a JWT with claims that specify expiration times, read/write permissions for collections, and validating conditions.
In such an application, an example JWT payload for a customer support representative role could be:
```json
{
"exp": 1690995200,
"access": [
{
"collection": "customer_data",
"access": "rw",
"payload": {
"department": "support"
}
}
],
"value_exists": {
"collection": "departments",
"matches": [
{ "key": "department", "value": "support" }
]
}
}
```
As you can see, by implementing RBAC, you can ensure proper segregation of roles and their privileges, and avoid privacy loopholes in your application.
## Qdrant Hybrid Cloud and Data Sovereignty
Data governance varies by country, especially for global organizations dealing with different regulations on data privacy, security, and access. This often necessitates deploying infrastructure within specific geographical boundaries.
To address these needs, the vector database you choose should support deployment and scaling within your controlled infrastructure. [Qdrant Hybrid Cloud](/documentation/hybrid-cloud/) offers this flexibility, along with features like sharding, replicas, JWT authentication, and monitoring.
Qdrant Hybrid Cloud integrates Kubernetes clusters from various environments—cloud, on-premises, or edge—into a unified managed service. This allows organizations to manage Qdrant databases through the Qdrant Cloud UI while keeping the databases within their infrastructure.
With JWT and RBAC, Qdrant Hybrid Cloud provides a secure, private, and sovereign vector store. Enterprises can scale their AI applications geographically, comply with local laws, and maintain strict data control.
## Conclusion
Vector similarity is increasingly becoming the backbone of AI applications that leverage unstructured data. By transforming data into vectors – their numerical representations – organizations can build powerful applications that harness semantic search, ranging from better recommendation systems to algorithms that help with personalization, or powerful customer support chatbots.
However, to fully leverage the power of AI in production, organizations need to choose a vector database that offers strong privacy and security features, while also helping them adhere to local laws and regulations.
Qdrant provides exceptional efficiency and performance, along with the capability to implement granular access control to data, Role-Based Access Control (RBAC), and the ability to build a fully data-sovereign architecture.
Interested in mastering vector search security and deployment strategies? [Join our Discord community](https://discord.gg/qdrant) to explore more advanced search strategies, connect with other developers and researchers in the industry, and stay updated on the latest innovations!
|
qdrant-landing/content/articles/dataset-quality.md | ---
title: Finding errors in datasets with Similarity Search
short_description: Finding errors datasets with distance-based methods
description: Improving quality of text-and-images datasets on the online furniture marketplace example.
preview_dir: /articles_data/dataset-quality/preview
social_preview_image: /articles_data/dataset-quality/preview/social_preview.jpg
small_preview_image: /articles_data/dataset-quality/icon.svg
weight: 8
author: George Panchuk
author_link: https://medium.com/@george.panchuk
date: 2022-07-18T10:18:00.000Z
# aliases: [ /articles/dataset-quality/ ]
---
Nowadays, people create a huge number of applications of various types and solve problems in different areas.
Despite such diversity, they have something in common - they need to process data.
Real-world data is a living structure: it grows day by day, changes a lot, and becomes harder to work with.
In some cases, you need to categorize or label your data, which can be a tough problem given its scale.
The process of splitting or labelling is error-prone and these errors can be very costly.
Imagine that you failed to achieve the desired quality of the model due to inaccurate labels.
Worse, your users are faced with a lot of irrelevant items, unable to find what they need and getting annoyed by it.
Thus, you get poor retention, and it directly impacts company revenue.
It is really important to avoid such errors in your data.
## Furniture web-marketplace
Let’s say you work on an online furniture marketplace.
{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/furniture_marketplace.png caption="Furniture marketplace" >}}
In this case, to ensure a good user experience, you need to split items into different categories: tables, chairs, beds, etc.
One can arrange all the items manually and spend a lot of money and time on this.
There is also another way: train a classification or similarity model and rely on it.
With both approaches it is difficult to avoid mistakes.
Manual labelling is a tedious task, yet it requires concentration.
Once you get distracted or your eyes become blurry, mistakes won't keep you waiting.
The model can also be wrong.
You can analyse the most uncertain predictions and fix them, but the other errors will still leak to the site.
There is no silver bullet. You should validate your dataset thoroughly, and you need tools for this.
When you are sure that there are not many objects placed in the wrong category, they can be considered outliers or anomalies.
Thus, you can train a model or a bunch of models capable of looking for anomalies, e.g. an autoencoder with a classifier on top of it.
However, this is again a resource-intensive task, both in terms of time and manual labour, since labels have to be provided for classification.
On the contrary, if the proportion of out-of-place elements is high enough, outlier search methods are likely to be useless.
### Similarity search
The idea behind similarity search is to measure semantic similarity between related parts of the data.
E.g. between category title and item images.
The hypothesis is that unsuitable items will be less similar.
We can't directly compare text and image data.
For this, we need an intermediate representation: embeddings.
Embeddings are just numeric vectors containing semantic information.
We can apply a pre-trained model to our data to produce these vectors.
After embeddings are created, we can measure the distances between them.
Assume we want to search for something other than a single bed in the «Single beds» category.
{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/similarity_search.png caption="Similarity search" >}}
One of the possible pipelines would look like this:
- Take the name of the category as an anchor and calculate the anchor embedding.
- Calculate embeddings for images of each object placed into this category.
- Compare obtained anchor and object embeddings.
- Find the furthest.
For instance, we can do it with the [CLIP](https://huggingface.co/sentence-transformers/clip-ViT-B-32-multilingual-v1) model.
{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_image_transparent.png caption="Category vs. Image" >}}
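As a rough sketch of this pipeline, the `clip-ViT-B-32` checkpoint from `sentence-transformers` can embed both texts and images into a shared space; `image_paths` below is a hypothetical list of image files for the items placed into the category.

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

# the category title serves as the anchor
anchor_embedding = model.encode("Single beds")

# embeddings for the images of the items placed into this category
image_embeddings = model.encode([Image.open(path) for path in image_paths])

# cosine similarity between the anchor and every item image
scores = util.cos_sim(anchor_embedding, image_embeddings)[0]

# the least similar items are the most suspicious ones
suspects = scores.argsort()[:10]
```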
We can also calculate embeddings for titles instead of images, or even for both of them to find more errors.
{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/category_vs_name_and_image_transparent.png caption="Category vs. Title and Image" >}}
As you can see, different approaches can find new errors or the same ones.
Stacking several techniques or even the same techniques with different models may provide better coverage.
Hint: Caching embeddings for the same models and reusing them among different methods can significantly speed up your lookup.
### Diversity search
Since pre-trained models have only general knowledge about the data, they can still leave some misplaced items undetected.
You might find yourself in a situation when the model focuses on non-important features, selects a lot of irrelevant elements, and fails to find genuine errors.
To mitigate this issue, you can perform a diversity search.
Diversity search is a method for finding the most distinctive examples in the data.
Like similarity search, it also operates on embeddings and measures the distances between them.
The difference lies in deciding which point should be extracted next.
Let's imagine how to get 3 points with similarity search and then with diversity search.
Similarity:
1. Calculate distance matrix
2. Choose your anchor
3. Get the vector of distances from the selected anchor out of the distance matrix
4. Sort the fetched vector
5. Get top-3 embeddings
Diversity:
1. Calculate distance matrix
2. Initialize starting point (randomly or according to certain conditions)
3. Get a distance vector for the selected starting point from the distance matrix
4. Find the furthest point
5. Get a distance vector for the new point
6. Find the furthest point from all of the already fetched points
{{< figure src=https://storage.googleapis.com/demo-dataset-quality-public/article/diversity_transparent.png caption="Diversity search" >}}
Diversity search utilizes the very same embeddings, and you can reuse them.
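The very same greedy procedure fits into a few lines of NumPy. This is only a sketch for small datasets, where the full pairwise distance matrix fits in memory:

```python
import numpy as np

def diversity_search(embeddings: np.ndarray, n: int) -> list:
    """Greedy max-min selection: every new point is the one
    furthest away from everything selected so far."""
    # pairwise Euclidean distance matrix
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    selected = [0]  # starting point, here simply the first embedding
    while len(selected) < n:
        # for each candidate: distance to its closest already-selected point
        min_to_selected = dists[:, selected].min(axis=1)
        selected.append(int(min_to_selected.argmax()))
    return selected

embeddings = np.random.rand(100, 512)  # stand-in for real item embeddings
print(diversity_search(embeddings, 3))
```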
If your data is huge and does not fit into memory, vector search engines like [Qdrant](https://github.com/qdrant/qdrant) might be helpful.
The described methods can be used independently, but they are simple to combine, and doing so improves detection capabilities.
If the quality remains insufficient, you can fine-tune the models using a similarity learning approach (e.g. with [Quaterion](https://quaterion.qdrant.tech)) both to provide a better representation of your data and to pull apart dissimilar objects in space.
## Conclusion
In this article, we described distance-based methods for finding errors in categorized datasets and showed how to find incorrectly placed items in a furniture web store.
I hope these methods will help you catch sneaky samples that leaked into the wrong categories in your data, and make your users' experience more enjoyable.
Poke the [demo](https://dataset-quality.qdrant.tech).
Stay tuned :)
|
qdrant-landing/content/articles/dedicated-service.md | ---
title: "Vector Search as a dedicated service"
short_description: "Why vector search needs to be a dedicated service."
description: "Why vector search requires a dedicated service."
social_preview_image: /articles_data/dedicated-service/social-preview.png
small_preview_image: /articles_data/dedicated-service/preview/icon.svg
preview_dir: /articles_data/dedicated-service/preview
weight: -70
author: Andrey Vasnetsov
author_link: https://vasnetsov.com/
date: 2023-11-30T10:00:00+03:00
draft: false
keywords:
- system architecture
- vector search
- best practices
- anti-patterns
---
Ever since the data science community discovered that vector search significantly improves LLM answers,
various vendors and enthusiasts have been arguing over the proper solutions to store embeddings.
Some say storing them in a specialized engine (aka vector database) is better. Others say that it's enough to use plugins for existing databases.
Here are [just](https://nextword.substack.com/p/vector-database-is-not-a-separate) a [few](https://stackoverflow.blog/2023/09/20/do-you-need-a-specialized-vector-database-to-implement-vector-search-well/) of [them](https://www.singlestore.com/blog/why-your-vector-database-should-not-be-a-vector-database/).
This article presents our vision and arguments on the topic.
We will:
1. Explain why and when you actually need a dedicated vector solution
2. Debunk some ungrounded claims and anti-patterns to be avoided when building a vector search system.
A table of contents:
* *Each database vendor will sooner or later introduce vector capabilities...* [[click](#each-database-vendor-will-sooner-or-later-introduce-vector-capabilities-that-will-make-every-database-a-vector-database)]
* *Having a dedicated vector database requires duplication of data.* [[click](#having-a-dedicated-vector-database-requires-duplication-of-data)]
* *Having a dedicated vector database requires complex data synchronization.* [[click](#having-a-dedicated-vector-database-requires-complex-data-synchronization)]
* *You have to pay for a vector service uptime and data transfer.* [[click](#you-have-to-pay-for-a-vector-service-uptime-and-data-transfer-of-both-solutions)]
* *What is more seamless than your current database adding vector search capability?* [[click](#what-is-more-seamless-than-your-current-database-adding-vector-search-capability)]
* *Databases can support RAG use-case end-to-end.* [[click](#databases-can-support-rag-use-case-end-to-end)]
## Responding to claims
###### Each database vendor will sooner or later introduce vector capabilities. That will make every database a Vector Database.
The origins of this misconception lie in the careless use of the term Vector *Database*.
When we think of a *database*, we subconsciously envision a relational database like Postgres or MySQL.
Or, more scientifically, a service built on ACID principles that provides transactions, strong consistency guarantees, and atomicity.
The majority of Vector Databases are not *databases* in this sense.
It is more accurate to call them *search engines*, but unfortunately, the marketing term *vector database* has already stuck, and it is unlikely to change.
*What makes search engines different, and why are vector DBs built as search engines?*
First of all, search engines assume different patterns of workloads and prioritize different properties of the system. The core architecture of such solutions is built around those priorities.
What types of properties do search engines prioritize?
* **Scalability**. Search engines are built to handle large amounts of data and queries. They are designed to be horizontally scalable and operate with more data than can fit into a single machine.
* **Search speed**. Search engines should guarantee low latency for queries, while the atomicity of updates is less important.
* **Availability**. Search engines must stay available if the majority of the nodes in a cluster are down. At the same time, they can tolerate the eventual consistency of updates.
{{< figure src=/articles_data/dedicated-service/compass.png caption="Database guarantees compass" width=80% >}}
Those priorities lead to different architectural decisions that are not reproducible in a general-purpose database, even if it has vector index support.
###### Having a dedicated vector database requires duplication of data.
By their very nature, vector embeddings are derivatives of the primary source data.
In the vast majority of cases, embeddings are derived from some other data, such as text, images, or additional information stored in your system. So, in fact, all embeddings you have in your system can be considered transformations of some original source.
And the distinguishing feature of derivative data is that it will change when the transformation pipeline changes.
In the case of vector embeddings, the scenario of those changes is quite simple: every time you update the encoder model, all the embeddings will change.
In systems where vector embeddings are fused with the primary data source, it is impossible to perform such migrations without significantly affecting the production system.
As a result, even if you want to use a single database for storing all kinds of data, you would still need to duplicate data internally.
###### Having a dedicated vector database requires complex data synchronization.
Most production systems prefer to isolate different types of workloads into separate services.
In many cases, those isolated services are not even related to search use cases.
For example, a database for analytics and one for serving can be updated from the same source.
Yet they can store and organize the data in a way that is optimal for their typical workloads.
Search engines are usually isolated for the same reason: you want to avoid creating a noisy neighbor problem and compromising the performance of your main database.
*To give you some intuition, let's consider a practical example:*
Assume we have a database with 1 million records.
This is a small database by modern standards of any relational database.
You can probably use the smallest free tier of any cloud provider to host it.
But if we want to use this database for vector search, 1 million OpenAI `text-embedding-ada-002` embeddings will take **~6 GB of RAM** (sic!).
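Here is the back-of-the-envelope estimate behind that figure (raw float32 vectors only, before any index overhead):

```python
vectors = 1_000_000
dims = 1536          # output size of text-embedding-ada-002
bytes_per_value = 4  # float32

# ≈ 5.7 GiB (~6.1 GB) just for the raw vectors
print(f"{vectors * dims * bytes_per_value / 1024**3:.1f} GiB")
```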
As you can see, the vector search use case completely overwhelmed the main database resource requirements.
In practice, this means that your main database becomes burdened with high memory requirements and cannot scale efficiently, limited by the size of a single machine.
Fortunately, the data synchronization problem is not new and definitely not unique to vector search.
There are many well-known solutions, starting with message queues and ending with specialized ETL tools.
For example, we recently released our [integration with Airbyte](/documentation/integrations/airbyte/), allowing you to synchronize data from various sources into Qdrant incrementally.
###### You have to pay for a vector service uptime and data transfer of both solutions.
In the open-source world, you pay for the resources you use, not the number of different databases you run.
Resources depend more on the optimal solution for each use case.
As a result, running a dedicated vector search engine can be even cheaper, as it allows optimization specifically for vector search use cases.
For instance, Qdrant implements a number of [quantization techniques](/documentation/guides/quantization/) that can significantly reduce the memory footprint of embeddings.
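As an illustration, scalar quantization can be switched on at collection creation time with the Python client. The collection name and vector size below are made up for the example:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # assumes a local Qdrant instance

client.create_collection(
    collection_name="documents",
    vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,  # 1 byte per dimension instead of 4
            always_ram=True,              # keep the small quantized vectors in RAM
        )
    ),
)
```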
In terms of data transfer costs, on most cloud providers, network use within a region is usually free. As long as you put the original source data and the vector store in the same region, there are no added data transfer costs.
###### What is more seamless than your current database adding vector search capability?
In contrast to the short-term attractiveness of integrated solutions, dedicated search engines offer flexibility and a modular approach.
You don't need to update the whole production database each time some of the vector plugins are updated.
Maintenance of a dedicated search engine is as isolated from the main database as the data itself.
In fact, integration of more complex scenarios, such as read/write segregation, is much easier with a dedicated vector solution.
You can easily build cross-region replication to ensure low latency for your users.
{{< figure src=/articles_data/dedicated-service/region-based-deploy.png caption="Read/Write segregation + cross-regional deployment" width=80% >}}
It is especially important in large enterprise organizations, where the responsibility for different parts of the system is distributed among different teams.
In those situations, it is much easier to maintain a dedicated search engine for the AI team than to convince the core team to update the whole primary database.
Finally, the vector capabilities of the all-in-one database are tied to the development and release cycle of the entire stack.
Their long history of use also means that they need to pay a high price for backward compatibility.
###### Databases can support RAG use-case end-to-end.
Putting aside performance and scalability questions, the whole discussion about implementing RAG in the DBs assumes that the only detail missing in traditional databases is the vector index and the ability to make fast ANN queries.
In fact, the current capabilities of vector search have only scratched the surface of what is possible.
For example, in our recent article, we discuss the possibility of building an [exploration API](/articles/vector-similarity-beyond-search/) to fuel the discovery process - an alternative to kNN search, where you don’t even know what exactly you are looking for.
## Summary
Ultimately, you do not need a vector database if you are looking for simple vector search functionality with a small amount of data. We genuinely recommend starting with whatever you already have in your stack to prototype. But you do need one if vector search is central to your application and you want to get more out of it. It is just like using a multi-tool to make something quick versus using a dedicated instrument highly optimized for the use case.
Large-scale production systems usually consist of different specialized services and storage types for good reasons, since it is one of the best practices of modern software architecture, comparable to the orchestration of independent building blocks in a microservice architecture.
When you stuff the database with a vector index, you compromise both the performance and scalability of the main database and the vector search capabilities.
There is no one-size-fits-all approach that would not compromise on performance or flexibility.
So if your use case utilizes vector search in any significant way, it is worth investing in a dedicated vector search engine, aka vector database.
|
qdrant-landing/content/articles/detecting-coffee-anomalies.md | ---
title: Metric Learning for Anomaly Detection
short_description: "How to use metric learning to detect anomalies: quality assessment of coffee beans with just 200 labelled samples"
description: Practical use of metric learning for anomaly detection. A way to match the results of a classification-based approach with only ~0.6% of the labeled data.
social_preview_image: /articles_data/detecting-coffee-anomalies/preview/social_preview.jpg
preview_dir: /articles_data/detecting-coffee-anomalies/preview
small_preview_image: /articles_data/detecting-coffee-anomalies/anomalies_icon.svg
weight: 30
author: Yusuf Sarıgöz
author_link: https://medium.com/@yusufsarigoz
date: 2022-05-04T13:00:00+03:00
draft: false
# aliases: [ /articles/detecting-coffee-anomalies/ ]
---
Anomaly detection is an appealing yet challenging task that has numerous use cases across various industries.
The complexity results mainly from the fact that the task is data-scarce by definition.
Similarly, anomalies are, again by definition, subject to frequent change, and they may take unexpected forms.
For that reason, supervised classification-based approaches are:
* Data-hungry - requiring quite a lot of labeled data;
* Expensive - data labeling is an expensive task itself;
* Time-consuming - you would spend time trying to obtain what is scarce by definition;
* Hard to maintain - you would need to re-train the model repeatedly in response to changes in the data distribution.
These are not desirable features if you want to put your model into production in a rapidly-changing environment.
And, despite all the mentioned difficulties, they do not necessarily offer superior performance compared to the alternatives.
In this post, we will detail the lessons learned from such a use case.
## Coffee Beans
[Agrivero.ai](https://agrivero.ai/) is a company making an AI-enabled solution for quality control & traceability of green coffee for producers, traders, and roasters.
They have collected and labeled more than **30 thousand** images of coffee beans with various defects - wet, broken, chipped, or bug-infested samples.
This data is used to train a classifier that evaluates crop quality and highlights possible problems.
{{< figure src=/articles_data/detecting-coffee-anomalies/detection.gif caption="Anomalies in coffee" width="400px" >}}
We should note that anomalies are very diverse, so the enumeration of all possible anomalies is a challenging task on its own.
In the course of work, new types of defects appear, and shooting conditions change. Thus, a one-time labeled dataset becomes insufficient.
Let's find out how metric learning might help to address this challenge.
## Metric Learning Approach
In this approach, we aimed to encode images in an n-dimensional vector space and then use learned similarities to label images during the inference.
The simplest way to do this is KNN classification.
The algorithm retrieves K-nearest neighbors to a given query vector and assigns a label based on the majority vote.
In a production environment, the kNN classifier could be easily replaced with the [Qdrant](https://github.com/qdrant/qdrant) vector search engine.
{{< figure src=/articles_data/detecting-coffee-anomalies/anomalies_detection.png caption="Production deployment" >}}
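A minimal sketch of such a majority-vote classifier on top of Qdrant; the collection name and the `label` payload field are hypothetical:

```python
from collections import Counter
from qdrant_client import QdrantClient

client = QdrantClient(url="http://localhost:6333")  # assumes a running Qdrant instance

def knn_classify(query_vector, k: int = 10) -> str:
    # retrieve the k nearest labeled bean embeddings
    hits = client.search(
        collection_name="coffee_beans",
        query_vector=query_vector,
        limit=k,
        with_payload=True,
    )
    # majority vote over the "label" payload field
    votes = Counter(hit.payload["label"] for hit in hits)
    return votes.most_common(1)[0][0]
```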
This approach has the following advantages:
* We can benefit from unlabeled data, considering labeling is time-consuming and expensive.
* The relevant metric, e.g., precision or recall, can be tuned according to changing requirements during the inference without re-training.
* Queries labeled with a high score can be added to the KNN classifier on the fly as new data points.
To apply metric learning, we need to have a neural encoder, a model capable of transforming an image into a vector.
Training such an encoder from scratch may require a significant amount of data we might not have. Therefore, we will divide the training into two steps:
* The first step is to train the autoencoder, with which we will prepare a model capable of representing the target domain.
* The second step is finetuning. Its purpose is to train the model to distinguish the required types of anomalies.
{{< figure src=/articles_data/detecting-coffee-anomalies/anomaly_detection_training.png caption="Model training architecture" >}}
### Step 1 - Autoencoder for Unlabeled Data
First, we pretrained a Resnet18-like model in a vanilla autoencoder architecture by leaving the labels aside.
An autoencoder is a model architecture composed of an encoder and a decoder, with the latter trying to recreate the original input from the low-dimensional bottleneck output of the former.
There is no intuitive evaluation metric to indicate the performance in this setup, but we can evaluate the success by examining the recreated samples visually.
{{< figure src=/articles_data/detecting-coffee-anomalies/image_reconstruction.png caption="Example of image reconstruction with Autoencoder" >}}
Then we encoded a subset of the data into 128-dimensional vectors by using the encoder,
and created a KNN classifier on top of these embeddings and associated labels.
Although the results are promising, we can do even better by finetuning with metric learning.
### Step 2 - Finetuning with Metric Learning
We started by selecting 200 labeled samples randomly without replacement.
In this step, the model was composed of the encoder part of the autoencoder with a randomly initialized projection layer stacked on top of it.
We applied transfer learning from the frozen encoder and trained only the projection layer with Triplet Loss and an online batch-all triplet mining strategy.
Unfortunately, the model overfitted quickly in this attempt.
In the next experiment, we used an online batch-hard strategy with a trick to prevent vector space from collapsing.
We will describe our approach in future articles.
This time it converged smoothly, and our evaluation metrics also improved considerably to match the supervised classification approach.
{{< figure src=/articles_data/detecting-coffee-anomalies/ae_report_knn.png caption="Metrics for the autoencoder model with KNN classifier" >}}
{{< figure src=/articles_data/detecting-coffee-anomalies/ft_report_knn.png caption="Metrics for the finetuned model with KNN classifier" >}}
We repeated this experiment with 500 and 2000 samples, but it showed only a slight improvement.
Thus we decided to stick to 200 samples - see below for why.
## Supervised Classification Approach
We also wanted to compare our results with the metrics of a traditional supervised classification model.
For this purpose, a Resnet50 model was finetuned with ~30k labeled images, made available for training.
Surprisingly, the F1 score was around ~0.86.
Please note that we used only 200 labeled samples in the metric learning approach instead of ~30k in the supervised classification approach.
These numbers indicate a huge saving with no considerable compromise in the performance.
## Conclusion
We obtained results comparable to those of the supervised classification method by using **only 0.66%** of the labeled data with metric learning.
This approach is time-saving and resource-efficient, and that may be improved further. Possible next steps might be:
- Collect more unlabeled data and pretrain a larger autoencoder.
- Obtain high-quality labels for a small number of images instead of tens of thousands for finetuning.
- Use hyperparameter optimization and possibly gradual unfreezing in the finetuning step.
- Use [vector search engine](https://github.com/qdrant/qdrant) to serve Metric Learning in production.
We are actively looking into these, and we will continue to publish our findings in this challenge and other use cases of metric learning.
|
qdrant-landing/content/articles/discovery-search.md | ---
title: "Discovery Search: A New Approach to Vector Space"
short_description: Discovery Search, an innovative API for precise, tailored search results.
description: Explore the next frontier in search technology with Discovery Search. Learn how this innovative API provides precise and tailored results.
social_preview_image: /articles_data/discovery-search/social_preview.jpg
small_preview_image: /articles_data/discovery-search/icon.svg
preview_dir: /articles_data/discovery-search/preview
weight: -110
author: Luis Cossío
author_link: https://coszio.github.io
date: 2024-01-31T08:00:00-03:00
draft: false
keywords:
- why use a vector database
- specialty
- search
- discovery
- state-of-the-art
- vector-search
---
# How to Master Vector Space Exploration with Discovery Search
When Christopher Columbus and his crew sailed to cross the Atlantic Ocean, they were not looking for America. They were looking for a new route to India, and they were convinced that the Earth was round. They didn't know anything about America, but since they were going west, they stumbled upon it.
They couldn't reach their _target_, because the geography didn't let them, but once they realized it wasn't India, they claimed it a new "discovery" for their crown. If we consider that sailors need water to sail, then we can establish a _context_ which is positive in the water, and negative on land. Once the sailor's search was stopped by the land, they could not go any further, and a new route was found. Let's keep these concepts of _target_ and _context_ in mind as we explore the new functionality of Qdrant: __Discovery search__.
## What is discovery search?
Discovery search is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily close to the target but are still relevant to the search. It can also be used to represent complex tastes and break out of the similarity bubble. Check out the documentation to learn more about the math behind it and how to use it.
## Qdrant's discovery search: version 1.7 release
In version 1.7, Qdrant [released](/articles/qdrant-1.7.x/) this novel API that lets you constrain the space in which a search is performed, relying only on pure vectors. This is a powerful tool that lets you explore the vector space in a more controlled way. It can be used to find points that are not necessarily closest to the target, but are still relevant to the search.
You can already select which points are available to the search by using payload filters. This by itself is very versatile because it allows us to craft complex filters that show only the points that satisfy their criteria deterministically. However, the payload associated with each point is arbitrary and cannot tell us anything about their position in the vector space. In other words, filtering out irrelevant points can be seen as creating a _mask_ rather than a hyperplane –cutting in between the positive and negative vectors– in the space.
## Understanding context in discovery search
This is where a __vector _context___ can help. We define _context_ as a list of pairs. Each pair is made up of a positive and a negative vector. With a context, we can define hyperplanes within the vector space, which always prefer the positive over the negative vectors. This effectively partitions the space where the search is performed. After the space is partitioned, we then need a _target_ to return the points that are more similar to it.
![Discovery search visualization](/articles_data/discovery-search/discovery-search.png)
While positive and negative vectors might suggest the use of the <a href="/documentation/concepts/explore/#recommendation-api" target="_blank">recommendation interface</a>, in the case of _context_ they need to be paired up in a positive-negative fashion. This is inspired by the machine-learning concept of <a href="https://en.wikipedia.org/wiki/Triplet_loss" target="_blank">_triplet loss_</a>, where you have three vectors: an anchor, a positive, and a negative. Triplet loss is an evaluation of how much the anchor is closer to the positive than to the negative vector, so that learning happens by "moving" the positive and negative points to try to get a better evaluation. However, during discovery, we consider the positive and negative vectors as static points, and we search through the whole dataset for the "anchors", or result candidates, which fit this characteristic better.
![Triplet loss](/articles_data/discovery-search/triplet-loss.png)
[__Discovery search__](#discovery-search), then, is made up of two main inputs:
- __target__: the main point of interest
- __context__: the pairs of positive and negative points we just defined.
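In terms of the API, a discovery query with both inputs might look like the rough sketch below; the collection name and point ids are made up for the example:

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")  # assumes a running Qdrant >= 1.7

results = client.discover(
    collection_name="food",  # hypothetical collection
    target=42,               # point id (a raw vector works too) we want results close to
    context=[
        # prefer whatever point 100 represents over point 200
        models.ContextExamplePair(positive=100, negative=200),
    ],
    limit=10,
)
```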
However, it is not the only way to use it. Alternatively, you can __only__ provide a context, which invokes a [__Context Search__](#context-search). This is useful when you want to explore the space defined by the context, but don't have a specific target in mind. But hold your horses, we'll get to that [later ↪](#context-search).
## Real-world discovery search applications
Let's talk about the first case: context with a target.
To understand why this is useful, let's take a look at a real-world example: using a multimodal encoder like [CLIP](https://openai.com/blog/clip/) to search for images, from text __and__ images.
CLIP is a neural network that can embed both images and text into the same vector space. This means that you can search for images using either a text query or an image query. For this example, we'll reuse our [food recommendations demo](https://food-discovery.qdrant.tech/) by typing "burger" in the text input:
![Burger text input in food demo](/articles_data/discovery-search/search-for-burger.png)
This is basically nearest neighbor search, and while technically we have only images of burgers, one of them is a logo representation of a burger. We're looking for actual burgers, though. Let's try to exclude images like that by adding it as a negative example:
![Try to exclude burger drawing](/articles_data/discovery-search/try-to-exclude-non-burger.png)
Wait a second, what has just happened? These pictures have __nothing__ to do with burgers, and still, they appear on the first results. Is the demo broken?
Turns out, multimodal encoders <a href="https://modalitygap.readthedocs.io/en/latest/" target="_blank">might not work how you expect them to</a>. Images and text are embedded in the same space, but they are not necessarily close to each other. This means that we can create a mental model of the distribution as two separate planes, one for images and one for text.
![Mental model of CLIP embeddings](/articles_data/discovery-search/clip-mental-model.png)
This is where discovery excels because it allows us to constrain the space considering the same mode (images) while using a target from the other mode (text).
![Cross-modal search with discovery](/articles_data/discovery-search/clip-discovery.png)
Discovery search also lets us keep giving feedback to the search engine in the shape of more context pairs, so we can keep refining our search until we find what we are looking for.
Another intuitive example: imagine you're looking for a fish pizza, but pizza names can be confusing, so you can just type "pizza", and prefer a fish over meat. Discovery search will let you use these inputs to suggest a fish pizza... even if it's not called fish pizza!
![Simple discovery example](/articles_data/discovery-search/discovery-example-with-images.png)
## Context search
Now, the second case: only providing context.
Ever been caught in the same recommendations on your favorite music streaming service? This may be caused by getting stuck in a similarity bubble. As user input gets more complex, diversity becomes scarce, and it becomes harder to force the system to recommend something different.
![Context vs recommendation search](/articles_data/discovery-search/context-vs-recommendation.png)
__Context search__ solves this by de-focusing the search around a single point. Instead, it selects points randomly from within a zone in the vector space. This search is the most influenced by _triplet loss_, as the score can be thought of as _"how much a point is closer to a negative than a positive vector?"_. If it is closer to the positive one, then its score will be zero, same as any other point within the same zone. But if it is on the negative side, it will be assigned a more and more negative score the further it gets.
![Context search visualization](/articles_data/discovery-search/context-search.png)
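In the same sketchy API terms as before, dropping the target turns the call into a context search (collection name and ids are again hypothetical):

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

# Same call as before, but without a target: this triggers a context search.
results = client.discover(
    collection_name="music",  # hypothetical collection
    context=[
        models.ContextExamplePair(positive=11, negative=12),
        models.ContextExamplePair(positive=35, negative=7),
    ],
    limit=10,
)
```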
Creating complex tastes in a high-dimensional space becomes easier since you can just add more context pairs to the search. This way, you should be able to constrain the space enough so you select points from a per-search "category" created just from the context in the input.
![A more complex context search](/articles_data/discovery-search/complex-context-search.png)
This way you can give refreshing recommendations, while still being in control by providing positive and negative feedback, or even by trying out different permutations of pairs.
## Key takeaways:
- Discovery search is a powerful tool for controlled exploration in vector spaces.
- Context, positive, and negative vectors guide search parameters and refine results.
- Real-world applications include multimodal search, diverse recommendations, and context-driven exploration.
- Ready to experience the power of Qdrant's Discovery search for yourself? [Try a free demo](https://qdrant.tech/contact-us/) now and unlock the full potential of controlled exploration in vector spaces! |
qdrant-landing/content/articles/embedding-recycler.md | ---
title: Layer Recycling and Fine-tuning Efficiency
short_description: Tradeoff between speed and performance in layer recycling
description: Learn when and how to use layer recycling to achieve different performance targets.
preview_dir: /articles_data/embedding-recycling/preview
small_preview_image: /articles_data/embedding-recycling/icon.svg
social_preview_image: /articles_data/embedding-recycling/preview/social_preview.jpg
weight: 10
author: Yusuf Sarıgöz
author_link: https://medium.com/@yusufsarigoz
date: 2022-08-23T13:00:00+03:00
draft: false
aliases: [ /articles/embedding-recycler/ ]
---
A recent [paper](https://arxiv.org/abs/2207.04993)
by Allen AI has attracted attention in the NLP community as they cache the output of a certain intermediate layer
in the training and inference phases to achieve a speedup of ~83%
with a negligible loss in model performance.
This technique is quite similar to [the caching mechanism in Quaterion](https://quaterion.qdrant.tech/tutorials/cache_tutorial.html),
but the latter is intended for any data modalities while the former focuses only on language models
despite presenting important insights from their experiments.
In this post, I will share our findings combined with those,
hoping to provide the community with a wider perspective on layer recycling.
## How layer recycling works
The main idea of layer recycling is to accelerate the training (and inference)
by avoiding repeated passes of the same data object through the frozen layers.
Instead, it is possible to pass objects through those layers only once,
cache the output
and use them as inputs to the unfrozen layers in future epochs.
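In PyTorch terms, the idea boils down to something like the following toy sketch; the layer sizes and the cache key are arbitrary choices of mine, not taken from the paper:

```python
import torch
from torch import nn

# Toy stand-ins for a frozen backbone and a trainable head.
frozen = nn.Sequential(nn.Linear(128, 64), nn.ReLU()).eval()
for p in frozen.parameters():
    p.requires_grad = False

head = nn.Linear(64, 32)
cache: dict[int, torch.Tensor] = {}

def forward(sample_id: int, x: torch.Tensor) -> torch.Tensor:
    # The frozen part runs only once per object; later epochs hit the cache.
    if sample_id not in cache:
        with torch.no_grad():
            cache[sample_id] = frozen(x)
    return head(cache[sample_id])

x = torch.randn(1, 128)
out = forward(0, x)  # first epoch: computes and caches the frozen output
out = forward(0, x)  # later epochs: skips the frozen layers entirely
```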
In the paper, they usually cache 50% of the layers, e.g., the output of the 6th multi-head self-attention block in a 12-block encoder.
However, they find out that it does not work equally well for all tasks.
For example, the question answering task suffers from a more significant degradation in performance with 50% of the layers recycled,
and they choose to lower it down to 25% for this task,
so they suggest determining the level of caching based on the task at hand.
They also note that caching provides a more considerable speedup for larger models and on lower-end machines.
In layer recycling, the cache is hit for exactly the same object.
It is easy to achieve this in textual data as it is easily hashable,
but you may need more advanced tricks to generate keys for the cache
when you want to generalize this technique to diverse data types.
For instance, hashing PyTorch tensors [does not work as you may expect](https://github.com/joblib/joblib/issues/1282).
Quaterion comes with an intelligent key extractor that may be applied to any data type,
but you can also customize it with a callable passed as an argument.
Thanks to this flexibility, we were able to run a variety of experiments in different setups,
and I believe that these findings will be helpful for your future projects.
## Experiments
We conducted different experiments to test the performance with:
1. Different numbers of layers recycled in [the similar cars search example](https://quaterion.qdrant.tech/tutorials/cars-tutorial.html).
2. Different numbers of samples in the dataset for training and fine-tuning for similar cars search.
3. Different numbers of layers recycled in [the question answering example](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html).
## Easy layer recycling with Quaterion
The easiest way of caching layers in Quaterion is to compose a [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html#quaterion.train.trainable_model.TrainableModel)
with a frozen [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html#quaterion_models.encoders.encoder.Encoder)
and an unfrozen [EncoderHead](https://quaterion-models.qdrant.tech/quaterion_models.heads.encoder_head.html#quaterion_models.heads.encoder_head.EncoderHead).
Therefore, we modified the `TrainableModel` in the [example](https://github.com/qdrant/quaterion/blob/master/examples/cars/models.py)
as in the following:
```python
class Model(TrainableModel):
    # ...

    def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
        pre_trained_encoder = torchvision.models.resnet34(pretrained=True)
        self.avgpool = copy.deepcopy(pre_trained_encoder.avgpool)
        self.finetuned_block = copy.deepcopy(pre_trained_encoder.layer4)
        modules = []
        for name, child in pre_trained_encoder.named_children():
            modules.append(child)
            if name == "layer3":
                break
        pre_trained_encoder = nn.Sequential(*modules)
        return CarsEncoder(pre_trained_encoder)

    def configure_head(self, input_embedding_size) -> EncoderHead:
        return SequentialHead(
            self.finetuned_block,
            self.avgpool,
            nn.Flatten(),
            SkipConnectionHead(512, dropout=0.3, skip_dropout=0.2),
            output_size=512,
        )

    # ...
```
This trick lets us finetune one more layer from the base model as a part of the `EncoderHead`
while still benefiting from the speedup in the frozen `Encoder` provided by the cache.
## Experiment 1: Percentage of layers recycled
The paper states that recycling 50% of the layers yields little to no loss in performance when compared to full fine-tuning.
In this setup, we compared performances of four methods:
1. Freeze the whole base model and train only `EncoderHead`.
2. Move one of the four residual blocks to `EncoderHead` and train it together with the head layer while freezing the rest (75% layer recycling).
3. Move two of the four residual blocks to `EncoderHead` while freezing the rest (50% layer recycling).
4. Train the whole base model together with `EncoderHead`.
**Note**: During these experiments, we used ResNet34 instead of ResNet152 as the pretrained model
in order to be able to use a reasonable batch size in full training.
The baseline score with ResNet34 is 0.106.
| Model | RRP |
| ------------- | ---- |
| Full training | 0.32 |
| 50% recycling | 0.31 |
| 75% recycling | 0.28 |
| Head only | 0.22 |
| Baseline | 0.11 |
As is seen in the table, the performance in 50% layer recycling is very close to that in full training.
Additionally, we can still have a considerable speedup in 50% layer recycling with only a small drop in performance.
Although 75% layer recycling is better than training only `EncoderHead`,
its performance drops quickly when compared to 50% layer recycling and full training.
## Experiment 2: Amount of available data
In the second experiment setup, we compared performances of fine-tuning strategies with different dataset sizes.
We sampled 50% of the training set randomly while still evaluating models on the whole validation set.
| Model | RRP |
| ------------- | ---- |
| Full training | 0.27 |
| 50% recycling | 0.26 |
| 75% recycling | 0.25 |
| Head only | 0.21 |
| Baseline | 0.11 |
This experiment shows that the smaller the available dataset is,
the bigger the drop in performance we observe in full training, 50% and 75% layer recycling.
On the other hand, the level of degradation in training only `EncoderHead` is really small when compared to others.
When we further reduce the dataset size, full training becomes untrainable at some point,
while we can still improve over the baseline by training only `EncoderHead`.
## Experiment 3: Layer recycling in question answering
We also wanted to test layer recycling in a different domain
as one of the most important takeaways of the paper is that
the performance of layer recycling is task-dependent.
To this end, we set up an experiment with the code from the [Question Answering with Similarity Learning tutorial](https://quaterion.qdrant.tech/tutorials/nlp_tutorial.html).
| Model | RP@1 | RRK |
| ------------- | ---- | ---- |
| Full training | 0.76 | 0.65 |
| 50% recycling | 0.75 | 0.63 |
| 75% recycling | 0.69 | 0.59 |
| Head only | 0.67 | 0.58 |
| Baseline | 0.64 | 0.55 |
In this task, 50% layer recycling can still do a good job with only a small drop in performance when compared to full training.
However, the level of degradation is smaller than that in the similar cars search example.
This can be attributed to several factors such as the pretrained model quality, dataset size and task definition,
and it can be the subject of a more elaborate and comprehensive research project.
Another observation is that the performance of 75% layer recycling is closer to that of training only `EncoderHead`
than 50% layer recycling.
## Conclusion
We set up several experiments to test layer recycling under different constraints
and confirmed that layer recycling yields varying performances with different tasks and domains.
One of the most important observations is the fact that the level of degradation in layer recycling
is sublinear in comparison to full training, i.e., we lose a smaller percentage of performance than
the percentage we recycle. Additionally, training only `EncoderHead`
is more resistant to small dataset sizes.
There is even a critical size under which full training does not work at all.
The issue of performance differences shows that there is still room for further research on layer recycling,
and luckily Quaterion is flexible enough to run such experiments quickly.
We will continue to report our findings on fine-tuning efficiency.
**Fun fact**: The preview image for this article was created with Dall.e with the following prompt: "Photo-realistic robot using a tuning fork to adjust a piano."
[Click here](/articles_data/embedding-recycling/full.png)
to see it in full size! |
qdrant-landing/content/articles/faq-question-answering.md | ---
title: Q&A with Similarity Learning
short_description: A complete guide to building a Q&A system with similarity learning.
description: A complete guide to building a Q&A system using Quaterion and SentenceTransformers.
social_preview_image: /articles_data/faq-question-answering/preview/social_preview.jpg
preview_dir: /articles_data/faq-question-answering/preview
small_preview_image: /articles_data/faq-question-answering/icon.svg
weight: 9
author: George Panchuk
author_link: https://medium.com/@george.panchuk
date: 2022-06-28T08:57:07.604Z
# aliases: [ /articles/faq-question-answering/ ]
---
# Question-answering system with Similarity Learning and Quaterion
Many problems in modern machine learning are approached as classification tasks.
Some are the classification tasks by design, but others are artificially transformed into such.
And when you try to apply an approach that does not naturally fit your problem, you risk coming up with over-complicated or bulky solutions.
In some cases, you would even get worse performance.
Imagine that you got a new task and decided to solve it with a good old classification approach.
Firstly, you will need labeled data.
If it came on a plate with the task, you're lucky, but if it didn't, you might need to label it manually.
And I guess you are already familiar with how painful it might be.
Assume you somehow labeled all the required data and trained a model.
It shows good performance - well done!
But a day later, your manager tells you about a bunch of new data with new classes, which your model has to handle.
You repeat your pipeline.
Then, two days later, you are contacted one more time.
You need to update the model again, and again, and again.
Sounds tedious and expensive to me; doesn't it to you?
## Automating customer support
Let's now take a look at the concrete example. There is a pressing problem with automating customer support.
The service should be capable of answering user questions and retrieving relevant articles from the documentation without any human involvement.
With the classification approach, you need to build a hierarchy of classification models to determine the question's topic.
You have to collect and label a whole custom dataset of your private documentation topics to train that.
And then, each time you have a new topic in your documentation, you have to re-train the whole pile of classifiers with additionally labeled data.
Can we make it easier?
## Similarity option
One of the possible alternatives is Similarity Learning, which we are going to discuss in this article.
It suggests getting rid of the classes and making decisions based on the similarity between objects instead.
To do it quickly, we would need some intermediate representation - embeddings.
Embeddings are high-dimensional vectors with semantic information accumulated in them.
As embeddings are vectors, one can apply a simple function to calculate the similarity score between them, for example, cosine or euclidean distance.
So with similarity learning, all we need to do is provide pairs of correct questions and answers.
And then, the model will learn to distinguish proper answers by the similarity of embeddings.
>If you want to learn more about similarity learning and applications, check out this [article](/documentation/tutorials/neural-search/) which might be an asset.
## Let's build
The similarity learning approach seems a lot simpler than classification in this case, and if you have some
doubts on your mind, let me dispel them.
As I had no resource with an exhaustive F.A.Q. that might serve as a dataset, I scraped F.A.Q. pages from the sites of popular cloud providers.
The dataset consists of just 8.5k pairs of questions and answers; you can take a closer look at it [here](https://github.com/qdrant/demo-cloud-faq).
Once we have data, we need to obtain embeddings for it.
It is not a novel technique in NLP to represent texts as embeddings.
There are plenty of algorithms and models to calculate them.
You could have heard of Word2Vec, GloVe, ELMo, BERT, all these models can provide text embeddings.
However, it is better to produce embeddings with a model trained for semantic similarity tasks.
For instance, we can find such models at [sentence-transformers](https://www.sbert.net/docs/pretrained_models.html).
Authors claim that `all-mpnet-base-v2` provides the best quality, but let's pick `all-MiniLM-L6-v2` for our tutorial
as it is 5x faster and still offers good results.
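As a quick, self-contained illustration of what measuring similarity with this model looks like (not the actual baseline script, which lives in the repository), the question and answers below are made up:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

question = "How do I delete my account?"
answers = [
    "You can delete your account from the profile settings page.",
    "Our free tier includes 1 GB of storage.",
]

q_emb = model.encode(question)
a_emb = model.encode(answers)

# Cosine similarity between the question and each candidate answer;
# the correct answer should get the highest score.
print(util.cos_sim(q_emb, a_emb))
```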
Having all this, we can test our approach. We won't take the whole dataset at the moment, but only
a part of it. To measure the model's performance, we will use two metrics -
[mean reciprocal rank](https://en.wikipedia.org/wiki/Mean_reciprocal_rank) and
[precision@1](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k).
We have a [ready script](https://github.com/qdrant/demo-cloud-faq/blob/experiments/faq/baseline.py)
for this experiment, let's just launch it now.
<div class="table-responsive">
| precision@1 | reciprocal_rank |
|-------------|-----------------|
| 0.564 | 0.663 |
</div>
That's already quite decent quality, but maybe we can do better?
## Improving results with fine-tuning
Actually, we can! The model we used has good natural language understanding, but it has never seen
our data. An approach called `fine-tuning` might be helpful to overcome this issue. With
fine-tuning you don't need to design a task-specific architecture, but take a model pre-trained on
another task, apply a couple of layers on top and train its parameters.
Sounds good, but as similarity learning is not as common as classification, it might be a bit inconvenient to fine-tune a model with traditional tools.
For this reason we will use [Quaterion](https://github.com/qdrant/quaterion) - a framework for fine-tuning similarity learning models.
Let's see how we can train models with it.
First, create our project and call it `faq`.
> All project dependencies, utils scripts not covered in the tutorial can be found in the
> [repository](https://github.com/qdrant/demo-cloud-faq/tree/tutorial).
### Configure training
The main entity in Quaterion is [TrainableModel](https://quaterion.qdrant.tech/quaterion.train.trainable_model.html).
This class makes the model-building process fast and convenient.
`TrainableModel` is a wrapper around [pytorch_lightning.LightningModule](https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html).
[Lightning](https://www.pytorchlightning.ai/) handles all the training process complexities, like the training loop, device management, etc., and saves the user from the necessity of implementing all this routine manually.
Lightning's modularity is also worth mentioning.
It improves separation of responsibilities, makes code more readable, robust and easy to write.
All these features make Pytorch Lightning a perfect training backend for Quaterion.
To use `TrainableModel` you need to inherit your model class from it.
The same way you would use `LightningModule` in pure `pytorch_lightning`.
Mandatory methods are `configure_loss`, `configure_encoders`, `configure_head`,
`configure_optimizers`.
The majority of the mentioned methods are quite easy to implement; you'll probably just need a couple of
imports to do that. But `configure_encoders` requires some code :)
Let's create a `model.py` with model's template and a placeholder for `configure_encoders`
for the moment.
```python
from typing import Union, Dict, Optional

from torch.optim import Adam

from quaterion import TrainableModel
from quaterion.loss import MultipleNegativesRankingLoss, SimilarityLoss
from quaterion_models.encoders import Encoder
from quaterion_models.heads import EncoderHead
from quaterion_models.heads.skip_connection_head import SkipConnectionHead


class FAQModel(TrainableModel):
    def __init__(self, lr=10e-5, *args, **kwargs):
        self.lr = lr
        super().__init__(*args, **kwargs)

    def configure_optimizers(self):
        return Adam(self.model.parameters(), lr=self.lr)

    def configure_loss(self) -> SimilarityLoss:
        return MultipleNegativesRankingLoss(symmetric=True)

    def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
        ...  # ToDo

    def configure_head(self, input_embedding_size: int) -> EncoderHead:
        return SkipConnectionHead(input_embedding_size)
```
- `configure_optimizers` is a method provided by Lightning. The eagle-eyed among you could notice
the mysterious `self.model`; it is actually a [SimilarityModel](https://quaterion-models.qdrant.tech/quaterion_models.model.html) instance. We will cover it later.
- `configure_loss` is a loss function to be used during training. You can choose a ready-made implementation from Quaterion.
However, since Quaterion's purpose is not to cover all possible losses, or other entities and
features of similarity learning, but to provide a convenient framework to build and use such models,
there might not be a desired loss. In this case it is possible to use [PytorchMetricLearningWrapper](https://quaterion.qdrant.tech/quaterion.loss.extras.pytorch_metric_learning_wrapper.html)
to bring required loss from [pytorch-metric-learning](https://kevinmusgrave.github.io/pytorch-metric-learning/) library, which has a rich collection of losses.
You can also implement a custom loss yourself.
- `configure_head` - a model built via Quaterion is a combination of encoders and a top layer - the head.
As with losses, some head implementations are provided. They can be found at [quaterion_models.heads](https://quaterion-models.qdrant.tech/quaterion_models.heads.html).
In our example, we use [MultipleNegativesRankingLoss](https://quaterion.qdrant.tech/quaterion.loss.multiple_negatives_ranking_loss.html).
This loss is especially good for training retrieval tasks.
It assumes that we pass only positive pairs (similar objects) and considers all other objects as negative examples.
`MultipleNegativesRankingLoss` uses cosine to measure distance under the hood, but this is a configurable parameter.
Quaterion provides implementation for other distances as well. You can find available ones at [quaterion.distances](https://quaterion.qdrant.tech/quaterion.distances.html).
Now we can come back to `configure_encoders`:)
### Configure Encoder
The encoder task is to convert objects into embeddings.
They usually take advantage of some pre-trained models, in our case `all-MiniLM-L6-v2` from `sentence-transformers`.
In order to use it in Quaterion, we need to create a wrapper inherited from the [Encoder](https://quaterion-models.qdrant.tech/quaterion_models.encoders.encoder.html) class.
Let's create our encoder in `encoder.py`
```python
import os

from torch import Tensor, nn
from sentence_transformers.models import Transformer, Pooling

from quaterion_models.encoders import Encoder
from quaterion_models.types import TensorInterchange, CollateFnType


class FAQEncoder(Encoder):
    def __init__(self, transformer, pooling):
        super().__init__()
        self.transformer = transformer
        self.pooling = pooling
        self.encoder = nn.Sequential(self.transformer, self.pooling)

    @property
    def trainable(self) -> bool:
        # Defines if we want to train encoder itself, or head layer only
        return False

    @property
    def embedding_size(self) -> int:
        return self.transformer.get_word_embedding_dimension()

    def forward(self, batch: TensorInterchange) -> Tensor:
        return self.encoder(batch)["sentence_embedding"]

    def get_collate_fn(self) -> CollateFnType:
        return self.transformer.tokenize

    @staticmethod
    def _transformer_path(path: str):
        return os.path.join(path, "transformer")

    @staticmethod
    def _pooling_path(path: str):
        return os.path.join(path, "pooling")

    def save(self, output_path: str):
        transformer_path = self._transformer_path(output_path)
        os.makedirs(transformer_path, exist_ok=True)
        pooling_path = self._pooling_path(output_path)
        os.makedirs(pooling_path, exist_ok=True)
        self.transformer.save(transformer_path)
        self.pooling.save(pooling_path)

    @classmethod
    def load(cls, input_path: str) -> Encoder:
        transformer = Transformer.load(cls._transformer_path(input_path))
        pooling = Pooling.load(cls._pooling_path(input_path))
        return cls(transformer=transformer, pooling=pooling)
```
As you may notice, more methods are implemented than we've already discussed. Let's go
through them now!
- In `__init__` we register our pre-trained layers, similar to what you do in a [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) descendant.
- `trainable` defines whether current `Encoder` layers should be updated during training or not. If `trainable=False`, then all layers will be frozen.
- `embedding_size` is a size of encoder's output, it is required for proper `head` configuration.
- `get_collate_fn` is a tricky one. Here you should return a method which turns a batch of raw
data into input suitable for the encoder. If `get_collate_fn` is not overridden, then the [default_collate](https://pytorch.org/docs/stable/data.html#torch.utils.data.default_collate) will be used.
The remaining methods are self-explanatory.
As our encoder is ready, we are now able to fill in `configure_encoders`.
Just insert the following code into `model.py`:
```python
...
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import Transformer, Pooling

from faq.encoder import FAQEncoder


class FAQModel(TrainableModel):
    ...

    def configure_encoders(self) -> Union[Encoder, Dict[str, Encoder]]:
        pre_trained_model = SentenceTransformer("all-MiniLM-L6-v2")
        transformer: Transformer = pre_trained_model[0]
        pooling: Pooling = pre_trained_model[1]
        encoder = FAQEncoder(transformer, pooling)
        return encoder
```
### Data preparation
Okay, we have raw data and a trainable model. But we don't know yet how to feed this data to our model.
Currently, Quaterion takes two types of similarity representation - pairs and groups.
The groups format assumes that all objects are split into groups of similar objects. All objects inside
one group are similar, and all other objects outside this group are considered dissimilar to them.
But in the case of pairs, we can only assume similarity between explicitly specified pairs of objects.
We can apply any of the approaches with our data, but pairs one seems more intuitive.
The format in which Similarity is represented determines which loss can be used.
For example, _ContrastiveLoss_ and _MultipleNegativesRankingLoss_ work with the pairs format.
[SimilarityPairSample](https://quaterion.qdrant.tech/quaterion.dataset.similarity_samples.html#quaterion.dataset.similarity_samples.SimilarityPairSample) could be used to represent pairs.
Let's take a look at it:
```python
@dataclass
class SimilarityPairSample:
    obj_a: Any
    obj_b: Any
    score: float = 1.0
    subgroup: int = 0
```
You might have some questions here: what are `score` and `subgroup`?
Well, `score` is a measure of expected samples similarity.
If you only need to specify if two samples are similar or not, you can use `1.0` and `0.0` respectively.
The `subgroup` parameter is required for a more granular description of what negative examples could be.
By default, all pairs belong to subgroup zero.
That means that we would need to specify all negative examples manually.
But in most cases, we can avoid this by enabling different subgroups.
All objects from different subgroups will be considered as negative examples in loss, and thus it
provides a way to set negative examples implicitly.
With this knowledge, we now can create our `Dataset` class in `dataset.py` to feed our model:
```python
import json
from typing import List, Dict

from torch.utils.data import Dataset
from quaterion.dataset.similarity_samples import SimilarityPairSample


class FAQDataset(Dataset):
    """Dataset class to process .jsonl files with FAQ from popular cloud providers."""

    def __init__(self, dataset_path):
        self.dataset: List[Dict[str, str]] = self.read_dataset(dataset_path)

    def __getitem__(self, index) -> SimilarityPairSample:
        line = self.dataset[index]
        question = line["question"]
        # All questions have a unique subgroup
        # Meaning that all other answers are considered negative pairs
        subgroup = hash(question)
        return SimilarityPairSample(
            obj_a=question,
            obj_b=line["answer"],
            score=1,
            subgroup=subgroup,
        )

    def __len__(self):
        return len(self.dataset)

    @staticmethod
    def read_dataset(dataset_path) -> List[Dict[str, str]]:
        """Read jsonl-file into a memory."""
        with open(dataset_path, "r") as fd:
            return [json.loads(json_line) for json_line in fd]
```
We assigned a unique subgroup to each question, so all other objects which have a different question will be considered negative examples.
### Evaluation Metric
We still haven't added any metrics to the model. For this purpose, Quaterion provides `configure_metrics`.
We just need to override it and attach the metrics we are interested in.
Quaterion has some popular retrieval metrics implemented, such as _precision@k_ and _mean reciprocal rank_.
They can be found in the [quaterion.eval](https://quaterion.qdrant.tech/quaterion.eval.html) package.
Only a few metrics are provided out of the box; it is assumed that other ones will be implemented by the user or taken from other libraries.
You will most likely need to inherit from `PairMetric` or `GroupMetric` to implement a new one.
In `configure_metrics`, we need to return a list of `AttachedMetric` instances.
They are just wrappers around metric instances and help to log metrics more easily.
Under the hood, logging is handled by `pytorch-lightning`.
You can configure it as you want by passing the required parameters as keyword arguments to `AttachedMetric`.
For additional info, visit the [logging documentation page](https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html).
Let's add the mentioned metrics to our `FAQModel`.
Add this code to `model.py`:
```python
...
from quaterion.eval.pair import RetrievalPrecision, RetrievalReciprocalRank
from quaterion.eval.attached_metric import AttachedMetric


class FAQModel(TrainableModel):
    def __init__(self, lr=10e-5, *args, **kwargs):
        self.lr = lr
        super().__init__(*args, **kwargs)

    ...

    def configure_metrics(self):
        return [
            AttachedMetric(
                "RetrievalPrecision",
                RetrievalPrecision(k=1),
                prog_bar=True,
                on_epoch=True,
            ),
            AttachedMetric(
                "RetrievalReciprocalRank",
                RetrievalReciprocalRank(),
                prog_bar=True,
                on_epoch=True
            ),
        ]
```
### Fast training with Cache
Quaterion has one more cherry on top of the cake when it comes to non-trainable encoders.
If encoders are frozen, they are deterministic and emit exactly the same embeddings for the same input data on every epoch.
This provides a way to avoid repeated calculations and reduce training time.
For this purpose, Quaterion has a cache functionality.
Before training starts, the cache runs one epoch to pre-calculate all embeddings with the frozen encoders and then stores them on the device you choose (currently CPU or GPU).
All you need to do is define which encoders are trainable and configure the cache settings.
And that's it: Quaterion handles everything else for you.
To configure the cache, you need to override the `configure_caches` method of `TrainableModel`.
This method should return an instance of [CacheConfig](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheConfig).
Let's add cache to our model:
```python
...
from quaterion.train.cache import CacheConfig, CacheType
...


class FAQModel(TrainableModel):
    ...

    def configure_caches(self) -> Optional[CacheConfig]:
        return CacheConfig(CacheType.AUTO)

    ...
```
[CacheType](https://quaterion.qdrant.tech/quaterion.train.cache.cache_config.html#quaterion.train.cache.cache_config.CacheType) determines how the cache will be stored in memory.
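For example, if you want to keep GPU memory free for training, you could store the cached embeddings in RAM instead. This is a minimal sketch assuming the usual `CacheType` options (`AUTO`, `CPU`, `GPU`), where `AUTO` uses the GPU if one is available and falls back to CPU otherwise:
```python
...
from typing import Optional

from quaterion.train.cache import CacheConfig, CacheType
...


class FAQModel(TrainableModel):
    ...

    def configure_caches(self) -> Optional[CacheConfig]:
        # Keep cached embeddings in RAM so GPU memory stays free for training
        return CacheConfig(CacheType.CPU)

    ...
```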
### Training
Now we need to combine all our code in `train.py` and launch the training process.
```python
import torch
import pytorch_lightning as pl

from quaterion import Quaterion
from quaterion.dataset import PairsSimilarityDataLoader

from faq.dataset import FAQDataset


def train(model, train_dataset_path, val_dataset_path, params):
    use_gpu = params.get("cuda", torch.cuda.is_available())

    trainer = pl.Trainer(
        min_epochs=params.get("min_epochs", 1),
        max_epochs=params.get("max_epochs", 500),
        auto_select_gpus=use_gpu,
        log_every_n_steps=params.get("log_every_n_steps", 1),
        gpus=int(use_gpu),
    )
    train_dataset = FAQDataset(train_dataset_path)
    val_dataset = FAQDataset(val_dataset_path)
    train_dataloader = PairsSimilarityDataLoader(
        train_dataset, batch_size=1024
    )
    val_dataloader = PairsSimilarityDataLoader(
        val_dataset, batch_size=1024
    )

    Quaterion.fit(model, trainer, train_dataloader, val_dataloader)


if __name__ == "__main__":
    import os
    from pytorch_lightning import seed_everything

    from faq.model import FAQModel
    from faq.config import DATA_DIR, ROOT_DIR

    seed_everything(42, workers=True)

    faq_model = FAQModel()
    train_path = os.path.join(
        DATA_DIR,
        "train_cloud_faq_dataset.jsonl"
    )
    val_path = os.path.join(
        DATA_DIR,
        "val_cloud_faq_dataset.jsonl"
    )
    train(faq_model, train_path, val_path, {})

    faq_model.save_servable(os.path.join(ROOT_DIR, "servable"))
```
There are a couple of classes here that we haven't seen yet: `PairsSimilarityDataLoader`, which is a native dataloader for
`SimilarityPairSample` objects, and `Quaterion`, which is the entry point to the training process.
### Dataset-wise evaluation
Up to this moment, we've calculated only batch-wise metrics.
Such metrics can fluctuate a lot depending on the batch size and can be misleading.
It would be more helpful if we could calculate a metric on a whole dataset, or at least a large part of it.
Raw data may consume a huge amount of memory, and usually we can't fit it into one batch.
Embeddings, on the contrary, will most likely consume much less.
That's where `Evaluator` enters the scene.
First, given a dataset of `SimilaritySample` objects, `Evaluator` encodes it via `SimilarityModel` and computes the corresponding labels.
After that, it calculates a metric value, which can be more representative than batch-wise ones.
However, you may still find yourself in a situation where evaluation becomes too slow or there is not enough memory left.
A bottleneck might be the square distance matrix, which has to be calculated to compute a retrieval metric.
You can mitigate this bottleneck by calculating a rectangular matrix of reduced size instead.
`Evaluator` accepts a `sampler` with a sample size to select only the specified number of embeddings.
If the sample size is not specified, evaluation is performed on all embeddings.
Enough words! Let's add the evaluator to our code and finish `train.py`.
```python
...
from quaterion.eval.evaluator import Evaluator
from quaterion.eval.pair import RetrievalReciprocalRank, RetrievalPrecision
from quaterion.eval.samplers.pair_sampler import PairSampler
...


def train(model, train_dataset_path, val_dataset_path, params):
    ...

    metrics = {
        "rrk": RetrievalReciprocalRank(),
        "rp@1": RetrievalPrecision(k=1)
    }
    sampler = PairSampler()
    evaluator = Evaluator(metrics, sampler)
    results = Quaterion.evaluate(evaluator, val_dataset, model.model)
    print(f"results: {results}")
```
### Train Results
At this point, we can train our model. I do it via `python3 -m faq.train`.
<div class="table-responsive">

|epoch|train_precision@1|train_reciprocal_rank|val_precision@1|val_reciprocal_rank|
|-----|-----------------|---------------------|---------------|-------------------|
|0    |0.650            |0.732                |0.659          |0.741              |
|100  |0.665            |0.746                |0.673          |0.754              |
|200  |0.677            |0.757                |0.682          |0.763              |
|300  |0.686            |0.765                |0.688          |0.768              |
|400  |0.695            |0.772                |0.694          |0.773              |
|500  |0.701            |0.778                |0.700          |0.777              |

</div>
Results obtained with `Evaluator`:
<div class="table-responsive">

| precision@1 | reciprocal_rank |
|-------------|-----------------|
| 0.577       | 0.675           |

</div>
After training, all the metrics have increased.
And this training was done in just 3 minutes on a single GPU!
There is no overfitting and the results are steadily growing, although I think there is still room for improvement and experimentation.
## Model serving
As you may have already noticed, the Quaterion framework is split into two separate libraries: `quaterion`
and [quaterion-models](https://quaterion-models.qdrant.tech/).
The former contains training-related components such as losses, the cache, and the `pytorch-lightning` dependency,
while the latter contains only the modules necessary for serving: encoders, heads, and `SimilarityModel` itself.
The reasons for this separation are:
- fewer entities to operate in a production environment
- a reduced memory footprint
It is essential to isolate training dependencies from the serving environment because the training step is usually more complicated.
Training dependencies quickly get out of control, significantly slowing down deployment and serving and increasing unnecessary resource usage.
The very last line of `train.py`, `faq_model.save_servable(...)`, saves the encoders and the model in a fashion that eliminates all Quaterion dependencies and stores only the data necessary to run the model in production.
In `serve.py` we load and encode all the answers and then look for the closest vectors to the questions we are interested in:
```python
import os
import json

import torch
from quaterion_models.model import SimilarityModel
from quaterion.distances import Distance

from faq.config import DATA_DIR, ROOT_DIR

if __name__ == "__main__":
    device = "cuda:0" if torch.cuda.is_available() else "cpu"
    model = SimilarityModel.load(os.path.join(ROOT_DIR, "servable"))
    model.to(device)

    dataset_path = os.path.join(DATA_DIR, "val_cloud_faq_dataset.jsonl")
    with open(dataset_path) as fd:
        answers = [json.loads(json_line)["answer"] for json_line in fd]

    # everything is ready, let's encode our answers
    answer_embeddings = model.encode(answers, to_numpy=False)

    # Some prepared questions and answers to ensure that our model works as intended
    questions = [
        "what is the pricing of aws lambda functions powered by aws graviton2 processors?",
        "can i run a cluster or job for a long time?",
        "what is the dell open manage system administrator suite (omsa)?",
        "what are the differences between the event streams standard and event streams enterprise plans?",
    ]
    ground_truth_answers = [
        "aws lambda functions powered by aws graviton2 processors are 20% cheaper compared to x86-based lambda functions",
        "yes, you can run a cluster for as long as is required",
        "omsa enables you to perform certain hardware configuration tasks and to monitor the hardware directly via the operating system",
        "to find out more information about the different event streams plans, see choosing your plan",
    ]

    # encode our questions and find the closest to them answer embeddings
    question_embeddings = model.encode(questions, to_numpy=False)
    distance = Distance.get_by_name(Distance.COSINE)
    question_answers_distances = distance.distance_matrix(
        question_embeddings, answer_embeddings
    )
    answers_indices = question_answers_distances.min(dim=1)[1]
    for q_ind, a_ind in enumerate(answers_indices):
        print("Q:", questions[q_ind])
        print("A:", answers[a_ind], end="\n\n")
        assert (
            answers[a_ind] == ground_truth_answers[q_ind]
        ), f"<{answers[a_ind]}> != <{ground_truth_answers[q_ind]}>"
```
We stored our collection of answer embeddings in memory and performed the search directly in Python.
For production purposes, it's better to use some sort of vector search engine like [Qdrant](https://github.com/qdrant/qdrant).
It provides durability, a speed boost, and a bunch of other features.
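To give an idea of what that migration looks like, here is a rough sketch that reuses `answers`, `answer_embeddings`, and `question_embeddings` from `serve.py` above and pushes them into a local Qdrant instance with `qdrant-client`. The collection name is arbitrary, and the vector size of 384 is an assumption based on `all-MiniLM-L6-v2`; adjust it to your model's actual embedding size:
```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(host="localhost", port=6333)

# Create (or re-create) a collection for the answer embeddings
client.recreate_collection(
    collection_name="faq_answers",
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
)

# Upload each answer embedding with its text as payload
client.upsert(
    collection_name="faq_answers",
    points=[
        models.PointStruct(id=idx, vector=embedding.tolist(), payload={"answer": answer})
        for idx, (embedding, answer) in enumerate(zip(answer_embeddings, answers))
    ],
)

# Query Qdrant with the first question embedding and print the closest answer
hits = client.search(
    collection_name="faq_answers",
    query_vector=question_embeddings[0].tolist(),
    limit=1,
)
print(hits[0].payload["answer"])
```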
So far, we've implemented the whole training process, prepared the model for serving, and even applied the
trained model with `Quaterion`.
Thank you for your time and attention!
I hope you enjoyed this huge tutorial and will use `Quaterion` for your similarity learning projects.
All ready-to-use code can be found [here](https://github.com/qdrant/demo-cloud-faq/tree/tutorial).
Stay tuned! :)