You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386).
Introduction
============
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
* **Development**: Build your applications using LangChain's open-source [building blocks](/v0.2/docs/how_to/#langchain-expression-language-lcel) and [components](/v0.2/docs/how_to/). Hit the ground running using [third-party integrations](/v0.2/docs/integrations/platforms/).
* **Productionization**: Use [LangSmith](/v0.2/docs/langsmith/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
* **Deployment**: Turn any chain into an API with [LangServe](https://www.langchain.com/langserve).
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")
Concretely, the framework consists of the following open-source libraries:
* **`@langchain/core`**: Base abstractions and LangChain Expression Language.
* **`@langchain/community`**: Third party integrations.
* Partner packages (e.g. **`@langchain/openai`**, **`@langchain/anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`@langchain/core`**.
* **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
* **[LangGraph.js](https://langchain-ai.github.io/langgraphjs/)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
* **[LangSmith](/v0.2/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
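Assuming you use npm (other package managers are covered in the [installation guide](/v0.2/docs/how_to/installation)), pulling in the core packages and one partner package might look like:

```shell
# Core LangChain package plus the base abstractions it builds on
npm install langchain @langchain/core

# A partner integration package, e.g. for OpenAI models
npm install @langchain/openai
```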
> **Note:** These docs focus on the JavaScript LangChain library. [Head here](https://python.langchain.com) for docs on the Python LangChain library.
[Tutorials](/v0.2/docs/tutorials)
---------------------------------
If you're looking to build something specific or are more of a hands-on learner, check out our [tutorials](/v0.2/docs/tutorials). This is the best place to get started. Good first tutorials include:
* [Build a Simple LLM Application](/v0.2/docs/tutorials/llm_chain)
* [Build a Chatbot](/v0.2/docs/tutorials/chatbot)
* [Build an Agent](/v0.2/docs/tutorials/agents)
Explore the full list of tutorials [here](/v0.2/docs/tutorials).
[How-To Guides](/v0.2/docs/how_to/)
-----------------------------------
[Here](/v0.2/docs/how_to/) you'll find short answers to "How do I…?" style questions. These how-to guides don't cover topics in depth; you'll find that material in the [Tutorials](/v0.2/docs/tutorials) and the [API Reference](https://v02.api.js.langchain.com). However, these guides will help you quickly accomplish common tasks.
[Conceptual Guide](/v0.2/docs/concepts)
---------------------------------------
Introductions to all the key parts of LangChain you'll need to know! [Here](/v0.2/docs/concepts) you'll find high level explanations of all LangChain concepts.
[API reference](https://v02.api.js.langchain.com)
-------------------------------------------------
Head to the reference section for full documentation of all classes and methods in the LangChain JavaScript packages.
Ecosystem
---------
### [🦜🛠️ LangSmith](/v0.2/docs/langsmith)
Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
### [🦜🕸️ LangGraph](/v0.2/docs/langgraph)
Build stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
Additional resources
--------------------
### [Security](/v0.2/docs/security)
Read up on our [Security](/v0.2/docs/security) best practices to make sure you're developing safely with LangChain.
### [Integrations](/v0.2/docs/integrations/platforms/)[β](#integrations "Direct link to integrations")
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/v0.2/docs/integrations/platforms/).
### [Contributing](/v0.2/docs/contributing)[β](#contributing "Direct link to contributing")
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
Copyright © 2024 LangChain, Inc. | https://js.langchain.com/v0.2/docs/
How to create a time-weighted retriever
=======================================
**Prerequisites:** This guide assumes familiarity with the following concepts:
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Retrieval-augmented generation (RAG)](/v0.2/docs/tutorials/rag)
This guide covers the [`TimeWeightedVectorStoreRetriever`](https://v02.api.js.langchain.com/classes/langchain_retrievers_time_weighted.TimeWeightedVectorStoreRetriever.html), which combines semantic similarity with a time decay. The algorithm it uses to score documents is:

```
semantic_similarity + (1.0 - decay_rate) ^ hours_passed
```
Notably, `hours_passed` refers to the hours passed since the object in the retriever **was last accessed**, not since it was created. This means that frequently accessed objects remain "fresh."
In code, this looks like:

```typescript
let score = (1.0 - this.decayRate) ** hoursPassed + vectorRelevance;
```
`this.decayRate` is a configurable decimal number between 0 and 1. A lower number means that documents will be "remembered" for longer, while a higher number strongly weights more recently accessed documents.
Note that setting a decay rate of exactly 0 or 1 makes `hoursPassed` irrelevant and makes this retriever equivalent to a standard vector lookup.
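To build intuition for how the decay rate behaves, here is a small standalone sketch of the scoring formula above (an illustration only, not the library's internals):

```typescript
// Standalone sketch of the time-weighted score described above.
// vectorRelevance: the semantic similarity score from the vector store.
// hoursPassed: hours since the document was last accessed.
function timeWeightedScore(
  vectorRelevance: number,
  hoursPassed: number,
  decayRate: number
): number {
  return (1.0 - decayRate) ** hoursPassed + vectorRelevance;
}

// Low decay rate: a document untouched for a day keeps most of its recency bonus.
console.log(timeWeightedScore(0.5, 24, 0.01)); // ~1.286

// High decay rate: the recency term vanishes quickly, leaving only similarity.
console.log(timeWeightedScore(0.5, 24, 0.5)); // ~0.5

// Decay rate of exactly 0: every document gets the same +1 bonus,
// so ranking reduces to a plain similarity lookup.
console.log(timeWeightedScore(0.5, 24, 0)); // 1.5
```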
It is important to note that due to required metadata, all documents must be added to the backing vector store using the `addDocuments` method on the **retriever**, not the vector store itself.
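To illustrate why this matters, the toy class below mimics what adding through the retriever accomplishes: it stamps each document with access-time metadata before storage, and the time-decay term reads that metadata at query time. This is a simplified sketch, not the real implementation, and the metadata field names are made up for the example.

```typescript
// Toy illustration only -- NOT the real TimeWeightedVectorStoreRetriever.
// The metadata keys are hypothetical; the point is that documents added
// directly to the vector store would lack this bookkeeping entirely.
interface ToyDoc {
  pageContent: string;
  metadata: { lastAccessedAt?: number; bufferIdx?: number };
}

class ToyTimeWeightedRetriever {
  memoryStream: ToyDoc[] = [];
  decayRate = 0.01;

  addDocuments(docs: ToyDoc[], now: number = Date.now()): void {
    for (const doc of docs) {
      // Stamp the access-time metadata the scoring formula depends on.
      doc.metadata.lastAccessedAt = now;
      doc.metadata.bufferIdx = this.memoryStream.length;
      this.memoryStream.push(doc);
    }
  }

  score(doc: ToyDoc, vectorRelevance: number, now: number = Date.now()): number {
    const hoursPassed = (now - (doc.metadata.lastAccessedAt ?? now)) / 3_600_000;
    return (1.0 - this.decayRate) ** hoursPassed + vectorRelevance;
  }
}

const toyRetriever = new ToyTimeWeightedRetriever();
const toyDoc: ToyDoc = { pageContent: "My favourite food is pasta.", metadata: {} };
toyRetriever.addDocuments([toyDoc], 0); // added at time 0

// One hour later, the recency bonus has decayed slightly, from 1.0 to 0.99.
console.log(toyRetriever.score(toyDoc, 0.5, 3_600_000)); // ~1.49
```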
> **Tip:** See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
```typescript
import { TimeWeightedVectorStoreRetriever } from "langchain/retrievers/time_weighted";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = new MemoryVectorStore(new OpenAIEmbeddings());

const retriever = new TimeWeightedVectorStoreRetriever({
  vectorStore,
  memoryStream: [],
  searchKwargs: 2,
});

const documents = [
  "My name is John.",
  "My name is Bob.",
  "My favourite food is pizza.",
  "My favourite food is pasta.",
  "My favourite food is sushi.",
].map((pageContent) => ({ pageContent, metadata: {} }));

// All documents must be added using this method on the retriever (not the vector store!)
// so that the correct access history metadata is populated
await retriever.addDocuments(documents);

const results1 = await retriever.invoke("What is my favourite food?");
console.log(results1);
/*
  [ Document { pageContent: 'My favourite food is pasta.', metadata: {} } ]
*/

const results2 = await retriever.invoke("What is my favourite food?");
console.log(results2);
/*
  [ Document { pageContent: 'My favourite food is pasta.', metadata: {} } ]
*/
```
#### API Reference:
* [TimeWeightedVectorStoreRetriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_time_weighted.TimeWeightedVectorStoreRetriever.html) from `langchain/retrievers/time_weighted`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Next steps
----------
You've now learned how to use time as a factor when performing retrieval.
Next, check out the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
Copyright © 2024 LangChain, Inc. | https://js.langchain.com/v0.2/docs/how_to/time_weighted_vectorstore
!function(){function t(t){document.documentElement.setAttribute("data-theme",t)}var e=function(){var t=null;try{t=new URLSearchParams(window.location.search).get("docusaurus-theme")}catch(t){}return t}()||function(){var t=null;try{t=localStorage.getItem("theme")}catch(t){}return t}();t(null!==e?e:"light")}(),document.documentElement.setAttribute("data-announcement-bar-initially-dismissed",function(){try{return"true"===localStorage.getItem("docusaurus.announcement.dismiss")}catch(t){}return!1}())
[Skip to main content](#__docusaurus_skipToContent_fallback)
You are viewing the **preview** v0.2 docs. View the **stable** v0.1 docs [here](/v0.1/docs/get_started/introduction/). Leave feedback on the v0.2 docs [here](https://github.com/langchain-ai/langchainjs/discussions/5386).
[
![π¦οΈπ Langchain](/v0.2/img/brand/wordmark.png)![π¦οΈπ Langchain](/v0.2/img/brand/wordmark-dark.png)
](/v0.2/)[Integrations](/v0.2/docs/integrations/platforms/)[API Reference](https://v02.api.js.langchain.com)
[More](#)
* [People](/v0.2/docs/people/)
* [Community](/v0.2/docs/community)
* [Tutorials](/v0.2/docs/additional_resources/tutorials)
* [Contributing](/v0.2/docs/contributing)
[v0.2](#)
* [v0.2](/v0.2/docs/introduction)
* [v0.1](https://js.langchain.com/v0.1/docs/get_started/introduction)
[π¦π](#)
* [LangSmith](https://smith.langchain.com)
* [LangSmith Docs](https://docs.smith.langchain.com)
* [LangChain Hub](https://smith.langchain.com/hub)
* [LangServe](https://github.com/langchain-ai/langserve)
* [Python Docs](https://python.langchain.com/)
[Chat](https://chatjs.langchain.com)[](https://github.com/langchain-ai/langchainjs)
Search
* [Introduction](/v0.2/docs/introduction)
* [Tutorials](/v0.2/docs/tutorials/)
* [Build a Question Answering application over a Graph Database](/v0.2/docs/tutorials/graph)
* [Tutorials](/v0.2/docs/tutorials/)
* [Build a Simple LLM Application](/v0.2/docs/tutorials/llm_chain)
* [Build a Query Analysis System](/v0.2/docs/tutorials/query_analysis)
* [Build a Chatbot](/v0.2/docs/tutorials/chatbot)
* [Build an Agent](/v0.2/docs/tutorials/agents)
* [Build an Extraction Chain](/v0.2/docs/tutorials/extraction)
* [Summarize Text](/v0.2/docs/tutorials/summarization)
* [Tagging](/v0.2/docs/tutorials/classification)
* [Build a Local RAG Application](/v0.2/docs/tutorials/local_rag)
* [Conversational RAG](/v0.2/docs/tutorials/qa_chat_history)
* [Build a Retrieval Augmented Generation (RAG) App](/v0.2/docs/tutorials/rag)
* [Build a Question/Answering system over SQL data](/v0.2/docs/tutorials/sql_qa)
* [How-to guides](/v0.2/docs/how_to/)
* [How to use example selectors](/v0.2/docs/how_to/example_selectors)
* [Installation](/v0.2/docs/how_to/installation)
* [How to stream responses from an LLM](/v0.2/docs/how_to/streaming_llm)
* [How to stream chat model responses](/v0.2/docs/how_to/chat_streaming)
* [How to embed text data](/v0.2/docs/how_to/embed_text)
* [How to use few shot examples in chat models](/v0.2/docs/how_to/few_shot_examples_chat)
* [How to cache model responses](/v0.2/docs/how_to/llm_caching)
* [How to cache chat model responses](/v0.2/docs/how_to/chat_model_caching)
* [How to create a custom LLM class](/v0.2/docs/how_to/custom_llm)
* [How to use few shot examples](/v0.2/docs/how_to/few_shot_examples)
* [How to use output parsers to parse an LLM response into structured format](/v0.2/docs/how_to/output_parser_structured)
* [How to return structured data from a model](/v0.2/docs/how_to/structured_output)
* [How to add ad-hoc tool calling capability to LLMs and Chat Models](/v0.2/docs/how_to/tools_prompting)
* [How to create a custom chat model class](/v0.2/docs/how_to/custom_chat)
* [How to do per-user retrieval](/v0.2/docs/how_to/qa_per_user)
* [How to track token usage (chat models)](/v0.2/docs/how_to/chat_token_usage_tracking)
* [How to track token usage (LLMs)](/v0.2/docs/how_to/llm_token_usage_tracking)
* [How to pass through arguments from one step to the next](/v0.2/docs/how_to/passthrough)
* [How to compose prompts together](/v0.2/docs/how_to/prompts_composition)
* [How to use legacy LangChain Agents (AgentExecutor)](/v0.2/docs/how_to/agent_executor)
* [How to add values to a chain's state](/v0.2/docs/how_to/assign)
* [How to attach runtime arguments to a Runnable](/v0.2/docs/how_to/binding)
* [How to cache embedding results](/v0.2/docs/how_to/caching_embeddings)
* [How to split by character](/v0.2/docs/how_to/character_text_splitter)
* [How to manage memory](/v0.2/docs/how_to/chatbots_memory)
* [How to do retrieval](/v0.2/docs/how_to/chatbots_retrieval)
* [How to use tools](/v0.2/docs/how_to/chatbots_tools)
* [How to split code](/v0.2/docs/how_to/code_splitter)
* [How to do retrieval with contextual compression](/v0.2/docs/how_to/contextual_compression)
* [How to write a custom retriever class](/v0.2/docs/how_to/custom_retriever)
* [How to create custom Tools](/v0.2/docs/how_to/custom_tools)
* [How to debug your LLM apps](/v0.2/docs/how_to/debugging)
* [How to load CSV data](/v0.2/docs/how_to/document_loader_csv)
* [How to write a custom document loader](/v0.2/docs/how_to/document_loader_custom)
* [How to load data from a directory](/v0.2/docs/how_to/document_loader_directory)
* [How to load PDF files](/v0.2/docs/how_to/document_loader_pdf)
* [How to load JSON data](/v0.2/docs/how_to/document_loaders_json)
* [How to select examples by length](/v0.2/docs/how_to/example_selectors_length_based)
* [How to select examples by similarity](/v0.2/docs/how_to/example_selectors_similarity)
* [How to use reference examples](/v0.2/docs/how_to/extraction_examples)
* [How to handle long text](/v0.2/docs/how_to/extraction_long_text)
* [How to do extraction without using function calling](/v0.2/docs/how_to/extraction_parse)
* [Fallbacks](/v0.2/docs/how_to/fallbacks)
* [Few Shot Prompt Templates](/v0.2/docs/how_to/few_shot)
* [How to run custom functions](/v0.2/docs/how_to/functions)
* [How to construct knowledge graphs](/v0.2/docs/how_to/graph_constructing)
* [How to map values to a database](/v0.2/docs/how_to/graph_mapping)
* [How to improve results with prompting](/v0.2/docs/how_to/graph_prompting)
* [How to add a semantic layer over the database](/v0.2/docs/how_to/graph_semantic)
* [How to reindex data to keep your vectorstore in-sync with the underlying data source](/v0.2/docs/how_to/indexing)
* [How to get log probabilities](/v0.2/docs/how_to/logprobs)
* [How to add message history](/v0.2/docs/how_to/message_history)
* [How to generate multiple embeddings per document](/v0.2/docs/how_to/multi_vector)
* [How to generate multiple queries to retrieve data for](/v0.2/docs/how_to/multiple_queries)
* [How to parse JSON output](/v0.2/docs/how_to/output_parser_json)
* [How to retry when output parsing errors occur](/v0.2/docs/how_to/output_parser_retry)
* [How to parse XML output](/v0.2/docs/how_to/output_parser_xml)
* [How to invoke runnables in parallel](/v0.2/docs/how_to/parallel)
* [How to retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to add chat history to a question-answering chain](/v0.2/docs/how_to/qa_chat_history_how_to)
* [How to return citations](/v0.2/docs/how_to/qa_citations)
* [How to return sources](/v0.2/docs/how_to/qa_sources)
* [How to stream from a question-answering chain](/v0.2/docs/how_to/qa_streaming)
* [How to construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
* [How to handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to recursively split text by characters](/v0.2/docs/how_to/recursive_text_splitter)
* [How to reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
* [How to route execution within a chain](/v0.2/docs/how_to/routing)
* [How to chain runnables](/v0.2/docs/how_to/sequence)
* [How to split text by tokens](/v0.2/docs/how_to/split_by_token)
* [How to deal with large databases](/v0.2/docs/how_to/sql_large_db)
* [How to use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to stream](/v0.2/docs/how_to/streaming)
* [How to create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to use a chat model to call tools](/v0.2/docs/how_to/tool_calling)
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How to use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [π¦π οΈ LangSmith](/v0.2/docs/langsmith/)
* [Walkthrough](/v0.2/docs/langsmith/walkthrough)
* [π¦πΈοΈLangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
π¦π οΈ LangSmith
===============
[LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
Check out the [interactive walkthrough](/v0.2/docs/langsmith/walkthrough) to get started.
For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/).
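As a minimal sketch of getting started, LangSmith tracing for a LangChain.js app is typically enabled through environment variables set before any chains run. The variable names below reflect the commonly documented setup for this era of the library; verify them (and the project name, which is illustrative) against the LangSmith docs for your version:

```typescript
// Hedged sketch: enable LangSmith tracing via environment variables.
// In practice these are usually set in your shell or .env file rather
// than in code; the values here are placeholders.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "<your-langsmith-api-key>"; // from smith.langchain.com
process.env.LANGCHAIN_PROJECT = "my-project"; // optional: groups runs in the LangSmith UI

console.log(process.env.LANGCHAIN_TRACING_V2);
```

With these set, subsequent LangChain runs are traced to the named project with no further code changes.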
For tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow, check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include:
* Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)).
* Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)).
* How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)).
* How to fine-tune an LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)).
* How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb)).
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. | https://js.langchain.com/v0.2/docs/langsmith/ |
How to create and query vector stores
=====================================
info
Head to [Integrations](/v0.2/docs/integrations/vectorstores) for documentation on built-in integrations with vectorstore providers.
Prerequisites
This guide assumes familiarity with the following concepts:
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Embeddings](/v0.2/docs/concepts/#embedding-models)
* [Document loaders](/v0.2/docs/concepts#document-loaders)
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.
This walkthrough uses a basic, unoptimized implementation called [`MemoryVectorStore`](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) that stores embeddings in-memory and does an exact, linear search for the most similar embeddings. LangChain contains many built-in integrations - see [this section](/v0.2/docs/how_to/vectorstores/#which-one-to-pick) for more, or the [full list of integrations](/v0.2/docs/integrations/vectorstores/).
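To make the mechanics concrete, here is a dependency-free TypeScript sketch of what an in-memory vector store does under the hood: embed each document, store the vectors, then rank by cosine similarity with an exact linear scan. The toy `embed` function (a character-frequency vector) stands in for a real embedding model and is purely illustrative — it is not how LangChain or any embedding provider works:

```typescript
// Toy "embedding": character-frequency vector over lowercase letters.
// A real vector store would call an embedding model instead.
function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i] += 1;
  }
  return v;
}

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// "Store": keep (text, vector) pairs; "search": exact linear scan,
// which is essentially what MemoryVectorStore does.
const corpus = ["Hello world", "Bye bye", "hello nice world"];
const index = corpus.map((text) => ({ text, vector: embed(text) }));

function similaritySearch(query: string, k: number): string[] {
  const q = embed(query);
  return [...index]
    .sort((x, y) => cosine(y.vector, q) - cosine(x.vector, q))
    .slice(0, k)
    .map((d) => d.text);
}

console.log(similaritySearch("hello world", 1)); // [ "Hello world" ]
```

Real vector stores replace the toy embedding with a model like `OpenAIEmbeddings` and the linear scan with an approximate nearest-neighbor index, but the embed–store–rank shape is the same.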
Creating a new index
--------------------
Most of the time, you'll need to load and prepare the data you want to search over. Here's an example that loads a recent speech from a file:
```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { TextLoader } from "langchain/document_loaders/fs/text";

// Create docs with a loader
const loader = new TextLoader("src/document_loaders/example_data/example.txt");
const docs = await loader.load();

// Load the docs into the vector store
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Search for the most similar document
const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
```
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [TextLoader](https://v02.api.js.langchain.com/classes/langchain_document_loaders_fs_text.TextLoader.html) from `langchain/document_loaders/fs/text`
Most of the time, you'll need to split the loaded text as a preparation step. See [this section](/v0.2/docs/concepts/#text-splitters) to learn more about text splitters.
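As a rough illustration of what a splitter does (this is not the LangChain text splitter API — see the linked section for the real classes, such as `RecursiveCharacterTextSplitter`), a naive fixed-size character splitter with overlap might look like:

```typescript
// Naive fixed-size character splitter with overlap — illustrative only.
// LangChain's real splitters break on separators (paragraphs, sentences)
// before falling back to raw character counts.
function splitText(text: string, chunkSize: number, chunkOverlap: number): string[] {
  const chunks: string[] = [];
  const step = chunkSize - chunkOverlap; // advance by size minus overlap
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

console.log(splitText("abcdefghij", 4, 2)); // [ "abcd", "cdef", "efgh", "ghij" ]
```

The overlap keeps context that straddles a chunk boundary retrievable from either neighboring chunk.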
Creating a new index from texts
-------------------------------
If you have already prepared the data you want to search over, you can initialize a vector store directly from text chunks:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
* npm: `npm install @langchain/openai`
* Yarn: `yarn add @langchain/openai`
* pnpm: `pnpm add @langchain/openai`
```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromTexts(
  ["Hello world", "Bye bye", "hello nice world"],
  [{ id: 2 }, { id: 1 }, { id: 3 }],
  new OpenAIEmbeddings()
);

const resultOne = await vectorStore.similaritySearch("hello world", 1);
console.log(resultOne);
/*
  [ Document { pageContent: "Hello world", metadata: { id: 2 } } ]
*/
```
#### API Reference:
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
Which one to pick?
------------------
Here's a quick guide to help you pick the right vector store for your use case:
* If you're after something that can just run inside your Node.js application, in-memory, without any other servers to stand up, then go for [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib), [Faiss](/v0.2/docs/integrations/vectorstores/faiss), [LanceDB](/v0.2/docs/integrations/vectorstores/lancedb) or [CloseVector](/v0.2/docs/integrations/vectorstores/closevector)
* If you're looking for something that can run in-memory in browser-like environments, then go for [MemoryVectorStore](/v0.2/docs/integrations/vectorstores/memory) or [CloseVector](/v0.2/docs/integrations/vectorstores/closevector)
* If you come from Python and you were looking for something similar to FAISS, try [HNSWLib](/v0.2/docs/integrations/vectorstores/hnswlib) or [Faiss](/v0.2/docs/integrations/vectorstores/faiss)
* If you're looking for an open-source full-featured vector database that you can run locally in a docker container, then go for [Chroma](/v0.2/docs/integrations/vectorstores/chroma)
* If you're looking for an open-source vector database that offers low-latency, local embedding of documents and supports apps on the edge, then go for [Zep](/v0.2/docs/integrations/vectorstores/zep)
* If you're looking for an open-source production-ready vector database that you can run locally (in a docker container) or hosted in the cloud, then go for [Weaviate](/v0.2/docs/integrations/vectorstores/weaviate).
* If you're using Supabase already then look at the [Supabase](/v0.2/docs/integrations/vectorstores/supabase) vector store to use the same Postgres database for your embeddings too
* If you're looking for a production-ready vector store you don't have to worry about hosting yourself, then go for [Pinecone](/v0.2/docs/integrations/vectorstores/pinecone)
* If you are already utilizing SingleStore, or if you find yourself in need of a distributed, high-performance database, you might want to consider the [SingleStore](/v0.2/docs/integrations/vectorstores/singlestore) vector store.
* If you are looking for an online MPP (Massively Parallel Processing) data warehousing service, you might want to consider the [AnalyticDB](/v0.2/docs/integrations/vectorstores/analyticdb) vector store.
* If you're in search of a cost-effective vector database that lets you run vector search with SQL, look no further than [MyScale](/v0.2/docs/integrations/vectorstores/myscale).
* If you're in search of a vector database that you can load from both the browser and server side, check out [CloseVector](/v0.2/docs/integrations/vectorstores/closevector). It's a vector database that aims to be cross-platform.
* If you're looking for a scalable, open-source columnar database with excellent performance for analytical queries, then consider [ClickHouse](/v0.2/docs/integrations/vectorstores/clickhouse).
Next steps
----------
You've now learned how to load data into a vectorstore.
Next, check out the [full tutorial on retrieval-augmented generation](/v0.2/docs/tutorials/rag).
Copyright Β© 2024 LangChain, Inc. | https://js.langchain.com/v0.2/docs/how_to/vectorstores |
* [How to add message history](/v0.2/docs/how_to/message_history)
* [How to generate multiple embeddings per document](/v0.2/docs/how_to/multi_vector)
* [How to generate multiple queries to retrieve data for](/v0.2/docs/how_to/multiple_queries)
* [How to parse JSON output](/v0.2/docs/how_to/output_parser_json)
* [How to retry when output parsing errors occur](/v0.2/docs/how_to/output_parser_retry)
* [How to parse XML output](/v0.2/docs/how_to/output_parser_xml)
* [How to invoke runnables in parallel](/v0.2/docs/how_to/parallel)
* [How to retrieve the whole document for a chunk](/v0.2/docs/how_to/parent_document_retriever)
* [How to partially format prompt templates](/v0.2/docs/how_to/prompts_partial)
* [How to add chat history to a question-answering chain](/v0.2/docs/how_to/qa_chat_history_how_to)
* [How to return citations](/v0.2/docs/how_to/qa_citations)
* [How to return sources](/v0.2/docs/how_to/qa_sources)
* [How to stream from a question-answering chain](/v0.2/docs/how_to/qa_streaming)
* [How to construct filters](/v0.2/docs/how_to/query_constructing_filters)
* [How to add examples to the prompt](/v0.2/docs/how_to/query_few_shot)
* [How to deal with high cardinality categorical variables](/v0.2/docs/how_to/query_high_cardinality)
* [How to handle multiple queries](/v0.2/docs/how_to/query_multiple_queries)
* [How to handle multiple retrievers](/v0.2/docs/how_to/query_multiple_retrievers)
* [How to handle cases where no queries are generated](/v0.2/docs/how_to/query_no_queries)
* [How to recursively split text by characters](/v0.2/docs/how_to/recursive_text_splitter)
* [How to reduce retrieval latency](/v0.2/docs/how_to/reduce_retrieval_latency)
* [How to route execution within a chain](/v0.2/docs/how_to/routing)
* [How to chain runnables](/v0.2/docs/how_to/sequence)
* [How to split text by tokens](/v0.2/docs/how_to/split_by_token)
* [How to deal with large databases](/v0.2/docs/how_to/sql_large_db)
* [How to use prompting to improve results](/v0.2/docs/how_to/sql_prompting)
* [How to do query validation](/v0.2/docs/how_to/sql_query_checking)
* [How to stream](/v0.2/docs/how_to/streaming)
* [How to create a time-weighted retriever](/v0.2/docs/how_to/time_weighted_vectorstore)
* [How to use a chat model to call tools](/v0.2/docs/how_to/tool_calling)
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [π¦π οΈ LangSmith](/v0.2/docs/langsmith/)
* [π¦πΈοΈLangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
How to call tools with multi-modal data
=======================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Chat models](/v0.2/docs/concepts/#chat-models)
* [LangChain Tools](/v0.2/docs/concepts/#tools)
Here we demonstrate how to call tools with multi-modal data, such as images.
Some multi-modal models, such as those that can reason over images or audio, support [tool calling](/v0.2/docs/concepts/#tool-calling) features as well.
To call tools using such models, simply bind tools to them in the [usual way](/v0.2/docs/how_to/tool_calling), and invoke the model using content blocks of the desired type (e.g., containing image data).
Below, we demonstrate examples using [OpenAI](/v0.2/docs/integrations/platforms/openai) and [Anthropic](/v0.2/docs/integrations/platforms/anthropic). We will use the same image and tool in all cases. Let's first select an image, and build a placeholder tool that expects as input the string "sunny", "cloudy", or "rainy". We will ask the models to describe the weather in the image.
```typescript
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

const imageUrl =
  "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg";

const weatherTool = new DynamicStructuredTool({
  name: "weather",
  description: "Describe the weather",
  schema: z.object({
    weather: z.enum(["sunny", "cloudy", "rainy"]),
  }),
  func: async ({ weather }) => {
    console.log(weather);
    return weather;
  },
});
```

OpenAI
------

For OpenAI, we can feed the image URL directly in a content block of type "image_url":

```typescript
import { HumanMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
}).bindTools([weatherTool]);

const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "describe the weather in this image",
    },
    {
      type: "image_url",
      image_url: {
        url: imageUrl,
      },
    },
  ],
});

const response = await model.invoke([message]);
console.log(response.tool_calls);
```

```
[
  {
    name: "weather",
    args: { weather: "sunny" },
    id: "call_MbIAYS9ESBG1EWNM2sMlinjR"
  }
]
```

Note that we recover tool calls with parsed arguments in LangChain's [standard format](/v0.2/docs/how_to/tool_calling) in the model response.

Anthropic
---------

For Anthropic, we can pass a base64-encoded image as a data URL in a content block of type "image_url", as below:

```typescript
import * as fs from "node:fs/promises";
import { ChatAnthropic } from "@langchain/anthropic";
import { HumanMessage } from "@langchain/core/messages";

const imageData = await fs.readFile("../../data/sunny_day.jpeg");

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
}).bindTools([weatherTool]);

const message = new HumanMessage({
  content: [
    {
      type: "text",
      text: "describe the weather in this image",
    },
    {
      type: "image_url",
      image_url: {
        url: `data:image/jpeg;base64,${imageData.toString("base64")}`,
      },
    },
  ],
});

const response = await model.invoke([message]);
console.log(response.tool_calls);
```

```
[
  {
    name: "weather",
    args: { weather: "sunny" },
    id: "toolu_01KnRZWQkgWYSzL2x28crXFm"
  }
]
```
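Because both providers return tool calls as plain `{ name, args, id }` objects in the same standard format, executing them is a matter of dispatching on `name`. Below is a minimal sketch in plain TypeScript with no LangChain dependency; the `handlers` registry and `executeToolCalls` helper are illustrative names, not LangChain APIs.

```typescript
// Shape of a tool call in LangChain's standard format (simplified).
type ToolCall = {
  name: string;
  args: Record<string, unknown>;
  id?: string;
};

// Hypothetical registry mapping tool names to local handler functions.
const handlers: Record<
  string,
  (args: Record<string, unknown>) => Promise<string>
> = {
  weather: async (args) => `The weather is ${String(args.weather)}`,
};

// Run each parsed tool call through its registered handler, in order.
async function executeToolCalls(toolCalls: ToolCall[]): Promise<string[]> {
  const results: string[] = [];
  for (const call of toolCalls) {
    const handler = handlers[call.name];
    if (!handler) {
      throw new Error(`No handler registered for tool "${call.name}"`);
    }
    results.push(await handler(call.args));
  }
  return results;
}
```

In a real application you would typically feed each result back to the model as a tool message; see the [tool calling guide](/v0.2/docs/how_to/tool_calling) for the full loop.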
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. | https://js.langchain.com/v0.2/docs/how_to/tool_calls_multi_modal |
How to use a vector store to retrieve data
==========================================
Prerequisites
This guide assumes familiarity with the following concepts:
* [Vector stores](/v0.2/docs/concepts/#vectorstores)
* [Retrievers](/v0.2/docs/concepts/#retrievers)
* [Text splitters](/v0.2/docs/concepts#text-splitters)
* [Chaining runnables](/v0.2/docs/how_to/sequence/)
Vector stores can be converted into retrievers using the [`.asRetriever()`](https://v02.api.js.langchain.com/classes/langchain_core_vectorstores.VectorStore.html#asRetriever) method, which allows you to more easily compose them in chains.
Below, we show a retrieval-augmented generation (RAG) chain that performs question answering over documents using the following steps:
1. Initialize a vector store
2. Create a retriever from that vector store
3. Compose a question answering chain
4. Ask questions!
Each of the steps has multiple sub-steps and potential configurations, but we'll walk through one common flow. First, install the required dependency:
tip
See [this section for general instructions on installing integration packages](/v0.2/docs/how_to/installation#installing-integration-packages).
```bash
# npm
npm install @langchain/openai
# Yarn
yarn add @langchain/openai
# pnpm
pnpm add @langchain/openai
```
You can download the `state_of_the_union.txt` file [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/modules/state_of_the_union.txt).
```typescript
import * as fs from "node:fs";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import type { Document } from "@langchain/core/documents";

const formatDocumentsAsString = (documents: Document[]) => {
  return documents.map((document) => document.pageContent).join("\n\n");
};

// Initialize the LLM to use to answer the question.
const model = new ChatOpenAI({
  model: "gpt-4o",
});

const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings()
);

// Initialize a retriever wrapper around the vector store
const vectorStoreRetriever = vectorStore.asRetriever();

// Create a system & human prompt for the chat model
const SYSTEM_TEMPLATE = `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_TEMPLATE],
  ["human", "{question}"],
]);

const chain = RunnableSequence.from([
  {
    context: vectorStoreRetriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  model,
  new StringOutputParser(),
]);

const answer = await chain.invoke(
  "What did the president say about Justice Breyer?"
);
console.log({ answer });
/*
  {
    answer: 'The president honored Justice Stephen Breyer by recognizing his dedication to serving the country as an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. He thanked Justice Breyer for his service.'
  }
*/
```
#### API Reference:
* [OpenAIEmbeddings](https://v02.api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html) from `@langchain/openai`
* [ChatOpenAI](https://v02.api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) from `@langchain/openai`
* [RecursiveCharacterTextSplitter](https://v02.api.js.langchain.com/classes/langchain_textsplitters.RecursiveCharacterTextSplitter.html) from `@langchain/textsplitters`
* [MemoryVectorStore](https://v02.api.js.langchain.com/classes/langchain_vectorstores_memory.MemoryVectorStore.html) from `langchain/vectorstores/memory`
* [RunnablePassthrough](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnablePassthrough.html) from `@langchain/core/runnables`
* [RunnableSequence](https://v02.api.js.langchain.com/classes/langchain_core_runnables.RunnableSequence.html) from `@langchain/core/runnables`
* [StringOutputParser](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StringOutputParser.html) from `@langchain/core/output_parsers`
* [ChatPromptTemplate](https://v02.api.js.langchain.com/classes/langchain_core_prompts.ChatPromptTemplate.html) from `@langchain/core/prompts`
* [Document](https://v02.api.js.langchain.com/classes/langchain_core_documents.Document.html) from `@langchain/core/documents`
Let's walk through what's happening here.
1. We first load a long text and split it into smaller documents using a text splitter. We then load those documents (which also embeds them using the passed `OpenAIEmbeddings` instance) into `MemoryVectorStore`, our vector store, creating our index.
2. Though we can query the vector store directly, we convert the vector store into a retriever to return retrieved documents in the right format for the question answering chain.
3. We initialize a retrieval chain, which we'll call later in step 4.
4. We ask questions!
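To make step 2 concrete, here is a rough, dependency-free sketch of what converting a store into a retriever amounts to: the retriever's `invoke(query)` simply delegates to the store's similarity search. The `InMemoryStubStore` class below is illustrative only, not a LangChain API, and its "similarity" is just substring matching so the example stays self-contained.

```typescript
interface Doc {
  pageContent: string;
}

// Illustrative stand-in for a vector store.
class InMemoryStubStore {
  constructor(private docs: Doc[]) {}

  // Pretend similarity search: return up to k docs containing the query text.
  async similaritySearch(query: string, k: number): Promise<Doc[]> {
    return this.docs
      .filter((doc) => doc.pageContent.includes(query))
      .slice(0, k);
  }

  // What `.asRetriever()` conceptually returns: an object whose `invoke`
  // forwards to the store's search with a fixed `k`.
  asRetriever(k = 4) {
    return {
      invoke: (query: string): Promise<Doc[]> =>
        this.similaritySearch(query, k),
    };
  }
}
```

The real `.asRetriever()` additionally supports options such as filters and callbacks; see the API reference linked above for the full signature.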
Next steps
----------
You've now learned how to convert a vector store into a retriever.
See the individual sections for deeper dives on specific retrievers, the [broader tutorial on RAG](/v0.2/docs/tutorials/rag), or this section to learn how to [create your own custom retriever over any data source](/v0.2/docs/how_to/custom_retriever/).
Copyright Β© 2024 LangChain, Inc. | https://js.langchain.com/v0.2/docs/how_to/vectorstore_retriever |
* [How to call tools with multi-modal data](/v0.2/docs/how_to/tool_calls_multi_modal)
* [How to use LangChain tools](/v0.2/docs/how_to/tools_builtin)
* [How use a vector store to retrieve data](/v0.2/docs/how_to/vectorstore_retriever)
* [How to create and query vector stores](/v0.2/docs/how_to/vectorstores)
* [Conceptual guide](/v0.2/docs/concepts)
* Ecosystem
* [π¦π οΈ LangSmith](/v0.2/docs/langsmith/)
* [π¦πΈοΈLangGraph.js](/v0.2/docs/langgraph)
* Versions
* [Overview](/v0.2/docs/versions/overview)
* [v0.2](/v0.2/docs/versions/v0_2)
* [Release Policy](/v0.2/docs/versions/release_policy)
* [Packages](/v0.2/docs/versions/packages)
* [Security](/v0.2/docs/security)
* [](/v0.2/)
* Conceptual guide
On this page
Conceptual guide
================
This section contains introductions to key parts of LangChain.
Architecture[β](#architecture "Direct link to Architecture")
------------------------------------------------------------
LangChain as a framework consists of several pieces. The below diagram shows how they relate.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/v0.2/svg/langchain_stack.svg "LangChain Framework Overview")
### `@langchain/core`[β](#langchaincore "Direct link to langchaincore")
This package contains base abstractions of different components and ways to compose them together. The interfaces for core components like LLMs, vectorstores, retrievers and more are defined here. No third party integrations are defined here. The dependencies are kept purposefully very lightweight.
### `@langchain/community`[β](#langchaincommunity "Direct link to langchaincommunity")
This package contains third party integrations that are maintained by the LangChain community. Key partner packages are separated out (see below). This contains all integrations for various components (LLMs, vectorstores, retrievers). All dependencies in this package are optional to keep the package as lightweight as possible.
### Partner packages[β](#partner-packages "Direct link to Partner packages")
While the long tail of integrations is in `@langchain/community`, we split popular integrations into their own packages (e.g. `@langchain/openai`, `@langchain/anthropic`, etc). This was done in order to improve support for these important integrations.
### `langchain`[β](#langchain "Direct link to langchain")
The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture. These are NOT third party integrations. All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations.
### [LangGraph](/v0.2/docs/langgraph)[β](#langgraph "Direct link to langgraph")
Not currently in this repo, `langgraph` is an extension of `langchain` aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph exposes high-level interfaces for creating common types of agents, as well as a low-level API for constructing more controlled flows.
### [LangSmith](/v0.2/docs/langsmith)[β](#langsmith "Direct link to langsmith")
A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
Installation[β](#installation "Direct link to Installation")
------------------------------------------------------------
If you want to work with high level abstractions, you should install the `langchain` package.
* npm: `npm i langchain`
* Yarn: `yarn add langchain`
* pnpm: `pnpm add langchain`
If you want to work with specific integrations, you will need to install them separately. See [here](/v0.2/docs/integrations/platforms/) for a list of integrations and how to install them.
For working with LangSmith, you will need to set up a LangSmith developer account [here](https://smith.langchain.com) and get an API key. After that, you can enable it by setting environment variables:
```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls__...
```
LangChain Expression Language[β](#langchain-expression-language "Direct link to LangChain Expression Language")
---------------------------------------------------------------------------------------------------------------
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together. LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest βprompt + LLMβ chain to the most complex chains (weβve seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
**First-class streaming support** When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
**Optimized parallel execution** Whenever your LCEL chains have steps that can be executed in parallel (eg if you fetch documents from multiple retrievers) we automatically do it for the smallest possible latency.
**Retries and fallbacks** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. Weβre currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
**Access intermediate results** For more complex chains itβs often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and itβs available on every [LangServe](https://www.langchain.com/langserve/) server.
**Input and output schemas** Input and output schemas give every LCEL chain schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
[**Seamless LangSmith tracing**](/v0.2/docs/langsmith) As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step. With LCEL, **all** steps are automatically logged to [LangSmith](/v0.2/docs/langsmith/) for maximum observability and debuggability.
[**Seamless LangServe deployment**](https://www.langchain.com/langserve/) Any chain created with LCEL can be easily deployed using [LangServe](https://www.langchain.com/langserve/).
### Interface[β](#interface "Direct link to Interface")
To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://v02.api.js.langchain.com/classes/langchain_core_runnables.Runnable.html) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. The standard interface includes:
* [`stream`](#stream): stream back chunks of the response
* [`invoke`](#invoke): call the chain on an input
* [`batch`](#batch): call the chain on an array of inputs
The **input type** and **output type** vary by component:

| Component | Input Type | Output Type |
| --- | --- | --- |
| Prompt | Object | PromptValue |
| ChatModel | Single string, list of chat messages, or a PromptValue | ChatMessage |
| LLM | Single string, list of chat messages, or a PromptValue | String |
| OutputParser | The output of an LLM or ChatModel | Depends on the parser |
| Retriever | Single string | List of Documents |
| Tool | Single string or object, depending on the tool | Depends on the tool |
Components[β](#components "Direct link to Components")
------------------------------------------------------
LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix.
### LLMs[β](#llms "Direct link to LLMs")
Language models that take a string as input and return a string. These are traditionally older models (newer models generally are `ChatModels`, see below).
Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input. This makes them interchangeable with ChatModels. When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
LangChain does not provide any LLMs, rather we rely on third party integrations.
### Chat models[β](#chat-models "Direct link to Chat models")
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text). These are traditionally newer models (older models are generally `LLMs`, see above). Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This makes them interchangeable with LLMs (and simpler to use). When a string is passed in as input, it will be converted to a HumanMessage under the hood before being passed to the underlying model.
LangChain does not provide any ChatModels, rather we rely on third party integrations.
We have some standardized parameters when constructing ChatModels:
* `model`: the name of the model
ChatModels also accept other parameters that are specific to that integration.
### Function/Tool Calling[β](#functiontool-calling "Direct link to Function/Tool Calling")
info
We use the term tool calling interchangeably with function calling. Although function calling is sometimes meant to refer to invocations of a single function, we treat all models as though they can return multiple tool or function calls in each message.
Tool calling allows a model to respond to a given prompt by generating output that matches a user-defined schema. While the name implies that the model is performing some action, this is actually not the case! The model is coming up with the arguments to a tool, and actually running the tool (or not) is up to the user - for example, if you want to [extract output matching some schema](/v0.2/docs/tutorials/extraction/) from unstructured text, you could give the model an "extraction" tool that takes parameters matching the desired schema, then treat the generated output as your final result.
A tool call includes a name, arguments object, and an optional identifier. The arguments object is structured `{ argumentName: argumentValue }`.
Many LLM providers, including [Anthropic](https://www.anthropic.com/), [Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), [Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, support variants of a tool calling feature. These features typically allow requests to the LLM to include available tools and their schemas, and for responses to include calls to these tools. For instance, given a search engine tool, an LLM might handle a query by first issuing a call to the search engine. The system calling the LLM can receive the tool call, execute it, and return the output to the LLM to inform its response. LangChain includes a suite of [built-in tools](/v0.2/docs/integrations/tools/) and supports several methods for defining your own [custom tools](/v0.2/docs/how_to/custom_tools).
There are two main use cases for function/tool calling:
* [How to return structured data from an LLM](/v0.2/docs/how_to/structured_output/)
* [How to use a model to call tools](/v0.2/docs/how_to/tool_calling/)
### Message types[β](#message-types "Direct link to Message types")
Some language models take an array of messages as input and return a message. There are a few different types of messages. All messages have a `role`, `content`, and `response_metadata` property.
The `role` describes WHO is saying the message. LangChain has different message classes for different roles.
The `content` property describes the content of the message. This can be a few different things:
* A string (most models deal with this type of content)
* A list of objects (used for multi-modal input, where each object contains information about the input type and input location)
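For illustration, the two content shapes might look like the following (the field names here follow common provider conventions and are illustrative, not exact LangChain types):

```typescript
// Plain string content -- what most models work with.
const textContent: string = "What is in this image?";

// Multi-modal content: an array of objects, each describing one input.
// (Field names are illustrative, not exact LangChain types.)
const multiModalContent = [
  { type: "text", text: "What is in this image?" },
  { type: "image_url", image_url: "https://example.com/cat.png" },
];
```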
#### HumanMessage[β](#humanmessage "Direct link to HumanMessage")
This represents a message from the user.
#### AIMessage[β](#aimessage "Direct link to AIMessage")
This represents a message from the model. In addition to the `content` property, these messages also have:
**`response_metadata`**
The `response_metadata` property contains additional metadata about the response. The data here is often specific to each model provider. This is where information like log-probs and token usage may be stored.
**`tool_calls`**
These represent a decision from a language model to call a tool. They are included as part of an `AIMessage` output. They can be accessed from there with the `.tool_calls` property.
This property returns an array of objects. Each object has the following keys:
* `name`: The name of the tool that should be called.
* `args`: The arguments to that tool.
* `id`: The id of that tool call.
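Putting the keys above together, a single entry in the `.tool_calls` array might look like this (the tool name and id below are made up for illustration):

```typescript
// Sketch of the shape of one entry in an AIMessage's .tool_calls array.
type ToolCall = {
  name: string; // the name of the tool that should be called
  args: Record<string, unknown>; // { argumentName: argumentValue }
  id?: string; // the id of that tool call
};

// Example: a model deciding to call a hypothetical "multiply" tool.
const toolCall: ToolCall = {
  name: "multiply",
  args: { a: 3, b: 12 },
  id: "call_abc123",
};
```

Remember that this is only the model's *request* to call a tool; actually executing it is up to your application code.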
#### SystemMessage[β](#systemmessage "Direct link to SystemMessage")
This represents a system message, which tells the model how to behave. Not every model provider supports this.
#### FunctionMessage[β](#functionmessage "Direct link to FunctionMessage")
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
#### ToolMessage[β](#toolmessage "Direct link to ToolMessage")
This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result.
### Prompt templates[β](#prompt-templates "Direct link to Prompt templates")
Prompt templates help to translate user input and parameters into instructions for a language model. This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
Prompt Templates take as input an object, where each key represents a variable in the prompt template to fill in.
Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or an array of messages. The reason this PromptValue exists is to make it easy to switch between strings and messages.
There are a few different types of prompt templates
#### String PromptTemplates[β](#string-prompttemplates "Direct link to String PromptTemplates")
These prompt templates are used to format a single string, and generally are used for simpler inputs. For example, a common way to construct and use a PromptTemplate is as follows:
```typescript
import { PromptTemplate } from "@langchain/core/prompts";

const promptTemplate = PromptTemplate.fromTemplate(
  "Tell me a joke about {topic}"
);

await promptTemplate.invoke({ topic: "cats" });
```
#### ChatPromptTemplates[β](#chatprompttemplates "Direct link to ChatPromptTemplates")
These prompt templates are used to format an array of messages. These "templates" consist of an array of templates themselves. For example, a common way to construct and use a ChatPromptTemplate is as follows:
```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["user", "Tell me a joke about {topic}"],
]);

await promptTemplate.invoke({ topic: "cats" });
```
In the above example, this ChatPromptTemplate will construct two messages when called. The first is a system message, that has no variables to format. The second is a HumanMessage, and will be formatted by the `topic` variable the user passes in.
#### MessagesPlaceholder[β](#messagesplaceholder "Direct link to MessagesPlaceholder")
This prompt template is responsible for adding an array of messages in a particular place. In the above ChatPromptTemplate, we saw how we could format two messages, each one a string. But what if we wanted the user to pass in an array of messages that we would slot into a particular spot? This is how you use MessagesPlaceholder.
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { HumanMessage } from "@langchain/core/messages";

const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  new MessagesPlaceholder("msgs"),
]);

promptTemplate.invoke({ msgs: [new HumanMessage({ content: "hi!" })] });
```
This will produce an array of two messages, the first one being a system message, and the second one being the HumanMessage we passed in. If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in). This is useful for letting an array of messages be slotted into a particular spot.
An alternative way to accomplish the same thing without using the `MessagesPlaceholder` class explicitly is:
```typescript
const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{msgs}"], // <-- This is the changed part
]);
```
### Example Selectors[β](#example-selectors "Direct link to Example Selectors")
One common prompting technique for achieving better performance is to include examples as part of the prompt. This gives the language model concrete examples of how it should behave. Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them. Example Selectors are classes responsible for selecting and then formatting examples into prompts.
### Output parsers[β](#output-parsers "Direct link to Output parsers")
note
The information here refers to parsers that take a text output from a model and try to parse it into a more structured representation. More and more models are supporting function (or tool) calling, which handles this automatically. It is recommended to use function/tool calling rather than output parsing. See documentation for that [here](/v0.2/docs/concepts/#function-tool-calling).
Responsible for taking the output of a model and transforming it to a more suitable format for downstream tasks. Useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs.
LangChain has lots of different types of output parsers. This is a list of output parsers LangChain supports. The table below has various pieces of information:
**Name**: The name of the output parser
**Supports Streaming**: Whether the output parser supports streaming.
**Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific arguments.
**Output Type**: The output type of the object returned by the parser.
**Description**: Our commentary on this output parser and when to use it.
| Name | Supports Streaming | Input Type | Output Type | Description |
| --- | --- | --- | --- | --- |
| [JSON](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.JsonOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<T>` | Returns a JSON object as specified. You can specify a Zod schema and it will return JSON for that model. |
| [XML](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.XMLOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<XMLResult>` | Returns an object of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
| [CSV](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.CommaSeparatedListOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Array[string]` | Returns an array of comma separated values. |
| [Structured](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.StructuredOutputParser.html) |  | `string` \| `BaseMessage` | `Promise<TypeOf<T>>` | Parse structured JSON from an LLM response. |
| [HTTP](https://v02.api.js.langchain.com/classes/langchain_output_parsers.HttpResponseOutputParser.html) | ✅ | `string` | `Promise<Uint8Array>` | Parse an LLM response to then send over HTTP(s). Useful when invoking the LLM on the server/edge, and then sending the content/stream back to the client. |
| [Bytes](https://v02.api.js.langchain.com/classes/langchain_core_output_parsers.BytesOutputParser.html) | ✅ | `string` \| `BaseMessage` | `Promise<Uint8Array>` | Parse an LLM response to then send over HTTP(s). Useful for streaming LLM responses from the server/edge to the client. |
| [Datetime](https://v02.api.js.langchain.com/classes/langchain_output_parsers.DatetimeOutputParser.html) |  | `string` | `Promise<Date>` | Parses response into a `Date`. |
| [Regex](https://v02.api.js.langchain.com/classes/langchain_output_parsers.RegexParser.html) |  | `string` | `Promise<Record<string, string>>` | Parses the given text using the regex pattern and returns an object with the parsed output. |
### Chat History[β](#chat-history "Direct link to Chat History")
Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation. At bare minimum, a conversational system should be able to access some window of past messages directly.
The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain. This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database. Future interactions will then load those messages and pass them into the chain as part of the input.
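A minimal sketch of that wrapping idea (not the actual LangChain API, which is asynchronous and backed by pluggable message stores) might look like this:

```typescript
// Sketch of the ChatHistory idea: wrap a chain so that past inputs and
// outputs are stored and replayed on later calls. Not the real LangChain API.
type Message = { role: "human" | "ai"; content: string };

class SketchChatHistory {
  private messages: Message[] = [];

  constructor(private chain: (history: Message[], input: string) => string) {}

  invoke(input: string): string {
    const output = this.chain(this.messages, input);
    // Append both sides of the exchange for future turns.
    this.messages.push({ role: "human", content: input });
    this.messages.push({ role: "ai", content: output });
    return output;
  }
}

// A toy "chain" that just reports how many prior messages it saw.
const withHistory = new SketchChatHistory(
  (history, input) => `Saw ${history.length} prior messages; you said: ${input}`
);
const first = withHistory.invoke("hi"); // sees 0 prior messages
const second = withHistory.invoke("hello again"); // sees 2 prior messages
```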
### Document[β](#document "Direct link to Document")
A Document object in LangChain contains information about some data. It has two attributes:
* `pageContent: string`: The content of this document. Currently this is only a string.
* `metadata: Record<string, any>`: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
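The two attributes above amount to a very small shape; sketched as a plain type (the real `Document` class from `@langchain/core/documents` adds a constructor and optional fields):

```typescript
// Sketch of the Document shape described above.
type Document = {
  pageContent: string;
  metadata: Record<string, any>;
};

const doc: Document = {
  pageContent: "LangChain is a framework for building LLM applications.",
  metadata: { source: "intro.txt", line: 1 },
};
```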
### Document loaders[β](#document-loaders "Direct link to Document loaders")
These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method. An example use case is as follows:
```typescript
import { CSVLoader } from "langchain/document_loaders/fs/csv";

const loader = new CSVLoader(); // <-- Integration specific parameters here
const docs = await loader.load();
```
### Text splitters[β](#text-splitters "Direct link to Text splitters")
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. This notebook showcases several ways to do that.
At a high level, text splitters work as follows:
1. Split the text up into small, semantically meaningful chunks (often sentences).
2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter:
1. How the text is split
2. How the chunk size is measured
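As a bare-bones illustration of those two axes (this is a fixed-window character splitter, much simpler than LangChain's built-in splitters such as `RecursiveCharacterTextSplitter`):

```typescript
// Minimal sketch: split text into chunks of `chunkSize` characters,
// with `chunkOverlap` characters repeated between adjacent chunks
// to keep context across chunk boundaries.
function splitText(
  text: string,
  chunkSize: number,
  chunkOverlap: number
): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - chunkOverlap;
  }
  return chunks;
}

const pieces = splitText("abcdefghij", 4, 2);
// pieces: ["abcd", "cdef", "efgh", "ghij"]
```

Here "how the text is split" is a raw character window and "how the chunk size is measured" is character count; real splitters swap in smarter choices for both (separators, sentences, token counts).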
### Embedding models[β](#embedding-models "Direct link to Embedding models")
The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
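Sketched as an interface (the real LangChain `Embeddings` methods are named `embedDocuments` and `embedQuery` but return Promises; the toy "embedding" below is obviously not a real model):

```typescript
// Sketch of the two-method Embeddings interface described above.
interface SimpleEmbeddings {
  embedDocuments(texts: string[]): number[][]; // many texts -> many vectors
  embedQuery(text: string): number[]; // one query -> one vector
}

// Toy implementation: "embed" a text as [character count, vowel count].
const toyEmbeddings: SimpleEmbeddings = {
  embedQuery(text) {
    const vowels = (text.match(/[aeiou]/gi) ?? []).length;
    return [text.length, vowels];
  },
  embedDocuments(texts) {
    return texts.map((t) => this.embedQuery(t));
  },
};

const vec = toyEmbeddings.embedQuery("hello"); // [5, 2]
```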
### Vectorstores[β](#vectorstores "Direct link to Vectorstores")
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors, and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query. A vector store takes care of storing embedded data and performing vector search for you.
Vectorstores can be converted to the retriever interface by doing:
```typescript
const vectorstore = new MyVectorStore();
const retriever = vectorstore.asRetriever();
```
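To make the "most similar" step concrete, here is a sketch of what a vector store does at query time, using cosine similarity over toy 2-d vectors (real stores use real embeddings and optimized indexes):

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

type Entry = { text: string; vector: number[] };

// Return the k stored texts whose vectors are closest to the query vector.
function similaritySearch(store: Entry[], query: number[], k: number): string[] {
  return [...store]
    .sort(
      (x, y) =>
        cosineSimilarity(y.vector, query) - cosineSimilarity(x.vector, query)
    )
    .slice(0, k)
    .map((e) => e.text);
}

// Toy 2-d "embeddings".
const store: Entry[] = [
  { text: "cats", vector: [1, 0] },
  { text: "finance", vector: [0, 1] },
];
const top = similaritySearch(store, [0.9, 0.1], 1); // ["cats"]
```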
### Retrievers[β](#retrievers "Direct link to Retrievers")
A retriever is an interface that returns relevant documents given an unstructured query. They are more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) them. Retrievers can be created from vector stores, but are also broad enough to include [Exa search](/v0.2/docs/integrations/retrievers/exa/) (web search) and [Amazon Kendra](/v0.2/docs/integrations/retrievers/kendra-retriever/).
Retrievers accept a string query as input and return an array of Documents as output.
### Advanced Retrieval Types[β](#advanced-retrieval-types "Direct link to Advanced Retrieval Types")
LangChain provides several advanced retrieval types. A full list is below, along with the following information:
**Name**: Name of the retrieval algorithm.
**Index Type**: Which index type (if any) this relies on.
**Uses an LLM**: Whether this retrieval method uses an LLM.
**When to Use**: Our commentary on when you should consider using this retrieval method.
**Description**: Description of what this retrieval algorithm is doing.
Name
Index Type
Uses an LLM
When to Use
Description
[Vectorstore](https://v02.api.js.langchain.com/classes/langchain_core_vectorstores.VectorStoreRetriever.html)
Vectorstore
No
If you are just getting started and looking for something quick and easy.
This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text.
[ParentDocument](https://v02.api.js.langchain.com/classes/langchain_retrievers_parent_document.ParentDocumentRetriever.html)
Vectorstore + Document Store
No
If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together.
This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks).
[Multi Vector](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_vector.MultiVectorRetriever.html)

* **Index type:** Vectorstore + Document Store
* **Uses an LLM:** Sometimes, during indexing
* **When to use:** If you are able to extract information from documents that you think is more relevant to index than the text itself.
* **Description:** This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways; examples include summaries of the text and hypothetical questions.

[Self Query](https://v02.api.js.langchain.com/classes/langchain_retrievers_self_query.SelfQueryRetriever.html)

* **Index type:** Vectorstore
* **Uses an LLM:** Yes
* **When to use:** If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text.
* **Description:** This uses an LLM to transform user input into two things: (1) a string to look up semantically, and (2) a metadata filter to go along with it. This is useful because questions are often about the METADATA of documents, not the content itself.

[Contextual Compression](https://v02.api.js.langchain.com/classes/langchain_retrievers_contextual_compression.ContextualCompressionRetriever.html)

* **Index type:** Any
* **Uses an LLM:** Sometimes
* **When to use:** If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM.
* **Description:** This puts a post-processing step on top of another retriever and extracts only the most relevant information from the retrieved documents. This can be done with embeddings or an LLM.

[Time-Weighted Vectorstore](https://v02.api.js.langchain.com/classes/langchain_retrievers_time_weighted.TimeWeightedVectorStoreRetriever.html)

* **Index type:** Vectorstore
* **Uses an LLM:** No
* **When to use:** If you have timestamps associated with your documents and you want to retrieve the most recent ones.
* **Description:** This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at the timestamps of indexed documents).

[Multi-Query Retriever](https://v02.api.js.langchain.com/classes/langchain_retrievers_multi_query.MultiQueryRetriever.html)

* **Index type:** Any
* **Uses an LLM:** Yes
* **When to use:** If users are asking questions that are complex and require multiple pieces of distinct information to answer.
* **Description:** This uses an LLM to generate multiple queries from the original one. This is useful when the original query needs pieces of information about multiple topics to be properly answered. By generating multiple queries, we can then fetch documents for each of them.
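To make the "semantic similarity plus recency" combination concrete, here is a minimal, dependency-free sketch of the kind of scoring a time-weighted retriever performs. The function name, constants, and exact decay formula are illustrative assumptions, not the retriever's actual API.

```typescript
// Illustrative scoring for a time-weighted retriever: combine the vector
// store's semantic similarity with an exponentially decaying recency bonus.
// (Sketch only; not the actual TimeWeightedVectorStoreRetriever internals.)
function timeWeightedScore(
  semanticScore: number, // similarity from the vector store, roughly 0..1
  hoursSinceLastAccess: number,
  decayRate: number = 0.01 // higher decay -> recency matters more
): number {
  // Recently accessed documents get a bonus that decays over time.
  const recencyBonus = Math.pow(1.0 - decayRate, hoursSinceLastAccess);
  return semanticScore + recencyBonus;
}

// A fresh document can outrank a slightly more similar but stale one:
const fresh = timeWeightedScore(0.7, 1); // ~0.7 + 0.99
const stale = timeWeightedScore(0.8, 500); // ~0.8 + a tiny bonus
console.log(fresh > stale); // true
```

With `decayRate` set to 0, recency is ignored entirely and every document keeps a constant bonus, which reduces to plain similarity ranking.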
### Tools[β](#tools "Direct link to Tools")
Tools are interfaces that an agent, chain, or LLM can use to interact with the world. They combine a few things:
1. The name of the tool
2. A description of what the tool is
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user
All of this information is useful because it can be used to build action-taking systems! The name, description, and JSON schema can be used to prompt the LLM so it knows how to specify what action to take, and the function to call is then equivalent to taking that action.
The simpler the input to a tool is, the easier it is for an LLM to be able to use it. Many agents will only work with tools that have a single string input.
Importantly, the name, description, and JSON schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used. You may need to change the default name, description, or JSON schema if the LLM does not understand how to use the tool.
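The five pieces listed above can be captured in a small interface. The sketch below is illustrative, not LangChain's actual `Tool` class; the `ToolSketch` and `echoTool` names are hypothetical, and real tool functions are usually async.

```typescript
// Illustrative shape of a tool (a sketch, not the actual LangChain class).
interface ToolSketch {
  name: string; // 1. the name of the tool
  description: string; // 2. what the tool does, shown to the LLM
  schema: Record<string, unknown>; // 3. JSON schema of the inputs
  call: (input: string) => string; // 4. the function to call (usually async in practice)
  returnDirect: boolean; // 5. return the result straight to the user?
}

const echoTool: ToolSketch = {
  name: "echo",
  description: "Returns its input unchanged. Useful for testing an agent loop.",
  schema: { type: "object", properties: { input: { type: "string" } } },
  call: (input) => input,
  returnDirect: false,
};

console.log(echoTool.call("hello")); // prints: hello
```

Everything except `call` is metadata that ends up in the prompt, which is why clear names and descriptions matter so much.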
### Toolkits[β](#toolkits "Direct link to Toolkits")
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
All Toolkits expose a `getTools` method which returns an array of tools. You can therefore do:
```typescript
// Initialize a toolkit
const toolkit = new ExampleToolkit(...);

// Get a list of tools
const tools = toolkit.getTools();
```
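As a concrete illustration of the pattern, here is a minimal, dependency-free toolkit class. `SearchToolkit` and its two tools are hypothetical names invented for this sketch; real toolkits bundle fully fledged tool objects, not just name/description pairs.

```typescript
// Minimal sketch of the toolkit pattern (hypothetical names).
type SimpleTool = { name: string; description: string };

class SearchToolkit {
  // Bundle tools that are designed to be used together.
  private tools: SimpleTool[] = [
    { name: "web-search", description: "Search the web for a query." },
    { name: "open-page", description: "Fetch the contents of a URL." },
  ];

  getTools(): SimpleTool[] {
    return this.tools;
  }
}

const toolkit = new SearchToolkit();
console.log(toolkit.getTools().map((t) => t.name).join(", ")); // web-search, open-page
```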
### Agents[β](#agents "Direct link to Agents")
By themselves, language models can't take actions; they just output text. A big use case for LangChain is creating **agents**. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, which determines whether more actions are needed or whether it is okay to finish.
[LangGraph](https://github.com/langchain-ai/langgraphjs) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents. Please check out that [documentation](https://langchain-ai.github.io/langgraphjs/) for a more in depth overview of agent concepts.
There is a legacy agent concept in LangChain that we are moving towards deprecating: `AgentExecutor`. AgentExecutor was essentially a runtime for agents. It was a great place to get started, however, it was not flexible enough as you started to have more customized agents. In order to solve that we built LangGraph to be this flexible, highly-controllable runtime.
If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/v0.2/docs/how_to/agent_executor). It is recommended, however, that you start to transition to [LangGraph](https://github.com/langchain-ai/langgraphjs).
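The reason-act loop that any agent runtime implements can be sketched in a few lines. Everything below is a simplification for illustration: the `decide` callback stands in for the LLM, and real runtimes like LangGraph add persistent state, streaming, and much richer control flow.

```typescript
// Sketch of an agent loop: a "decide" function (standing in for the LLM)
// picks the next action; the tool's result is fed back as an observation
// until the model decides to finish or a step limit is hit.
type Decision = { action: string; input: string } | { finish: string };

function runAgent(
  decide: (observations: string[]) => Decision,
  tools: Record<string, (input: string) => string>,
  maxSteps = 5
): string {
  const observations: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const d = decide(observations);
    if ("finish" in d) return d.finish; // the model says we're done
    observations.push(tools[d.action](d.input)); // act, then observe
  }
  return "max steps reached";
}

// Toy run: "look up" once, then answer with the observation.
const answer = runAgent(
  (obs) =>
    obs.length === 0
      ? { action: "lookup", input: "LangChain" }
      : { finish: obs[0] },
  { lookup: (q) => `${q} is a framework for building LLM apps.` }
);
console.log(answer); // LangChain is a framework for building LLM apps.
```

The `maxSteps` guard is the kind of safety rail a real runtime provides so a looping model cannot run forever.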
* * *
Community
* [Discord](https://discord.gg/cU2adEyC7w)
* [Twitter](https://twitter.com/LangChainAI)
GitHub
* [Python](https://github.com/langchain-ai/langchain)
* [JS/TS](https://github.com/langchain-ai/langchainjs)
More
* [Homepage](https://langchain.com)
* [Blog](https://blog.langchain.dev)
Copyright Β© 2024 LangChain, Inc. | https://js.langchain.com/v0.2/docs/concepts |
How to use LangChain tools
==========================
Tools are interfaces that an agent, chain, or LLM can use to interact with the world. They combine a few things:
1. The name of the tool
2. A description of what the tool is
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user
All of this information is useful because it can be used to build action-taking systems! The name, description, and schema can be used to prompt the LLM so it knows how to specify what action to take, and the function to call is then equivalent to taking that action.
The simpler the input to a tool is, the easier it is for an LLM to be able to use it. Many agents will only work with tools that have a single string input. For a list of agent types and which ones work with more complicated inputs, please see [this documentation](https://js.langchain.com/v0.1/docs/modules/agents/agent_types/)
Importantly, the name, description, and schema (if used) are all used in the prompt. Therefore, it is vitally important that they are clear and describe exactly how the tool should be used.
Default Tools[β](#default-tools "Direct link to Default Tools")
---------------------------------------------------------------
Letβs take a look at how to work with tools. To do this, weβll work with a built in tool.
```typescript
import { WikipediaQueryRun } from "@langchain/community/tools/wikipedia_query_run";

const tool = new WikipediaQueryRun({
  topKResults: 1,
  maxDocContentLength: 100,
});
```
This is the default name:
```typescript
tool.name;
```

```
"wikipedia-api"
```
This is the default description:
```typescript
tool.description;
```

```
"A tool for interacting with and fetching data from the Wikipedia API."
```
This is the default schema of the inputs. This is a [Zod](https://zod.dev) schema on the tool class. We convert it to JSON schema for display purposes:
```typescript
import { zodToJsonSchema } from "zod-to-json-schema";

zodToJsonSchema(tool.schema);
```

```
{
  type: "object",
  properties: { input: { type: "string" } },
  additionalProperties: false,
  "$schema": "http://json-schema.org/draft-07/schema#"
}
```
We can check whether the tool should return its result directly to the user:
```typescript
tool.returnDirect;
```

```
false
```
We can invoke this tool with an object input:
```typescript
await tool.invoke({ input: "langchain" });
```

```
"Page: LangChain\n" +
  "Summary: LangChain is a framework designed to simplify the creation of applications "
```
We can also invoke this tool with a single string input. This works because the tool expects only a single input; if it required multiple inputs, we would not be able to use this shortcut.
await tool.invoke("langchain");
"Page: LangChain\n" + "Summary: LangChain is a framework designed to simplify the creation of applications "
How to use built-in toolkits[β](#how-to-use-built-in-toolkits "Direct link to How to use built-in toolkits")
------------------------------------------------------------------------------------------------------------
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
For a complete list of available ready-made toolkits, visit [Integrations](/v0.2/docs/integrations/toolkits/).
All Toolkits expose a `getTools()` method which returns a list of tools.
Youβre usually meant to use them this way:
```typescript
// Initialize a toolkit
const toolkit = new ExampleToolkit(...);

// Get a list of tools
const tools = toolkit.getTools();
```
More Topics[β](#more-topics "Direct link to More Topics")
---------------------------------------------------------
This was a quick introduction to tools in LangChain, but there is a lot more to learn:
**[Built-In Tools](/v0.2/docs/integrations/tools/)**: For a list of all built-in tools, see [this page](/v0.2/docs/integrations/tools/)
**[Custom Tools](/v0.2/docs/how_to/custom_tools)**: Although built-in tools are useful, itβs highly likely that youβll have to define your own tools. See [this guide](/v0.2/docs/how_to/custom_tools) for instructions on how to do so.
"!function(){function t(t){document.documentElement.setAttribute(\"data-theme\",t)}var e=function(){(...TRUNCATED) | https://js.langchain.com/v0.2/docs/langgraph |
"!function(){function t(t){document.documentElement.setAttribute(\"data-theme\",t)}var e=function(){(...TRUNCATED) | https://js.langchain.com/v0.2/docs/versions/overview |