get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PromptLayerOpenAI[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
packages installed, and the environment variables OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key, respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerOpenAI LLM adds two optional parameters:
:param pl_tags: List of strings to tag the request with.
:param return_pl_id: If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.llms import PromptLayerOpenAI
openai = PromptLayerOpenAI(model_name="text-davinci-003")
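A brief sketch of how the two PromptLayer-specific parameters might be used together; the tag value and the generation_info key shown here are assumptions for illustration:
from langchain.llms import PromptLayerOpenAI

openai = PromptLayerOpenAI(
    model_name="text-davinci-003",
    pl_tags=["joke-demo"],  # hypothetical tag
    return_pl_id=True,
)
result = openai.generate(["Tell me a joke."])
generation = result.generations[0][0]
# with return_pl_id=True the PromptLayer request id is expected in generation_info
pl_request_id = (generation.generation_info or {}).get("pl_request_id")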
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → langchain.schema.LLMResult#
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]#
Get the sub prompts for llm call.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
max_tokens_for_prompt(prompt: str) → int#
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
modelname_to_contextsize(modelname: str) → int#
Calculate the maximum number of tokens possible to generate for a model.
text-davinci-003: 4,097 tokens
text-curie-001: 2,048 tokens
text-babbage-001: 2,048 tokens
text-ada-001: 2,048 tokens
code-davinci-002: 8,000 tokens
code-cushman-001: 2,048 tokens
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]#
Prepare the params for streaming.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator#
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompts to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.PromptLayerOpenAIChat[source]#
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
packages installed, and the environment variables OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your OpenAI API key and
PromptLayer API key, respectively.
All parameters that can be passed to the OpenAIChat LLM can also
be passed here. The PromptLayerOpenAIChat adds two optional parameters:
:param pl_tags: List of strings to tag the request with.
:param return_pl_id: If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field max_retries: int = 6#
Maximum number of retries to make when generating.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo'#
Model name to use.
field prefix_messages: List [Optional]#
Series of messages for Chat input.
field streaming: bool = False#
Whether to stream the results or not.
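As a rough illustration of the fields above (the system message content is a made-up placeholder, and OPENAI_API_KEY / PROMPTLAYER_API_KEY are assumed to be set):
from langchain.llms import PromptLayerOpenAIChat

openaichat = PromptLayerOpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[{"role": "system", "content": "You are a terse assistant."}],
    max_retries=6,
    streaming=False,
)
response = openaichat("What is the capital of France?")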
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Calculate num tokens with tiktoken package.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SagemakerEndpoint[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field content_handler: langchain.llms.sagemaker_endpoint.ContentHandlerBase [Required]#
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
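A minimal sketch of wiring these fields together; the endpoint name, region, profile, and JSON payload format are assumptions for illustration, since the real request and response formats depend on the deployed model:
import json

from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import ContentHandlerBase

class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # serialize the prompt plus any model parameters into the request body
        return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # parse the endpoint response; the response key used here is an assumption
        return json.loads(output.read().decode("utf-8"))[0]["generated_text"]

llm = SagemakerEndpoint(
    endpoint_name="my-sagemaker-endpoint",  # hypothetical endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
    model_kwargs={"temperature": 0.7},
)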
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]#
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Only supports text-generation and text2text-generation for now.
Example using from_model_id:
from langchain.llms import SelfHostedHuggingFaceLLM
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceLLM(
model_id="google/flan-t5-large", task="text2text-generation",
hardware=gpu
)
Example passing a function that generates a pipeline (because the pipeline is not serializable):
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def get_pipeline():
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer
)
return pipe
hf = SelfHostedHuggingFaceLLM(
model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)
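Once constructed, the wrapper is invoked like any other LLM; the prompt text below is purely illustrative:
response = hf("What is the difference between a llama and an alpaca?")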
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field device: int = 0#
Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _generate_text>#
Inference function to send to the remote hardware.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_id: str = 'gpt2'#
Hugging Face model_id to load the model.
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field model_load_fn: Callable = <function _load_transformer>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'transformers', 'torch']#
Requirements to install on hardware to inference the model.
field task: str = 'text-generation'#
Hugging Face task (either “text-generation” or “text2text-generation”).
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM#
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.SelfHostedPipeline[source]#
Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example for custom pipeline and inference functions:
from langchain.llms import SelfHostedPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
def load_pipeline():
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
return pipeline(
"text-generation", model=model, tokenizer=tokenizer,
max_new_tokens=10
)
def inference_fn(pipeline, prompt, stop = None):
return pipeline(prompt)[0]["generated_text"]
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
model_load_fn=load_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"], inference_fn=inference_fn
)
Example for <2GB model (can be serialized and sent directly to the server):
from langchain.llms import SelfHostedPipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
pipeline=my_model,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:
from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline
generator = pipeline(model="gpt2")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl").save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
set_callback_manager » callback_manager
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _generate_text>#
Inference function to send to the remote hardware.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_load_fn: Callable [Required]#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'torch']#
Requirements to install on hardware to inference the model.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM[source]#
Init the SelfHostedPipeline from a pipeline object or string.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.StochasticAI[source]#
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY
set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
Validators
build_extra » all fields
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field api_url: str = ''#
API URL to use.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not
explicitly specified.
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
pydantic model langchain.llms.Writer[source]#
Wrapper around Writer large language models.
To use, you should have the environment variable WRITER_API_KEY
set with your API key.
Example
from langchain import Writer
writer = Writer(model_id="palmyra-base")
Validators
set_callback_manager » callback_manager
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field beam_search_diversity_rate: float = 1.0#
Only applies to beam search, i.e. when the beam width is >1.
A higher value encourages beam search to return a more diverse
set of candidates
field beam_width: Optional[int] = None#
The number of concurrent candidates to keep track of during
beam search
field length: int = 256#
The maximum number of tokens to generate in the completion.
field length_pentaly: float = 1.0#
Only applies to beam search, i.e. when the beam width is >1.
Larger values penalize long candidates more heavily, thus preferring
shorter candidates
field logprobs: bool = False#
Whether to return log probabilities.
field model_id: str = 'palmyra-base'#
Model name to use.
field random_seed: int = 0#
The model generates random results.
Changing the random seed alone will produce a different response
with similar characteristics. It is possible to reproduce results
by fixing the random seed (assuming all other hyperparameters
are also fixed)
field repetition_penalty: float = 1.0#
Penalizes repeated tokens according to frequency.
field stop: Optional[List[str]] = None#
Sequences at which completion generation will stop.
field temperature: float = 1.0#
What sampling temperature to use.
field tokens_to_generate: int = 24#
Max number of tokens to generate.
field top_k: int = 1#
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
field top_p: float = 1.0#
Total probability mass of tokens to consider at each step.
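A brief construction sketch combining several of the fields above; the parameter values are arbitrary and the WRITER_API_KEY environment variable is assumed to be set:
from langchain.llms import Writer

writer = Writer(
    model_id="palmyra-base",
    tokens_to_generate=50,
    temperature=0.7,
    top_p=0.9,
)
completion = writer("Write a tagline for an ice cream shop.")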
__call__(prompt: str, stop: Optional[List[str]] = None) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
PromptTemplates#
Prompt template classes.
pydantic model langchain.prompts.BasePromptTemplate[source]#
Base class for all prompt templates, returning a prompt.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field output_parser: Optional[langchain.schema.BaseOutputParser] = None#
How to parse the output of calling an LLM on this formatted prompt.
dict(**kwargs: Any) → Dict[source]#
Return dictionary representation of prompt.
abstract format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
abstract format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]#
Return a partial of the prompt template.
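A small sketch of what partial is for, namely pre-filling some input variables so the rest can be supplied later (variable names are arbitrary):
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}.",
)
partial_prompt = prompt.partial(adjective="funny")
print(partial_prompt.format(content="chickens"))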
save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
pydantic model langchain.prompts.ChatPromptTemplate[source]#
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
partial(**kwargs: Union[str, Callable[[], str]]) → langchain.prompts.base.BasePromptTemplate[source]#
Return a partial of the prompt template.
save(file_path: Union[pathlib.Path, str]) → None[source]#
Save the prompt.
Parameters
file_path – Path to directory to save prompt to.
Example:
.. code-block:: python
prompt.save(file_path="path/prompt.yaml")
pydantic model langchain.prompts.FewShotPromptTemplate[source]#
Prompt template that contains few shot examples.
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
PromptTemplate used to format an individual example.
field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
field example_separator: str = '\n\n'#
String separator used to join the prefix, the examples, and suffix.
field examples: Optional[List[dict]] = None#
Examples to format into the prompt.
Either this or example_selector should be provided.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field prefix: str = ''#
A prompt template string to put before the examples.
field suffix: str [Required]#
A prompt template string to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict[source]#
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
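A compact sketch of how examples, example_prompt, prefix, and suffix combine; the example data is made up:
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
prompt = FewShotPromptTemplate(
    examples=[{"word": "happy", "antonym": "sad"}],
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(prompt.format(input="big"))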
pydantic model langchain.prompts.FewShotPromptWithTemplates[source]#
Prompt template that contains few shot examples.
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
PromptTemplate used to format an individual example.
field example_selector: Optional[langchain.prompts.example_selector.base.BaseExampleSelector] = None#
ExampleSelector to choose the examples to format into the prompt.
Either this or examples should be provided.
field example_separator: str = '\n\n'#
String separator used to join the prefix, the examples, and suffix.
field examples: Optional[List[dict]] = None#
Examples to format into the prompt.
Either this or example_selector should be provided.
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field prefix: Optional[langchain.prompts.base.StringPromptTemplate] = None#
A PromptTemplate to put before the examples.
field suffix: langchain.prompts.base.StringPromptTemplate [Required]#
A PromptTemplate to put after the examples.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
dict(**kwargs: Any) → Dict[source]#
Return a dictionary of the prompt.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
pydantic model langchain.prompts.MessagesPlaceholder[source]#
Prompt template that assumes variable is already list of messages.
format_messages(**kwargs: Any) → List[langchain.schema.BaseMessage][source]#
Return the messages stored under this placeholder’s variable as a list of BaseMessages.
property input_variables: List[str]#
Input variables for this prompt template.
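A minimal sketch of using the placeholder inside a chat prompt; the variable name "history" is an arbitrary choice:
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.prompts.chat import HumanMessagePromptTemplate

chat_prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{question}"),
])
# format_prompt would then expect a list of messages under "history" plus the question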
langchain.prompts.Prompt#
alias of langchain.prompts.prompt.PromptTemplate
pydantic model langchain.prompts.PromptTemplate[source]#
Schema to represent a prompt for an LLM.
Example
from langchain import PromptTemplate
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
field input_variables: List[str] [Required]#
A list of the names of the variables the prompt template expects.
field template: str [Required]#
The prompt template.
field template_format: str = 'f-string'#
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
field validate_template: bool = True#
Whether or not to try validating the template.
format(**kwargs: Any) → str[source]#
Format the prompt with the inputs.
Parameters
kwargs – Any arguments to be passed to the prompt template.
Returns
A formatted string.
Example:
prompt.format(variable1="foo")
classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '') → langchain.prompts.prompt.PromptTemplate[source]#
Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Parameters
examples – List of examples to use in the prompt.
suffix – String to go after the list of examples. Should generally
set up the user’s input.
input_variables – A list of variable names the final prompt template
will expect.
example_separator – The separator to use in between examples. Defaults
to two new line characters.
prefix – String that should go before any examples. Generally includes
examples. Defaults to an empty string.
Returns
The final prompt generated.
classmethod from_file(template_file: Union[str, pathlib.Path], input_variables: List[str]) → langchain.prompts.prompt.PromptTemplate[source]#
Load a prompt from a file.
Parameters
template_file – The path to the file containing the prompt template.
input_variables – A list of variable names the final prompt template
will expect.
Returns
The prompt loaded from the file.
classmethod from_template(template: str) → langchain.prompts.prompt.PromptTemplate[source]#
Load a prompt template from a template.
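For instance (the template text is illustrative), the input variables are inferred from the template string:
from langchain import PromptTemplate

prompt = PromptTemplate.from_template("Summarize the following text:\n{text}")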
pydantic model langchain.prompts.StringPromptTemplate[source]#
String prompt should expose the format method, returning a prompt.
format_prompt(**kwargs: Any) → langchain.schema.PromptValue[source]#
Create Chat Messages.
langchain.prompts.load_prompt(path: Union[str, pathlib.Path]) → langchain.prompts.base.BasePromptTemplate[source]#
Unified method for loading a prompt from LangChainHub or local fs.
Python REPL#
Mock Python REPL.
pydantic model langchain.python.PythonREPL[source]#
Simulates a standalone Python REPL.
field globals: Optional[Dict] [Optional] (alias '_globals')#
field locals: Optional[Dict] [Optional] (alias '_locals')#
run(command: str) → str[source]#
Run command with own globals/locals and returns anything printed.
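A quick usage sketch; run() captures whatever the command prints and returns it as a string:
from langchain.python import PythonREPL

repl = PythonREPL()
output = repl.run("print(21 * 2)")  # output holds the captured stdout, e.g. "42\n"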
SearxNG Search#
Utility for using SearxNG meta search API.
SearxNG is a privacy-friendly free metasearch engine that aggregates results from
multiple search engines and databases and
supports the OpenSearch
specification.
More details on the installation instructions can be found here.
For the search API refer to https://docs.searxng.org/dev/search_api.html
Quick Start#
In order to use this tool you need to provide the searx host. This can be done
by passing the named parameter searx_host
or exporting the environment variable SEARX_HOST.
Note: this is the only required parameter.
Then create a searx search instance like this:
from langchain.utilities import SearxSearchWrapper
# when the host starts with `http` SSL is disabled and the connection
# is assumed to be on a private network
searx_host='http://self.hosted'
search = SearxSearchWrapper(searx_host=searx_host)
You can now use the search instance to query the searx API.
Searching#
Use the run() and
results() methods to query the searx API.
Other methods are available for convenience.
SearxResults is a convenience wrapper around the raw json result.
Example usage of the run method to make a search:
s.run(query="what is the best search engine?")
Engine Parameters#
You can pass any accepted searx search API parameters to the
SearxSearchWrapper instance.
In the following example we are using the
engines and the language parameters:
# assuming the searx host is set as above or exported as an env variable
s = SearxSearchWrapper(engines=['google', 'bing'],
language='es')
Search Tips#
Searx offers a special
search syntax
that can also be used instead of passing engine parameters.
For example the following query:
s = SearxSearchWrapper("langchain library", engines=['github'])
# can also be written as:
s = SearxSearchWrapper("langchain library !github")
# or even:
s = SearxSearchWrapper("langchain library !gh")
In some situations you might want to pass an extra string to the search query.
For example when the run() method is called by an agent. The search suffix can
also be used as a way to pass extra parameters to searx or the underlying search
engines.
# select the github engine and pass the search suffix
s = SearxSearchWrapper("langchain library", query_suffix="!gh")
s = SearxSearchWrapper("langchain library")
# or use the conventional google `site:` search syntax
s.run("large language models", query_suffix="site:github.com")
NOTE: A search suffix can be defined on both the instance and the method level.
The resulting query will be the concatenation of the two with the former taking
precedence.
See SearxNG Configured Engines and
SearxNG Search Syntax
for more details.
Notes
This wrapper is based on the SearxNG fork searxng/searxng which is
better maintained than the original Searx project and offers more features.
Public searxNG instances often use a rate limiter for API usage, so you might want to
use a self hosted instance and disable the rate limiter.
If you are self-hosting an instance you can customize the rate limiter for your
own network as described here.
For a list of public SearxNG instances see https://searx.space/
class langchain.utilities.searx_search.SearxResults(data: str)[source]#
Dict-like wrapper around the search API results.
property answers: Any#
Helper accessor on the json result.
pydantic model langchain.utilities.searx_search.SearxSearchWrapper[source]#
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Validators
disable_ssl_warnings » unsecure
validate_params » all fields
field engines: Optional[List[str]] = []#
field headers: Optional[dict] = None#
field k: int = 10#
field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
field searx_host: str = ''#
field unsecure: bool = False#
results(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Run query through Searx API and returns the results with metadata.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
num_results – Limit the number of results to return.
engines – List of engines to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
A list of dicts with the following keys:
snippet – The description of the result.
title – The title of the result.
link – The link to the result.
engines – The engines used for the result.
category – Searx category of the result.
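A hedged sketch of calling results() and reading the keys described above; the host and query are placeholders:
from langchain.utilities import SearxSearchWrapper

searx = SearxSearchWrapper(searx_host="http://localhost:8888")
hits = searx.results("large language models", num_results=5, engines=["github"])
for hit in hits:
    print(hit["title"], hit["link"])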
run(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
engines – List of engines to use for the query.
**kwargs – extra parameters to pass to the searx API.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
SerpAPI#
For backwards compatibility.
pydantic model langchain.serpapi.SerpAPIWrapper[source]#
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#
field serpapi_api_key: Optional[str] = None#
async arun(query: str) → str[source]#
Use aiohttp to run query through SerpAPI and parse result.
get_params(query: str) → Dict[str, str][source]#
Get parameters for SerpAPI.
results(query: str) → dict[source]#
Run query through SerpAPI and return the raw result.
run(query: str) → str[source]#
Run query through SerpAPI and parse result.
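A short usage sketch of the synchronous and asynchronous entry points; the query text is arbitrary and SERPAPI_API_KEY is assumed to be set:
import asyncio

from langchain import SerpAPIWrapper

serpapi = SerpAPIWrapper()
answer = serpapi.run("What is LangChain?")
raw = serpapi.results("What is LangChain?")  # raw result dict from SerpAPI
answer_from_async = asyncio.run(serpapi.arun("What is LangChain?"))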
Text Splitter#
Functionality for splitting text.
class langchain.text_splitter.CharacterTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
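For example (the text and chunk sizes are arbitrary), chunk_size and chunk_overlap come from the TextSplitter base class documented below:
from langchain.text_splitter import CharacterTextSplitter

long_text = "First paragraph of a document...\n\nSecond paragraph of a document..."
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text(long_text)
docs = splitter.create_documents([long_text], metadatas=[{"source": "example.txt"}])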
class langchain.text_splitter.LatexTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Latex-formatted layout elements.
class langchain.text_splitter.MarkdownTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Markdown-formatted headings.
class langchain.text_splitter.NLTKTextSplitter(separator: str = '\n\n', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using NLTK.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs: Any)[source]#
Attempts to split the text along Python syntax.
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators: Optional[List[str]] = None, **kwargs: Any)[source]#
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.SpacyTextSplitter(separator: str = '\n\n', pipeline: str = 'en_core_web_sm', **kwargs: Any)[source]#
Implementation of splitting text that looks at sentences using Spacy.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
class langchain.text_splitter.TextSplitter(chunk_size: int = 4000, chunk_overlap: int = 200, length_function: typing.Callable[[str], int] = <built-in function len>)[source]#
Interface for splitting text into chunks.
create_documents(texts: List[str], metadatas: Optional[List[dict]] = None) → List[langchain.schema.Document][source]#
Create documents from a list of texts.
classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses HuggingFace tokenizer to count length.
classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any) → langchain.text_splitter.TextSplitter[source]#
Text splitter that uses tiktoken encoder to count length.
split_documents(documents: List[langchain.schema.Document]) → List[langchain.schema.Document][source]#
Split documents.
abstract split_text(text: str) → List[str][source]#
Split text into multiple components.
class langchain.text_splitter.TokenTextSplitter(encoding_name: str = 'gpt2', allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', **kwargs: Any)[source]#
Implementation of splitting text that looks at tokens.
split_text(text: str) → List[str][source]#
Split incoming text and return chunks.
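A hedged sketch of token-aware splitting using the classmethod and class above; it assumes the tiktoken package is installed, and text and the chunk sizes are placeholders:
from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter
# Split on characters but measure chunk length in tokens.
splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="gpt2", chunk_size=500, chunk_overlap=50
)
chunks = splitter.split_text(text)
# Or split directly on token boundaries.
token_splitter = TokenTextSplitter(encoding_name="gpt2")
token_chunks = token_splitter.split_text(text)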
VectorStores#
Wrappers on top of vector stores.
class langchain.vectorstores.AtlasDB(name: str, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False)[source]#
Wrapper around Atlas: Nomic’s neural database and rhizomatic instrument.
To use, you should have the nomic python package installed.
Example
from langchain.vectorstores import AtlasDB
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = AtlasDB("my_project", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, refresh: bool = True, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]]) – An optional list of ids.
refresh (bool) – Whether or not to refresh indices with the updated data.
Default True.
Returns
List of IDs of the added texts.
Return type
List[str]
create_index(**kwargs: Any) → Any[source]#
Creates an index in your project.
See
https://docs.nomic.ai/atlas_api.html#nomic.project.AtlasProject.create_index
for full detail.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: Optional[langchain.embeddings.base.Embeddings] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, persist_directory: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from a list of documents.
Parameters
name (str) – Name of the collection to create.
api_key (str) – Your nomic API key,
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if
it already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic’s neural database and finest rhizomatic instrument
Return type
AtlasDB
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, name: Optional[str] = None, api_key: Optional[str] = None, description: str = 'A description for your project', is_public: bool = True, reset_project_if_exists: bool = False, index_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.vectorstores.atlas.AtlasDB[source]#
Create an AtlasDB vectorstore from raw documents.
Parameters
texts (List[str]) – The list of texts to ingest.
name (str) – Name of the project to create.
api_key (str) – Your nomic API key,
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – Optional list of document IDs. If None,
ids will be auto created
description (str) – A description for your project.
is_public (bool) – Whether your project is publicly accessible.
True by default.
reset_project_if_exists (bool) – Whether to reset this project if it
already exists. Default False.
Generally useful during development and testing.
index_kwargs (Optional[dict]) – Dict of kwargs for index creation.
See https://docs.nomic.ai/atlas_api.html
Returns
Nomic’s neural database and finest rhizomatic instrument
Return type
AtlasDB
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with AtlasDB
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
Returns
List of documents most similar to the query text.
Return type
List[Document]
class langchain.vectorstores.Chroma(collection_name: str = 'langchain', embedding_function: Optional[Embeddings] = None, persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None)[source]#
Wrapper around ChromaDB embeddings platform.
To use, you should have the chromadb python package installed.
Example
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_collection() → None[source]#
Delete the collection.
classmethod from_documents(documents: List[Document], embedding: Optional[Embeddings] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from a list of documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
documents (List[Document]) – List of documents to add to the vectorstore.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
classmethod from_texts(texts: List[str], embedding: Optional[Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, collection_name: str = 'langchain', persist_directory: Optional[str] = None, client_settings: Optional[chromadb.config.Settings] = None, **kwargs: Any) → Chroma[source]#
Create a Chroma vectorstore from raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
texts (List[str]) – List of texts to add to the collection.
collection_name (str) – Name of the collection to create.
persist_directory (Optional[str]) – Directory to persist the collection.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
client_settings (Optional[chromadb.config.Settings]) – Chroma client settings
Returns
Chroma vectorstore.
Return type
Chroma
persist() → None[source]#
Persist the collection.
This can be used to explicitly persist the data to disk.
It will also be called automatically when the object is destroyed.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Run similarity search with Chroma.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text.
Return type
List[Document]
similarity_search_by_vector(embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
:param embedding: Embedding to look up documents similar to.
:param k: Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Run similarity search with Chroma with distance.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text with distance in float.
Return type
List[Tuple[Document, float]]
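A hedged end-to-end sketch of the Chroma methods above; the texts, collection name, and persist directory are placeholders:
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
texts = ["Chroma can persist collections to disk.", "LangChain wraps many vector stores."]
# Omit persist_directory for an ephemeral in-memory collection.
db = Chroma.from_texts(texts, embeddings, collection_name="langchain_store", persist_directory="./chroma_db")
db.persist()
docs_and_scores = db.similarity_search_with_score("persistence", k=2)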
class langchain.vectorstores.DeepLake(dataset_path: str = 'mem://langchain', token: Optional[str] = None, embedding_function: Optional[langchain.embeddings.base.Embeddings] = None)[source]#
Wrapper around Deep Lake, a data lake for deep learning applications.
It not only stores embeddings, but also the original data and queries with
version control automatically enabled.
It is more than just a vector store. You can use the dataset to fine-tune
your own LLM models or use it for other downstream tasks.
We implement naive similarity search, but it can be extended with Tensor
Query Language (TQL, for production use cases) over billions of rows.
To use, you should have the deeplake python package installed.
Example
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) – Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional) – Optional list of metadatas.
ids (Optional[List[str]], optional) – Optional list of IDs.
Returns
List of IDs of the added texts.
Return type
List[str]
delete_dataset() → None[source]#
Delete the collection.
classmethod from_texts(texts: List[str], embedding: Optional[langchain.embeddings.base.Embeddings] = None, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, dataset_path: str = 'mem://langchain', **kwargs: Any) → langchain.vectorstores.deeplake.DeepLake[source]#
Create a Deep Lake dataset from raw documents.
If a persist_directory is specified, the collection will be persisted there.
Otherwise, the data will be ephemeral in-memory.
Parameters
path (str, pathlib.Path) –
The full path to the dataset. Can be:
a Deep Lake cloud path of the form hub://username/datasetname. To write to Deep Lake cloud datasets,
ensure that you are logged in to Deep Lake
(use ‘activeloop login’ from command line)
an s3 path of the form s3://bucketname/path/to/dataset. Credentials are required in either the environment or
passed to the creds argument.
a local file system path of the form ./path/to/dataset or ~/path/to/dataset or path/to/dataset.
a memory path of the form mem://path/to/dataset which doesn’t save the dataset but keeps it in memory instead.
Should be used only for testing as it does not persist.
documents (List[Document]) – List of documents to add.
embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
ids (Optional[List[str]]) – List of document IDs. Defaults to None.
Returns
Deep Lake dataset.
Return type
DeepLake
persist() → None[source]#
Persist the collection.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
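A hedged sketch of the DeepLake wrapper; mem:// keeps the dataset in memory (testing only), and the texts and query are placeholders:
from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
db = DeepLake.from_texts(
    ["Deep Lake stores the original data alongside embeddings."],
    embeddings,
    dataset_path="mem://langchain",  # use ./path/to/dataset or hub://... to persist
)
docs = db.similarity_search("what does Deep Lake store?", k=1)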
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url: str, index_name: str, embedding: langchain.embeddings.base.Embeddings)[source]#
Wrapper around Elasticsearch as a vector database.
Example
from langchain import ElasticVectorSearch
elastic_vector_search = ElasticVectorSearch(
"http://localhost:9200",
"embeddings",
embedding
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.elastic_vector_search.ElasticVectorSearch[source]#
Construct ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Elasticsearch instance.
Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
texts,
embeddings,
elasticsearch_url="http://localhost:9200"
)
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
class langchain.vectorstores.FAISS(embedding_function: Callable, index: Any, docstore: langchain.docstore.base.Docstore, index_to_docstore_id: Dict[int, str])[source]#
Wrapper around FAISS vector database.
To use, you should have the faiss python package installed.
Example
from langchain import FAISS
faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)
add_embeddings(text_embeddings: Iterable[Tuple[str, List[float]]], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
text_embeddings – Iterable pairs of string and embedding to
add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_embeddings(text_embeddings: List[Tuple[str, List[float]]], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.faiss.FAISS[source]#
Construct FAISS wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the FAISS database
This is intended to be a quick way to get started.
Example
from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)
classmethod load_local(folder_path: str, embeddings: langchain.embeddings.base.Embeddings) → langchain.vectorstores.faiss.FAISS[source]#
Load FAISS index, docstore, and index_to_docstore_id from disk.
Parameters
folder_path – folder path to load index, docstore,
and index_to_docstore_id from.
embeddings – Embeddings to use when generating queries
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
merge_from(target: langchain.vectorstores.faiss.FAISS) → None[source]#
Merge another FAISS object with the current one.
Add the target FAISS to the current one.
Parameters
target – FAISS object you wish to merge into the current one
Returns
None.
save_local(folder_path: str) → None[source]#
Save FAISS index, docstore, and index_to_docstore_id to disk.
Parameters
folder_path – folder path to save index, docstore,
and index_to_docstore_id to.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the embedding.
similarity_search_with_score(query: str, k: int = 4) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
similarity_search_with_score_by_vector(embedding: List[float], k: int = 4) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to the embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query and score for each
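A hedged sketch tying together the FAISS methods above; the folder path, texts, and query are placeholders:
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
faiss_store = FAISS.from_texts(["hello world", "goodbye world"], embeddings)
# Persist the index, docstore, and id mapping, then reload them.
faiss_store.save_local("faiss_index")
reloaded = FAISS.load_local("faiss_index", embeddings)
# Diversity-aware retrieval via maximal marginal relevance.
docs = reloaded.max_marginal_relevance_search("greeting", k=2, fetch_k=10)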
class langchain.vectorstores.Milvus(embedding_function: langchain.embeddings.base.Embeddings, connection_args: dict, collection_name: str, text_field: str)[source]#
Wrapper around the Milvus vector database.
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, partition_name: Optional[str] = None, timeout: Optional[int] = None, **kwargs: Any) → List[str][source]#
Insert text data into Milvus.
When using add_texts() it is assumed that a collection has already
been made and indexed. If metadata is included, it is assumed that
it is ordered correctly to match the schema provided to the Collection
and that the embedding vector is the first schema field.
Parameters
texts (Iterable[str]) – The text being embedded and inserted.
metadatas (Optional[List[dict]], optional) – The metadata that
corresponds to each insert. Defaults to None.
partition_name (str, optional) – The partition of the collection
to insert data into. Defaults to None.
timeout – specified timeout.
Returns
The resulting keys for each inserted element.
Return type
List[str]
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.milvus.Milvus[source]#
Create a Milvus collection, index it with HNSW, and insert data.
Parameters
texts (List[str]) – Text to insert.
embedding (Embeddings) – Embedding function to use.
metadatas (Optional[List[dict]], optional) – Dict metadata.
Defaults to None.
Returns
The Milvus vector store.
Return type
VectorStore
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a search and return results that are reordered by MMR.
Parameters
query (str) – The text being searched.
k (int, optional) – How many results to return. Defaults to 4.
fetch_k (int, optional) – Total results to select k from.
Defaults to 20.
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – What partitions to search.
Defaults to None.
round_decimal (int, optional) – Round the resulting distance. Defaults
to -1.
timeout (int, optional) – Amount to wait before timeout error. Defaults
to None.
Returns
Document results for search.
Return type
List[Document]
similarity_search(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Perform a similarity search against the query string.
Parameters
query (str) – The text to search.
k (int, optional) – How many results to return. Defaults to 4.
param (dict, optional) – The search params for the index type.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – What partitions to search.
Defaults to None.
round_decimal (int, optional) – What decimal point to round to.
Defaults to -1.
timeout (int, optional) – How long to wait before timeout error.
Defaults to None.
Returns
Document results for search.
Return type
List[Document]
similarity_search_with_score(query: str, k: int = 4, param: Optional[dict] = None, expr: Optional[str] = None, partition_names: Optional[List[str]] = None, round_decimal: int = - 1, timeout: Optional[int] = None, **kwargs: Any) → List[Tuple[langchain.schema.Document, float]][source]#
Perform a search on a query string and return results.
Parameters
query (str) – The text being searched.
k (int, optional) – The number of results to return. Defaults to 4.
param (dict, optional) – The search params for the specified index.
Defaults to None.
expr (str, optional) – Filtering expression. Defaults to None.
partition_names (List[str], optional) – Partitions to search through.
Defaults to None.
round_decimal (int, optional) – Round the resulting distance. Defaults
to -1.
timeout (int, optional) – Amount to wait before timeout error. Defaults
to None.
kwargs – Collection.search() keyword arguments.
Returns
search_embedding,(Document, distance, primary_field) results.
Return type
List[float], List[Tuple[Document, any, any]]
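A hedged sketch of the Milvus wrapper; the connection_args are assumptions for a locally running Milvus instance and are assumed to be forwarded by from_texts as keyword arguments:
from langchain.vectorstores import Milvus
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_texts(
    ["Milvus indexes embeddings with HNSW."],
    embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},  # assumed local instance
)
docs = vector_db.similarity_search("Which index does Milvus use?", k=1)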
class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url: str, index_name: str, embedding_function: langchain.embeddings.base.Embeddings)[source]#
Wrapper around OpenSearch as a vector database.
Example
from langchain import OpenSearchVectorSearch
opensearch_vector_search = OpenSearchVectorSearch(
"http://localhost:9200",
"embeddings",
embedding_function
)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
bulk_size – Bulk API request count; Default: 500
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, bulk_size: int = 500, **kwargs: Any) → langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch[source]#
Construct OpenSearchVectorSearch wrapper from raw documents.
Example
from langchain import OpenSearchVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
opensearch_vector_search = OpenSearchVectorSearch.from_texts(
texts,
embeddings,
opensearch_url="http://localhost:9200"
)
OpenSearch by default supports Approximate Search powered by nmslib, faiss
and lucene engines recommended for large datasets. Also supports brute force
search through Script Scoring and Painless Scripting.
Optional Keyword Args for Approximate Search: engine: “nmslib”, “faiss”, “hnsw”; default: “nmslib”
space_type: “l2”, “l1”, “cosinesimil”, “linf”, “innerproduct”; default: “l2”
ef_search: Size of the dynamic list used during k-NN searches. Higher values
lead to more accurate but slower searches; default: 512
ef_construction: Size of the dynamic list used during k-NN graph creation.
Higher values lead to more accurate graph but slower indexing speed;
default: 512
m: Number of bidirectional links created for each new element. Large impact
on memory consumption. Between 2 and 100; default: 16
Keyword Args for Script Scoring or Painless Scripting: is_appx_search: False
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
By default supports Approximate Search.
Also supports Script Scoring and Painless Scripting.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query.
Optional Args for Approximate Search: search_type: “approximate_search”; default: “approximate_search”
size: number of results the query actually returns; default: 4
Optional Args for Script Scoring Search: search_type: “script_scoring”; default: “approximate_search”
space_type: “l2”, “l1”, “linf”, “cosinesimil”, “innerproduct”,
“hammingbit”; default: “l2”
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {“match_all”: {}}
Optional Args for Painless Scripting Search: search_type: “painless_scripting”; default: “approximate_search”
space_type: “l2Squared”, “l1Norm”, “cosineSimilarity”; default: “l2Squared”
pre_filter: script_score query to pre-filter documents before identifying
nearest neighbors; default: {“match_all”: {}}
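A hedged sketch combining the keyword arguments listed above; the URL, texts, and query are placeholders:
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
docsearch = OpenSearchVectorSearch.from_texts(
    ["OpenSearch supports approximate k-NN search."],
    embeddings,
    opensearch_url="http://localhost:9200",
    engine="faiss",            # approximate-search keyword arg from the list above
    space_type="cosinesimil",
)
# Script Scoring search instead of the default approximate search.
docs = docsearch.similarity_search("how does opensearch search?", k=1, search_type="script_scoring")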
class langchain.vectorstores.Pinecone(index: Any, embedding_function: Callable, text_key: str, namespace: Optional[str] = None)[source]#
Wrapper around Pinecone vector database.
To use, you should have the pinecone-client python package installed.
Example
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
pinecone.init(api_key="***", environment="us-west1-gcp")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
ids – Optional list of ids to associate with the texts.
namespace – Optional pinecone namespace to add the texts to.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_existing_index(index_name: str, embedding: langchain.embeddings.base.Embeddings, text_key: str = 'text', namespace: Optional[str] = None) → langchain.vectorstores.pinecone.Pinecone[source]#
Load pinecone vectorstore from index name.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, batch_size: int = 32, text_key: str = 'text', index_name: Optional[str] = None, namespace: Optional[str] = None, **kwargs: Any) → langchain.vectorstores.pinecone.Pinecone[source]#
Construct Pinecone wrapper from raw documents.
This is a user friendly interface that:
Embeds documents.
Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
similarity_search(query: str, k: int = 5, filter: Optional[dict] = None, namespace: Optional[str] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return pinecone documents most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 5.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Default will search in ‘’ namespace.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 5, filter: Optional[dict] = None, namespace: Optional[str] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Return pinecone documents most similar to query, along with scores.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 5.
filter – Dictionary of argument(s) to filter on metadata
namespace – Namespace to search in. Default will search in ‘’ namespace.
Returns
List of Documents most similar to the query and score for each
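A hedged sketch of querying an existing Pinecone index with the methods above; the API key, environment, index name, namespace, and metadata filter are placeholders:
import pinecone
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
pinecone.init(api_key="***", environment="us-west1-gcp")
embeddings = OpenAIEmbeddings()
# Attach to an index that already contains vectors.
docsearch = Pinecone.from_existing_index("langchain-demo", embeddings)
docs = docsearch.similarity_search("example query", k=3, filter={"source": "docs"}, namespace="demo")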
class langchain.vectorstores.Qdrant(client: Any, collection_name: str, embedding_function: Callable, content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata')[source]#
Wrapper around Qdrant vector database.
To use you should have the qdrant-client package installed.
Example
from langchain import Qdrant
client = QdrantClient()
collection_name = "MyCollection"
qdrant = Qdrant(client, collection_name, embedding_function)
CONTENT_KEY = 'page_content'#
METADATA_KEY = 'metadata'#
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
Returns
List of ids from adding the texts into the vectorstore.
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', **kwargs: Any) → langchain.vectorstores.qdrant.Qdrant[source]#
Return VectorStore initialized from documents and embeddings.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, url: Optional[str] = None, port: Optional[int] = 6333, grpc_port: int = 6334, prefer_grpc: bool = False, https: Optional[bool] = None, api_key: Optional[str] = None, prefix: Optional[str] = None, timeout: Optional[float] = None, host: Optional[str] = None, collection_name: Optional[str] = None, distance_func: str = 'Cosine', content_payload_key: str = 'page_content', metadata_payload_key: str = 'metadata', **kwargs: Any) → langchain.vectorstores.qdrant.Qdrant[source]#
Construct Qdrant wrapper from raw documents.
Parameters
texts – A list of texts to be indexed in Qdrant.
embedding – A subclass of Embeddings, responsible for text vectorization.
metadatas – An optional list of metadata. If provided it has to be of the same
length as a list of texts.
url – either host or str of “Optional[scheme], host, Optional[port],
Optional[prefix]”. Default: None
port – Port of the REST API interface. Default: 6333
grpc_port – Port of the gRPC interface. Default: 6334
prefer_grpc – If true - use gRPC interface whenever possible in custom methods.
https – If true - use HTTPS(SSL) protocol. Default: None
api_key – API key for authentication in Qdrant Cloud. Default: None
prefix – If not None - add prefix to the REST URL path.
Example: service/v1 will result in
http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
Default: None
timeout – Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host – Host name of Qdrant service. If url and host are None, set to
‘localhost’. Default: None
collection_name – Name of the Qdrant collection to be used. If not provided,
will be created randomly.
distance_func – Distance function. One of the: “Cosine” / “Euclid” / “Dot”.
content_payload_key – A payload key used to store the content of the document.
metadata_payload_key – A payload key used to store the metadata of the document.
**kwargs – Additional arguments passed directly into REST client initialization
This is a user friendly interface that:
Embeds documents.
Creates an in memory docstore
Initializes the Qdrant database
This is intended to be a quick way to get started.
Example
from langchain import Qdrant
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(texts, embeddings, "localhost")
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, Union[str, int, bool]]] = None, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query.
similarity_search_with_score(query: str, k: int = 4, filter: Optional[Dict[str, Union[str, int, bool]]] = None) → List[Tuple[langchain.schema.Document, float]][source]#
Return docs most similar to query.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
filter – Filter by metadata. Defaults to None.
Returns
List of Documents most similar to the query and score for each
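A hedged sketch of the Qdrant wrapper using the host-based connection parameters from the list above; the collection name, texts, and query are placeholders:
from langchain.vectorstores import Qdrant
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(
    ["Qdrant stores document payloads next to vectors."],
    embeddings,
    host="localhost",
    collection_name="my_documents",
)
docs_and_scores = qdrant.similarity_search_with_score("payload storage", k=2)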
class langchain.vectorstores.VectorStore[source]#
Interface for vector stores.
add_documents(documents: List[langchain.schema.Document], **kwargs: Any) → List[str][source]#
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[Document]) – Documents to add to the vectorstore.
Returns
List of IDs of the added texts.
Return type
List[str]
abstract add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts – Iterable of strings to add to the vectorstore.
metadatas – Optional list of metadatas associated with the texts.
kwargs – vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
as_retriever() → langchain.vectorstores.base.VectorStoreRetriever[source]#
classmethod from_documents(documents: List[langchain.schema.Document], embedding: langchain.embeddings.base.Embeddings, **kwargs: Any) → langchain.vectorstores.base.VectorStore[source]#
Return VectorStore initialized from documents and embeddings.
abstract classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.base.VectorStore[source]#
Return VectorStore initialized from texts and embeddings.
max_marginal_relevance_search(query: str, k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query – Text to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
max_marginal_relevance_search_by_vector(embedding: List[float], k: int = 4, fetch_k: int = 20) → List[langchain.schema.Document][source]#
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
fetch_k – Number of Documents to fetch to pass to MMR algorithm.
Returns
List of Documents selected by maximal marginal relevance.
abstract similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to query.
similarity_search_by_vector(embedding: List[float], k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Return docs most similar to embedding vector.
Parameters
embedding – Embedding to look up documents similar to.
k – Number of Documents to return. Defaults to 4.
Returns
List of Documents most similar to the query vector.
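as_retriever() is listed above without a docstring; a hedged sketch of typical usage, assuming the returned retriever exposes the standard get_relevant_documents method (db can be any concrete VectorStore, a FAISS store is used here only for illustration):
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
db = FAISS.from_texts(["an example document"], OpenAIEmbeddings())
retriever = db.as_retriever()
relevant_docs = retriever.get_relevant_documents("example query")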
class langchain.vectorstores.Weaviate(client: Any, index_name: str, text_key: str, attributes: Optional[List[str]] = None)[source]#
Wrapper around Weaviate vector database.
To use, you should have the weaviate-client python package installed.
Example
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
add_texts(texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) → List[str][source]#
Upload texts with metadata (properties) to Weaviate.
classmethod from_texts(texts: List[str], embedding: langchain.embeddings.base.Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) → langchain.vectorstores.base.VectorStore[source]#
Not implemented for Weaviate yet.
similarity_search(query: str, k: int = 4, **kwargs: Any) → List[langchain.schema.Document][source]#
Look up similar documents in weaviate.
Tracing Walkthrough#
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
## Uncomment this if using hosted setup.
# os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev"
## Uncomment this if you want traces to be recorded to "my_session" instead of default.
# os.environ["LANGCHAIN_SESSION"] = "my_session"
## Better to set this environment variable in the terminal
## Uncomment this if using hosted version. Replace "my_api_key" with your actual API Key.
# os.environ["LANGCHAIN_API_KEY"] = "my_api_key"
import langchain
from langchain.agents import Tool, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
tools, llm, agent="zero-shot-react-description", verbose=True
)
agent.run("What is 2 raised to .123243 power?")
> Entering new AgentExecutor chain...
I need to use a calculator to solve this.
Action: Calculator
Action Input: 2^.123243
Observation: Answer: 1.0891804557407723
Thought: I now know the final answer.
Final Answer: 1.0891804557407723
> Finished chain.
'1.0891804557407723'
# Agent run with tracing using a chat model
agent = initialize_agent(
tools, ChatOpenAI(temperature=0), agent="chat-zero-shot-react-description", verbose=True
)
agent.run("What is 2 raised to .123243 power?")
> Entering new AgentExecutor chain...
Question: What is 2 raised to .123243 power?
Thought: I need a calculator to solve this problem.
Action:
```
{
"action": "calculator",
"action_input": "2^0.123243"
}
```
Observation: calculator is not a valid tool, try another one.
I made a mistake, I need to use the correct tool for this question.
Action:
```
{
"action": "calculator",
"action_input": "2^0.123243"
}
```
Observation: calculator is not a valid tool, try another one.
I made a mistake, the tool name is actually "calc" instead of "calculator".
Action:
```
{
"action": "calc",
"action_input": "2^0.123243"
}
```
Observation: calc is not a valid tool, try another one.
I made another mistake, the tool name is actually "Calculator" instead of "calc".
Action:
```
{
"action": "Calculator",
"action_input": "2^0.123243"
}
```
Observation: Answer: 1.0891804557407723
Thought:The final answer is 1.0891804557407723.
Final Answer: 1.0891804557407723
> Finished chain.
'1.0891804557407723'
Cloud Hosted Setup#
We offer a hosted version of tracing at langchainplus.vercel.app. You can use this to view traces from your run without having to run the server locally.
Note: we are currently only offering this to a limited number of users. The hosted platform is VERY alpha, in active development, and data might be dropped at any time. Don’t depend on data being persisted in the system long term and don’t log traces that may contain sensitive information. If you’re interested in using the hosted platform, please fill out the form here.
Installation#
Log in to the system and click “API Key” in the top right corner. Generate a new key and keep it safe. You will need it to authenticate with the system.
Environment Setup#
After installation, you must now set up your environment to use tracing.
This can be done by setting an environment variable in your terminal by running export LANGCHAIN_HANDLER=langchain.
You can also do this by adding the below snippet to the top of every script. IMPORTANT: this must go at the VERY TOP of your script, before you import anything from langchain.
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
You will also need to set an environment variable to specify the endpoint and your API key. This can be done with the following environment variables:
LANGCHAIN_ENDPOINT = “https://langchain-api-gateway-57eoxz8z.uc.gateway.dev”
LANGCHAIN_API_KEY - set this to the API key you generated during installation.
An example of adding all relevant environment variables is below:
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
os.environ["LANGCHAIN_ENDPOINT"] = "https://langchain-api-gateway-57eoxz8z.uc.gateway.dev" | https://langchain.readthedocs.io\en\latest\tracing\hosted_installation.html |
f8466a26cabf-1 | os.environ["LANGCHAIN_API_KEY"] = "my_api_key" # Don't commit this to your repo! Better to set it in your terminal.
Locally Hosted Setup#
This page contains instructions for installing and then setting up the environment to use the locally hosted version of tracing.
Installation#
Ensure you have Docker installed (see Get Docker) and that it’s running.
Install the latest version of langchain: pip install langchain or pip install langchain -U to upgrade your
existing version.
Run langchain-server. This command was installed automatically when you ran the above command (pip install langchain).
This will spin up the server in the terminal, hosted on port 4173 by default.
Once you see the terminal
output langchain-langchain-frontend-1 | ➜ Local: [http://localhost:4173/](http://localhost:4173/), navigate
to http://localhost:4173/
You should see a page with your tracing sessions. See the overview page for a walkthrough of the UI.
Currently, trace data is not guaranteed to be persisted between runs of langchain-server. If you want to
persist your data, you can mount a volume to the Docker container. See the Docker docs for more info.
To stop the server, press Ctrl+C in the terminal where you ran langchain-server.
Environment Setup#
After installation, you must now set up your environment to use tracing.
This can be done by setting an environment variable in your terminal by running export LANGCHAIN_HANDLER=langchain.
You can also do this by adding the below snippet to the top of every script. IMPORTANT: this must go at the VERY TOP of your script, before you import anything from langchain.
import os
os.environ["LANGCHAIN_HANDLER"] = "langchain"
Agents#
Agents are systems that use a language model to interact with other tools.
These can be used to do more grounded question/answering, interact with APIs, or even take actions.
These agents can be used to power the next generation of personal assistants -
systems that intelligently understand what you mean, and then can take actions to help you accomplish your goal.
Agents are a core use of LangChain - so much so that there is a whole module dedicated to them.
Therefore, we recommend that you check out that documentation for detailed instructions on how to work
with them.
Agent Documentation
Chatbots#
Since language models are good at producing text, they are ideal for creating chatbots.
Aside from the base prompts/LLMs, an important concept to know for Chatbots is memory.
Most chat-based applications rely on remembering what happened in previous interactions, which memory is designed to help with.
The following resources exist:
ChatGPT Clone: A notebook walking through how to recreate a ChatGPT-like experience with LangChain.
Conversation Memory: A notebook walking through how to use different types of conversational memory.
Conversation Agent: A notebook walking through how to create an agent optimized for conversation.
Additional related resources include:
Memory Key Concepts: Explanation of key concepts related to memory.
Memory Examples: A collection of how-to examples for working with memory.
Data Augmented Generation#
Overview#
Language models are trained on large amounts of unstructured data, which makes them fantastic at general purpose text generation. However, there are many instances where you may want the language model to generate text based not on generic data but rather on specific data. Some common examples of this include:
Summarization of a specific piece of text (a website, a private document, etc.)
Question answering over a specific piece of text (a website, a private document, etc.)
Question answering over multiple pieces of text (multiple websites, multiple private documents, etc.)
Using the results of some external call to an API (results from a SQL query, etc.)
All of these examples are instances when you do not want the LLM to generate text based solely on the data it was trained over, but rather you want it to incorporate other external data in some way. At a high level, this process can be broken down into two steps:
Fetching: Fetching the relevant data to include.
Augmenting: Passing the data in as context to the LLM.
This guide is intended to provide an overview of how to do this. This includes an overview of the literature, as well as common tools, abstractions and chains for doing this.
Related Literature#
There are a lot of related papers in this area. Most of them are focused on end-to-end methods that optimize the fetching of the relevant data as well as passing it in as context. These are a few of the papers that are particularly relevant:
RAG: Retrieval Augmented Generation.
This paper introduces RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever.
REALM: Retrieval-Augmented Language Model Pre-Training.
To capture knowledge in a more modular and interpretable way, this paper augments language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference.
HayStack: This is not a paper, but rather an open source library aimed at semantic search, question answering, summarization, and document ranking for a wide range of NLP applications. The underpinnings of this library are focused on the same fetching and augmenting concepts discussed here, and incorporate some methods in the above papers.
These papers/open-source projects are centered around retrieval of documents, which is important for question-answering tasks over a large corpus of documents (which is how they are evaluated). However, we use the terminology of Data Augmented Generation to highlight that retrieval from some document store is only one possible way of fetching relevant data to include. Other methods to fetch relevant data could involve hitting an API, querying a database, or just working with user-provided data (e.g. a specific document that they want to summarize).
Let’s now deep dive on the two steps involved: fetching and augmenting.
Fetching#
There are many ways to fetch relevant data to pass in as context to a language model, and these methods largely depend on the use case.
User provided: In some cases, the user may provide the relevant data, and no algorithm for fetching is needed.
An example of this is for summarization of specific documents: the user will provide the document to be summarized,
and task the language model with summarizing it.
Document Retrieval: One of the more common use cases involves fetching relevant documents or pieces of text from
a large corpus of data. A common example of this is question answering over a private collection of documents.
API Querying: Another common way to fetch data is from an API query. One example of this is a WebGPT-like system,
where you first query Google (or another search API) for relevant information, and then those results are used in
the generation step. Another example could be querying a structured database (like SQL) and then using a language model
to synthesize those results.
There are two big issues to deal with in fetching:
Fetching small enough pieces of information
Not fetching too many pieces of information (e.g. fetching only the most relevant pieces)
Text Splitting#
One big issue with all of these methods is how to make sure you are working with pieces of text that are not too large.
This is important because most language models have a context length, and so you cannot (yet) just pass a
large document in as context. Therefore, it is important to not only fetch relevant data but also make sure it is in
small enough chunks.
LangChain provides some utilities to help with splitting up larger pieces of data. This comes in the form of the TextSplitter class.
The class takes in a document and splits it up into chunks, with several parameters that control the
size of the chunks as well as the overlap in the chunks (important for maintaining context).
See this walkthrough for more information.
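As a rough sketch of what this looks like in practice (the file name mirrors the example used elsewhere in these docs, and the chunk sizes are arbitrary values, not recommendations):
from langchain.text_splitter import CharacterTextSplitter
with open('../state_of_the_union.txt') as f:
    long_document = f.read()
# chunk_size bounds the size of each piece; chunk_overlap keeps some shared context between neighboring chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.split_text(long_document)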
Relevant Documents#
A second large issue related to fetching data is making sure you are not fetching too many documents, and are only fetching
the documents that are relevant to the query/question at hand. There are a few ways to deal with this.
One concrete example of this is vector stores for document retrieval, often used for semantic search or question answering.
With this method, larger documents are split up into
smaller chunks and then each chunk of text is passed to an embedding function which creates an embedding for that piece of text.
Those embeddings are then stored in a database. When a new search query or question comes in, an embedding is
created for that query/question and then documents with embeddings most similar to that embedding are fetched.
Examples of vector database companies include Pinecone and Weaviate.
Although this is perhaps the most common way of document retrieval, people are starting to think about alternative
data structures and indexing techniques specifically for working with language models. For a leading example of this,
check out LlamaIndex - a collection of data structures created by and optimized
for language models.
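As a hedged sketch of that flow, the snippet below embeds text chunks with OpenAI embeddings and stores them in a FAISS vector store; any other supported embedding function or vector store could be swapped in, and FAISS additionally requires the faiss-cpu package. The question string is invented for illustration.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
# `chunks` is a list of text chunks, e.g. produced by a TextSplitter as shown earlier
db = FAISS.from_texts(chunks, OpenAIEmbeddings())
# Embed the question and fetch the chunks whose embeddings are most similar to it
relevant_docs = db.similarity_search("What did the president say about the economy?", k=4)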
Augmenting#
So you’ve fetched your relevant data - now what? How do you pass them to the language model in a format it can understand?
For a detailed overview of the different ways of doing so, and the tradeoffs between them, please see this documentation.
Use Cases#
LangChain supports the above three methods of augmenting LLMs with external data.
These methods can be used to underpin several common use cases, which are discussed below.
Each of these use cases supports all three fetching methods.
It is important to note that a large part of these implementations is the prompts
that are used. We provide default prompts for each use case, but these can be configured.
This is in case you discover a prompt that works better for your specific application.
Question-Answering
Summarization
Contents
The Problem
The Solution
The Examples
Other Examples
Evaluation#
This section of documentation covers how we approach and think about evaluation in LangChain.
It covers both the evaluation of LangChain's internal chains/agents and how we would recommend that people building on top of LangChain approach evaluation.
The Problem#
It can be really hard to evaluate LangChain chains and agents.
There are two main reasons for this:
# 1: Lack of data
You generally don’t have a ton of data to evaluate your chains/agents over before starting a project.
This is usually because Large Language Models (the core of most chains/agents) are terrific few-shot and zero-shot learners,
meaning you are almost always able to get started on a particular task (text-to-SQL, question answering, etc.) without
a large dataset of examples.
This is in stark contrast to traditional machine learning where you had to first collect a bunch of datapoints
before even getting started using a model.
# 2: Lack of metrics
Most chains/agents are performing tasks for which there are not very good metrics to evaluate performance.
For example, one of the most common use cases is generating text of some form.
Evaluating generated text is much more complicated than evaluating a classification prediction, or a numeric prediction.
The Solution#
LangChain attempts to tackle both of those issues.
What we have so far are initial passes at solutions - we do not think we have a perfect solution.
So we very much welcome feedback, contributions, integrations, and thoughts on this.
Here is what we have for each problem so far:
# 1: Lack of data
We have started LangChainDatasets, a Community space on Hugging Face.
We intend this to be a collection of open source datasets for evaluating common chains and agents.
We have contributed five datasets of our own to start, but we very much intend this to be a community effort.
In order to contribute a dataset, you simply need to join the community and then you will be able to upload datasets.
We’re also aiming to make it as easy as possible for people to create their own datasets.
As a first pass at this, we’ve added a QAGenerationChain, which, given a document, comes up
with question-answer pairs that can be used to evaluate question-answering tasks over that document down the line.
See this notebook for an example of how to use this chain.
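A rough sketch of how this chain is typically wired up is shown below; the exact constructor and the use of a chat model here are assumptions that may differ slightly between versions, so treat it as illustrative rather than definitive.
from langchain.chains import QAGenerationChain
from langchain.chat_models import ChatOpenAI
chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0))
# `document_text` is assumed to hold the raw text of the document you want an evaluation set for
qa_pairs = chain.run(document_text)
# qa_pairs holds question/answer pairs that can later be used to evaluate a QA chain over this document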
# 2: Lack of metrics
We have two solutions to the lack of metrics.
The first solution is to use no metrics, and rather just rely on looking at results by eye to get a sense for how the chain/agent is performing.
To assist in this, we have developed (and will continue to develop) tracing, a UI-based visualizer of your chain and agent runs.
The second solution we recommend is to use Language Models themselves to evaluate outputs.
For this we have a few different chains and prompts aimed at tackling this issue.
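One concrete instance of this pattern is the QAEvalChain, sketched below. It asks an LLM to grade predicted answers against reference answers; the `examples` and `predictions` variables and the key names are assumptions for illustration and depend on how your data is structured.
from langchain import OpenAI
from langchain.evaluation.qa import QAEvalChain
eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))
# `examples` holds reference question/answer pairs; `predictions` holds your chain's outputs
graded = eval_chain.evaluate(examples, predictions, question_key="question", answer_key="answer", prediction_key="result")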
The Examples#
We have created a bunch of examples combining the above two solutions to show how we internally evaluate chains and agents when we are developing.
In addition to the examples we’ve curated, we also highly welcome contributions here.
To facilitate that, we’ve included a template notebook for community members to use to build their own examples.
The existing examples we have are:
Question Answering (State of Union): A notebook showing evaluation of a question-answering task over a State-of-the-Union address.
Question Answering (Paul Graham Essay): A notebook showing evaluation of a question-answering task over a Paul Graham essay.
SQL Question Answering (Chinook): A notebook showing evaluation of a question-answering task over a SQL database (the Chinook database).
Agent Vectorstore: A notebook showing evaluation of an agent doing question answering while routing between two different vector databases.
Agent Search + Calculator: A notebook showing evaluation of an agent doing question answering using a Search engine and a Calculator as tools.
Other Examples#
In addition, we also have some more generic resources for evaluation.
Question Answering: An overview of LLMs aimed at evaluating question answering systems in general.
Data Augmented Question Answering: An end-to-end example of evaluating a question answering system focused on a specific document (a RetrievalQAChain to be precise). This example highlights how to use LLMs to come up with question/answer examples to evaluate over, and then highlights how to use LLMs to evaluate performance on those generated examples.
Hugging Face Datasets: Covers an example of loading and using a dataset from Hugging Face for evaluation.
Extraction#
Most APIs and databases still deal with structured information.
Therefore, in order to better work with those, it can be useful to extract structured information from text.
Examples of this include:
Extracting a structured row to insert into a database from a sentence
Extracting multiple rows to insert into a database from a long document
Extracting the correct API parameters from a user query
This work is closely related to output parsing.
Output parsers are responsible for instructing the LLM to respond in a specific format.
In this case, the output parsers specify the format of the data you would like to extract from the document.
Then, in addition to the output format instructions, the prompt should also contain the data you would like to extract information from.
While normal output parsers are good enough for basic structuring of response data,
when doing extraction you often want to extract more complicated or nested structures.
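As a minimal sketch of the basic case, the structured output parser can describe the fields you want back and generate the formatting instructions to embed in the prompt; the field names and example sentence below are invented for illustration.
from langchain import OpenAI, PromptTemplate
from langchain.output_parsers import StructuredOutputParser, ResponseSchema
# Describe the columns of the "row" we want to extract from a sentence
response_schemas = [
    ResponseSchema(name="name", description="the person's name"),
    ResponseSchema(name="age", description="the person's age"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
prompt = PromptTemplate(
    template="Extract the requested fields from the text.\n{format_instructions}\nText: {text}",
    input_variables=["text", "format_instructions"],
)
llm = OpenAI(temperature=0)
output = llm(prompt.format(text="Jane is a 34 year old engineer.", format_instructions=output_parser.get_format_instructions()))
structured_row = output_parser.parse(output)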
For a deep dive on extraction, we recommend checking out kor,
a library that uses the existing LangChain chain and OutputParser abstractions
but goes deeper into extracting more complicated schemas.
Generate Examples#
This notebook shows how to use LangChain to generate more examples similar to the ones you already have.
from langchain.llms.openai import OpenAI
from langchain.example_generator import generate_example
from langchain.prompts import PromptTemplate
# Use examples from ReAct
examples = [
{
"question": "What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?", | https://langchain.readthedocs.io\en\latest\use_cases\generate_examples.html |
cb5b0bc5be0b-1 | "answer": "Thought 1: I need to search Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, then find the elevation range of that area.\nAction 1: Search[Colorado orogeny]\nObservation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.\nThought 2: It does not mention the eastern sector. So I need to look up eastern sector.\nAction 2: Lookup[eastern sector]\nObservation 2: (Result 1 / 1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.\nThought 3: The eastern sector of Colorado orogeny extends into the High Plains. So I need to search High Plains and find its elevation range.\nAction 3: Search[High Plains]\nObservation 3: High Plains refers to one of two distinct land regions\nThought 4: I need to instead search High Plains (United States).\nAction 4: Search[High Plains (United States)]\nObservation 4: The High Plains are a subregion of the Great Plains. From east to west, the High Plains rise in elevation from around 1,800 to 7,000 ft (550 to 2,130 m).[3]\nThought 5: High Plains rise in elevation from around 1,800 to 7,000 ft, so the answer is 1,800 to 7,000 ft.\nAction 5: Finish[1,800 to 7,000 ft]"
},
{
"question": "Musician and satirist Allie Goertz wrote a song about the \"The Simpsons\" character Milhouse, who Matt Groening named after who?", | https://langchain.readthedocs.io\en\latest\use_cases\generate_examples.html |
cb5b0bc5be0b-2 | "answer": "Thought 1: The question simplifies to \"The Simpsons\" character Milhouse is named after who. I only need to search Milhouse and find who it is named after.\nAction 1: Search[Milhouse]\nObservation 1: Milhouse Mussolini Van Houten is a recurring character in the Fox animated television series The Simpsons voiced by Pamela Hayden and created by Matt Groening.\nThought 2: The paragraph does not tell who Milhouse is named after, maybe I can look up \"named after\".\nAction 2: Lookup[named after]\nObservation 2: (Result 1 / 1) Milhouse was named after U.S. president Richard Nixon, whose middle name was Milhous.\nThought 3: Milhouse was named after U.S. president Richard Nixon, so the answer is Richard Nixon.\nAction 3: Finish[Richard Nixon]"
},
{
"question": "Which documentary is about Finnish rock groups, Adam Clayton Powell or The Saimaa Gesture?", | https://langchain.readthedocs.io\en\latest\use_cases\generate_examples.html |
cb5b0bc5be0b-3 | "answer": "Thought 1: I need to search Adam Clayton Powell and The Saimaa Gesture, and find which documentary is about Finnish rock groups.\nAction 1: Search[Adam Clayton Powell]\nObservation 1 Could not find [Adam Clayton Powell]. Similar: [’Adam Clayton Powell III’, ’Seventh Avenue (Manhattan)’, ’Adam Clayton Powell Jr. State Office Building’, ’Isabel Washington Powell’, ’Adam Powell’, ’Adam Clayton Powell (film)’, ’Giancarlo Esposito’].\nThought 2: To find the documentary, I can search Adam Clayton Powell (film).\nAction 2: Search[Adam Clayton Powell (film)]\nObservation 2: Adam Clayton Powell is a 1989 American documentary film directed by Richard Kilberg. The film is about the rise and fall of influential African-American politician Adam Clayton Powell Jr.[3][4] It was later aired as part of the PBS series The American Experience.\nThought 3: Adam Clayton Powell (film) is a documentary about an African-American politician, not Finnish rock groups. So the documentary about Finnish rock groups must instead be The Saimaa Gesture.\nAction 3: Finish[The Saimaa Gesture]"
},
{
"question": "What profession does Nicholas Ray and Elia Kazan have in common?", | https://langchain.readthedocs.io\en\latest\use_cases\generate_examples.html |
cb5b0bc5be0b-4 | "question": "What profession does Nicholas Ray and Elia Kazan have in common?",
"answer": "Thought 1: I need to search Nicholas Ray and Elia Kazan, find their professions, then find the profession they have in common.\nAction 1: Search[Nicholas Ray]\nObservation 1: Nicholas Ray (born Raymond Nicholas Kienzle Jr., August 7, 1911 - June 16, 1979) was an American film director, screenwriter, and actor best known for the 1955 film Rebel Without a Cause.\nThought 2: Professions of Nicholas Ray are director, screenwriter, and actor. I need to search Elia Kazan next and find his professions.\nAction 2: Search[Elia Kazan]\nObservation 2: Elia Kazan was an American film and theatre director, producer, screenwriter and actor.\nThought 3: Professions of Elia Kazan are director, producer, screenwriter, and actor. So profession Nicholas Ray and Elia Kazan have in common is director, screenwriter, and actor.\nAction 3: Finish[director, screenwriter, actor]"
},
{
"question": "Which magazine was started first Arthur’s Magazine or First for Women?", | https://langchain.readthedocs.io\en\latest\use_cases\generate_examples.html |
cb5b0bc5be0b-5 | "question": "Which magazine was started first Arthur’s Magazine or First for Women?",
"answer": "Thought 1: I need to search Arthur’s Magazine and First for Women, and find which was started first.\nAction 1: Search[Arthur’s Magazine]\nObservation 1: Arthur’s Magazine (1844-1846) was an American literary periodical published in Philadelphia in the 19th century.\nThought 2: Arthur’s Magazine was started in 1844. I need to search First for Women next.\nAction 2: Search[First for Women]\nObservation 2: First for Women is a woman’s magazine published by Bauer Media Group in the USA.[1] The magazine was started in 1989.\nThought 3: First for Women was started in 1989. 1844 (Arthur’s Magazine) < 1989 (First for Women), so Arthur’s Magazine was started first.\nAction 3: Finish[Arthur’s Magazine]"
},
{
"question": "Were Pavel Urysohn and Leonid Levin known for the same type of work?", | https://langchain.readthedocs.io\en\latest\use_cases\generate_examples.html |
cb5b0bc5be0b-6 | "answer": "Thought 1: I need to search Pavel Urysohn and Leonid Levin, find their types of work, then find if they are the same.\nAction 1: Search[Pavel Urysohn]\nObservation 1: Pavel Samuilovich Urysohn (February 3, 1898 - August 17, 1924) was a Soviet mathematician who is best known for his contributions in dimension theory.\nThought 2: Pavel Urysohn is a mathematician. I need to search Leonid Levin next and find its type of work.\nAction 2: Search[Leonid Levin]\nObservation 2: Leonid Anatolievich Levin is a Soviet-American mathematician and computer scientist.\nThought 3: Leonid Levin is a mathematician and computer scientist. So Pavel Urysohn and Leonid Levin have the same type of work.\nAction 3: Finish[yes]"
}
]
example_template = PromptTemplate(template="Question: {question}\n{answer}", input_variables=["question", "answer"])
new_example = generate_example(examples, OpenAI(), example_template)
new_example.split('\n')
['',
'',
'Question: What is the difference between the Illinois and Missouri orogeny?',
'Thought 1: I need to search Illinois and Missouri orogeny, and find the difference between them.',
'Action 1: Search[Illinois orogeny]',
'Observation 1: The Illinois orogeny is a hypothesized orogenic event that occurred in the Late Paleozoic either in the Pennsylvanian or Permian period.',
'Thought 2: The Illinois orogeny is a hypothesized orogenic event. I need to search Missouri orogeny next and find its details.',
'Action 2: Search[Missouri orogeny]',
'Observation 2: The Missouri orogeny was a major tectonic event that occurred in the late Pennsylvanian and early Permian period (about 300 million years ago).',
'Thought 3: The Illinois orogeny is hypothesized and occurred in the Late Paleozoic and the Missouri orogeny was a major tectonic event that occurred in the late Pennsylvanian and early Permian period. So the difference between the Illinois and Missouri orogeny is that the Illinois orogeny is hypothesized and occurred in the Late Paleozoic while the Missouri orogeny was a major']
Model Comparison#
Constructing your language model application will likely involve choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way.
LangChain provides the concept of a ModelLaboratory to test out and try different models.
from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, PromptTemplate
from langchain.model_laboratory import ModelLaboratory
llms = [
OpenAI(temperature=0),
Cohere(model="command-xlarge-20221108", max_tokens=20, temperature=0),
HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature":1})
]
model_lab = ModelLaboratory.from_llms(llms)
model_lab.compare("What color is a flamingo?")
Input:
What color is a flamingo?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
Flamingos are pink.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
Pink
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
pink
prompt = PromptTemplate(template="What is the capital of {state}?", input_variables=["state"])
model_lab_with_prompt = ModelLaboratory.from_llms(llms, prompt=prompt)
model_lab_with_prompt.compare("New York")
Input:
New York
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
The capital of New York is Albany.
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 20, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
The capital of New York is Albany.
HuggingFaceHub
Params: {'repo_id': 'google/flan-t5-xl', 'temperature': 1}
st john s
from langchain import SelfAskWithSearchChain, SerpAPIWrapper
open_ai_llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
self_ask_with_search_openai = SelfAskWithSearchChain(llm=open_ai_llm, search_chain=search, verbose=True)
cohere_llm = Cohere(temperature=0, model="command-xlarge-20221108")
search = SerpAPIWrapper()
self_ask_with_search_cohere = SelfAskWithSearchChain(llm=cohere_llm, search_chain=search, verbose=True)
chains = [self_ask_with_search_openai, self_ask_with_search_cohere]
names = [str(open_ai_llm), str(cohere_llm)]
model_lab = ModelLaboratory(chains, names=names)
model_lab.compare("What is the hometown of the reigning men's U.S. Open champion?")
Input:
What is the hometown of the reigning men's U.S. Open champion?
OpenAI
Params: {'model': 'text-davinci-002', 'temperature': 0.0, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
Follow up: Where is Carlos Alcaraz from?
Intermediate answer: El Palmar, Spain.
So the final answer is: El Palmar, Spain
> Finished chain.
So the final answer is: El Palmar, Spain
Cohere
Params: {'model': 'command-xlarge-20221108', 'max_tokens': 256, 'temperature': 0.0, 'k': 0, 'p': 1, 'frequency_penalty': 0, 'presence_penalty': 0}
> Entering new chain...
What is the hometown of the reigning men's U.S. Open champion?
Are follow up questions needed here: Yes.
Follow up: Who is the reigning men's U.S. Open champion?
Intermediate answer: Carlos Alcaraz.
So the final answer is:
Carlos Alcaraz
> Finished chain.
So the final answer is:
Carlos Alcaraz
Contents
Document Question Answering
Adding in sources
Additional Related Resources
Question Answering#
Question answering in this context refers to question answering over your document data.
For question answering over other types of data, like SQL databases or APIs, please see here.
For question answering over many documents, you almost always want to create an index over the data.
This can be used to smartly access the most relevant documents for a given question, allowing you to avoid having to pass all the documents to the LLM (saving you time and money).
See this notebook for a more detailed introduction to this, but for a super quick start the steps involved are:
Load Your Documents
from langchain.document_loaders import TextLoader
loader = TextLoader('../state_of_the_union.txt')
See here for more information on how to get started with document loading.
Create Your Index
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator().from_loaders([loader])
The best and most popular index by far at the moment is the VectorStore index.
Query Your Index
query = "What did the president say about Ketanji Brown Jackson"
index.query(query)
Alternatively, use query_with_sources to also get back the sources involved:
query = "What did the president say about Ketanji Brown Jackson"
index.query_with_sources(query)
Again, these high-level interfaces obscure a lot of what is going on under the hood, so please see this notebook for a lower-level walkthrough.
Document Question Answering#
Question answering involves fetching multiple documents, and then asking a question of them.
The LLM response will contain the answer to your question, based on the content of the documents.
The recommended way to get started using a question answering chain is:
from langchain.chains.question_answering import load_qa_chain
chain = load_qa_chain(llm, chain_type="stuff")
chain.run(input_documents=docs, question=query)
The following resources exist:
Question Answering Notebook: A notebook walking through how to accomplish this task.
VectorDB Question Answering Notebook: A notebook walking through how to do question answering over a vector database. This can often be useful for when you have a LOT of documents, and you don’t want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
Adding in sources#
There is also a variant of this, where in addition to responding with the answer the language model will also cite its sources (e.g. which of the documents passed in it used).
The recommended way to get started using a question answering with sources chain is:
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
chain = load_qa_with_sources_chain(llm, chain_type="stuff")
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
The following resources exist:
QA With Sources Notebook: A notebook walking through how to accomplish this task.
VectorDB QA With Sources Notebook: A notebook walking through how to do question answering with sources over a vector database. This can often be useful for when you have a LOT of documents, and you don’t want to pass them all to the LLM, but rather first want to do some semantic search over embeddings.
Additional Related Resources#
Additional related resources include:
Utilities for working with Documents: Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents) and Embeddings & Vectorstores (useful for the above Vector DB example).
CombineDocuments Chains: A conceptual overview of specific types of chains by which you can accomplish this task.
Data Augmented Generation: An overview of data augmented generation, which is the general concept of combining external data with LLMs (of which this is a subset).
Summarization#
Summarization involves creating a smaller summary of multiple longer documents.
This can be useful for distilling long documents into the core pieces of information.
The recommended way to get started using a summarization chain is:
from langchain.chains.summarize import load_summarize_chain
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)
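Here llm is any LLM wrapper and docs is a list of Document objects. A rough end-to-end sketch, reusing the text splitting utilities mentioned elsewhere in these docs (the file name and chunk sizes are arbitrary choices for illustration):
from langchain import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
with open('../state_of_the_union.txt') as f:
    text = f.read()
# Split the long text into chunks the model can handle, then wrap each chunk as a Document
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(text)
docs = [Document(page_content=t) for t in texts]
llm = OpenAI(temperature=0)
# map_reduce summarizes each chunk independently, then summarizes those summaries
chain = load_summarize_chain(llm, chain_type="map_reduce")
chain.run(docs)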
The following resources exist:
Summarization Notebook: A notebook walking through how to accomplish this task.
Additional related resources include:
Utilities for working with Documents: Guides on how to use several of the utilities which will prove helpful for this task, including Text Splitters (for splitting up long documents).
CombineDocuments Chains: A conceptual overview of specific types of chains by which you can accomplish this task.
Data Augmented Generation: An overview of data augmented generation, which is the general concept of combining external data with LLMs (of which this is a subset).