Columns: id — string (14–16 chars); text — string (45–2.05k chars); source — string (53–111 chars)
3e33e085e24c-23
""" Using these checked assertions, rewrite the original summary to be completely true. The output should have the same structure and formatting as the original summary. Summary: > Finished chain. > Entering new LLMChain chain... Prompt after formatting: Below are some assertions that have been fact checked and are labeled as true or false. If all of the assertions are true, return "True". If any of the assertions are false, return "False". Here are some examples: === Checked Assertions: """ - The sky is red: False - Water is made of lava: False - The sun is a star: True """ Result: False === Checked Assertions: """ - The sky is blue: True - Water is wet: True - The sun is a star: True """ Result: True === Checked Assertions: """ - The sky is blue - True - Water is made of lava- False - The sun is a star - True """ Result: False === Checked Assertions:""" - Birds and mammals are both capable of laying eggs: False. Mammals give birth to live young, while birds lay eggs. - Birds are not mammals: True. Birds are a class of their own, separate from mammals. - Birds are a class of their own: True. Birds are a class of their own, separate from mammals. """ Result: > Finished chain. > Finished chain. > Finished chain. 'Birds are not mammals, but they are a class of their own. They lay eggs, unlike mammals which give birth to live young.' previous LLMRequestsChain next Moderation By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io/en/latest/modules/chains/examples/llm_summarization_checker.html
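The chunk above captures only the LLMSummarizationCheckerChain's internal fact-checking transcript, not the setup that produced it. As a minimal sketch of how such a run is typically started — the constructor arguments here are assumed from the LangChain API of this period, not shown in the chunk itself:

from langchain.llms import OpenAI
from langchain.chains import LLMSummarizationCheckerChain

# Assumed setup for the transcript above: the checker chain wraps an LLM and
# internally runs the create-assertions / check-assertions / rewrite loop.
checker_chain = LLMSummarizationCheckerChain(llm=OpenAI(temperature=0), verbose=True)
checker_chain.run("Mammals can lay eggs, birds can lay eggs, therefore birds are mammals.")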
5621d9032c1d-0
Moderation Contents How to use the moderation chain How to append a Moderation chain to an LLMChain Moderation# This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be applied both to user input and to the output of a language model. Some API providers, like OpenAI, specifically prohibit you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful) you may often want to append a moderation chain to any LLMChains, in order to make sure any output the LLM generates is not harmful. If the content passed into the moderation chain is harmful, there is no single best way to handle it; it depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it! We will cover all these ways in this notebook. In this notebook, we will show: How to run any piece of text through a moderation chain. How to append a Moderation chain to an LLMChain. from langchain.llms import OpenAI from langchain.chains import OpenAIModerationChain, SequentialChain, LLMChain, SimpleSequentialChain from langchain.prompts import PromptTemplate How to use the moderation chain# Here's an example of using the moderation chain with default settings (it will return a string explaining that flagged content was found). moderation_chain = OpenAIModerationChain() moderation_chain.run("This is okay") 'This is okay' moderation_chain.run("I will kill you")
https://langchain.readthedocs.io/en/latest/modules/chains/examples/moderation.html
5621d9032c1d-1
'This is okay' moderation_chain.run("I will kill you") "Text was found that violates OpenAI's content policy." Here’s an example of using the moderation chain to throw an error. moderation_chain_error = OpenAIModerationChain(error=True) moderation_chain_error.run("This is okay") 'This is okay' moderation_chain_error.run("I will kill you") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[7], line 1 ----> 1 moderation_chain_error.run("I will kill you") File ~/workplace/langchain/langchain/chains/base.py:138, in Chain.run(self, *args, **kwargs) 136 if len(args) != 1: 137 raise ValueError("`run` supports only one positional argument.") --> 138 return self(args[0])[self.output_keys[0]] 140 if kwargs and not args: 141 return self(kwargs)[self.output_keys[0]] File ~/workplace/langchain/langchain/chains/base.py:112, in Chain.__call__(self, inputs, return_only_outputs) 108 if self.verbose: 109 print( 110 f"\n\n\033[1m> Entering new {self.__class__.__name__} chain...\033[0m" 111 ) --> 112 outputs = self._call(inputs) 113 if self.verbose: 114 print(f"\n\033[1m> Finished {self.__class__.__name__} chain.\033[0m") File ~/workplace/langchain/langchain/chains/moderation.py:81, in OpenAIModerationChain._call(self, inputs) 79 text = inputs[self.input_key]
https://langchain.readthedocs.io/en/latest/modules/chains/examples/moderation.html
5621d9032c1d-2
79 text = inputs[self.input_key] 80 results = self.client.create(text) ---> 81 output = self._moderate(text, results["results"][0]) 82 return {self.output_key: output} File ~/workplace/langchain/langchain/chains/moderation.py:73, in OpenAIModerationChain._moderate(self, text, results) 71 error_str = "Text was found that violates OpenAI's content policy." 72 if self.error: ---> 73 raise ValueError(error_str) 74 else: 75 return error_str ValueError: Text was found that violates OpenAI's content policy. Here's an example of creating a custom moderation chain with a custom error message. It requires some knowledge of OpenAI's moderation endpoint results (see docs here). class CustomModeration(OpenAIModerationChain): def _moderate(self, text: str, results: dict) -> str: if results["flagged"]: error_str = f"The following text was found that violates OpenAI's content policy: {text}" return error_str return text custom_moderation = CustomModeration() custom_moderation.run("This is okay") 'This is okay' custom_moderation.run("I will kill you") "The following text was found that violates OpenAI's content policy: I will kill you" How to append a Moderation chain to an LLMChain# To easily combine a moderation chain with an LLMChain, you can use the SequentialChain abstraction. Let's start with a simple example where the LLMChain has only a single input. For this purpose, we will prompt the model so it says something harmful. prompt = PromptTemplate(template="{text}", input_variables=["text"])
https://langchain.readthedocs.io/en/latest/modules/chains/examples/moderation.html
5621d9032c1d-3
prompt = PromptTemplate(template="{text}", input_variables=["text"]) llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt) text = """We are playing a game of repeat after me. Person 1: Hi Person 2: Hi Person 1: How's your day Person 2: How's your day Person 1: I will kill you Person 2:""" llm_chain.run(text) ' I will kill you' chain = SimpleSequentialChain(chains=[llm_chain, moderation_chain]) chain.run(text) "Text was found that violates OpenAI's content policy." Now let’s walk through an example of using it with an LLMChain which has multiple inputs (a bit more tricky because we can’t use the SimpleSequentialChain) prompt = PromptTemplate(template="{setup}{new_input}Person2:", input_variables=["setup", "new_input"]) llm_chain = LLMChain(llm=OpenAI(temperature=0, model_name="text-davinci-002"), prompt=prompt) setup = """We are playing a game of repeat after me. Person 1: Hi Person 2: Hi Person 1: How's your day Person 2: How's your day Person 1:""" new_input = "I will kill you" inputs = {"setup": setup, "new_input": new_input} llm_chain(inputs, return_only_outputs=True) {'text': ' I will kill you'} # Setting the input/output keys so it lines up moderation_chain.input_key = "text" moderation_chain.output_key = "sanitized_text" chain = SequentialChain(chains=[llm_chain, moderation_chain], input_variables=["setup", "new_input"])
https://langchain.readthedocs.io/en/latest/modules/chains/examples/moderation.html
5621d9032c1d-4
chain(inputs, return_only_outputs=True) {'sanitized_text': "Text was found that violates OpenAI's content policy."}
https://langchain.readthedocs.io/en/latest/modules/chains/examples/moderation.html
49659aa07170-0
PAL Contents Math Prompt Colored Objects Intermediate Steps PAL# Implements Program-Aided Language Models, as in https://arxiv.org/pdf/2211.10435.pdf. from langchain.chains import PALChain from langchain import OpenAI llm = OpenAI(model_name='code-davinci-002', temperature=0, max_tokens=512) Math Prompt# pal_chain = PALChain.from_math_prompt(llm, verbose=True) question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?" pal_chain.run(question) > Entering new PALChain chain... def solution(): """Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?""" cindy_pets = 4 marcia_pets = cindy_pets + 2 jan_pets = marcia_pets * 3 total_pets = cindy_pets + marcia_pets + jan_pets result = total_pets return result > Finished chain. '28' Colored Objects# pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True) question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?" pal_chain.run(question) > Entering new PALChain chain... # Put objects into a list to record ordering objects = [] objects += [('booklet', 'blue')] * 2
https://langchain.readthedocs.io/en/latest/modules/chains/examples/pal.html
49659aa07170-1
objects = [] objects += [('booklet', 'blue')] * 2 objects += [('booklet', 'purple')] * 2 objects += [('sunglasses', 'yellow')] * 2 # Remove all pairs of sunglasses objects = [object for object in objects if object[0] != 'sunglasses'] # Count number of purple objects num_purple = len([object for object in objects if object[1] == 'purple']) answer = num_purple > Finished PALChain chain. '2' Intermediate Steps# You can also use the intermediate steps flag to return the code executed that generates the answer. pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True, return_intermediate_steps=True) question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?" result = pal_chain({"question": question}) > Entering new PALChain chain... # Put objects into a list to record ordering objects = [] objects += [('booklet', 'blue')] * 2 objects += [('booklet', 'purple')] * 2 objects += [('sunglasses', 'yellow')] * 2 # Remove all pairs of sunglasses objects = [object for object in objects if object[0] != 'sunglasses'] # Count number of purple objects num_purple = len([object for object in objects if object[1] == 'purple']) answer = num_purple > Finished chain. result['intermediate_steps']
https://langchain.readthedocs.io/en/latest/modules/chains/examples/pal.html
49659aa07170-2
answer = num_purple > Finished chain. result['intermediate_steps'] "# Put objects into a list to record ordering\nobjects = []\nobjects += [('booklet', 'blue')] * 2\nobjects += [('booklet', 'purple')] * 2\nobjects += [('sunglasses', 'yellow')] * 2\n\n# Remove all pairs of sunglasses\nobjects = [object for object in objects if object[0] != 'sunglasses']\n\n# Count number of purple objects\nnum_purple = len([object for object in objects if object[1] == 'purple'])\nanswer = num_purple"
https://langchain.readthedocs.io/en/latest/modules/chains/examples/pal.html
eef8cf42c1ec-0
SQLite example Contents Customize Prompt Return Intermediate Steps Choosing how to limit the number of rows returned Adding example rows from each table Custom Table Info SQLDatabaseSequentialChain SQLite example# This example showcases hooking up an LLM to answer questions over a database. This uses the example Chinook database. To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository. from langchain import OpenAI, SQLDatabase, SQLDatabaseChain db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db") llm = OpenAI(temperature=0) NOTE: For data-sensitive projects, you can specify return_direct=True in the SQLDatabaseChain initialization to directly return the output of the SQL query without any additional formatting. This prevents the LLM from seeing any contents within the database. Note, however, the LLM still has access to the database schema (i.e. dialect, table and key names) by default. db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True) db_chain.run("How many employees are there?") > Entering new SQLDatabaseChain chain... How many employees are there? SQLQuery: /Users/harrisonchase/workplace/langchain/langchain/sql_database.py:120: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage. sample_rows = connection.execute(command) SELECT COUNT(*) FROM Employee; SQLResult: [(8,)] Answer: There are 8 employees. > Finished chain.
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
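The return_direct option described in the NOTE above is not demonstrated in this chunk. A minimal sketch of what that initialization might look like, reusing the llm and db objects defined above (the behavior comment is an assumption based on the note, not a verified run):

# Return the raw SQL result directly, so the LLM never sees row contents.
db_chain_direct = SQLDatabaseChain(llm=llm, database=db, return_direct=True, verbose=True)
db_chain_direct.run("How many employees are there?")  # raw query output, e.g. '[(8,)]'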
eef8cf42c1ec-1
Answer: There are 8 employees. > Finished chain. ' There are 8 employees.' Customize Prompt# You can also customize the prompt that is used. Here is an example prompting it to understand that foobar is the same as the Employee table. from langchain.prompts.prompt import PromptTemplate _DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Use the following format: Question: "Question here" SQLQuery: "SQL Query to run" SQLResult: "Result of the SQLQuery" Answer: "Final answer here" Only use the following tables: {table_info} If someone asks for the table foobar, they really mean the employee table. Question: {input}""" PROMPT = PromptTemplate( input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE ) db_chain = SQLDatabaseChain(llm=llm, database=db, prompt=PROMPT, verbose=True) db_chain.run("How many employees are there in the foobar table?") > Entering new SQLDatabaseChain chain... How many employees are there in the foobar table? SQLQuery: SELECT COUNT(*) FROM Employee; SQLResult: [(8,)] Answer: There are 8 employees in the foobar table. > Finished chain. ' There are 8 employees in the foobar table.' Return Intermediate Steps# You can also return the intermediate steps of the SQLDatabaseChain. This allows you to access the SQL statement that was generated, as well as the result of running that against the SQL Database. db_chain = SQLDatabaseChain(llm=llm, database=db, prompt=PROMPT, verbose=True, return_intermediate_steps=True)
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
eef8cf42c1ec-2
result = db_chain("How many employees are there in the foobar table?") result["intermediate_steps"] > Entering new SQLDatabaseChain chain... How many employees are there in the foobar table? SQLQuery: SELECT COUNT(*) FROM Employee; SQLResult: [(8,)] Answer: There are 8 employees in the foobar table. > Finished chain. [' SELECT COUNT(*) FROM Employee;', '[(8,)]'] Choosing how to limit the number of rows returned# If you are querying for several rows of a table you can select the maximum number of results you want to get by using the ‘top_k’ parameter (default is 10). This is useful for avoiding query results that exceed the prompt max length or consume tokens unnecessarily. db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True, top_k=3) db_chain.run("What are some example tracks by composer Johann Sebastian Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by composer Johann Sebastian Bach? SQLQuery: SELECT Name, Composer FROM Track WHERE Composer LIKE '%Johann Sebastian Bach%' LIMIT 3; SQLResult: [('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Johann Sebastian Bach'), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Johann Sebastian Bach'), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', 'Johann Sebastian Bach')]
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
eef8cf42c1ec-3
Answer: Some example tracks by composer Johann Sebastian Bach are 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', and 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude'. > Finished chain. ' Some example tracks by composer Johann Sebastian Bach are \'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\', \'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\', and \'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\'.' Adding example rows from each table# Sometimes, the format of the data is not obvious and it is optimal to include a sample of rows from the tables in the prompt to allow the LLM to understand the data before providing a final query. Here we will use this feature to let the LLM know that artists are saved with their full names by providing two rows from the Track table. db = SQLDatabase.from_uri( "sqlite:///../../../../notebooks/Chinook.db", include_tables=['Track'], # we include only one table to save tokens in the prompt :) sample_rows_in_table_info=2) The sample rows are added to the prompt after each corresponding table’s column information: print(db.table_info) CREATE TABLE "Track" ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "AlbumId" INTEGER, "MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER,
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
eef8cf42c1ec-4
"MediaTypeId" INTEGER NOT NULL, "GenreId" INTEGER, "Composer" NVARCHAR(220), "Milliseconds" INTEGER NOT NULL, "Bytes" INTEGER, "UnitPrice" NUMERIC(10, 2) NOT NULL, PRIMARY KEY ("TrackId"), FOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), FOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), FOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId") ) /* 2 rows from Track table: TrackId Name AlbumId MediaTypeId GenreId Composer Milliseconds Bytes UnitPrice 1 For Those About To Rock (We Salute You) 1 1 1 Angus Young, Malcolm Young, Brian Johnson 343719 11170334 0.99 2 Balls to the Wall 2 2 1 None 342562 5510424 0.99 */ /home/jon/projects/langchain/langchain/sql_database.py:135: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage. sample_rows = connection.execute(command) db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True) db_chain.run("What are some example tracks by Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by Bach? SQLQuery: SELECT Name FROM Track WHERE Composer LIKE '%Bach%' LIMIT 5;
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
eef8cf42c1ec-5
SQLQuery: SELECT Name FROM Track WHERE Composer LIKE '%Bach%' LIMIT 5; SQLResult: [('American Woman',), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace',), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria',), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude',), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata',)] Answer: Some example tracks by Bach are 'American Woman', 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', and 'Toccata and Fugue in D Minor, BWV 565: I. Toccata'. > Finished chain. ' Some example tracks by Bach are \'American Woman\', \'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\', \'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\', \'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\', and \'Toccata and Fugue in D Minor, BWV 565: I. Toccata\'.' Custom Table Info#
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
eef8cf42c1ec-6
Custom Table Info# In some cases, it can be useful to provide custom table information instead of using the automatically generated table definitions and the first sample_rows_in_table_info sample rows. For example, if you know that the first few rows of a table are uninformative, it could help to manually provide example rows that are more diverse or provide more information to the model. It is also possible to limit the columns that will be visible to the model if there are unnecessary columns. This information can be provided as a dictionary with table names as the keys and table information as the values. For example, let’s provide a custom definition and sample rows for the Track table with only a few columns: custom_table_info = { "Track": """CREATE TABLE Track ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "Composer" NVARCHAR(220), PRIMARY KEY ("TrackId") ) /* 3 rows from Track table: TrackId Name Composer 1 For Those About To Rock (We Salute You) Angus Young, Malcolm Young, Brian Johnson 2 Balls to the Wall None 3 My favorite song ever The coolest composer of all time */""" } db = SQLDatabase.from_uri( "sqlite:///../../../../notebooks/Chinook.db", include_tables=['Track', 'Playlist'], sample_rows_in_table_info=2, custom_table_info=custom_table_info) print(db.table_info) CREATE TABLE "Playlist" ( "PlaylistId" INTEGER NOT NULL, "Name" NVARCHAR(120), PRIMARY KEY ("PlaylistId") ) /* 2 rows from Playlist table: PlaylistId Name 1 Music 2 Movies */ CREATE TABLE Track (
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
eef8cf42c1ec-7
PlaylistId Name 1 Music 2 Movies */ CREATE TABLE Track ( "TrackId" INTEGER NOT NULL, "Name" NVARCHAR(200) NOT NULL, "Composer" NVARCHAR(220), PRIMARY KEY ("TrackId") ) /* 3 rows from Track table: TrackId Name Composer 1 For Those About To Rock (We Salute You) Angus Young, Malcolm Young, Brian Johnson 2 Balls to the Wall None 3 My favorite song ever The coolest composer of all time */ Note how our custom table definition and sample rows for Track override the sample_rows_in_table_info parameter. Tables that are not overridden by custom_table_info, in this example Playlist, will have their table info gathered automatically as usual. db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True) db_chain.run("What are some example tracks by Bach?") > Entering new SQLDatabaseChain chain... What are some example tracks by Bach? SQLQuery: SELECT Name, Composer FROM Track WHERE Composer LIKE '%Bach%' LIMIT 5; SQLResult: [('American Woman', 'B. Cummings/G. Peterson/M.J. Kale/R. Bachman'), ('Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Johann Sebastian Bach'), ('Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Johann Sebastian Bach'), ('Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', 'Johann Sebastian Bach'), ('Toccata and Fugue in D Minor, BWV 565: I. Toccata', 'Johann Sebastian Bach')]
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
eef8cf42c1ec-8
Answer: Some example tracks by Bach are 'American Woman', 'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace', 'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria', 'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude', and 'Toccata and Fugue in D Minor, BWV 565: I. Toccata'. > Finished chain. ' Some example tracks by Bach are \'American Woman\', \'Concerto for 2 Violins in D Minor, BWV 1043: I. Vivace\', \'Aria Mit 30 Veränderungen, BWV 988 "Goldberg Variations": Aria\', \'Suite for Solo Cello No. 1 in G Major, BWV 1007: I. Prélude\', and \'Toccata and Fugue in D Minor, BWV 565: I. Toccata\'.' SQLDatabaseSequentialChain# A sequential chain for querying a SQL database. The chain works as follows: 1. Based on the query, determine which tables to use. 2. Based on those tables, call the normal SQL database chain. This is useful in cases where the number of tables in the database is large. from langchain.chains import SQLDatabaseSequentialChain db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db") chain = SQLDatabaseSequentialChain.from_llm(llm, db, verbose=True) chain.run("How many employees are also customers?") > Entering new SQLDatabaseSequentialChain chain... Table names to use: ['Customer', 'Employee'] > Entering new SQLDatabaseChain chain... How many employees are also customers?
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
eef8cf42c1ec-9
> Entering new SQLDatabaseChain chain... How many employees are also customers? SQLQuery: SELECT COUNT(*) FROM Employee INNER JOIN Customer ON Employee.EmployeeId = Customer.SupportRepId; SQLResult: [(59,)] Answer: 59 employees are also customers. > Finished chain. > Finished chain. ' 59 employees are also customers.'
https://langchain.readthedocs.io/en/latest/modules/chains/examples/sqlite.html
deea1d6dff72-0
Loading from LangChainHub# This notebook covers how to load chains from LangChainHub. from langchain.chains import load_chain chain = load_chain("lc://chains/llm-math/chain.json") chain.run("whats 2 raised to .12") > Entering new LLMMathChain chain... whats 2 raised to .12 Answer: 1.0791812460476249 > Finished chain. 'Answer: 1.0791812460476249' Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain import OpenAI, VectorDBQA from langchain.document_loaders import TextLoader loader = TextLoader('../../state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() vectorstore = Chroma.from_documents(texts, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore) query = "What did the president say about Ketanji Brown Jackson" chain.run(query)
https://langchain.readthedocs.io/en/latest/modules/chains/generic/from_hub.html
deea1d6dff72-1
query = "What did the president say about Ketanji Brown Jackson" chain.run(query) " The president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence." previous Generic Chains next LLM Chain By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io/en/latest/modules/chains/generic/from_hub.html
f091a807fcd8-0
LLM Chain Contents Single Input Multiple Inputs From string LLM Chain# This notebook showcases a simple LLM chain. from langchain import PromptTemplate, OpenAI, LLMChain Single Input# First, let's go over an example using a single input. template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True) question = "What NFL team won the Super Bowl in the year Justin Beiber was born?" llm_chain.predict(question=question) > Entering new LLMChain chain... Prompt after formatting: Question: What NFL team won the Super Bowl in the year Justin Beiber was born? Answer: Let's think step by step. > Finished LLMChain chain. ' Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in 1994 was the Dallas Cowboys.' Multiple Inputs# Now let's go over an example using multiple inputs. template = """Write a {adjective} poem about {subject}.""" prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"]) llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True) llm_chain.predict(adjective="sad", subject="ducks") > Entering new LLMChain chain... Prompt after formatting: Write a sad poem about ducks. > Finished LLMChain chain.
https://langchain.readthedocs.io/en/latest/modules/chains/generic/llm_chain.html
f091a807fcd8-1
Prompt after formatting: Write a sad poem about ducks. > Finished LLMChain chain. "\n\nThe ducks swim in the pond,\nTheir feathers so soft and warm,\nBut they can't help but feel so forlorn.\n\nTheir quacks echo in the air,\nBut no one is there to hear,\nFor they have no one to share.\n\nThe ducks paddle around in circles,\nTheir heads hung low in despair,\nFor they have no one to care.\n\nThe ducks look up to the sky,\nBut no one is there to see,\nFor they have no one to be.\n\nThe ducks drift away in the night,\nTheir hearts filled with sorrow and pain,\nFor they have no one to gain." From string# You can also construct an LLMChain from a string template directly. template = """Write a {adjective} poem about {subject}.""" llm_chain = LLMChain.from_string(llm=OpenAI(temperature=0), template=template) llm_chain.predict(adjective="sad", subject="ducks") "\n\nThe ducks swim in the pond,\nTheir feathers so soft and warm,\nBut they can't help but feel so forlorn.\n\nTheir quacks echo in the air,\nBut no one is there to hear,\nFor they have no one to share.\n\nThe ducks paddle around in circles,\nTheir heads hung low in despair,\nFor they have no one to care.\n\nThe ducks look up to the sky,\nBut no one is there to see,\nFor they have no one to be.\n\nThe ducks drift away in the night,\nTheir hearts filled with sorrow and pain,\nFor they have no one to gain." previous Loading from LangChainHub next Sequential Chains Contents Single Input
https://langchain.readthedocs.io/en/latest/modules/chains/generic/llm_chain.html
d800824387f9-0
Sequential Chains Contents SimpleSequentialChain Sequential Chain Memory in Sequential Chains Sequential Chains# The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another. In this notebook we will walk through some examples of how to do this, using sequential chains. Sequential chains are defined as a series of chains, called in deterministic order. There are two types of sequential chains: SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next. SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs. SimpleSequentialChain# In this series of chains, each individual chain has a single input and a single output, and the output of one step is used as input to the next. Let's walk through a toy example of doing this, where the first chain takes in the title of an imaginary play and then generates a synopsis for that title, and the second chain takes in the synopsis of that play and generates an imaginary review for that play. from langchain.llms import OpenAI from langchain.chains import LLMChain from langchain.prompts import PromptTemplate # This is an LLMChain to write a synopsis given a title of a play. llm = OpenAI(temperature=.7) template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title. Title: {title} Playwright: This is a synopsis for the above play:""" prompt_template = PromptTemplate(input_variables=["title"], template=template)
https://langchain.readthedocs.io/en/latest/modules/chains/generic/sequential_chains.html
d800824387f9-1
prompt_template = PromptTemplate(input_variables=["title"], template=template) synopsis_chain = LLMChain(llm=llm, prompt=prompt_template) # This is an LLMChain to write a review of a play given a synopsis. llm = OpenAI(temperature=.7) template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play. Play Synopsis: {synopsis} Review from a New York Times play critic of the above play:""" prompt_template = PromptTemplate(input_variables=["synopsis"], template=template) review_chain = LLMChain(llm=llm, prompt=prompt_template) # This is the overall chain where we run these two chains in sequence. from langchain.chains import SimpleSequentialChain overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True) review = overall_chain.run("Tragedy at sunset on the beach") > Entering new SimpleSequentialChain chain... Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future. The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever.
https://langchain.readthedocs.io/en/latest/modules/chains/generic/sequential_chains.html
d800824387f9-2
The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset. Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats. The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful. > Finished chain. print(review) Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles. The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
https://langchain.readthedocs.io/en/latest/modules/chains/generic/sequential_chains.html
d800824387f9-3
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful. Sequential Chain# Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs. Of particular importance is how we name the input/output variables. In the above example we didn't have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about that because we have multiple inputs. # This is an LLMChain to write a synopsis given a title of a play and the era it is set in. llm = OpenAI(temperature=.7) template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title. Title: {title} Era: {era} Playwright: This is a synopsis for the above play:""" prompt_template = PromptTemplate(input_variables=["title", 'era'], template=template) synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="synopsis") # This is an LLMChain to write a review of a play given a synopsis. llm = OpenAI(temperature=.7) template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play. Play Synopsis: {synopsis}
https://langchain.readthedocs.io/en/latest/modules/chains/generic/sequential_chains.html
d800824387f9-4
Play Synopsis: {synopsis} Review from a New York Times play critic of the above play:""" prompt_template = PromptTemplate(input_variables=["synopsis"], template=template) review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review") # This is the overall chain where we run these two chains in sequence. from langchain.chains import SequentialChain overall_chain = SequentialChain( chains=[synopsis_chain, review_chain], input_variables=["era", "title"], # Here we return multiple variables output_variables=["synopsis", "review"], verbose=True) review = overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"}) > Entering new SequentialChain chain... > Finished chain. Memory in Sequential Chains# Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to manage this and clean up your chains. For example, using the previous playwright SequentialChain, let's say you wanted to include some context about date, time and location of the play, and using the generated synopsis and review, create some social media post text. You could add these new context variables as input_variables, or you can add a SimpleMemory to the chain to manage this context: from langchain.chains import SequentialChain from langchain.memory import SimpleMemory llm = OpenAI(temperature=.7)
https://langchain.readthedocs.io/en/latest/modules/chains/generic/sequential_chains.html
d800824387f9-5
from langchain.memory import SimpleMemory llm = OpenAI(temperature=.7) template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date, time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play. Here is some context about the time and location of the play: Date and Time: {time} Location: {location} Play Synopsis: {synopsis} Review from a New York Times play critic of the above play: {review} Social Media Post: """ prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template) social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text") overall_chain = SequentialChain( memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}), chains=[synopsis_chain, review_chain, social_chain], input_variables=["era", "title"], # Here we return multiple variables output_variables=["social_post_text"], verbose=True) overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"}) > Entering new SequentialChain chain... > Finished chain. {'title': 'Tragedy at sunset on the beach', 'era': 'Victorian England', 'time': 'December 25th, 8pm PST', 'location': 'Theater in the Park',
https://langchain.readthedocs.io/en/latest/modules/chains/generic/sequential_chains.html
d800824387f9-6
'location': 'Theater in the Park', 'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}
https://langchain.readthedocs.io/en/latest/modules/chains/generic/sequential_chains.html
a993b6f38d2b-0
Serialization Contents Saving a chain to disk Loading a chain from disk Saving components separately Serialization# This notebook covers how to serialize chains to and from disk. The serialization format we use is json or yaml. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time. Saving a chain to disk# First, let's go over how to save a chain to disk. This can be done with the .save method, specifying a file path with a json or yaml extension. from langchain import PromptTemplate, OpenAI, LLMChain template = """Question: {question} Answer: Let's think step by step.""" prompt = PromptTemplate(template=template, input_variables=["question"]) llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True) llm_chain.save("llm_chain.json") Let's now take a look at what's inside this saved file. !cat llm_chain.json { "memory": null, "verbose": true, "prompt": { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" }, "llm": { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null,
https://langchain.readthedocs.io/en/latest/modules/chains/generic/serialization.html
a993b6f38d2b-1
"best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" }, "output_key": "text", "_type": "llm_chain" } Loading a chain from disk# We can load a chain from disk by using the load_chain method. from langchain.chains import load_chain chain = load_chain("llm_chain.json") chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' Saving components separately# In the above example, we can see that the prompt and llm configuration information is saved in the same json as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify llm_path instead of the llm component, and prompt_path instead of the prompt component. llm_chain.prompt.save("prompt.json") !cat prompt.json { "input_variables": [ "question" ], "output_parser": null, "template": "Question: {question}\n\nAnswer: Let's think step by step.", "template_format": "f-string" } llm_chain.llm.save("llm.json") !cat llm.json { "model_name": "text-davinci-003", "temperature": 0.0, "max_tokens": 256, "top_p": 1, "frequency_penalty": 0,
https://langchain.readthedocs.io/en/latest/modules/chains/generic/serialization.html
a993b6f38d2b-2
"top_p": 1, "frequency_penalty": 0, "presence_penalty": 0, "n": 1, "best_of": 1, "request_timeout": null, "logit_bias": {}, "_type": "openai" } config = { "memory": None, "verbose": True, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" } import json with open("llm_chain_separate.json", "w") as f: json.dump(config, f, indent=2) !cat llm_chain_separate.json { "memory": null, "verbose": true, "prompt_path": "prompt.json", "llm_path": "llm.json", "output_key": "text", "_type": "llm_chain" } We can then load it in the same way chain = load_chain("llm_chain_separate.json") chain.run("whats 2 + 2") > Entering new LLMChain chain... Prompt after formatting: Question: whats 2 + 2 Answer: Let's think step by step. > Finished chain. ' 2 + 2 = 4' previous Sequential Chains next Transformation Chain Contents Saving a chain to disk Loading a chain from disk Saving components separately By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io/en/latest/modules/chains/generic/serialization.html
84721e7b4457-0
Transformation Chain# This notebook showcases using a generic transformation chain. As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into an LLMChain to summarize those. from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain from langchain.llms import OpenAI from langchain.prompts import PromptTemplate with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read() def transform_func(inputs: dict) -> dict: text = inputs["text"] shortened_text = "\n\n".join(text.split("\n\n")[:3]) return {"output_text": shortened_text} transform_chain = TransformChain(input_variables=["text"], output_variables=["output_text"], transform=transform_func) template = """Summarize this text: {output_text} Summary:""" prompt = PromptTemplate(input_variables=["output_text"], template=template) llm_chain = LLMChain(llm=OpenAI(), prompt=prompt) sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain]) sequential_chain.run(state_of_the_union) ' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'
https://langchain.readthedocs.io/en/latest/modules/chains/generic/transformation.html
9be142e298de-0
Getting Started Contents PromptTemplates LLMChain Streaming Getting Started# This notebook covers how to get started with chat models. The interface is based around messages rather than raw text. from langchain.chat_models import ChatOpenAI from langchain import PromptTemplate, LLMChain from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) chat = ChatOpenAI(temperature=0) You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage. chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")]) AIMessage(content="J'aime programmer.", additional_kwargs={}) OpenAI's chat model supports multiple messages as input. See here for more information. Here is an example of sending a system and user message to the chat model: messages = [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="Translate this sentence from English to French. I love programming.") ] chat(messages) AIMessage(content="J'aime programmer.", additional_kwargs={}) You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter. batch_messages = [ [ SystemMessage(content="You are a helpful assistant that translates English to French."),
https://langchain.readthedocs.io/en/latest/modules/chat/getting_started.html
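The generic ChatMessage with its arbitrary role parameter is mentioned above but never shown. A minimal sketch, assuming it is importable from langchain.schema like the other message classes; note that individual providers may restrict which role values they accept (OpenAI expects roles like "user"):

from langchain.schema import ChatMessage

# A generic message carrying an explicit role field in addition to content.
msg = ChatMessage(role="user", content="Translate this sentence from English to French. I love programming.")
chat([msg])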
9be142e298de-1
[ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="Translate this sentence from English to French. I love programming.") ], [ SystemMessage(content="You are a helpful assistant that translates English to French."), HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.") ], ] result = chat.generate(batch_messages) result LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}}) You can recover things like token usage from this LLMResult. result.llm_output {'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}} PromptTemplates# You can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model. For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like: template="You are a helpful assistant that translates {input_language} to {output_language}." system_message_prompt = SystemMessagePromptTemplate.from_template(template)
https://langchain.readthedocs.io/en/latest/modules/chat/getting_started.html
9be142e298de-2
system_message_prompt = SystemMessagePromptTemplate.from_template(template) human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt]) # get a chat completion from the formatted messages chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()) AIMessage(content="J'adore la programmation.", additional_kwargs={}) If you wanted to construct the MessagePromptTemplate more directly, you could create a PromptTemplate outside and then pass it in, e.g.: prompt=PromptTemplate( template="You are a helpful assistant that translates {input_language} to {output_language}.", input_variables=["input_language", "output_language"], ) system_message_prompt = SystemMessagePromptTemplate(prompt=prompt) LLMChain# You can use the existing LLMChain in a very similar way to before - provide a prompt and a model. chain = LLMChain(llm=chat, prompt=chat_prompt) chain.run(input_language="English", output_language="French", text="I love programming.") "J'adore la programmation." Streaming# Streaming is supported for ChatOpenAI through callback handling. from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0) resp = chat([HumanMessage(content="Write me a song about sparkling water.")]) Verse 1: Bubbles rising to the top A refreshing drink that never stops Clear and crisp, it's pure delight A taste that's sure to excite Chorus: Sparkling water, oh so fine
https://langchain.readthedocs.io/en/latest/modules/chat/getting_started.html
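format_prompt is demonstrated above only with to_messages(). As described earlier on this page, the returned PromptValue can also be rendered as a plain string; a small sketch of both renderings:

prompt_value = chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.")
prompt_value.to_string()    # one flat string, usable as input to a plain LLM
prompt_value.to_messages()  # a list of Message objects, usable with a chat model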
9be142e298de-3
A taste that's sure to excite Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Verse 2: No sugar, no calories, just pure bliss A drink that's hard to resist It's the perfect way to quench my thirst A drink that always comes first Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Bridge: From the mountains to the sea Sparkling water, you're the key To a healthy life, a happy soul A drink that makes me feel whole Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Outro: Sparkling water, you're the one A drink that's always so much fun I'll never let you go, my friend Sparkling
https://langchain.readthedocs.io/en/latest/modules/chat/getting_started.html
bbc4c7d7e6e5-0
How-To Guides# The examples here all address certain "how-to" guides for working with chat models. Agent Chat Vector DB Few Shot Examples Memory PromptLayer ChatOpenAI Streaming Retrieval Question/Answering Retrieval Question Answering with Sources
https://langchain.readthedocs.io/en/latest/modules/chat/how_to_guides.html
0ef4bbd70f05-0
Key Concepts Contents ChatMessage HumanMessage AIMessage SystemMessage ChatMessage ChatGeneration Chat Model Key Concepts# ChatMessage# A chat message is what we refer to as the modular unit of information. At the moment, this consists of "content", which refers to the content of the chat message. At the moment, most chat models are trained to predict sequences of Human <> AI messages. This is because so far the primary interaction mode has been between a human user and a singular AI system. Currently, there are four different classes of Chat Messages. HumanMessage# A HumanMessage is a ChatMessage that is sent as if from a Human's point of view. AIMessage# An AIMessage is a ChatMessage that is sent from the point of view of the AI system with which the Human is corresponding. SystemMessage# A SystemMessage is still a bit ambiguous, and so far seems to be a concept unique to OpenAI. ChatMessage# A chat message is a generic chat message, with not only a "content" field but also a "role" field. With this field, arbitrary roles may be assigned to a message. ChatGeneration# The output of a single prediction of a chat message. Currently this is just a chat message itself (e.g. content and a role). Chat Model# A model which takes in a list of chat messages, and predicts a chat message in response.
https://langchain.readthedocs.io/en/latest/modules/chat/key_concepts.html
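To make the four message classes described above concrete, here is a minimal sketch constructing one of each (import path assumed to be langchain.schema, as in the Getting Started page):

from langchain.schema import AIMessage, ChatMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a terse assistant."),  # framing instructions
    HumanMessage(content="What is a chat model?"),  # from the human's point of view
    AIMessage(content="A model that maps a list of chat messages to a chat message."),  # from the AI's point of view
    ChatMessage(role="user", content="Thanks!"),  # generic message with an explicit role
]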
29a13a0a2e79-0
Agent# This notebook covers how to create a custom agent for a chat model. It will utilize chat-specific prompts. from langchain.agents import ZeroShotAgent, Tool, AgentExecutor from langchain.chains import LLMChain from langchain.utilities import SerpAPIWrapper search = SerpAPIWrapper() tools = [ Tool( name = "Search", func=search.run, description="useful for when you need to answer questions about current events" ) ] prefix = """Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:""" suffix = """Begin! Remember to speak as a pirate when giving your final answer. Use lots of "Args""" prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=[] ) from langchain.chat_models import ChatOpenAI from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) messages = [ SystemMessagePromptTemplate(prompt=prompt), HumanMessagePromptTemplate.from_template("{input}\n\nThis was your previous work " f"(but I haven't seen any of it! I only see what " "you return as final answer):\n{agent_scratchpad}") ] prompt = ChatPromptTemplate.from_messages(messages) llm_chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt) tool_names = [tool.name for tool in tools]
https://langchain.readthedocs.io\en\latest\modules\chat\examples\agent.html
29a13a0a2e79-1
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names) agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True) agent_executor.run("How many people live in canada as of 2023?") > Entering new AgentExecutor chain... Arrr, ye be in luck, matey! I'll find ye the answer to yer question. Thought: I need to search for the current population of Canada. Action: Search Action Input: "current population of Canada 2023" Observation: The current population of Canada is 38,623,091 as of Saturday, March 4, 2023, based on Worldometer elaboration of the latest United Nations data. Thought:Ahoy, me hearties! I've found the answer to yer question. Final Answer: As of March 4, 2023, the population of Canada be 38,623,091. Arrr! > Finished chain. 'As of March 4, 2023, the population of Canada be 38,623,091. Arrr!' previous How-To Guides next Chat Vector DB By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\chat\examples\agent.html
6b42e2d49957-0
.ipynb .pdf Chat Vector DB Contents ConversationalRetrievalChain with streaming to stdout Chat Vector DB# This notebook goes over how to set up a chat model to chat with a vector database. This notebook is very similar to the example of using an LLM in the ConversationalRetrievalChain. The only differences here are (1) using a ChatModel, and (2) passing in a ChatPromptTemplate (optimized for chat models). from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain.chains import ConversationalRetrievalChain Load in documents. You can replace this with a loader for whatever type of data you want. from langchain.document_loaders import TextLoader loader = TextLoader('../../state_of_the_union.txt') documents = loader.load() If you had multiple loaders that you wanted to combine, you could do something like: # loaders = [....] # docs = [] # for loader in loaders: # docs.extend(loader.load()) We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them. text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) documents = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() vectorstore = Chroma.from_documents(documents, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. We are now going to construct a prompt specifically designed for chat models. from langchain.chat_models import ChatOpenAI from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate,
https://langchain.readthedocs.io\en\latest\modules\chat\examples\chat_vector_db.html
6b42e2d49957-1
HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) system_template="""Use the following pieces of context to answer the user's question. If you don't know the answer, just say that you don't know, don't try to make up an answer. ---------------- {context}""" messages = [ SystemMessagePromptTemplate.from_template(system_template), HumanMessagePromptTemplate.from_template("{question}") ] prompt = ChatPromptTemplate.from_messages(messages) We now initialize the ConversationalRetrievalChain. qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), vectorstore.as_retriever(), qa_prompt=prompt) Here’s an example of asking a question with no chat history. chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = qa({"question": query, "chat_history": chat_history}) result["answer"] "The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and a consensus builder. She has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." Here’s an example of asking a question with some chat history. chat_history = [(query, result["answer"])] query = "Did he mention who came before her" result = qa({"question": query, "chat_history": chat_history}) result['answer']
https://langchain.readthedocs.io\en\latest\modules\chat\examples\chat_vector_db.html
6b42e2d49957-2
"The President mentioned Circuit Court of Appeals Judge Ketanji Brown Jackson as the nominee for the United States Supreme Court. He described her as one of the nation's top legal minds who will continue Justice Breyer's legacy of excellence. The President did not mention any specific sources of support for Judge Jackson, but he did note that advancing immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce." ConversationalRetrievalChain with streaming to stdout# Output from the chain will be streamed to stdout token by token in this example. from langchain.chains.llm import LLMChain from langchain.llms import OpenAI from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT from langchain.chains.question_answering import load_qa_chain # Construct a ConversationalRetrievalChain with a streaming llm for combine docs # and a separate, non-streaming llm for question generation llm = OpenAI(temperature=0) streaming_llm = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0) question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT) doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=prompt) qa = ConversationalRetrievalChain(retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator) chat_history = [] query = "What did the president say about Ketanji Brown Jackson"
https://langchain.readthedocs.io\en\latest\modules\chat\examples\chat_vector_db.html
6b42e2d49957-3
result = qa({"question": query, "chat_history": chat_history}) The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and a consensus builder. He also mentioned that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. chat_history = [(query, result["answer"])] query = "Did he mention who she succeeded" result = qa({"question": query, "chat_history": chat_history}) The context does not provide information on who Ketanji Brown Jackson succeeded on the United States Supreme Court. previous Agent next Few Shot Examples Contents ConversationalRetrievalChain with streaming to stdout By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\chat\examples\chat_vector_db.html
6d0612fb6f0c-0
.ipynb .pdf Few Shot Examples Contents Alternating Human/AI messages System Messages Few Shot Examples# This notebook covers how to use few shot examples in chat models. There does not appear to be solid consensus on how best to do few shot prompting. As a result, we are not solidifying any abstractions around this yet but rather using existing abstractions. Alternating Human/AI messages# The first way of doing few shot prompting relies on using alternating human/ai messages. See an example of this below. from langchain.chat_models import ChatOpenAI from langchain import PromptTemplate, LLMChain from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) chat = ChatOpenAI(temperature=0) template="You are a helpful assistant that translates english to pirate." system_message_prompt = SystemMessagePromptTemplate.from_template(template) example_human = HumanMessagePromptTemplate.from_template("Hi") example_ai = AIMessagePromptTemplate.from_template("Argh me mateys") human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt]) chain = LLMChain(llm=chat, prompt=chat_prompt) # get a chat completion from the formatted messages chain.run("I love programming.") "I be lovin' programmin', me hearty!" System Messages# OpenAI provides an optional name parameter that they also recommend using in conjunction with system messages to do few shot prompting. Here is an example of how to do that below.
https://langchain.readthedocs.io\en\latest\modules\chat\examples\few_shot_examples.html
6d0612fb6f0c-1
template="You are a helpful assistant that translates english to pirate." system_message_prompt = SystemMessagePromptTemplate.from_template(template) example_human = SystemMessagePromptTemplate.from_template("Hi", additional_kwargs={"name": "example_user"}) example_ai = SystemMessagePromptTemplate.from_template("Argh me mateys", additional_kwargs={"name": "example_assistant"}) human_template="{text}" human_message_prompt = HumanMessagePromptTemplate.from_template(human_template) chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, example_human, example_ai, human_message_prompt]) chain = LLMChain(llm=chat, prompt=chat_prompt) # get a chat completion from the formatted messages chain.run("I love programming.") "I be lovin' programmin', me hearty." previous Chat Vector DB next Memory Contents Alternating Human/AI messages System Messages By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\chat\examples\few_shot_examples.html
b4312201c54c-0
.ipynb .pdf Memory Memory# This notebook goes over how to use Memory with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory objects. from langchain.prompts import ( ChatPromptTemplate, MessagesPlaceholder, SystemMessagePromptTemplate, HumanMessagePromptTemplate ) prompt = ChatPromptTemplate.from_messages([ SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."), MessagesPlaceholder(variable_name="history"), HumanMessagePromptTemplate.from_template("{input}") ]) from langchain.chains import ConversationChain from langchain.chat_models import ChatOpenAI from langchain.memory import ConversationBufferMemory llm = ChatOpenAI(temperature=0) We can now initialize the memory. Note that we set return_messages=True to denote that this should return a list of messages when appropriate. memory = ConversationBufferMemory(return_messages=True) We can now use this in the rest of the chain. conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm) conversation.predict(input="Hi there!") 'Hello! How can I assist you today?' conversation.predict(input="I'm doing well! Just having a conversation with an AI.") "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?" conversation.predict(input="Tell me about yourself.")
https://langchain.readthedocs.io\en\latest\modules\chat\examples\memory.html
b4312201c54c-1
"Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?" previous Few Shot Examples next PromptLayer ChatOpenAI By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
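Because we set return_messages=True, the memory hands back the raw message objects rather than a condensed string. A quick sketch of inspecting what was stored, assuming the memory object from above:
# The underlying chat history is a list of HumanMessage/AIMessage objects.
print(memory.chat_memory.messages)
# load_memory_variables returns them under the "history" key, matching the
# MessagesPlaceholder(variable_name="history") used in the prompt.
print(memory.load_memory_variables({})["history"])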
https://langchain.readthedocs.io\en\latest\modules\chat\examples\memory.html
ac4bc0470881-0
.ipynb .pdf PromptLayer ChatOpenAI Contents Install PromptLayer Imports Set the Environment API Key Use the PromptLayerChatOpenAI LLM like normal Using PromptLayer Track PromptLayer ChatOpenAI# This example showcases how to connect to PromptLayer to start recording your ChatOpenAI requests. Install PromptLayer# The promptlayer package is required to use PromptLayer with OpenAI. Install promptlayer using pip. pip install promptlayer Imports# import os from langchain.chat_models import PromptLayerChatOpenAI from langchain.schema import HumanMessage Set the Environment API Key# You can create a PromptLayer API Key at www.promptlayer.com by clicking the settings cog in the navbar. Set it as an environment variable called PROMPTLAYER_API_KEY. os.environ["PROMPTLAYER_API_KEY"] = "**********" Use the PromptLayerChatOpenAI LLM like normal# You can optionally pass in pl_tags to track your requests with PromptLayer’s tagging feature. chat = PromptLayerChatOpenAI(pl_tags=["langchain"]) chat([HumanMessage(content="I am a cat and I want")]) AIMessage(content='to take a nap in a cozy spot. I search around for a suitable place and finally settle on a soft cushion on the window sill. I curl up into a ball and close my eyes, relishing the warmth of the sun on my fur. As I drift off to sleep, I can hear the birds chirping outside and feel the gentle breeze blowing through the window. This is the life of a contented cat.', additional_kwargs={}) The above request should now appear on your PromptLayer dashboard. Using PromptLayer Track#
https://langchain.readthedocs.io\en\latest\modules\chat\examples\promptlayer_chatopenai.html
ac4bc0470881-1
If you would like to use any of the PromptLayer tracking features, you need to pass the argument return_pl_id when instantiating the PromptLayer LLM to get the request id. import promptlayer chat = PromptLayerChatOpenAI(return_pl_id=True) chat_results = chat.generate([[HumanMessage(content="I am a cat and I want")]]) for res in chat_results.generations: pl_request_id = res[0].generation_info["pl_request_id"] promptlayer.track.score(request_id=pl_request_id, score=100) Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well. Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard. previous Memory next Streaming Contents Install PromptLayer Imports Set the Environment API Key Use the PromptLayerChatOpenAI LLM like normal Using PromptLayer Track By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
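If you want to attach a prompt template to a tracked request, the general shape looks like the sketch below; the promptlayer.track.prompt call and its arguments are assumptions here (the template name is hypothetical), so verify against PromptLayer's own documentation before relying on it.
import promptlayer
# Assumed API for attaching a registered prompt template to a tracked request.
promptlayer.track.prompt(
    request_id=pl_request_id,
    prompt_name="my_template",  # hypothetical template registered in PromptLayer
    prompt_input_variables={"text": "I am a cat and I want"},
)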
https://langchain.readthedocs.io\en\latest\modules\chat\examples\promptlayer_chatopenai.html
4a2f756a5f60-0
.ipynb .pdf Streaming Streaming# This notebook goes over how to use streaming with a chat model. from langchain.chat_models import ChatOpenAI from langchain.schema import ( HumanMessage, ) from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0) resp = chat([HumanMessage(content="Write me a song about sparkling water.")]) Verse 1: Bubbles rising to the top A refreshing drink that never stops Clear and crisp, it's pure delight A taste that's sure to excite Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Verse 2: No sugar, no calories, just pure bliss A drink that's hard to resist It's the perfect way to quench my thirst A drink that always comes first Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Bridge: From the mountains to the sea Sparkling water, you're the key To a healthy life, a happy soul A drink that makes me feel whole Chorus: Sparkling water, oh so fine A drink that's always on my mind With every sip, I feel alive Sparkling water, you're my vibe Outro: Sparkling water, you're the one A drink that's always so much fun I'll never let you go, my friend Sparkling previous PromptLayer ChatOpenAI next Retrieval Question/Answering
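StreamingStdOutCallbackHandler simply writes each token to stdout as it arrives. If you need different behavior, you can subclass it and override on_llm_new_token; a minimal sketch (the token-collecting logic is illustrative, not part of LangChain):
class CollectingStreamHandler(StreamingStdOutCallbackHandler):
    # Collects streamed tokens in a list instead of printing them.
    def __init__(self):
        super().__init__()
        self.tokens = []
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per token while the chat model streams its response.
        self.tokens.append(token)

handler = CollectingStreamHandler()
chat = ChatOpenAI(streaming=True, callback_manager=CallbackManager([handler]), verbose=True, temperature=0)
chat([HumanMessage(content="Write me a haiku about sparkling water.")])
print("".join(handler.tokens))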
https://langchain.readthedocs.io\en\latest\modules\chat\examples\streaming.html
4a2f756a5f60-1
By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\chat\examples\streaming.html
7f5034026cfd-0
.ipynb .pdf Retrieval Question/Answering Retrieval Question/Answering# This example showcases using a chat model to do question answering over a vector database. This notebook is very similar to the example of using an LLM in the RetrievalQA. The only differences here are (1) using a ChatModel, and (2) passing in a ChatPromptTemplate (optimized for chat models). from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain.chains import RetrievalQA from langchain.document_loaders import TextLoader loader = TextLoader('../../state_of_the_union.txt') documents = loader.load() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_documents(texts, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. We can now set up the chat model and chat model specific prompt. from langchain.chat_models import ChatOpenAI from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) system_template="""Use the following pieces of context to answer the user's question. If you don't know the answer, just say that you don't know, don't try to make up an answer. ---------------- {context}""" messages = [ SystemMessagePromptTemplate.from_template(system_template), HumanMessagePromptTemplate.from_template("{question}") ]
https://langchain.readthedocs.io\en\latest\modules\chat\examples\vector_db_qa.html
7f5034026cfd-1
prompt = ChatPromptTemplate.from_messages(messages) chain_type_kwargs = {"prompt": prompt} qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs) query = "What did the president say about Ketanji Brown Jackson" qa.run(query) "The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court. He referred to her as one of our nation's top legal minds, a former federal public defender, a consensus builder, and from a family of public school educators and police officers. Since she's been nominated, she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." previous Streaming next Retrieval Question Answering with Sources By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\chat\examples\vector_db_qa.html
f50154cd7d27-0
.ipynb .pdf Retrieval Question Answering with Sources Retrieval Question Answering with Sources# This notebook goes over how to do question-answering with sources with a chat model over a vector database. It does this by using the RetrievalQAWithSourcesChain, which does the lookup of the documents from a vector database. This notebook is very similar to the example of using an LLM in the RetrievalQAWithSources. The only differences here are (1) using a ChatModel, and (2) passing in a ChatPromptTemplate (optimized for chat models). from langchain.embeddings.openai import OpenAIEmbeddings from langchain.embeddings.cohere import CohereEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch from langchain.vectorstores import Chroma with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_text(state_of_the_union) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": f"{i}-pl"} for i in range(len(texts))]) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. from langchain.chains import RetrievalQAWithSourcesChain We can now set up the chat model and chat model specific prompt from langchain.chat_models import ChatOpenAI from langchain.prompts.chat import ( ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate,
https://langchain.readthedocs.io\en\latest\modules\chat\examples\vector_db_qa_with_sources.html
f50154cd7d27-1
HumanMessagePromptTemplate, ) from langchain.schema import ( AIMessage, HumanMessage, SystemMessage ) system_template="""Use the following pieces of context to answer the user's question. If you don't know the answer, just say that you don't know, don't try to make up an answer. ALWAYS return a "SOURCES" part in your answer. The "SOURCES" part should be a reference to the source of the document from which you got your answer. Example of your response should be: ``` The answer is foo SOURCES: xyz ``` Begin! ---------------- {summaries}""" messages = [ SystemMessagePromptTemplate.from_template(system_template), HumanMessagePromptTemplate.from_template("{question}") ] prompt = ChatPromptTemplate.from_messages(messages) chain_type_kwargs = {"prompt": prompt} chain = RetrievalQAWithSourcesChain.from_chain_type( ChatOpenAI(temperature=0), chain_type="stuff", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs ) chain({"question": "What did the president say about Justice Breyer"}, return_only_outputs=True) {'answer': 'The President honored Justice Stephen Breyer, an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court, for his dedicated service to the country. \n', 'sources': '31-pl'} previous Retrieval Question/Answering next Agents By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\chat\examples\vector_db_qa_with_sources.html
2a7b2ac10468-0
.rst .pdf How To Guides How To Guides# There are a lot of different document loaders that LangChain supports. Below are how-to guides for working with them. File Loader: A walkthrough of how to use Unstructured to load files of arbitrary types (pdfs, txt, html, etc). Directory Loader: A walkthrough of how to use Unstructured to load files from a given directory. Notion: A walkthrough of how to load data for an arbitrary Notion DB. ReadTheDocs: A walkthrough of how to load data for documentation generated by ReadTheDocs. HTML: A walkthrough of how to load data from an HTML file. PDF: A walkthrough of how to load data from a PDF file. PowerPoint: A walkthrough of how to load data from a PowerPoint file. Email: A walkthrough of how to load data from an email (.eml) file. GoogleDrive: A walkthrough of how to load data from Google Drive. Obsidian: A walkthrough of how to load data from an Obsidian file dump. Roam: A walkthrough of how to load data from a Roam file export. EverNote: A walkthrough of how to load data from an EverNote (.enex) file. YouTube: A walkthrough of how to load the transcript from a YouTube video. Hacker News: A walkthrough of how to load a Hacker News page. GitBook: A walkthrough of how to load a GitBook page. s3 File: A walkthrough of how to load a file from s3. s3 Directory: A walkthrough of how to load all files in a directory from s3. GCS File: A walkthrough of how to load a file from Google Cloud Storage (GCS). GCS Directory: A walkthrough of how to load all files in a directory from Google Cloud Storage (GCS).
https://langchain.readthedocs.io\en\latest\modules\document_loaders\how_to_guides.html
2a7b2ac10468-1
Web Base: A walkthrough of how to load all text data from webpages. IMSDb: A walkthrough of how to load all text data from an IMSDb webpage. AZLyrics: A walkthrough of how to load all text data from an AZLyrics webpage. College Confidential: A walkthrough of how to load all text data from a College Confidential webpage. Gutenberg: A walkthrough of how to load data from a Gutenberg ebook text. Airbyte Json: A walkthrough of how to load data from a local Airbyte JSON file. CoNLL-U: A walkthrough of how to load data from a CoNLL-U file. iFixit: A walkthrough of how to search and load data like guides, technical Q&A’s, and device wikis from iFixit.com. Notebook: A walkthrough of how to load data from an .ipynb notebook. Copypaste: A walkthrough of how to load a document object from something you just want to copy and paste. CSV: A walkthrough of how to load data from a .csv file. Facebook Chat: A walkthrough of how to load data from a Facebook Chat json file. Image: A walkthrough of how to load images such as JPGs and PNGs into a document format that can be used downstream. Markdown: A walkthrough of how to load data from a markdown file. SRT: A walkthrough of how to load data from a subtitle (.srt) file. Telegram: A walkthrough of how to load data from a Telegram Chat json file. URL: A walkthrough of how to load HTML documents from a list of URLs into a document format that we can use downstream. Word Document: A walkthrough of how to load data from Microsoft Word files. Blackboard: A walkthrough of how to load data from a Blackboard course. All of these loaders share the same basic interface, sketched below.
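Despite the variety above, every document loader follows the same basic pattern: construct it with a source, call load(), and get back a list of Document objects. A minimal sketch of that shared interface (the file name here is hypothetical):
from langchain.document_loaders import TextLoader
# Any loader from the list above can be swapped in here; only the constructor arguments change.
loader = TextLoader("my_notes.txt")  # hypothetical local file
docs = loader.load()
print(len(docs), docs[0].page_content[:50], docs[0].metadata)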
https://langchain.readthedocs.io\en\latest\modules\document_loaders\how_to_guides.html
2a7b2ac10468-2
previous Key Concepts next CoNLL-U By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\document_loaders\how_to_guides.html
b14ac5711f62-0
.md .pdf Key Concepts Contents Document Loader Unstructured Key Concepts# Document# This class is a container for document information. It contains two parts: page_content: The content of the actual page itself. metadata: The metadata associated with the document. This can be things like the file path, the URL, etc. Loader# This base class is a way to load documents. It exposes a load method that returns Document objects. Unstructured# Unstructured is a Python package specifically focused on transformations from raw documents to text. previous Document Loaders next How To Guides Contents Document Loader Unstructured By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
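As a rough sketch of how a Document and a Loader fit together, here is a toy loader (this class does not ship with LangChain; it is only illustrative):
from typing import List
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

class GreetingLoader(BaseLoader):
    """Toy loader that wraps a hard-coded string in a Document."""
    def load(self) -> List[Document]:
        # page_content holds the text; metadata records where it came from.
        return [Document(page_content="Hello, world!", metadata={"source": "in-memory"})]

docs = GreetingLoader().load()
print(docs[0].page_content, docs[0].metadata)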
https://langchain.readthedocs.io\en\latest\modules\document_loaders\key_concepts.html
6f3418bc584d-0
.ipynb .pdf Airbyte JSON Airbyte JSON# This covers how to load any source from Airbyte into a local JSON file that can be read in as a document. Prereqs: Have Docker Desktop installed. Steps: Clone Airbyte from GitHub - git clone https://github.com/airbytehq/airbyte.git Switch into the Airbyte directory - cd airbyte Start Airbyte - docker compose up In your browser, just visit http://localhost:8000. You will be asked for a username and password. By default, that’s username airbyte and password password. Set up any source you wish. Set the destination as Local JSON, with a specified destination path - let’s say /json_data. Set up a manual sync. Run the connection! To see what files are created, you can navigate to: file:///tmp/airbyte_local Find your data and copy the path. That path should be saved in the file variable below. It should start with /tmp/airbyte_local from langchain.document_loaders import AirbyteJSONLoader !ls /tmp/airbyte_local/json_data/ _airbyte_raw_pokemon.jsonl loader = AirbyteJSONLoader('/tmp/airbyte_local/json_data/_airbyte_raw_pokemon.jsonl') data = loader.load() print(data[0].page_content[:500]) abilities: ability: name: blaze url: https://pokeapi.co/api/v2/ability/66/ is_hidden: False slot: 1 ability: name: solar-power url: https://pokeapi.co/api/v2/ability/94/ is_hidden: True slot: 3 base_experience: 267 forms: name: charizard url: https://pokeapi.co/api/v2/pokemon-form/6/ game_indices:
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\airbyte_json.html
6f3418bc584d-1
game_index: 180 version: name: red url: https://pokeapi.co/api/v2/version/1/ game_index: 180 version: name: blue url: https://pokeapi.co/api/v2/version/2/ game_index: 180 version: n previous CoNLL-U next AZLyrics By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\airbyte_json.html
39e1a6c29532-0
.ipynb .pdf AZLyrics AZLyrics# This covers how to load AZLyrics webpages into a document format that we can use downstream. from langchain.document_loaders import AZLyricsLoader loader = AZLyricsLoader("https://www.azlyrics.com/lyrics/mileycyrus/flowers.html") data = loader.load() data
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\azlyrics.html
39e1a6c29532-1
[Document(page_content="Miley Cyrus - Flowers Lyrics | AZLyrics.com\n\r\nWe were good, we were gold\nKinda dream that can't be sold\nWe were right till we weren't\nBuilt a home and watched it burn\n\nI didn't wanna leave you\nI didn't wanna lie\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours\nSay things you don't understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than you can\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby\n\nPaint my nails, cherry red\nMatch the roses that you left\nNo remorse, no regret\nI forgive every word you said\n\nI didn't wanna leave you, baby\nI didn't wanna fight\nStarted to cry but then remembered I\n\nI can buy myself flowers\nWrite my name in the sand\nTalk to myself for hours, yeah\nSay things you don't
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\azlyrics.html
39e1a6c29532-2
understand\nI can take myself dancing\nAnd I can hold my own hand\nYeah, I can love me better than\nYeah, I can love me better than you can, uh\n\nCan love me better\nI can love me better, baby\nCan love me better\nI can love me better, baby (Than you can)\nCan love me better\nI can love me better, baby\nCan love me better\nI\n", lookup_str='', metadata={'source':
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\azlyrics.html
39e1a6c29532-3
'https://www.azlyrics.com/lyrics/mileycyrus/flowers.html'}, lookup_index=0)]
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\azlyrics.html
39e1a6c29532-4
previous Airbyte JSON next Blackboard By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\azlyrics.html
8731153a60a3-0
.ipynb .pdf Blackboard Blackboard# This covers how to load data from a Blackboard Learn instance. from langchain.document_loaders import BlackboardLoader loader = BlackboardLoader( blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1", bbrouter="expires:12345...", load_all_recursively=True, ) documents = loader.load() previous AZLyrics next College Confidential By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\blackboard.html
4a2d064fe305-0
.ipynb .pdf College Confidential College Confidential# This covers how to load College Confidential webpages into a document format that we can use downstream. from langchain.document_loaders import CollegeConfidentialLoader loader = CollegeConfidentialLoader("https://www.collegeconfidential.com/colleges/brown-university/") data = loader.load() data
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-1
[Document(page_content='\n\n\n\n\n\n\n\nA68FEB02-9D19-447C-B8BC-818149FD6EAF\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Media (2)\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\nE45B8B13-33D4-450E-B7DB-F66EFE8F2097\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAbout Brown\n\n\n\n\n\n\nBrown University Overview\nBrown University is a private, nonprofit school in the urban setting of Providence, Rhode Island. Brown was founded in 1764 and the school currently enrolls around 10,696 students a year, including 7,349 undergraduates. Brown provides on-campus housing for students. Most students live in off campus housing.\n📆 Mark your calendar! January 5, 2023 is the final deadline to submit an application for the Fall 2023 semester. \nThere are many ways for students to get involved at Brown! \nLove music or
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-2
performing? Join a campus band, sing in a chorus, or perform with one of the school\'s theater groups.\nInterested in journalism or communications? Brown students can write for the campus newspaper, host a radio show or be a producer for the student-run television channel.\nInterested in joining a fraternity or sorority? Brown has fraternities and sororities.\nPlanning to play sports? Brown has many options for athletes. See them all and learn more about life at Brown on the Student Life page.\n\n\n\n2022 Brown Facts At-A-Glance\n\n\n\n\n\nAcademic Calendar\nOther\n\n\nOverall Acceptance Rate\n6%\n\n\nEarly Decision Acceptance Rate\n16%\n\n\nEarly Action Acceptance Rate\nEA not offered\n\n\nApplicants Submitting SAT scores\n51%\n\n\nTuition\n$62,680\n\n\nPercent of Need Met\n100%\n\n\nAverage First-Year Financial Aid Package\n$59,749\n\n\n\n\nIs Brown a Good School?\n\nDifferent people have different ideas about what makes a "good" school. Some factors that can help you
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-3
"good" school. Some factors that can help you determine what a good school for you might be include admissions criteria, acceptance rate, tuition costs, and more.\nLet\'s take a look at these factors to get a clearer sense of what Brown offers and if it could be the right college for you.\nBrown Acceptance Rate 2022\nIt is extremely difficult to get into Brown. Around 6% of applicants get into Brown each year. In 2022, just 2,568 out of the 46,568 students who applied were accepted.\nRetention and Graduation Rates at Brown\nRetention refers to the number of students that stay enrolled at a school over time. This is a way to get a sense of how satisfied students are with their school experience, and if they have the support necessary to succeed in college. \nApproximately 98% of first-year, full-time undergrads who start at Browncome back their sophomore year. 95% of Brown undergrads graduate within six years. The average six-year graduation rate for U.S. colleges and
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-4
universities is 61% for public schools, and 67% for private, non-profit schools.\nJob Outcomes for Brown Grads\nJob placement stats are a good resource for understanding the value of a degree from Brown by providing a look on how job placement has gone for other grads. \nCheck with Brown directly, for information on any information on starting salaries for recent grads.\nBrown\'s Endowment\nAn endowment is the total value of a school\'s investments, donations, and assets. Endowment is not necessarily an indicator of the quality of a school, but it can give you a sense of how much money a college can afford to invest in expanding programs, improving facilities, and support students. \nAs of 2022, the total market value of Brown University\'s endowment was $4.7 billion. The average college endowment was $905 million in 2021. The school spends $34,086 for each full-time student enrolled. \nTuition and Financial Aid at Brown\nTuition is another important factor
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-5
when choose a college. Some colleges may have high tuition, but do a better job at meeting students\' financial need.\nBrown meets 100% of the demonstrated financial need for undergraduates. The average financial aid package for a full-time, first-year student is around $59,749 a year. \nThe average student debt for graduates in the class of 2022 was around $24,102 per student, not including those with no debt. For context, compare this number with the average national debt, which is around $36,000 per borrower. \nThe 2023-2024 FAFSA Opened on October 1st, 2022\nSome financial aid is awarded on a first-come, first-served basis, so fill out the FAFSA as soon as you can. Visit the FAFSA website to apply for student aid. Remember, the first F in FAFSA stands for FREE! You should never have to pay to submit the Free Application for Federal Student Aid (FAFSA), so be very wary of anyone asking you
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-6
for money.\nLearn more about Tuition and Financial Aid at Brown.\nBased on this information, does Brown seem like a good fit? Remember, a school that is perfect for one person may be a terrible fit for someone else! So ask yourself: Is Brown a good school for you?\nIf Brown University seems like a school you want to apply to, click the heart button to save it to your college list.\n\nStill Exploring Schools?\nChoose one of the options below to learn more about Brown:\nAdmissions\nStudent Life\nAcademics\nTuition & Aid\nBrown Community Forums\nThen use the college admissions predictor to take a data science look at your chances of getting into some of the best colleges and universities in the U.S.\nWhere is Brown?\nBrown is located in the urban setting of Providence, Rhode Island, less than an hour from Boston. \nIf you would like to see Brown for yourself, plan a visit. The best way to reach campus is to take Interstate
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-7
95 to Providence, or book a flight to the nearest airport, T.F. Green.\nYou can also take a virtual campus tour to get a sense of what Brown and Providence are like without leaving home.\nConsidering Going to School in Rhode Island?\nSee a full list of colleges in Rhode Island and save your favorites to your college list.\n\n\n\nCollege Info\n\n\n\n\n\n\n\n\n\n Providence, RI 02912 \n\n\n\n Campus Setting: Urban \n\n\n\n\n\n\n\n (401) 863-2378 \n\n Website \n\n
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-8
\n\n Virtual Tour\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nBrown Application Deadline\n\n\n\nFirst-Year Applications are Due\n\nJan 5\n\nTransfer Applications are Due\n\nMar 1\n\n\n\n \n The deadline for Fall first-year applications to Brown is \n Jan 5. \n \n \n \n\n \n The deadline for Fall transfer applications to Brown is \n Mar 1. \n \n \n \n\n \n Check the school website \n for more information about deadlines for specific programs
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-9
or special admissions programs\n \n \n\n\n\n\n\n\nBrown ACT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nACT Range\n\n\n \n 33 - 35\n \n \n\n\n\nEstimated Chance of Acceptance by ACT Score\n\n\nACT Score\nEstimated Chance\n\n\n35 and Above\nGood\n\n\n33 to 35\nAvg\n\n\n33 and Less\nLow\n\n\n\n\n\n\nStand out on your college application\n\n• Qualify for scholarships\n• Most students who retest improve their score\n\nSponsored by ACT\n\n\n Take the Next ACT Test \n\n\n\n\n\nBrown SAT Scores\n\n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nComposite SAT Range\n\n\n \n 720 -
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-10
770\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nMath SAT Range\n\n\n \n Not available\n \n \n\n\n\nic_reflect\n\n\n\n\n\n\n\n\nReading SAT Range\n\n\n \n 740 - 800\n \n \n\n\n\n\n\n\n Brown Tuition & Fees \n\n\n\nTuition & Fees\n\n\n\n $82,286\n \nIn State\n\n\n\n\n
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-11
$82,286\n \nOut-of-State\n\n\n\n\n\n\n\nCost Breakdown\n\n\nIn State\n\n\nOut-of-State\n\n\n\n\nState Tuition\n\n\n\n $62,680\n \n\n\n\n $62,680\n \n\n\n\n\nFees\n\n\n\n $2,466\n \n\n\n\n $2,466\n \n\n\n\n\nHousing\n\n\n\n $15,840\n
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-12
\n\n\n\n $15,840\n \n\n\n\n\nBooks\n\n\n\n $1,300\n \n\n\n\n $1,300\n \n\n\n\n\n\n Total (Before Financial Aid):\n \n\n\n\n $82,286\n \n\n\n\n $82,286\n
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-13
\n\n\n\n\n\n\n\n\n\n\n\nStudent Life\n\n Wondering what life at Brown is like? There are approximately \n 10,696 students enrolled at \n Brown, \n including 7,349 undergraduate students and \n 3,347 graduate students.\n 96% percent of students attend school \n full-time, \n 6% percent are from RI and \n 94% percent of students are from other states.\n \n\n\n\n\n\n None\n \n\n\n\n\nUndergraduate Enrollment\n\n\n\n 96%\n \nFull Time\n\n\n\n\n 4%\n
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-14
\nPart Time\n\n\n\n\n\n\n\n 94%\n \n\n\n\n\nResidency\n\n\n\n 6%\n \nIn State\n\n\n\n\n 94%\n \nOut-of-State\n\n\n\n\n\n\n\n Data Source: IPEDs and Peterson\'s Databases © 2022 Peterson\'s LLC All rights reserved\n \n', lookup_str='', metadata={'source': 'https://www.collegeconfidential.com/colleges/brown-university/'}, lookup_index=0)]
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
4a2d064fe305-15
previous Blackboard next Copy Paste By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\college_confidential.html
d309b72cf9b9-0
.ipynb .pdf CoNLL-U CoNLL-U# This is an example of how to load a file in CoNLL-U format. The whole file is treated as one document. The example data (conllu.conllu) is based on one of the standard UD/CoNLL-U examples. from langchain.document_loaders import CoNLLULoader loader = CoNLLULoader("example_data/conllu.conllu") document = loader.load() document previous How To Guides next Airbyte JSON By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\CoNLL-U.html
1445866a2dc5-0
.ipynb .pdf Copy Paste Contents Metadata Copy Paste# This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don’t even need to use a DocumentLoader, but rather can just construct the Document directly. from langchain.docstore.document import Document text = "..... put the text you copy pasted here......" doc = Document(page_content=text) Metadata# If you want to add metadata about where you got this piece of text, you can easily do so with the metadata key. metadata = {"source": "internet", "date": "Friday"} doc = Document(page_content=text, metadata=metadata) previous College Confidential next CSV Loader Contents Metadata By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\copypaste.html
0cd69a9f5ed3-0
.ipynb .pdf CSV Loader Contents CSV Loader Customizing the csv parsing and loading Specify a column to be used to identify the document source CSV Loader# Load CSV files with a single row per document. from langchain.document_loaders.csv_loader import CSVLoader loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv') data = loader.load() print(data)
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-1
[Document(page_content='Team: Nationals\n"Payroll (millions)": 81.34\n"Wins": 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='Team: Reds\n"Payroll (millions)": 82.20\n"Wins": 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='Team: Yankees\n"Payroll (millions)": 197.96\n"Wins": 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='Team: Giants\n"Payroll (millions)": 117.62\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='Team: Braves\n"Payroll (millions)": 83.31\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='Team: Athletics\n"Payroll (millions)": 55.37\n"Wins": 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 5}, lookup_index=0), Document(page_content='Team: Rangers\n"Payroll
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-2
(millions)": 120.51\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='Team: Orioles\n"Payroll (millions)": 81.43\n"Wins": 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='Team: Rays\n"Payroll (millions)": 64.17\n"Wins": 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='Team: Angels\n"Payroll (millions)": 154.49\n"Wins": 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='Team: Tigers\n"Payroll (millions)": 132.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='Team: Cardinals\n"Payroll (millions)": 110.30\n"Wins": 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='Team:
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-3
Dodgers\n"Payroll (millions)": 95.14\n"Wins": 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='Team: White Sox\n"Payroll (millions)": 96.92\n"Wins": 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='Team: Brewers\n"Payroll (millions)": 97.65\n"Wins": 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='Team: Phillies\n"Payroll (millions)": 174.54\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='Team: Diamondbacks\n"Payroll (millions)": 74.28\n"Wins": 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='Team: Pirates\n"Payroll (millions)": 63.43\n"Wins": 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0),
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-4
Document(page_content='Team: Padres\n"Payroll (millions)": 55.24\n"Wins": 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='Team: Mariners\n"Payroll (millions)": 81.97\n"Wins": 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='Team: Mets\n"Payroll (millions)": 93.35\n"Wins": 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='Team: Blue Jays\n"Payroll (millions)": 75.48\n"Wins": 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='Team: Royals\n"Payroll (millions)": 60.91\n"Wins": 72', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 22}, lookup_index=0), Document(page_content='Team: Marlins\n"Payroll (millions)": 118.07\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 23}, lookup_index=0),
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-5
Document(page_content='Team: Red Sox\n"Payroll (millions)": 173.18\n"Wins": 69', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 24}, lookup_index=0), Document(page_content='Team: Indians\n"Payroll (millions)": 78.43\n"Wins": 68', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 25}, lookup_index=0), Document(page_content='Team: Twins\n"Payroll (millions)": 94.08\n"Wins": 66', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 26}, lookup_index=0), Document(page_content='Team: Rockies\n"Payroll (millions)": 78.06\n"Wins": 64', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 27}, lookup_index=0), Document(page_content='Team: Cubs\n"Payroll (millions)": 88.19\n"Wins": 61', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 28}, lookup_index=0), Document(page_content='Team: Astros\n"Payroll (millions)": 60.65\n"Wins": 55', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 29}, lookup_index=0)]
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-6
Customizing the csv parsing and loading# See the csv module documentation for more information on what csv args are supported. loader = CSVLoader(file_path='./example_data/mlb_teams_2012.csv', csv_args={ 'delimiter': ',', 'quotechar': '"', 'fieldnames': ['MLB Team', 'Payroll in millions', 'Wins'] }) data = loader.load() print(data)
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-7
[Document(page_content='MLB Team: Team\nPayroll in millions: "Payroll (millions)"\nWins: "Wins"', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 0}, lookup_index=0), Document(page_content='MLB Team: Nationals\nPayroll in millions: 81.34\nWins: 98', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 1}, lookup_index=0), Document(page_content='MLB Team: Reds\nPayroll in millions: 82.20\nWins: 97', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 2}, lookup_index=0), Document(page_content='MLB Team: Yankees\nPayroll in millions: 197.96\nWins: 95', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 3}, lookup_index=0), Document(page_content='MLB Team: Giants\nPayroll in millions: 117.62\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 4}, lookup_index=0), Document(page_content='MLB Team: Braves\nPayroll in millions: 83.31\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row':
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-8
5}, lookup_index=0), Document(page_content='MLB Team: Athletics\nPayroll in millions: 55.37\nWins: 94', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 6}, lookup_index=0), Document(page_content='MLB Team: Rangers\nPayroll in millions: 120.51\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 7}, lookup_index=0), Document(page_content='MLB Team: Orioles\nPayroll in millions: 81.43\nWins: 93', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 8}, lookup_index=0), Document(page_content='MLB Team: Rays\nPayroll in millions: 64.17\nWins: 90', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 9}, lookup_index=0), Document(page_content='MLB Team: Angels\nPayroll in millions: 154.49\nWins: 89', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 10}, lookup_index=0), Document(page_content='MLB Team: Tigers\nPayroll in millions: 132.30\nWins: 88', lookup_str='',
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-9
metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 11}, lookup_index=0), Document(page_content='MLB Team: Cardinals\nPayroll in millions: 110.30\nWins: 88', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 12}, lookup_index=0), Document(page_content='MLB Team: Dodgers\nPayroll in millions: 95.14\nWins: 86', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 13}, lookup_index=0), Document(page_content='MLB Team: White Sox\nPayroll in millions: 96.92\nWins: 85', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 14}, lookup_index=0), Document(page_content='MLB Team: Brewers\nPayroll in millions: 97.65\nWins: 83', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 15}, lookup_index=0), Document(page_content='MLB Team: Phillies\nPayroll in millions: 174.54\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 16}, lookup_index=0), Document(page_content='MLB Team:
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html
0cd69a9f5ed3-10
Diamondbacks\nPayroll in millions: 74.28\nWins: 81', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 17}, lookup_index=0), Document(page_content='MLB Team: Pirates\nPayroll in millions: 63.43\nWins: 79', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 18}, lookup_index=0), Document(page_content='MLB Team: Padres\nPayroll in millions: 55.24\nWins: 76', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 19}, lookup_index=0), Document(page_content='MLB Team: Mariners\nPayroll in millions: 81.97\nWins: 75', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 20}, lookup_index=0), Document(page_content='MLB Team: Mets\nPayroll in millions: 93.35\nWins: 74', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv', 'row': 21}, lookup_index=0), Document(page_content='MLB Team: Blue Jays\nPayroll in millions: 75.48\nWins: 73', lookup_str='', metadata={'source': './example_data/mlb_teams_2012.csv',
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\csv.html