PDF#

This covers how to load PDFs into a document format that we can use downstream.

Using PyPDF#

Load a PDF using pypdf into an array of documents, where each document contains the page content and metadata with the page number.

from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("example_data/layout-parser-paper.pdf")
pages = loader.load_and_split()
pages[0]
Document(page_content='LayoutParser : A Uni\x0ced Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1( \x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1Allen Institute for AI\[email protected]\n2Brown University\nruochen [email protected]\n3Harvard University\nfmelissadell,jacob carlson [email protected]\n4University of Washington\[email protected]\n5University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model con\x0cgurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\ne\x0borts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser , an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io .\nKeywords: Document Image Analysis ·Deep Learning ·Layout Analysis\n·Character Recognition ·Open Source library ·Toolkit.\n1 Introduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classi\x0ccation [ 11,arXiv:2103.15348v2 [cs.CV] 21 Jun 2021', lookup_str='', metadata={'source': 'example_data/layout-parser-paper.pdf', 'page': '0'}, lookup_index=0)
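A note on the two entry points, as a hedged sketch: load() returns one Document per page, while load_and_split() additionally runs a text splitter over the pages and accepts an optional splitter instance via the base loader interface. The chunk sizes below are illustrative assumptions, not values from this page.

from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = PyPDFLoader("example_data/layout-parser-paper.pdf")
# load() gives one Document per page, with the page number in metadata
pages = loader.load()
# load_and_split() additionally splits the pages; the splitter and its
# sizes here are hypothetical choices for illustration
chunks = loader.load_and_split(
    RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
)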
An advantage of this approach is that documents can be retrieved with page numbers.

from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings

faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())
docs = faiss_index.similarity_search("How will the community be engaged?", k=2)
for doc in docs:
    print(str(doc.metadata["page"]) + ":", doc.page_content)

9: 10 Z. Shen et al. Fig. 4: Illustration of (a) the original historical Japanese document with layout detection results and (b) a recreated version of the document image that achieves much better character recognition recall. The reorganization algorithm rearranges the tokens based on the their detected bounding boxes given a maximum allowed height. 4LayoutParser Community Platform Another focus of LayoutParser is promoting the reusability of layout detection models and full digitization pipelines. Similar to many existing deep learning libraries, LayoutParser comes with a community model hub for distributing layout models. End-users can upload their self-trained models to the model hub, and these models can be loaded into a similar interface as the currently available LayoutParser pre-trained models. For example, the model trained on the News Navigator dataset [17] has been incorporated in the model hub. Beyond DL models, LayoutParser also promotes the sharing of entire doc- ument digitization pipelines. For example, sometimes the pipeline requires the combination of multiple DL models to achieve better accuracy. Currently, pipelines are mainly described in academic papers and implementations are often not pub- licly available. To this end, the LayoutParser community platform also enables the sharing of layout pipelines to promote the discussion and reuse of techniques. For each shared pipeline, it has a dedicated project page, with links to the source code, documentation, and an outline of the approaches. A discussion panel is provided for exchanging ideas. Combined with the core LayoutParser library, users can easily build reusable components based on the shared pipelines and apply them to solve their unique problems. 5 Use Cases The core objective of LayoutParser is to make it easier to create both large-scale and light-weight document digitization pipelines. Large-scale document processing 3: 4 Z. Shen et al. Efficient Data AnnotationC u s t o m i z e d M o d e l T r a i n i n gModel Cust omizationDI A Model HubDI A Pipeline SharingCommunity PlatformLa y out Detection ModelsDocument Images T h e C o r e L a y o u t P a r s e r L i b r a r yOCR ModuleSt or age & VisualizationLa y out Data Structur e Fig. 1: The overall architecture of LayoutParser . For an input document image, the core LayoutParser library provides a set of o -the-shelf tools for layout detection, OCR, visualization, and storage, backed by a carefully designed layout data structure. LayoutParser also supports high level customization via ecient layout annotation and model training functions. These improve model accuracy on the target samples. The community platform enables the easy sharing of DIA models and whole digitization pipelines to promote reusability and reproducibility. A collection of detailed documentation, tutorials and exemplar projects make LayoutParser easy to learn and use. AllenNLP [ 8] and transformers [ 34] have provided the community with complete DL-based support for developing and deploying models for general computer vision and natural language processing problems. LayoutParser , on the other hand, specializes speci cally in DIA tasks. LayoutParser is also equipped with a community platform inspired by established model hubs such as Torch Hub [23] andTensorFlow Hub [1]. It enables the sharing of pretrained models as well as full document processing pipelines that are unique to DIA tasks. There have been a variety of document data collections to facilitate the development of DL models. Some examples include PRImA [ 3](magazine layouts), PubLayNet [ 38](academic paper layouts), Table Bank [ 18](tables in academic papers), Newspaper Navigator Dataset [ 16,17](newspaper gure layouts) and HJDataset [31](historical Japanese document layouts). A spectrum of models trained on these datasets are currently available in the LayoutParser model zoo to support di erent use cases. 3 The Core LayoutParser Library At the core of LayoutParser is an o -the-shelf toolkit that streamlines DL- based document image analysis. Five components support a simple interface with comprehensive functionalities: 1) The layout detection models enable using pre-trained or self-trained DL models for layout detection with just four lines of code. 2) The detected layout information is stored in carefully engineered

Using Unstructured#

from langchain.document_loaders import UnstructuredPDFLoader

loader = UnstructuredPDFLoader("example_data/layout-parser-paper.pdf")
data = loader.load()

Retain Elements#

Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".

loader = UnstructuredPDFLoader("example_data/layout-parser-paper.pdf", mode="elements")
data = loader.load()
data[0]
Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\[email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n5 University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)
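In mode="elements", each Document carries the element's type in its metadata (the Word document and fast-strategy examples later on this page show a 'category' key), so you can filter by element type. A minimal sketch under that assumption:

# keep only the elements Unstructured classified as titles; the 'category'
# key is assumed from the other element-mode outputs shown on this page
titles = [d for d in data if d.metadata.get("category") == "Title"]
for t in titles:
    print(t.page_content)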
Fetching remote PDFs using Unstructured#

This covers how to load online PDFs into a document format that we can use downstream. This can be used for various online PDF sites such as https://open.umn.edu/opentextbooks/textbooks/ and https://arxiv.org/archive/

Note: all other PDF loaders can also be used to fetch remote PDFs, but OnlinePDFLoader is a legacy function, and works specifically with UnstructuredPDFLoader.

from langchain.document_loaders import OnlinePDFLoader

loader = OnlinePDFLoader("https://arxiv.org/pdf/2302.03803.pdf")
data = loader.load()
print(data)
[Document(page_content='A WEAK ( k, k ) -LEFSCHETZ THEOREM FOR PROJECTIVE TORIC ORBIFOLDS\n\nWilliam D. Montoya\n\nInstituto de Matem´atica, Estat´ıstica e Computa¸c˜ao Cient´ıfica,\n\nIn [3] we proved that, under suitable conditions, on a very general codimension s quasi- smooth intersection subvariety X in a projective toric orbifold P d Σ with d + s = 2 ( k + 1 ) the Hodge conjecture holds, that is, every ( p, p ) -cohomology class, under the Poincar´e duality is a rational linear combination of fundamental classes of algebraic subvarieties of X . The proof of the above-mentioned result relies, for p ≠ d + 1 − s , on a Lefschetz\n\nKeywords: (1,1)- Lefschetz theorem, Hodge conjecture, toric varieties, complete intersection Email: [email protected]\n\ntheorem ([7]) and the Hard Lefschetz theorem for projective orbifolds ([11]). When p = d + 1 − s the proof relies on the Cayley trick, a trick which associates to X a quasi-smooth hypersurface Y in a projective vector bundle, and the Cayley Proposition (4.3) which gives an isomorphism of some primitive cohomologies (4.2) of X and Y . The Cayley trick, following the philosophy of Mavlyutov in [7], reduces results known for quasi-smooth hypersurfaces to quasi-smooth intersection subvarieties. The idea in this paper goes the other way around, we translate some results for quasi-smooth intersection subvarieties to\n\nAcknowledgement. I thank Prof. Ugo Bruzzo and Tiago Fonseca for useful discus- sions. I also acknowledge support from FAPESP postdoctoral grant No. 2019/23499-7.\n\nLet M be a free abelian group of rank d , let N = Hom ( M, Z ) , and N R = N ⊗ Z R .\n\nif there exist k linearly independent primitive elements e\n\n, . . . , e k ∈ N such that σ = { µ\n\ne\n\n+ ⋯ + µ k e k } . • The generators e i are integral if for every i and any nonnegative rational number µ the product µe i is in N only if µ is an integer. • Given two rational simplicial cones σ , σ ′ one says that σ ′ is a face of σ ( σ ′ < σ ) if the set of integral generators of σ ′ is a subset of the set of integral generators of σ . • A finite set Σ = { σ\n\n, . . . , σ t } of rational simplicial cones is called a rational simplicial complete d -dimensional fan if:\n\nall faces of cones in Σ are in Σ ;\n\nif σ, σ ′ ∈ Σ then σ ∩ σ ′ < σ and σ ∩ σ ′ < σ ′ ;\n\nN R = σ\n\n∪ ⋅ ⋅ ⋅ ∪ σ t .\n\nA rational simplicial complete d -dimensional fan Σ defines a d -dimensional toric variety P d Σ having only orbifold singularities which we assume to be projective. Moreover, T ∶ = N ⊗ Z C ∗ ≃ ( C ∗ ) d is the torus action on P d Σ . We denote by Σ ( i ) the i -dimensional cones\n\nFor a cone σ ∈ Σ, ˆ σ is the set of 1-dimensional cone in Σ that are not contained in σ\n\nand x ˆ σ ∶ = ∏ ρ ∈ ˆ σ x ρ is the associated monomial in S .\n\nDefinition 2.2. The irrelevant ideal of P d Σ is the monomial ideal B Σ ∶ =< x ˆ σ ∣ σ ∈ Σ > and the zero locus Z ( Σ ) ∶ = V ( B Σ ) in the affine space A d ∶ = Spec ( S ) is the irrelevant locus.\n\nProposition 2.3 (Theorem 5.1.11 [5]) . The toric variety P d Σ is a categorical quotient A d ∖ Z ( Σ ) by the group Hom ( Cl ( Σ ) , C ∗ ) and the group action is induced by the Cl ( Σ ) - grading of S .\n\nNow we give a brief introduction to complex orbifolds and we mention the needed theorems for the next section. Namely: de Rham theorem and Dolbeault theorem for complex orbifolds.\n\nDefinition 2.4. A complex orbifold of complex dimension d is a singular complex space whose singularities are locally isomorphic to quotient singularities C d / G , for finite sub- groups G ⊂ Gl ( d, C ) .\n\nDefinition 2.5. A differential form on a complex orbifold Z is defined locally at z ∈ Z as a G -invariant differential form on C d where G ⊂ Gl ( d, C ) and Z is locally isomorphic to d\n\nRoughly speaking the local geometry of orbifolds reduces to local G -invariant geometry.\n\nWe have a complex of differential forms ( A ● ( Z ) , d ) and a double complex ( A ● , ● ( Z ) , ∂, ¯ ∂ ) of bigraded differential forms which define the de Rham and the Dolbeault cohomology groups (for a fixed p ∈ N ) respectively:\n\n(1,1)-Lefschetz theorem for projective toric orbifolds\n\nDefinition 3.1. A subvariety X ⊂ P d Σ is quasi-smooth if V ( I X ) ⊂ A #Σ ( 1 ) is smooth outside\n\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub-\n\nExample 3.2 . Quasi-smooth hypersurfaces or more generally quasi-smooth intersection sub- varieties are quasi-smooth subvarieties (see [2] or [7] for more details).\n\nRemark 3.3 . Quasi-smooth subvarieties are suborbifolds of P d Σ in the sense of Satake in [8]. Intuitively speaking they are subvarieties whose only singularities come from the ambient\n\nProof. From the exponential short exact sequence\n\nwe have a long exact sequence in cohomology\n\nH 1 (O ∗ X ) → H 2 ( X, Z ) → H 2 (O X ) ≃ H 0 , 2 ( X )\n\nwhere the last isomorphisms is due to Steenbrink in [9]. Now, it is enough to prove the commutativity of the next diagram\n\nwhere the last isomorphisms is due to Steenbrink in [9]. Now,\n\nH 2 ( X, Z ) / / H 2 ( X, O X ) ≃ Dolbeault H 2 ( X, C ) deRham ≃ H 2 dR ( X, C ) / / H 0 , 2 ¯ ∂ ( X )\n\nof the proof follows as the ( 1 , 1 ) -Lefschetz theorem in [6].\n\nRemark 3.5 . For k = 1 and P d Σ as the projective space, we recover the classical ( 1 , 1 ) - Lefschetz theorem.\n\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we\n\nBy the Hard Lefschetz Theorem for projective orbifolds (see [11] for details) we get an isomorphism of cohomologies :\n\ngiven by the Lefschetz morphism and since it is a morphism of Hodge structures, we have:\n\nH 1 , 1 ( X, Q ) ≃ H dim X − 1 , dim X − 1 ( X, Q )\n\nCorollary 3.6. If the dimension of X is 1 , 2 or 3 . The Hodge conjecture holds on X\n\nProof. If the dim C X = 1 the result is clear by the Hard Lefschetz theorem for projective orbifolds. The dimension 2 and 3 cases are covered by Theorem 3.5 and the Hard Lefschetz.\n\nCayley trick and Cayley proposition\n\nThe Cayley trick is a way to associate to a quasi-smooth intersection subvariety a quasi- smooth hypersurface. Let L 1 , . . . , L s be line bundles on P d Σ and let π ∶ P ( E ) → P d Σ be the projective space bundle associated to the vector bundle E = L 1 ⊕ ⋯ ⊕ L s . It is known that P ( E ) is a ( d + s − 1 ) -dimensional simplicial toric variety whose fan depends on the degrees of the line bundles and the fan Σ. Furthermore, if the Cox ring, without considering the grading, of P d Σ is C [ x 1 , . . . , x m ] then the Cox ring of P ( E ) is\n\nMoreover for X a quasi-smooth intersection subvariety cut off by f 1 , . . . , f s with deg ( f i ) = [ L i ] we relate the hypersurface Y cut off by F = y 1 f 1 + ⋅ ⋅ ⋅ + y s f s which turns out to be quasi-smooth. For more details see Section 2 in [7].\n\nWe will denote P ( E ) as P d + s − 1 Σ ,X to keep track of its relation with X and P d Σ .\n\nThe following is a key remark.\n\nRemark 4.1 . There is a morphism ι ∶ X → Y ⊂ P d + s − 1 Σ ,X . Moreover every point z ∶ = ( x, y ) ∈ Y with y ≠ 0 has a preimage. Hence for any subvariety W = V ( I W ) ⊂ X ⊂ P d Σ there exists W ′ ⊂ Y ⊂ P d + s − 1 Σ ,X such that π ( W ′ ) = W , i.e., W ′ = { z = ( x, y ) ∣ x ∈ W } .\n\nFor X ⊂ P d Σ a quasi-smooth intersection variety the morphism in cohomology induced by the inclusion i ∗ ∶ H d − s ( P d Σ , C ) → H d − s ( X, C ) is injective by Proposition 1.4 in [7].\n\nDefinition 4.2. The primitive cohomology of H d − s prim ( X ) is the quotient H d − s ( X, C )/ i ∗ ( H d − s ( P d Σ , C )) and H d − s prim ( X, Q ) with rational coefficients.\n\nH d − s ( P d Σ , C ) and H d − s ( X, C ) have pure Hodge structures, and the morphism i ∗ is com- patible with them, so that H d − s prim ( X ) gets a pure Hodge structure.\n\nThe next Proposition is the Cayley proposition.\n\nProposition 4.3. [Proposition 2.3 in [3] ] Let X = X 1 ∩⋅ ⋅ ⋅∩ X s be a quasi-smooth intersec- tion subvariety in P d Σ cut off by homogeneous polynomials f 1 . . . f s . Then for p ≠ d + s − 1 2 , d + s − 3 2\n\nRemark 4.5 . The above isomorphisms are also true with rational coefficients since H ● ( X, C ) = H ● ( X, Q ) ⊗ Q C . See the beginning of Section 7.1 in [10] for more details.\n\nTheorem 5.1. Let Y = { F = y 1 f 1 + ⋯ + y k f k = 0 } ⊂ P 2 k + 1 Σ ,X be the quasi-smooth hypersurface associated to the quasi-smooth intersection surface X = X f 1 ∩ ⋅ ⋅ ⋅ ∩ X f k ⊂ P k + 2 Σ . Then on Y the Hodge conjecture holds.\n\nthe Hodge conjecture holds.\n\nProof. If H k,k prim ( X, Q ) = 0 we are done. So let us assume H k,k prim ( X, Q ) ≠ 0. By the Cayley proposition H k,k prim ( Y, Q ) ≃ H 1 , 1 prim ( X, Q ) and by the ( 1 , 1 ) -Lefschetz theorem for projective\n\ntoric orbifolds there is a non-zero algebraic basis λ C 1 , . . . , λ C n with rational coefficients of H 1 , 1 prim ( X, Q ) , that is, there are n ∶ = h 1 , 1 prim ( X, Q ) algebraic curves C 1 , . . . , C n in X such that under the Poincar´e duality the class in homology [ C i ] goes to λ C i , [ C i ] ↦ λ C i . Recall that the Cox ring of P k + 2 is contained in the Cox ring of P 2 k + 1 Σ ,X without considering the grading. Considering the grading we have that if α ∈ Cl ( P k + 2 Σ ) then ( α, 0 ) ∈ Cl ( P 2 k + 1 Σ ,X ) . So the polynomials defining C i ⊂ P k + 2 Σ can be interpreted in P 2 k + 1 X, Σ but with different degree. Moreover, by Remark 4.1 each C i is contained in Y = { F = y 1 f 1 + ⋯ + y k f k = 0 } and\n\nfurthermore it has codimension k .\n\nClaim: { C i } ni = 1 is a basis of prim ( ) . It is enough to prove that λ C i is different from zero in H k,k prim ( Y, Q ) or equivalently that the cohomology classes { λ C i } ni = 1 do not come from the ambient space. By contradiction, let us assume that there exists a j and C ⊂ P 2 k + 1 Σ ,X such that λ C ∈ H k,k ( P 2 k + 1 Σ ,X , Q ) with i ∗ ( λ C ) = λ C j or in terms of homology there exists a ( k + 2 ) -dimensional algebraic subvariety V ⊂ P 2 k + 1 Σ ,X such that V ∩ Y = C j so they are equal as a homology class of P 2 k + 1 Σ ,X ,i.e., [ V ∩ Y ] = [ C j ] . It is easy to check that π ( V ) ∩ X = C j as a subvariety of P k + 2 Σ where π ∶ ( x, y ) ↦ x . Hence [ π ( V ) ∩ X ] = [ C j ] which is equivalent to say that λ C j comes from P k + 2 Σ which contradicts the choice of [ C j ] .\n\nRemark 5.2 . Into the proof of the previous theorem, the key fact was that on X the Hodge conjecture holds and we translate it to Y by contradiction. So, using an analogous argument we have:\n\nargument we have:\n\nProposition 5.3. Let Y = { F = y 1 f s +⋯+ y s f s = 0 } ⊂ P 2 k + 1 Σ ,X be the quasi-smooth hypersurface associated to a quasi-smooth intersection subvariety X = X f 1 ∩ ⋅ ⋅ ⋅ ∩ X f s ⊂ P d Σ such that d + s = 2 ( k + 1 ) . If the Hodge conjecture holds on X then it holds as well on Y .\n\nCorollary 5.4. If the dimension of Y is 2 s − 1 , 2 s or 2 s + 1 then the Hodge conjecture holds on Y .\n\nProof. By Proposition 5.3 and Corollary 3.6.\n\n[\n\n] Angella, D. Cohomologies of certain orbifolds. Journal of Geometry and Physics\n\n(\n\n),\n\n–\n\n[\n\n] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal\n\n,\n\n(Aug\n\n). [\n\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\n\n). [\n\n] Caramello Jr, F. C. Introduction to orbifolds. a\n\niv:\n\nv\n\n(\n\n). [\n\n] Cox, D., Little, J., and Schenck, H. Toric varieties, vol.\n\nAmerican Math- ematical Soc.,\n\n[\n\n] Griffiths, P., and Harris, J. Principles of Algebraic Geometry. John Wiley & Sons, Ltd,\n\n[\n\n] Mavlyutov, A. R. Cohomology of complete intersections in toric varieties. Pub- lished in Pacific J. of Math.\n\nNo.\n\n(\n\n),\n\n–\n\n[\n\n] Satake, I. On a Generalization of the Notion of Manifold. Proceedings of the National Academy of Sciences of the United States of America\n\n,\n\n(\n\n),\n\n–\n\n[\n\n] Steenbrink, J. H. M. Intersection form for quasi-homogeneous singularities. Com- positio Mathematica\n\n,\n\n(\n\n),\n\n–\n\n[\n\n] Voisin, C. Hodge Theory and Complex Algebraic Geometry I, vol.\n\nof Cambridge Studies in Advanced Mathematics . Cambridge University Press,\n\n[\n\n] Wang, Z. Z., and Zaffran, D. A remark on the Hard Lefschetz theorem for K¨ahler orbifolds. Proceedings of the American Mathematical Society\n\n,\n\n(Aug\n\n).\n\n[2] Batyrev, V. V., and Cox, D. A. On the Hodge structure of projective hypersur- faces in toric varieties. Duke Mathematical Journal 75, 2 (Aug 1994).\n\n[\n\n] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (\n\n).\n\n[3] Bruzzo, U., and Montoya, W. On the Hodge conjecture for quasi-smooth in- tersections in toric varieties. S˜ao Paulo J. Math. Sci. Special Section: Geometry in Algebra and Algebra in Geometry (2021).\n\nA. R. Cohomology of complete intersections in toric varieties. Pub-', lookup_str='', metadata={'source': '/var/folders/ph/hhm7_zyx4l13k3v8z02dwp1w0000gn/T/tmpgq0ckaja/online_file.pdf'}, lookup_index=0)]
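The note above says any of the PDF loaders can handle remote files; one dependency-light way to do that is to download the PDF yourself and point a loader at the local copy. A minimal sketch (the local file name is an arbitrary choice):

import requests
from langchain.document_loaders import PyPDFLoader

# fetch the remote PDF to disk, then use any local-file loader on it
url = "https://arxiv.org/pdf/2302.03803.pdf"
with open("online_file.pdf", "wb") as f:
    f.write(requests.get(url).content)
pages = PyPDFLoader("online_file.pdf").load_and_split()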
Using PDFMiner#

from langchain.document_loaders import PDFMinerLoader

loader = PDFMinerLoader("example_data/layout-parser-paper.pdf")
data = loader.load()

Using PyMuPDF#

This is the fastest of the PDF parsing options. It returns one document per page and contains detailed metadata about the PDF and its pages.

from langchain.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")
data = loader.load()
data[0]
Document(page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\[email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n5 University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021\n', lookup_str='', metadata={'file_path': 'example_data/layout-parser-paper.pdf', 'page_number': 1, 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': 'LaTeX with hyperref', 'producer': 'pdfTeX-1.40.21', 'creationDate': 'D:20210622012710Z', 'modDate': 'D:20210622012710Z', 'trapped': '', 'encryption': None}, lookup_index=0)
Additionally, you can pass any of the options from the PyMuPDF documentation as keyword arguments in the load call, and they will be passed along to the get_text() call.
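For example, a minimal sketch (assuming a PyMuPDF version whose get_text() supports the sort option; the option comes from the PyMuPDF documentation, not from this page):

from langchain.document_loaders import PyMuPDFLoader

loader = PyMuPDFLoader("example_data/layout-parser-paper.pdf")
# "sort=True" is forwarded to page.get_text(), asking PyMuPDF to emit
# text blocks in reading order rather than in PDF-internal order
data = loader.load(sort=True)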
PowerPoint#

This covers how to load PowerPoint documents into a document format that we can use downstream.

from langchain.document_loaders import UnstructuredPowerPointLoader

loader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx")
data = loader.load()
data

[Document(page_content='Adding a Bullet Slide\n\nFind the bullet slide layout\n\nUse _TextFrame.text for first bullet\n\nUse _TextFrame.add_paragraph() for subsequent bullets\n\nHere is a lot of text!\n\nHere is some text in a text box!', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)]

Retain Elements#

Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".

loader = UnstructuredPowerPointLoader("example_data/fake-power-point.pptx", mode="elements")
data = loader.load()
data[0]

Document(page_content='Adding a Bullet Slide', lookup_str='', metadata={'source': 'example_data/fake-power-point.pptx'}, lookup_index=0)
ReadTheDocs Documentation#

This notebook covers how to load content from HTML that was generated as part of a Read-The-Docs build. This assumes that the HTML has already been scraped into a folder, which can be done by uncommenting and running the following command:

#!wget -r -A.html -P rtdocs https://langchain.readthedocs.io/en/latest/

from langchain.document_loaders import ReadTheDocsLoader

loader = ReadTheDocsLoader("rtdocs")
docs = loader.load()
Roam#

This notebook covers how to load documents from a Roam database. This takes a lot of inspiration from the example repo here.

🧑 Instructions for ingesting your own dataset#

Export your dataset from Roam Research. You can do this by clicking on the three dots in the upper right hand corner and then clicking Export. When exporting, make sure to select the Markdown & CSV format option. This will produce a .zip file in your Downloads folder. Move the .zip file into this repository.

Run the following command to unzip the zip file (replace the Export... with your own file name as needed).

unzip Roam-Export-1675782732639.zip -d Roam_DB

from langchain.document_loaders import RoamLoader

loader = RoamLoader("Roam_DB")
docs = loader.load()
s3 Directory#

This covers how to load document objects from an s3 directory object.

#!pip install boto3
from langchain.document_loaders import S3DirectoryLoader

loader = S3DirectoryLoader("testing-hwc")
loader.load()

[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpaa9xl6ch/fake.docx'}, lookup_index=0)]

Specifying a prefix#

You can also specify a prefix for more fine-grained control over which files to load.

loader = S3DirectoryLoader("testing-hwc", prefix="fake")
loader.load()

[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpujbkzf_l/fake.docx'}, lookup_index=0)]
s3 File#

This covers how to load document objects from an s3 file object.

#!pip install boto3
from langchain.document_loaders import S3FileLoader

loader = S3FileLoader("testing-hwc", "fake.docx")
loader.load()

[Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': '/var/folders/y6/8_bzdg295ld6s1_97_12m4lr0000gn/T/tmpxvave6wl/fake.docx'}, lookup_index=0)]
Subtitle Files#

How to load data from subtitle (.srt) files.

from langchain.document_loaders import SRTLoader

loader = SRTLoader("example_data/Star_Wars_The_Clone_Wars_S06E07_Crisis_at_the_Heart.srt")
docs = loader.load()
docs[0].page_content[:100]

'<i>Corruption discovered\nat the core of the Banking Clan!</i> <i>Reunited, Rush Clovis\nand Senator A'
Telegram#

This notebook covers how to load data from Telegram into a format that can be ingested into LangChain.

from langchain.document_loaders import TelegramChatLoader

loader = TelegramChatLoader("example_data/telegram.json")
loader.load()

[Document(page_content="Henry on 2020-01-01T00:00:02: It's 2020...\n\nHenry on 2020-01-01T00:00:04: Fireworks!\n\nGrace 🧤 🍒 on 2020-01-01T00:00:05: You're a minute late!\n\n", lookup_str='', metadata={'source': 'example_data/telegram.json'}, lookup_index=0)]
Unstructured File Loader#

This notebook covers how to use Unstructured to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.

# # Install package
!pip install "unstructured[local-inference]"
!pip install "detectron2@git+https://github.com/facebookresearch/[email protected]#egg=detectron2"
!pip install layoutparser[layoutmodels,tesseract]

# # Install other dependencies
# # https://github.com/Unstructured-IO/unstructured/blob/main/docs/source/installing.rst
# !brew install libmagic
# !brew install poppler
# !brew install tesseract
# # If parsing xml / html documents:
# !brew install libxml2
# !brew install libxslt

# import nltk
# nltk.download('punkt')

from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("../../state_of_the_union.txt")
docs = loader.load()
docs[0].page_content[:400]

'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.\n\nLast year COVID-19 kept us apart. This year we are finally together again.\n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.\n\nWith a duty to one another to the American people to the Constit'

Retain Elements#

Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements".
loader = UnstructuredFileLoader("../../state_of_the_union.txt", mode="elements")
docs = loader.load()
docs[:5]

[Document(page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='Last year COVID-19 kept us apart. This year we are finally together again.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='With a duty to one another to the American people to the Constitution.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),
 Document(page_content='And with an unwavering resolve that freedom will always triumph over tyranny.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]

Define a Partitioning Strategy#

The Unstructured document loaders allow users to pass in a strategy parameter that tells Unstructured how to partition the document. Currently supported strategies are "hi_res" (the default) and "fast". Hi-res partitioning strategies are more accurate but take longer to process; fast strategies partition the document more quickly at the cost of accuracy. Not all document types have separate hi-res and fast partitioning strategies; for those document types, the strategy kwarg is ignored. In some cases, the hi-res strategy will fall back to fast if a dependency is missing (i.e. a model for document partitioning). You can see how to apply a strategy to an UnstructuredFileLoader below.
from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("layout-parser-paper-fast.pdf", strategy="fast", mode="elements")
docs = loader.load()
docs[:5]

[Document(page_content='1', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
 Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
 Document(page_content='0', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
 Document(page_content='2', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'UncategorizedText'}, lookup_index=0),
 Document(page_content='n', lookup_str='', metadata={'source': 'layout-parser-paper-fast.pdf', 'filename': 'layout-parser-paper-fast.pdf', 'page_number': 1, 'category': 'Title'}, lookup_index=0)]

PDF Example#

Processing PDF documents works exactly the same way. Unstructured detects the file type and extracts the same types of elements.

!wget https://raw.githubusercontent.com/Unstructured-IO/unstructured/main/example-docs/layout-parser-paper.pdf -P "../../"

loader = UnstructuredFileLoader("../../layout-parser-paper.pdf", mode="elements")
docs = loader.load()
docs[:5]
[Document(page_content='LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
 Document(page_content='Zejiang Shen 1 ( (ea)\n ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and Weining Li 5', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
 Document(page_content='Allen Institute for AI [email protected]', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
 Document(page_content='Brown University ruochen [email protected]', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0),
 Document(page_content='Harvard University { melissadell,jacob carlson } @fas.harvard.edu', lookup_str='', metadata={'source': '../../layout-parser-paper.pdf'}, lookup_index=0)]
URL#

This covers how to load HTML documents from a list of URLs into a document format that we can use downstream.

from langchain.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023",
]

loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
Web Base#

This covers how to load all text from webpages into a document format that we can use downstream. For more custom logic for loading webpages, look at some child class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoader.

from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://www.espn.com/")
data = loader.load()
data
[Document(page_content="\n\n\n\n\n\n\n\n\nESPN - Serving Sports Fans. Anytime. Anywhere.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Skip to main content\n \n\n Skip to navigation\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n<\n\n>\n\n\n\n\n\n\n\n\n\nMenuESPN\n\n\nSearch\n\n\n\nscores\n\n\n\nNFLNBANHLNCAAMNCAAWSoccer…MLBNCAAFGolfTennisSports BettingBoxingCaribbean SeriesCFLNCAACricketF1HorseLLWSMMANASCARNBA G LeagueOlympic SportsRacingRN BBRN FBRugbyWNBAWWEX GamesXFLMore ESPNFantasyListenWatchESPN+\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\nSUBSCRIBE NOW\n\n\n\n\n\nUFC 284: Makhachev vs. Volkanovski (ESPN+ PPV)\n\n\n\n\n\n\n\nMen's College Hoops: Select Games\n\n\n\n\n\n\n\nWomen's College Hoops: Select Games\n\n\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nGerman Cup: Round of 16\n\n\n\n\n\n\n\n30 For 30: Bullies Of Baltimore\n\n\n\n\n\n\n\nMatt Miller's Two-Round NFL Mock Draft\n\n\nQuick Links\n\n\n\n\nSuper Bowl LVII\n\n\n\n\n\n\n\nSuper Bowl Betting\n\n\n\n\n\n\n\nNBA Trade Machine\n\n\n\n\n\n\n\nNBA All-Star Game\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nGames For Me\n\n\n\n\n\n\nFavorites\n\n\n\n\n\n\n Manage Favorites\n \n\n\n\nCustomize ESPNSign UpLog InESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nAP Photo/Mark J. Terrilllive\n\n\n\nChristian Wood elevates for the big-time stuffChristian Wood elevates for the big-time stuff15m0:29\n\n\nKyrie Irving nails the treyKyrie Irving nails the trey37m0:17\n\n\nDwight Powell rises up for putback dunkDwight Powell throws down the putback dunk for the Mavericks.38m0:16\n\n\nKyrie sinks his first basket with the MavericksKyrie Irving drains the jump shot early vs. the Clippers for his first points with the Mavericks.39m0:17\n\n\nReggie Bullock pulls up for wide open 3Reggie Bullock is left wide open for the 3-pointer early vs. the Clippers.46m0:21\n\n\n\nTOP HEADLINESSources: Lakers get PG Russell in 3-team tradeTrail Blazers shipping Hart to Knicks, sources sayUConn loses two straight for first time in 30 yearsNFL's Goodell on officiating: Never been betterNFLPA's Smith: Get rid of 'intrusive' NFL combineAlex Morgan: 'Bizarre' for Saudis to sponsor WWCBills' Hamlin makes appearance to receive awardWWE Hall of Famer Lawler recovering from strokeWhich NFL team trades up to No. 1?NBA TRADE DEADLINE3 P.M. ET ON THURSDAYTrade grades: What to make of the three-team deal involving Russell Westbrook and D'Angelo RussellESPN NBA Insider Kevin Pelton is handing out grades for the biggest moves.2hLayne Murdoch Jr./NBAE via Getty ImagesNBA trade tracker: Grades, details for every deal for the 2022-23 seasonWhich players are finding new homes and which teams are making trades during the free-agency frenzy?59mESPN.comNBA trade deadline: Latest buzz and newsNBA SCOREBOARDWEDNESDAY'S GAMESSee AllCLEAR THE RUNWAYJalen Green soars for lefty alley-oop1h0:19Jarrett Allen skies to drop the hammer2h0:16Once the undisputed greatest, Joe Montana is still working things out15hWright ThompsonSUPER BOWL LVII6:30 P.M. ET ON SUNDAYBarbershop tales, a fistfight and brotherly love: Untold stories that explain the Kelce brothersJason and Travis Kelce will become the first brothers to face each other in a Super Bowl. Here are untold stories from people who know them best.16hTim McManus, +2 MoreEd Zurga/AP PhotoNFL experts predict Chiefs-Eagles: Our Super Bowl winner picksNFL writers, analysts and reporters take their best guesses on the Super Bowl LVII matchup.17hESPN staffBeware of Philadelphia's Rocky statue curseMadden sim predicts Eagles to win Super BowlTOP 10 TEAMS FALLCOLLEGE HOOPSUConn loses two straight for first time since 1993, falling to Marquette57m1:58Vandy drains 3 at buzzer to knock off Tennessee, fans storm the court1h0:54COLLEGE HOOPS SCORESMEN'S AND WOMEN'S TOP-25 GAMESMen's college hoops scoreboardWomen's college basketball scoresPROJECTING THE BUBBLEMEN'S COLLEGE HOOPSBubble Watch: Current situation? North Carolina has some work to doThe countdown to Selection Sunday on March 12 has begun. We will track which teams are locks and which ones can play their way into or out of the 2023 NCAA men's basketball tournament.6hJohn GasawayAP Photo/Matt Rourke Top HeadlinesSources: Lakers get PG Russell in 3-team tradeTrail Blazers shipping Hart to Knicks, sources sayUConn loses two straight for first time in 30 yearsNFL's Goodell on officiating: Never been betterNFLPA's Smith: Get rid of 'intrusive' NFL combineAlex Morgan: 'Bizarre' for Saudis to sponsor WWCBills' Hamlin makes appearance to receive awardWWE Hall of Famer Lawler recovering from strokeWhich NFL team trades up to No. 1?Favorites FantasyManage FavoritesFantasy HomeCustomize ESPNSign UpLog InICYMI1:54Orlovsky roasts Stephen A. for his top-5 players in the Super BowlDan Orlovsky lets Stephen A. Smith hear it after he lists his top five players in Super Bowl LVII. Best of ESPN+Michael Hickey/Getty ImagesBubble Watch 2023: Brace yourself for NCAA tournament dramaThe countdown to Selection Sunday on March 12 has begun. We will track which teams are locks and which ones can play their way into or out of the 2023 NCAA men's basketball tournament.Adam Pantozzi/NBAE via Getty ImagesLeBron's journey to the NBA scoring record in shot chartsTake a look at how LeBron James' on-court performance has changed during his march to 38,388 points.Illustration by ESPNRe-drafting first two rounds of 2022 NFL class: All 64 picksWe gave every NFL team a do-over for last year's draft, re-drafting the top 64 picks. Here's who rises and falls with the benefit of hindsight.AP Photo/David DermerWay-too-early 2023 MLB starting rotation rankingsThe Yanks' and Mets' rotations take two of the top three spots on our pre-spring training list. Where did they land -- and did another team sneak past one of 'em? Trending NowAP Photo/Jae C. HongStars pay tribute to LeBron James for securing NBA's all-time points recordLeBron James has passed Kareem Abdul-Jabbar for No. 1 on the all-time NBA scoring list, and other stars paid tribute to him on social media.Getty ImagesFans prepare for Rihanna's 2023 Super Bowl halftime showAs Rihanna prepares to make her highly anticipated return, supporters of all 32 teams are paying homage to the icon -- as only tormented NFL fans can.Photo by Cooper Neill/Getty ImagesWhen is the 2023 Super Bowl? Date, time for Chiefs vs. EaglesWe have you covered with seeding, scores and the full schedule for this season's playoffs -- and how to watch Super Bowl LVII.James Drake/Sports Illustrated via Getty ImagesNFL history: Super Bowl winners and resultsFrom the Packers' 1967 win over the Chiefs to the Rams' victory over the Bengals in 2022, we've got results for every Super Bowl.China Wong/NHLI via Getty ImagesBoston Bruins record tracker: Wins, points, milestonesThe B's are on pace for NHL records in wins and points, along with some individual superlatives as well. Follow along here with our updated tracker. Sports BettingPhoto by Kevin C. Cox/Getty ImagesSuper Bowl LVII betting: Everything you need to know to bet Eagles-ChiefsHere's your one-stop shop for all the information you need to help make your picks on the Philadelphia Eagles vs. Kansas City Chiefs in Super Bowl LVII. How to Watch on ESPN+(AP Photo/Koji Sasahara, File)How to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN, ESPN+Here's everything you need to know about how to watch the PGA Tour, Masters, PGA Championship and FedEx Cup playoffs on ESPN and ESPN+. \n\nESPN+\n\n\n\n\nUFC 284: Makhachev vs. Volkanovski (ESPN+ PPV)\n\n\n\n\n\n\n\nMen's College Hoops: Select Games\n\n\n\n\n\n\n\nWomen's College Hoops: Select Games\n\n\n\n\n\n\n\nNHL: Select Games\n\n\n\n\n\n\n\nGerman Cup: Round of 16\n\n\n\n\n\n\n\n30 For 30: Bullies Of Baltimore\n\n\n\n\n\n\n\nMatt Miller's Two-Round NFL Mock Draft\n\n\nQuick Links\n\n\n\n\nSuper Bowl LVII\n\n\n\n\n\n\n\nSuper Bowl Betting\n\n\n\n\n\n\n\nNBA Trade Machine\n\n\n\n\n\n\n\nNBA All-Star Game\n\n\n\n\n\n\n\nFantasy Baseball: Sign Up\n\n\n\n\n\n\n\nHow To Watch NHL Games\n\n\n\n\n\n\n\nGames For Me\n\n\nESPN Sites\n\n\n\n\nESPN Deportes\n\n\n\n\n\n\n\nAndscape\n\n\n\n\n\n\n\nespnW\n\n\n\n\n\n\n\nESPNFC\n\n\n\n\n\n\n\nX Games\n\n\n\n\n\n\n\nSEC Network\n\n\nESPN Apps\n\n\n\n\nESPN\n\n\n\n\n\n\n\nESPN Fantasy\n\n\nFollow ESPN\n\n\n\n\nFacebook\n\n\n\n\n\n\n\nTwitter\n\n\n\n\n\n\n\nInstagram\n\n\n\n\n\n\n\nSnapchat\n\n\n\n\n\n\n\nYouTube\n\n\n\n\n\n\n\nThe ESPN Daily Podcast\n\n\nTerms of UsePrivacy PolicyYour US State Privacy RightsChildren's Online Privacy PolicyInterest-Based AdsAbout Nielsen MeasurementDo Not Sell or Share My Personal InformationContact UsDisney Ad Sales SiteWork for ESPNCopyright: © ESPN Enterprises, Inc. All rights reserved.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", lookup_str='', metadata={'source': 'https://www.espn.com/'}, lookup_index=0)]
24d78ee9eefd-12
""" # Use this piece of code for testing new custom BeautifulSoup parsers import requests from bs4 import BeautifulSoup html_doc = requests.get("{INSERT_NEW_URL_HERE}") soup = BeautifulSoup(html_doc.text, 'html.parser') # Beautiful soup logic to be exported to langchain.document_loaders.webpage.py # Example: transcript = soup.select_one("td[class='scrtext']").text # BS4 documentation can be found here: https://www.crummy.com/software/BeautifulSoup/bs4/doc/ """; previous URL next Word Documents By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\web_base.html
7d64eb57ae08-0
.ipynb .pdf Word Documents Contents Retain Elements Word Documents# This covers how to load Word documents into a document format that we can use downstream. from langchain.document_loaders import UnstructuredWordDocumentLoader loader = UnstructuredWordDocumentLoader("example_data/fake.docx") data = loader.load() data [Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx'}, lookup_index=0)] Retain Elements# Under the hood, Unstructured creates different “elements” for different chunks of text. By default we combine those together, but you can easily keep that separation by specifying mode="elements". loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements") data = loader.load() data[0] Document(page_content='Lorem ipsum dolor sit amet.', lookup_str='', metadata={'source': 'fake.docx', 'filename': 'fake.docx', 'category': 'Title'}, lookup_index=0) previous Web Base next YouTube Contents Retain Elements By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
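A small follow-on sketch: with mode="elements", each element carries a category in its metadata (the "NarrativeText" value below is an assumption about Unstructured's element taxonomy), so you can filter out titles, headers, and other non-body elements.

from langchain.document_loaders import UnstructuredWordDocumentLoader

loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements")
data = loader.load()
# Keep only narrative body text; drop titles, headers, and other elements.
body_elements = [d for d in data if d.metadata.get("category") == "NarrativeText"]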
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\word_document.html
56bdfb8a6886-0
.ipynb .pdf YouTube Contents Add video info YouTube loader from Google Cloud Prerequisites 🧑 Instructions for ingesting your YouTube data YouTube# How to load documents from YouTube transcripts. from langchain.document_loaders import YoutubeLoader # !pip install youtube-transcript-api loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True) loader.load() Add video info# # ! pip install pytube loader = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg", add_video_info=True) loader.load() YouTube loader from Google Cloud# Prerequisites# Create a Google Cloud project or use an existing project Enable the YouTube API Authorize credentials for a desktop app pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib youtube-transcript-api 🧑 Instructions for ingesting your YouTube data# By default, the GoogleApiClient expects the credentials.json file to be ~/.credentials/credentials.json, but this is configurable using the credentials_path keyword argument. Same thing with token.json. Note that token.json will be created automatically the first time you use the loader. GoogleApiYoutubeLoader can load from a list of YouTube video ids or a channel name; you can obtain a video id from the video's URL. Note that depending on your setup, the service_account_path may need to be set. See here for more details. from langchain.document_loaders import GoogleApiClient, GoogleApiYoutubeLoader # Init the GoogleApiClient from pathlib import Path google_api_client = GoogleApiClient(credentials_path=Path("your_path_creds.json")) # Use a Channel
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\youtube.html
56bdfb8a6886-1
# Use a Channel youtube_loader_channel = GoogleApiYoutubeLoader(google_api_client=google_api_client, channel_name="Reducible", captions_language="en") # Use YouTube ids youtube_loader_ids = GoogleApiYoutubeLoader(google_api_client=google_api_client, video_ids=["TrdevFK_am4"], add_video_info=True) # returns a list of Documents youtube_loader_channel.load() previous Word Documents next Utils Contents Add video info YouTube loader from Google Cloud Prerequisites 🧑 Instructions for ingesting your YouTube data By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
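As a quick sketch of what to do with a loaded transcript (reusing the video URL from the first example), you can split it into chunks for downstream embedding and indexing:

from langchain.document_loaders import YoutubeLoader
from langchain.text_splitter import CharacterTextSplitter

docs = YoutubeLoader.from_youtube_url("https://www.youtube.com/watch?v=QsYGlZkevEg").load()
# Split the transcript into ~1000-character chunks for embedding/indexing.
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
chunks = splitter.split_documents(docs)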
https://langchain.readthedocs.io\en\latest\modules\document_loaders\examples\youtube.html
1a7c2441a9b9-0
.md .pdf CombineDocuments Chains Contents Stuffing Map Reduce Refine Map-Rerank CombineDocuments Chains# CombineDocuments chains are useful when you need to run a language model over multiple documents. Common use cases for this include question answering, question answering with sources, summarization, and more. For more information on specific use cases as well as different methods for fetching these documents, please see this overview. This documentation now picks up from after you’ve fetched your documents - now what? How do you pass them to the language model in a format it can understand? There are a few different methods, or chains, for doing so. LangChain supports four of the more common ones - and we are actively looking to include more, so if you have any ideas please reach out! Note that there is not one best method - the decision of which one to use is often very context specific. In order from simplest to most complex: Stuffing# Stuffing is the simplest method, whereby you simply stuff all the related data into the prompt as context to pass to the language model. This is implemented in LangChain as the StuffDocumentsChain. Pros: Only makes a single call to the LLM. When generating text, the LLM has access to all the data at once. Cons: Most LLMs have a context length, and for large documents (or many documents) this will not work as it will result in a prompt larger than the context length. The main downside of this method is that it only works on smaller pieces of data. Once you are working with many pieces of data, this approach is no longer feasible. The next two approaches are designed to help deal with that. Map Reduce# This method involves running an initial prompt on each chunk of data (for summarization tasks, this
https://langchain.readthedocs.io\en\latest\modules\indexes\combine_docs.html
1a7c2441a9b9-1
This method involves running an initial prompt on each chunk of data (for summarization tasks, this could be a summary of that chunk; for question-answering tasks, it could be an answer based solely on that chunk). Then a different prompt is run to combine all the initial outputs. This is implemented in LangChain as the MapReduceDocumentsChain. Pros: Can scale to larger documents (and more documents) than StuffDocumentsChain. The calls to the LLM on individual documents are independent and can therefore be parallelized. Cons: Requires many more calls to the LLM than StuffDocumentsChain. Loses some information during the final combined call. Refine# This method involves running an initial prompt on the first chunk of data, generating some output. For the remaining documents, that output is passed in, along with the next document, asking the LLM to refine the output based on the new document. Pros: Can pull in more relevant context, and may be less lossy than MapReduceDocumentsChain. Cons: Requires many more calls to the LLM than StuffDocumentsChain. The calls are also NOT independent, meaning they cannot be parallelized like MapReduceDocumentsChain. There are also potential dependencies on the ordering of the documents. Map-Rerank# This method involves running an initial prompt on each chunk of data that not only tries to complete the task but also gives a score for how certain it is in its answer. The responses are then ranked according to this score, and the highest-scoring response is returned. Pros: Similar pros to MapReduceDocumentsChain. Requires fewer calls than MapReduceDocumentsChain. Cons: Cannot combine information between documents. This means it is most useful when you expect there to be a single simple answer in a single document. Contents Stuffing Map Reduce Refine Map-Rerank By Harrison Chase
https://langchain.readthedocs.io\en\latest\modules\indexes\combine_docs.html
1a7c2441a9b9-2
Stuffing Map Reduce Refine Map-Rerank By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
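To make the trade-offs concrete, here is a minimal sketch (requires an OpenAI API key) that selects each strategy through the chain_type argument of load_qa_chain, the same mechanism the question answering guides use; the four string values mirror the method names above.

from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain

llm = OpenAI(temperature=0)
# Each chain_type value selects one of the strategies described above.
stuff_chain = load_qa_chain(llm, chain_type="stuff")
map_reduce_chain = load_qa_chain(llm, chain_type="map_reduce")
refine_chain = load_qa_chain(llm, chain_type="refine")
map_rerank_chain = load_qa_chain(llm, chain_type="map_rerank")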
https://langchain.readthedocs.io\en\latest\modules\indexes\combine_docs.html
e1a73e077dec-0
.ipynb .pdf Getting Started Contents One Line Index Creation Walkthrough Getting Started# LangChain primarily focuses on constructing indexes with the goal of using them as a Retriever. In order to best understand what this means, it’s worth highlighting what the base Retriever interface is. The BaseRetriever class in LangChain is as follows:
from abc import ABC, abstractmethod
from typing import List
from langchain.schema import Document

class BaseRetriever(ABC):
    @abstractmethod
    def get_relevant_documents(self, query: str) -> List[Document]:
        """Get texts relevant for a query.

        Args:
            query: string to find relevant texts for

        Returns:
            List of relevant documents
        """
It’s that simple! The get_relevant_documents method can be implemented however you see fit. Of course, we also help construct what we think are useful Retrievers. The main type of Retriever that we focus on is a Vectorstore retriever. We will focus on that for the rest of this guide. In order to understand what a vectorstore retriever is, it’s important to understand what a Vectorstore is. So let’s look at that. By default, LangChain uses Chroma as the vectorstore to index and search embeddings. To walk through this tutorial, we’ll first need to install chromadb. pip install chromadb This example showcases question answering over documents. We have chosen this as the example for getting started because it nicely combines a lot of different elements (Text splitters, embeddings, vectorstores) and then also shows how to use them in a chain. Question answering over documents consists of four steps: Create an index
https://langchain.readthedocs.io\en\latest\modules\indexes\getting_started.html
e1a73e077dec-1
Create a Retriever from that index Create a question answering chain Ask questions! Each of the steps has multiple sub-steps and potential configurations. In this notebook we will primarily focus on (1). We will start by showing the one-liner for doing so, but then break down what is actually going on. First, let’s import some common classes we’ll use no matter what. from langchain.chains import RetrievalQA from langchain.llms import OpenAI Next in the generic setup, let’s specify the document loader we want to use. You can download the state_of_the_union.txt file here. from langchain.document_loaders import TextLoader loader = TextLoader('../state_of_the_union.txt') One Line Index Creation# To get started as quickly as possible, we can use the VectorstoreIndexCreator. from langchain.indexes import VectorstoreIndexCreator index = VectorstoreIndexCreator().from_loaders([loader]) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. Now that the index is created, we can use it to ask questions of the data! Note that under the hood this is actually doing a few steps as well, which we will cover later in this guide. query = "What did the president say about Ketanji Brown Jackson" index.query(query) " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." query = "What did the president say about Ketanji Brown Jackson" index.query_with_sources(query)
https://langchain.readthedocs.io\en\latest\modules\indexes\getting_started.html
e1a73e077dec-2
index.query_with_sources(query) {'question': 'What did the president say about Ketanji Brown Jackson', 'answer': " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, one of the nation's top legal minds, to continue Justice Breyer's legacy of excellence, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\n", 'sources': '../state_of_the_union.txt'} What is returned from the VectorstoreIndexCreator is a VectorStoreIndexWrapper, which provides the nice query and query_with_sources functionality. If we just want to access the vectorstore directly, we can also do that. index.vectorstore <langchain.vectorstores.chroma.Chroma at 0x119aa5940> If we then want to access the VectorstoreRetriever, we can do that with: index.vectorstore.as_retriever() VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x119aa5940>, search_kwargs={}) Walkthrough# Okay, so what’s actually going on? How is this index getting created? A lot of the magic is being hidden in this VectorstoreIndexCreator. What is this doing? There are three main steps going on after the documents are loaded: Splitting documents into chunks Creating embeddings for each document Storing documents and embeddings in a vectorstore Let’s walk through this in code. documents = loader.load() Next, we will split the documents into chunks. from langchain.text_splitter import CharacterTextSplitter text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_documents(documents) We will then select which embeddings we want to use.
https://langchain.readthedocs.io\en\latest\modules\indexes\getting_started.html
e1a73e077dec-3
We will then select which embeddings we want to use. from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() We now create the vectorstore to use as the index. from langchain.vectorstores import Chroma db = Chroma.from_documents(texts, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. So that’s creating the index. Then, we expose this index in a retriever interface. retriever = db.as_retriever() Then, as before, we create a chain and use it to answer questions! qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=retriever) query = "What did the president say about Ketanji Brown Jackson" qa.run(query) " The President said that Judge Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He said she is a consensus builder and has received a broad range of support from organizations such as the Fraternal Order of Police and former judges appointed by Democrats and Republicans." VectorstoreIndexCreator is just a wrapper around all this logic. It is configurable in the text splitter it uses, the embeddings it uses, and the vectorstore it uses. For example, you can configure it as below: index_creator = VectorstoreIndexCreator( vectorstore_cls=Chroma, embedding=OpenAIEmbeddings(), text_splitter=CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) )
https://langchain.readthedocs.io\en\latest\modules\indexes\getting_started.html
e1a73e077dec-4
) Hopefully this highlights what is going on under the hood of VectorstoreIndexCreator. While we think it’s important to have a simple way to create indexes, we also think it’s important to understand what’s going on under the hood. previous Indexes next Key Concepts Contents One Line Index Creation Walkthrough By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
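Because the Retriever interface shown at the top of this guide is so small, it is also easy to roll your own. A toy sketch (the KeywordRetriever class is hypothetical, not part of LangChain) that implements get_relevant_documents with naive substring matching:

from typing import List
from langchain.schema import Document

class KeywordRetriever:
    """Hypothetical retriever: return documents whose text contains the query."""

    def __init__(self, docs: List[Document]):
        self.docs = docs

    def get_relevant_documents(self, query: str) -> List[Document]:
        q = query.lower()
        return [doc for doc in self.docs if q in doc.page_content.lower()]

retriever = KeywordRetriever([Document(page_content="Ketanji Brown Jackson was nominated.")])
retriever.get_relevant_documents("Ketanji")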
https://langchain.readthedocs.io\en\latest\modules\indexes\getting_started.html
4f70c0079d02-0
.rst .pdf How To Guides Contents Utils Vectorstores Retrievers Chains How To Guides# Utils# There are a lot of different utilities that LangChain provides integrations for. These guides go over how to use them. The utilities here are all utilities that make it easier to work with documents. Text Splitters: A walkthrough of how to split large documents up into smaller, more manageable pieces of text. VectorStores: A walkthrough of the vectorstore abstraction that LangChain supports. Embeddings: A walkthrough of embedding functionalities, and different types of embeddings, that LangChain supports. HyDE: How to use Hypothetical Document Embeddings, a novel way of constructing embeddings for document retrieval systems. Vectorstores# Vectorstores are one of the most important components of building indexes. In the below guides, we cover different types of vectorstores and how to use them. Chroma: A walkthrough of how to use the Chroma vectorstore wrapper. AtlasDB: A walkthrough of how to use the AtlasDB vectorstore and visualizer wrapper. DeepLake: A walkthrough of how to use the Deep Lake data lake wrapper. FAISS: A walkthrough of how to use the FAISS vectorstore wrapper. Elastic Search: A walkthrough of how to use the ElasticSearch wrapper. Milvus: A walkthrough of how to use the Milvus vectorstore wrapper. Open Search: A walkthrough of how to use the OpenSearch wrapper. Pinecone: A walkthrough of how to use the Pinecone vectorstore wrapper. Qdrant: A walkthrough of how to use the Qdrant vectorstore wrapper. Weaviate: A walkthrough of how to use the Weaviate vectorstore wrapper. PGVector: A walkthrough of how to use the PGVector (Postgres Vector DB) vectorstore wrapper.
https://langchain.readthedocs.io\en\latest\modules\indexes\how_to_guides.html
4f70c0079d02-1
Retrievers# The retriever interface is a generic interface that makes it easy to combine documents with language models. This interface exposes a get_relevant_documents method which takes in a query (a string) and returns a list of documents. Vectorstore Retriever: A walkthrough of how to use a VectorStore as a Retriever. ChatGPT Plugin Retriever: A walkthrough of how to use the ChatGPT Plugin Retriever within the LangChain framework. Chains# The examples here are all end-to-end chains that use indexes or utils covered above. Question Answering: A walkthrough of how to use LangChain for question answering over specific documents. Question Answering with Sources: A walkthrough of how to use LangChain for question answering (with sources) over specific documents. Summarization: A walkthrough of how to use LangChain for summarization over specific documents. Vector DB Text Generation: A walkthrough of how to use LangChain for text generation over a vector database. Vector DB Question Answering: A walkthrough of how to use LangChain for question answering over a vector database. Vector DB Question Answering with Sources: A walkthrough of how to use LangChain for question answering (with sources) over a vector database. Graph Question Answering: A walkthrough of how to use LangChain for question answering over a graph database. Chat Vector DB: A walkthrough of how to use LangChain as a chatbot over a vector database. Analyze Document: A walkthrough of how to use LangChain to analyze long documents. previous Key Concepts next Embeddings Contents Utils Vectorstores Retrievers Chains By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
https://langchain.readthedocs.io\en\latest\modules\indexes\how_to_guides.html
1b40566c856c-0
.md .pdf Key Concepts Contents Text Splitter Embeddings Vectorstores CombineDocuments Chains Key Concepts# Text Splitter# This class is responsible for splitting long pieces of text into smaller components. It contains different ways for splitting text (on characters, using spaCy, etc.) as well as different ways for measuring length (token based, character based, etc.). Embeddings# These classes are very similar to the LLM classes in that they are wrappers around models, but rather than returning a string they return an embedding (a list of floats). These are particularly useful when implementing semantic search functionality. They expose separate methods for embedding queries versus embedding documents. Vectorstores# These are datastores that store embeddings of documents in vector form. They expose a method for passing in a string and finding similar documents. CombineDocuments Chains# These are a subset of chains designed to work with documents. There are two pieces to consider: The underlying chain method (e.g., how the documents are combined) Use cases for these types of chains. For the first, please see this documentation for more detailed information on the types of chains LangChain supports. For the second, please see the Use Cases section for more information on question answering, question answering with sources, and summarization. previous Getting Started next How To Guides Contents Text Splitter Embeddings Vectorstores CombineDocuments Chains By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
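A minimal sketch of that query/document split on the Embeddings interface (requires an OpenAI API key):

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
# Queries and documents go through separate methods, since some providers
# embed the two differently.
query_vector = embeddings.embed_query("What did the president say?")
doc_vectors = embeddings.embed_documents(["First document.", "Second document."])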
https://langchain.readthedocs.io\en\latest\modules\indexes\key_concepts.html
aaf8275687fe-0
.ipynb .pdf Analyze Document Contents Summarize Question Answering Analyze Document# The AnalyzeDocumentChain is more of an end-to-end chain: it takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
Summarize# Let’s take a look at it in action below, using it to summarize a long document. from langchain import OpenAI from langchain.chains.summarize import load_summarize_chain llm = OpenAI(temperature=0) summary_chain = load_summarize_chain(llm, chain_type="map_reduce") from langchain.chains import AnalyzeDocumentChain summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain) summarize_document_chain.run(state_of_the_union) " In this speech, President Biden addresses the American people and the world, discussing the recent aggression of Russia's Vladimir Putin in Ukraine and the US response. He outlines economic sanctions and other measures taken to hold Putin accountable, and announces the US Department of Justice's task force to go after the crimes of Russian oligarchs. He also announces plans to fight inflation and lower costs for families, invest in American manufacturing, and provide military, economic, and humanitarian assistance to Ukraine. He calls for immigration reform, protecting the rights of women, and advancing the rights of LGBTQ+ Americans, and pays tribute to military families. He concludes with optimism for the future of America." Question Answering# Let’s take a look at this using a question answering chain. from langchain.chains.question_answering import load_qa_chain qa_chain = load_qa_chain(llm, chain_type="map_reduce")
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\analyze_document.html
aaf8275687fe-1
qa_chain = load_qa_chain(llm, chain_type="map_reduce") qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain) qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?") ' The president thanked Justice Breyer for his service.' previous VectorStore Retriever next Chat Index Contents Summarize Question Answering By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
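The splitter used to break up the input can also be swapped out; a sketch, assuming the chain exposes a text_splitter constructor argument:

from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import AnalyzeDocumentChain

# text_splitter is assumed to be a configurable argument on the chain.
splitter = CharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain, text_splitter=splitter)
qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?")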
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\analyze_document.html
ac13b4e1199a-0
.ipynb .pdf Chat Index Contents Return Source Documents ConversationalRetrievalChain with search_distance ConversationalRetrievalChain with map_reduce ConversationalRetrievalChain with Question Answering with sources ConversationalRetrievalChain with streaming to stdout get_chat_history Function Chat Index# This notebook goes over how to set up a chain to chat with an index. The only difference between this chain and the RetrievalQAChain is that this allows for passing in a chat history, which can be used to allow for follow-up questions. from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.text_splitter import CharacterTextSplitter from langchain.llms import OpenAI from langchain.chains import ConversationalRetrievalChain Load in documents. You can replace this with a loader for whatever type of data you want. from langchain.document_loaders import TextLoader loader = TextLoader('../../state_of_the_union.txt') documents = loader.load() If you had multiple loaders that you wanted to combine, you could do something like:
# loaders = [....]
# docs = []
# for loader in loaders:
#     docs.extend(loader.load())
We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them. text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) documents = text_splitter.split_documents(documents) embeddings = OpenAIEmbeddings() vectorstore = Chroma.from_documents(documents, embeddings) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. We now initialize the ConversationalRetrievalChain
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\chat_vector_db.html
ac13b4e1199a-1
We now initialize the ConversationalRetrievalChain qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore) Here’s an example of asking a question with no chat history. chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = qa({"question": query, "chat_history": chat_history}) result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." Here’s an example of asking a question with some chat history. chat_history = [(query, result["answer"])] query = "Did he mention who she succeeded" result = qa({"question": query, "chat_history": chat_history}) result['answer'] ' Justice Stephen Breyer' Return Source Documents# You can also easily return source documents from the ConversationalRetrievalChain. This is useful when you want to inspect what documents were returned. qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore, return_source_documents=True) chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = qa({"question": query, "chat_history": chat_history}) result['source_documents'][0]
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\chat_vector_db.html
ac13b4e1199a-2
result['source_documents'][0] Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0) ConversationalRetrievalChain with search_distance# If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter. vectordbkwargs = {"search_distance": 0.9} qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore, return_source_documents=True) chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = qa({"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs}) ConversationalRetrievalChain with map_reduce# We can also use different types of combine document chains with the ConversationalRetrievalChain chain. from langchain.chains import LLMChain from langchain.chains.question_answering import load_qa_chain
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\chat_vector_db.html
ac13b4e1199a-3
from langchain.chains.question_answering import load_qa_chain from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT llm = OpenAI(temperature=0) question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT) doc_chain = load_qa_chain(llm, chain_type="map_reduce") chain = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), question_generator=question_generator, combine_docs_chain=doc_chain, ) chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = chain({"question": query, "chat_history": chat_history}) result['answer'] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." ConversationalRetrievalChain with Question Answering with sources# You can also use this chain with the question answering with sources chain. from langchain.chains.qa_with_sources import load_qa_with_sources_chain llm = OpenAI(temperature=0) question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT) doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce") chain = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), question_generator=question_generator, combine_docs_chain=doc_chain, ) chat_history = [] query = "What did the president say about Ketanji Brown Jackson"
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\chat_vector_db.html
ac13b4e1199a-4
chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = chain({"question": query, "chat_history": chat_history}) result['answer'] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \nSOURCES: ../../state_of_the_union.txt" ConversationalRetrievalChain with streaming to stdout# Output from the chain will be streamed to stdout token by token in this example. from langchain.chains.llm import LLMChain from langchain.callbacks.base import CallbackManager from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT from langchain.chains.question_answering import load_qa_chain # Construct a ConversationalRetrievalChain with a streaming llm for combine docs # and a separate, non-streaming llm for question generation llm = OpenAI(temperature=0) streaming_llm = OpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0) question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT) doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT) qa = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator) chat_history = []
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\chat_vector_db.html
ac13b4e1199a-5
chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = qa({"question": query, "chat_history": chat_history}) The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. chat_history = [(query, result["answer"])] query = "Did he mention who she succeeded" result = qa({"question": query, "chat_history": chat_history}) Justice Stephen Breyer get_chat_history Function# You can also specify a get_chat_history function, which can be used to format the chat_history string.
def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore, get_chat_history=get_chat_history) chat_history = [] query = "What did the president say about Ketanji Brown Jackson" result = qa({"question": query, "chat_history": chat_history}) result['answer'] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans." previous Analyze Document
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\chat_vector_db.html
ac13b4e1199a-6
previous Analyze Document next Graph QA Contents Return Source Documents ConversationalRetrievalChain with search_distance ConversationalRetrievalChain with map_reduce ConversationalRetrievalChain with Question Answering with sources ConversationalRetrievalChain with streaming to stdout get_chat_history Function By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
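Putting the pieces together, a small sketch of carrying chat_history across multiple turns with the qa chain built above:

chat_history = []
questions = [
    "What did the president say about Ketanji Brown Jackson",
    "Did he mention who she succeeded",
]
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    # Append each (question, answer) pair so follow-ups have context.
    chat_history.append((question, result["answer"]))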
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\chat_vector_db.html
f8732739a26e-0
.ipynb .pdf Graph QA Contents Create the graph Querying the graph Save the graph Graph QA# This notebook goes over how to do question answering over a graph data structure. Create the graph# In this section, we construct an example graph. At the moment, this works best for small pieces of text. from langchain.indexes import GraphIndexCreator from langchain.llms import OpenAI from langchain.document_loaders import TextLoader index_creator = GraphIndexCreator(llm=OpenAI(temperature=0)) with open("../../state_of_the_union.txt") as f: all_text = f.read() We will use just a small snippet, because extracting the knowledge triplets is a bit intensive at the moment. text = "\n".join(all_text.split("\n\n")[105:108]) text 'It won’t look like much, but if you stop and look closely, you’ll see a “Field of dreams,” the ground on which America’s future will be built. \nThis is where Intel, the American company that helped build Silicon Valley, is going to build its $20 billion semiconductor “mega site”. \nUp to eight state-of-the-art factories in one place. 10,000 new good-paying jobs. ' graph = index_creator.from_text(text) We can inspect the created graph. graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')] Querying the graph#
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\graph_qa.html
f8732739a26e-1
'is the ground on which')] Querying the graph# We can now use the graph QA chain to ask questions of the graph. from langchain.chains import GraphQAChain chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=graph, verbose=True) chain.run("what is Intel going to build?") > Entering new GraphQAChain chain... Entities Extracted: Intel Full Context: Intel is going to build $20 billion semiconductor "mega site" Intel is building state-of-the-art factories Intel is creating 10,000 new good-paying jobs Intel is helping build Silicon Valley > Finished chain. ' Intel is going to build a $20 billion semiconductor "mega site" with state-of-the-art factories, creating 10,000 new good-paying jobs and helping to build Silicon Valley.' Save the graph# We can also save and load the graph. graph.write_to_gml("graph.gml") from langchain.indexes.graph import NetworkxEntityGraph loaded_graph = NetworkxEntityGraph.from_gml("graph.gml") loaded_graph.get_triples() [('Intel', '$20 billion semiconductor "mega site"', 'is going to build'), ('Intel', 'state-of-the-art factories', 'is building'), ('Intel', '10,000 new good-paying jobs', 'is creating'), ('Intel', 'Silicon Valley', 'is helping build'), ('Field of dreams', "America's future will be built", 'is the ground on which')] previous Chat Index next Question Answering with Sources Contents Create the graph Querying the graph Save the graph By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Mar 24, 2023.
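The reloaded graph drops straight back into the same QA chain; a quick sketch reusing the pieces above:

from langchain.chains import GraphQAChain
from langchain.llms import OpenAI

# Rebuild the chain on the reloaded graph and query it as before.
chain = GraphQAChain.from_llm(OpenAI(temperature=0), graph=loaded_graph, verbose=True)
chain.run("How many jobs is Intel creating?")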
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\graph_qa.html
a3704047d63c-0
.ipynb .pdf Question Answering with Sources Contents Prepare Data Quickstart The stuff Chain The map_reduce Chain The refine Chain The map-rerank Chain Question Answering with Sources# This notebook walks through how to use LangChain for question answering with sources over a list of documents. It covers four different chain types: stuff, map_reduce, refine, and map-rerank. For a more in-depth explanation of what these chain types are, see here. Prepare Data# First we prepare the data. For this example we do similarity search over a vector database, but these documents could be fetched in any manner (the point of this notebook is to highlight what to do AFTER you fetch the documents). from langchain.embeddings.openai import OpenAIEmbeddings from langchain.embeddings.cohere import CohereEmbeddings from langchain.text_splitter import CharacterTextSplitter from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch from langchain.vectorstores import Chroma from langchain.docstore.document import Document from langchain.prompts import PromptTemplate with open('../../state_of_the_union.txt') as f: state_of_the_union = f.read() text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0) texts = text_splitter.split_text(state_of_the_union) embeddings = OpenAIEmbeddings() docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": str(i)} for i in range(len(texts))]) Running Chroma using direct local API. Using DuckDB in-memory for database. Data will be transient. query = "What did the president say about Justice Breyer" docs = docsearch.similarity_search(query) from langchain.chains.qa_with_sources import load_qa_with_sources_chain
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-1
from langchain.chains.qa_with_sources import load_qa_with_sources_chain from langchain.llms import OpenAI Quickstart# If you just want to get started as quickly as possible, this is the recommended way to do it: chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'} If you want more control and understanding over what is happening, please see the information below. The stuff Chain# This section shows the results of using the stuff Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). If you don't know the answer, just say that you don't know. Don't try to make up an answer. ALWAYS return a "SOURCES" part in your answer. Respond in Italian. QUESTION: {question} ========= {summaries} ========= FINAL ANSWER IN ITALIAN:""" PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"])
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-2
PROMPT = PromptTemplate(template=template, input_variables=["summaries", "question"]) chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="stuff", prompt=PROMPT) query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': '\nNon so cosa abbia detto il presidente riguardo a Justice Breyer.\nSOURCES: 30, 31, 33'} The map_reduce Chain# This section shows the results of using the map_reduce Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'} Intermediate Steps We can also return the intermediate steps for map_reduce chains, should we want to inspect them. This is done with the return_intermediate_steps variable. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-3
' None', ' None', ' None'], 'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. question_prompt_template = """Use the following portion of a long document to see if any of the text is relevant to answer the question. Return any relevant text in Italian. {context} Question: {question} Relevant text, if any, in Italian:""" QUESTION_PROMPT = PromptTemplate( template=question_prompt_template, input_variables=["context", "question"] ) combine_prompt_template = """Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). If you don't know the answer, just say that you don't know. Don't try to make up an answer. ALWAYS return a "SOURCES" part in your answer. Respond in Italian. QUESTION: {question} ========= {summaries} ========= FINAL ANSWER IN ITALIAN:""" COMBINE_PROMPT = PromptTemplate( template=combine_prompt_template, input_variables=["summaries", "question"] ) chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="map_reduce", return_intermediate_steps=True, question_prompt=QUESTION_PROMPT, combine_prompt=COMBINE_PROMPT) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-4
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ["\nStasera vorrei onorare qualcuno che ha dedicato la sua vita a servire questo paese: il giustizia Stephen Breyer - un veterano dell'esercito, uno studioso costituzionale e un giustizia in uscita della Corte Suprema degli Stati Uniti. Giustizia Breyer, grazie per il tuo servizio.", ' Non pertinente.', ' Non rilevante.', " Non c'è testo pertinente."], 'output_text': ' Non conosco la risposta. SOURCES: 30, 31, 33, 20.'} Batch Size When using the map_reduce chain, one thing to keep in mind is the batch size you are using during the map step. If this is too high, it could cause rate limiting errors. You can control this by setting the batch size on the LLM used. Note that this only applies for LLMs with this parameter. Below is an example of doing so: llm = OpenAI(batch_size=5, temperature=0) The refine Chain# This sections shows results of using the refine Chain to do question answering with sources. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine") query = "What did the president say about Justice Breyer" chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-5
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'output_text': "\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked him for his service and praised his career as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He noted Justice Breyer's reputation as a consensus builder and the broad range of support he has received from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also highlighted the importance of securing the border and fixing the immigration system in order to advance liberty and justice, and mentioned the new technology, joint patrols, dedicated immigration judges, and commitments to support partners in South and Central America that have been put in place. He also expressed his commitment to the LGBTQ+ community, noting the need for the bipartisan Equality Act and the importance of protecting transgender Americans from state laws targeting them. He also highlighted his commitment to bipartisanship, noting the 80 bipartisan bills he signed into law last year, and his plans to strengthen the Violence Against Women Act. Additionally, he announced that the Justice Department will name a chief prosecutor for pandemic fraud and his plan to lower the deficit by more than one trillion dollars in a"} Intermediate Steps We can also return the intermediate steps for refine chains, should we want to inspect them. This is done with the return_intermediate_steps variable. chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True) chain({"input_documents": docs, "question": query}, return_only_outputs=True)
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-6
chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service.', '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. \n\nSource: 31',
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-7
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. \n\nSource: 31, 33',
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-8
'\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'],
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-9
'output_text': '\n\nThe president said that he was honoring Justice Breyer for his dedication to serving the country and that he was a retiring Justice of the United States Supreme Court. He also thanked Justice Breyer for his service, noting his background as a top litigator in private practice, a former federal public defender, and a family of public school educators and police officers. He praised Justice Breyer for being a consensus builder and for receiving a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. He also noted that in order to advance liberty and justice, it was necessary to secure the border and fix the immigration system, and that the government was taking steps to do both. He also mentioned the need to pass the bipartisan Equality Act to protect LGBTQ+ Americans, and to strengthen the Violence Against Women Act that he had written three decades ago. Additionally, he mentioned his plan to lower costs to give families a fair shot, lower the deficit, and go after criminals who stole billions in relief money meant for small businesses and millions of Americans. He also announced that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nSource: 20, 31, 33'} Custom Prompts You can also use your own prompts with this chain. In this example, we will respond in Italian. refine_template = ( "The original question is as follows: {question}\n" "We have provided an existing answer, including sources: {existing_answer}\n" "We have the opportunity to refine the existing answer" "(only if needed) with some more context below.\n" "------------\n" "{context_str}\n" "------------\n" "Given the new context, refine the original answer to better " "answer the question (in Italian)"
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-10
"answer the question (in Italian)" "If you do update it, please update the sources as well. " "If the context isn't useful, return the original answer." ) refine_prompt = PromptTemplate( input_variables=["question", "existing_answer", "context_str"], template=refine_template, ) question_template = ( "Context information is below. \n" "---------------------\n" "{context_str}" "\n---------------------\n" "Given the context information and not prior knowledge, " "answer the question in Italian: {question}\n" ) question_prompt = PromptTemplate( input_variables=["context_str", "question"], template=question_template ) chain = load_qa_with_sources_chain(OpenAI(temperature=0), chain_type="refine", return_intermediate_steps=True, question_prompt=question_prompt, refine_prompt=refine_prompt) chain({"input_documents": docs, "question": query}, return_only_outputs=True) {'intermediate_steps': ['\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese e ha onorato la sua carriera.',
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-11
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-12
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per",
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html
a3704047d63c-13
"\n\nIl presidente ha detto che Justice Breyer ha dedicato la sua vita al servizio di questo paese, ha onorato la sua carriera e ha contribuito a costruire un consenso. Ha ricevuto un ampio sostegno, dall'Ordine Fraterno della Polizia a ex giudici nominati da democratici e repubblicani. Inoltre, ha sottolineato l'importanza di avanzare la libertà e la giustizia attraverso la sicurezza delle frontiere e la risoluzione del sistema di immigrazione. Ha anche menzionato le nuove tecnologie come scanner all'avanguardia per rilevare meglio il traffico di droga, le pattuglie congiunte con Messico e Guatemala per catturare più trafficanti di esseri umani, l'istituzione di giudici di immigrazione dedicati per far sì che le famiglie che fuggono da per"],
https://langchain.readthedocs.io\en\latest\modules\indexes\chain_examples\qa_with_sources.html