Integrating gpt4all-j as an LLM under LangChain
I was wondering: is there a way to use this model with LangChain to build a system that can answer questions based on a corpus of text from custom PDF documents?
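Until an official integration exists, the retrieval side of such a pipeline can be sketched in plain Python. This is a minimal sketch of the usual pattern (split the corpus into chunks, score each chunk against the question, and hand the best chunk to whatever local LLM you wire in); the chunk size and the word-overlap scorer are illustrative assumptions — a real pipeline would use an embedding model and a vector store instead.

```python
# Minimal sketch of retrieval-augmented QA: chunk a corpus, pick the chunk
# most relevant to a question, and build a prompt for a local LLM.
# Scoring is naive word overlap here; an embedding model would normally
# replace it. All names and sizes below are illustrative assumptions.

def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character chunks."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

def score(question, chunk):
    """Count how many question words appear in the chunk (naive relevance)."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words)

def best_context(question, corpus):
    """Return the chunk most relevant to the question."""
    chunks = chunk_text(corpus)
    return max(chunks, key=lambda c: score(question, c))

# Toy corpus standing in for text extracted from a PDF.
corpus = (
    "GPT4All-J is a locally runnable language model. "
    "It can be combined with a vector store to answer questions "
    "over private documents such as PDFs converted to text."
)
question = "Can GPT4All-J answer questions over PDFs?"
context = best_context(question, corpus)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

The final `prompt` is what you would pass to the local model; in LangChain the same pattern is wrapped by a text splitter, a vector store retriever, and a QA chain.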
+1
There is a tutorial on integrating Dolly (open source) as an LLM under LangChain:
https://www.dbdemos.ai/demo-notebooks.html?demoName=llm-dolly-chatbot
but I did not manage to get it working. Maybe a simplified version could be made, if someone has the talent to do it...
@odysseus340 This guide looks promising, but it needs a GPU cluster in the cloud. I will try it on Google Colab Pro first, then on my personal PC with 32 GB of RAM, using the 3B-parameter Dolly there instead.
I'll let you know if this works. Thanks for sharing!
https://github.com/su77ungr/CASALIOY
It ingests .txt files locally; just convert the PDF to text first.
I'm running at 30 ms per token on an i5-9600K with 16 GB of RAM, using 7B Vicuna and Qdrant. You soon won't be using OpenAI anymore.
That sounds incredible!!!
Maybe you know this one already; I discovered SageMaker Studio Lab yesterday: https://studiolab.sagemaker.aws/
It is a kind of free Google Colab on steroids.