
This model is fine-tuned on top of tiiuae/falcon-40b.

DocsGPT is optimized for documentation: it is specifically fine-tuned to provide answers grounded in documentation supplied in context, making it particularly useful for developers and technical support teams, as well as for chat-with-your-data tasks.

We used 50k high-quality examples to fine-tune it over 4 days on 8x A10G GPUs (an ml.g5.48xlarge instance on AWS), using the LoRA fine-tuning process.
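For illustration, a LoRA setup with the PEFT library could look like the sketch below. The exact hyperparameters for this model are not published here, so the rank, alpha, dropout, and target modules shown are assumptions, not the values actually used:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model (this alone needs several high-memory GPUs).
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-40b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                # assumed adapter rank
    lora_alpha=32,                       # assumed scaling factor
    lora_dropout=0.05,                   # assumed dropout
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trained
```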

It's released under the Apache-2.0 license, so you can use it for commercial purposes too.

How to run it:

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Arc53/docsgpt-40b-falcon"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # half-precision weights to reduce memory use
    trust_remote_code=True,      # Falcon repos ship custom modelling code
    device_map="auto",           # shard the 40B weights across available GPUs
)

# The prompt follows the Instruction/Context/Answer format described below.
prompt = (
    "### Instruction\n"
    "What does the /api/answer endpoint do?\n"
    "### Context\n"
    "/api/answer is a POST endpoint that takes a JSON body with a question "
    "and returns an answer based on the loaded documentation.\n"
    "### Answer\n"
)

sequences = pipeline(
    prompt,
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
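Note that in bfloat16 the 40B parameters alone occupy roughly 80 GB, so a multi-GPU machine (such as the 8x A10G instance mentioned above) combined with `device_map="auto"` is the practical way to run it.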

Benchmarks are still a work in progress.

To prepare your prompts, make sure you keep this format:

### Instruction
(where the question goes)
### Context
(your document retrieval + system instructions)
### Answer
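For example, a small helper for assembling prompts in this format (the function is our own illustration, not part of the model repo):

```python
def build_prompt(question: str, context: str) -> str:
    # question: the user's query; context: retrieved docs + system instructions
    return (
        "### Instruction\n"
        f"{question}\n"
        "### Context\n"
        f"{context}\n"
        "### Answer\n"
    )

prompt = build_prompt(
    "Create a mock request to /api/answer in python",
    "You are DocsGPT... (system instructions and retrieved documentation)",
)
```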

Here is an example comparing it to tiiuae/falcon-40b.

Prompt:

### Instruction
Create a mock request to /api/answer in python

### Context
You are DocsGPT, a friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
Use the following pieces of context to help answer the user's question. If it's not relevant to the question, provide friendly responses.
You have access to chat history, and can use it to help answer the question.
When using code examples, use the following format:
```(language)
(code)
```

----------------


/api/answer
It's a POST request that sends a JSON body with 4 values. It will receive an answer to a user-provided question. Here is a JavaScript fetch example:

```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
      "method": "POST",
      "headers": {
            "Content-Type": "application/json; charset=utf-8"
      },
      "body": JSON.stringify({"question":"Hi","history":null,"api_key":"OPENAI_API_KEY","embeddings_key":"OPENAI_API_KEY",
      "active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
```


In response you will get a JSON document like this one:

```json
{
  "answer": " Hi there! How can I help you?\\n",
  "query": "Hi",
  "result": " Hi there! How can I help you?\\nSOURCES:"
}
```


 

/api/docs_check
It will make sure the documentation is loaded on the server (just run it every time the user switches between libraries/documentations). It's a POST request that sends a JSON body with 1 value. Here is a JavaScript fetch example:

```js
// docs_check (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
      "method": "POST",
      "headers": {
            "Content-Type": "application/json; charset=utf-8"
      },
      "body": JSON.stringify({"docs":"javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
```


In response you will get a JSON document like this one:
```json
{
  "status": "exists"
}
```




Issues and Pull requests

We value contributions to our issues in the form of discussion or suggestions; we recommend that you check out existing issues and our Roadmap.

If you want to contribute by writing code, there are a few things you should know before doing it:
we have a frontend (React, Vite) and a backend (Python).


### Answer

docsgpt-40b-falcon:

To make a mock request in Python for the `/api/answer` endpoint, you can use the `requests` library to simulate a POST request. Here is an example:

```python
import requests

url = "http://127.0.0.1:5000/api/answer"

payload = {
    "question": "Hi",
    "history": None,
    "api_key": "OPENAI_API_KEY",
    "embeddings_key": "OPENAI_API_KEY",
    "active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/",
}

response = requests.post(url, json=payload)

print(response.status_code)
print(response.text)

print("Request completed")
```

Replace `"Hi"` with your own question and `"OPENAI_API_KEY"` with the actual API key. Make sure to adjust the `url` and `payload` according to your specific setup.

This will send a POST request to the `/api/answer` endpoint with the specified payload. The `response.status_code` variable will hold the response status code. You can then access the response text using `response.text`. Finally, you can print a message to indicate that the request has been completed.

Note that this is only a simulated request, so you won't get a real response unless a server is running. If you want to receive a real response, you will need to set up your own server to handle the request and provide the appropriate response.

The original tiiuae/falcon-40b:

It's a simple API endpoint that takes the user question as input and returns the answer using the OpenAI API's text-embedding. The API endpoint should be able to handle multiple questions and return a response for all of them. Here is the API endpoint:

```python
import json, re, requests # import libraries
def answer(question): # function to handle the question
    # use OpenAI SDK to convert text into vectors
    # return response
```

You can call this function using the following code:

```python
from answer_api import answer # import function
answer(question)
```

Here's an example usage of this function with the given question:

```python
def get_response(question):
    url = 'http://127.0.0.1:5000/api/answer'
    payload = {'question': question}
    response = requests.post(url, data=payload)
    response.raise_for_status()
    return response.json()

answer_api_response = get_response('Hi, how are you?').get('answer')
print(answer_api_response)
```

This should output the OpenAI SDK's response to the user's question. If you want to implement this functionality in the answer API you can use a similar approach.

As an AI language model, I cannot execute any code and I cannot access your development environment. Therefore, I cannot provide a specific solution to your problem, but I can provide general guidance and suggest potential approaches. I suggest considering the following steps when implementing your own text-embedding functionality:

1. Install OpenAI SDK and its dependencies: You need to install the OpenAI SDK and its dependencies. This is a straightforward process and can be completed through pip (the Python package manager): 

```python
pip install openai 
```

2. Create a function to handle text-embedding: Create a function to handle the text-embedding process. You can use the OpenAI SDK's `text-embedding` API to convert text into vectors, and then return a response. Here's an example function:

```python
import json
import re
import requests

def text_embedding(text):
    url = 'http://127.0.0.1:5000/api/text'
    payload = {'text': text}
    response = requests.post(url, data=payload)
    response.raise_for_status()
    return json.loads(response.content)
```

This function sends a post request to `http://127.0.0.1:5000/api/text` with the given text as the payload. It then raises a `ValueError` if the response status is not 200. Finally, it returns the response content as a JSON object using `json.loads()`.

3. Test the function: Test the function with some sample text and verify that the response is correct. You can use an online json validator like https://jsonlint.com/ to check if the response is valid JSON. If it is not, you should investigate why it is not valid.

4. Integrate the function into your application: Integrate the text-embedding function into your application. In your case, you may want to create an API endpoint that handles the text-embedding, similar to the `answer` API endpoint that you provided. 

I hope this helps you get started with your implementation. Good luck!

As you can see, the response from the original model hallucinates a lot.
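For reference, a request to `/api/answer` can also be mocked entirely in-process with Python's standard `unittest.mock`, so no server has to be running. This is our own sketch, not part of the DocsGPT API; the faked response simply mirrors the JSON document shown in the context above:

```python
from unittest.mock import MagicMock, patch

import requests

# Payload fields mirror the /api/answer description above.
payload = {
    "question": "Hi",
    "history": None,
    "api_key": "OPENAI_API_KEY",
    "embeddings_key": "OPENAI_API_KEY",
    "active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/",
}

# Fake a response shaped like the documented JSON answer.
fake = MagicMock(status_code=200)
fake.json.return_value = {"answer": " Hi there! How can I help you?\n", "query": "Hi"}

with patch("requests.post", return_value=fake) as mock_post:
    response = requests.post("http://127.0.0.1:5000/api/answer", json=payload)
    mock_post.assert_called_once()

print(response.status_code)       # 200
print(response.json()["answer"])  # the faked answer
```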
