AchyuthGamer committed
Commit ffefe87
1 Parent(s): 4309cc1

Update README.md

Files changed (1): README.md (+18 -5)
README.md CHANGED
@@ -3,13 +3,26 @@ license: apache-2.0
 pipeline_tag: text-generation
 tags:
 - finetuned
+- chatgpt
+- LLM
+- openGPT
+- free LLM
+- no api key
+- LLAMA
+- llama chat
+- opengpt model
+- opengpt llm
+- text-to-text
+- Text-to-Text
+- Chatbot
+- Chat UI
 ---

-# Model Card for Mistral-7B-Instruct-v0.1
+# Model Card for OpenGPT-1.0

-The Mistral-7B-Instruct-v0.1 Large Language Model (LLM) is a instruct fine-tuned version of the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) generative text model using a variety of publicly available conversation datasets.
+The OpenGPT-1.0 Large Language Model (LLM) is a instruct fine-tuned version of the [OpenGPT-1.0](https://huggingface.co/AchyuthGamer/OpenGPT) generative text model using a variety of publicly available conversation datasets.

-For full details of this model please read our [release blog post](https://mistral.ai/news/announcing-mistral-7b/)
+For full details of this model please read our [release blog post](https://huggingface.co/AchyuthGamer/OpenGPT)

 ## Instruction format

@@ -29,8 +42,8 @@ from transformers import AutoModelForCausalLM, AutoTokenizer

 device = "cuda" # the device to load the model onto

-model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
-tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
+model = AutoModelForCausalLM.from_pretrained("AchyuthGamer/OpenGPT")
+tokenizer = AutoTokenizer.from_pretrained("AchyuthGamer/OpenGPT")

 messages = [
     {"role": "user", "content": "What is your favourite condiment?"},