Josephgflowers committed
Commit: fb3c9b6
Parent: 4764830

Update README.md

Files changed (1):
  1. README.md (+2 -2)
README.md CHANGED
@@ -7,7 +7,7 @@ Model Name: Cinder
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/_5ihlDZflgdA0Em76t5j9.png)
 
 Created by: Joseph Flowers
-***Updated 1-10-23*** New round of training, added gguf model 8bit.
+***Updated 1-10-24*** New round of training; added an 8-bit GGUF model.
 Model Overview
 Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. It is built on the TinyLlama 1.1B parameter model and trained on a unique combination of datasets.
 
@@ -19,7 +19,7 @@ A subset of Open Orca: https://huggingface.co/datasets/Open-Orca/OpenOrca
 Q&A content generated by GPT-3.5 Turbo by having it read open source encyclopedias and create QA pairs.
 Shortened version of Samantha by Eric Hartford https://huggingface.co/datasets/cognitivecomputations/samantha-data
 OpenAssistant: https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25
-***Updated 1-10-23*** Continued training with sorted Orca dataset to around 600mb for STEM related topics, generated around 100mb of STEM q and a with GPT3.5 and GPT4,
+***Updated 1-10-24*** Continued training with the Orca dataset sorted down to around 600 MB of STEM-related topics, plus around 100 MB of STEM Q&A generated with GPT-3.5 and GPT-4,
 a chunk of Samantha dataset, Glaive function calling v2, and python code instruction 18k alpaca dataset, around 1GB total.
 Core Influences: Inspired by the character 'Data' from Star Trek: The Next Generation, Lewis Carroll's writings, and a range of educational resources.
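Since the card says Cinder is built on TinyLlama 1.1B, a chat prompt for it can be assembled in the Zephyr-style `<|system|>/<|user|>/<|assistant|>` format that TinyLlama chat checkpoints commonly use. This is a minimal sketch; the exact template Cinder expects is an assumption, not something the model card states.

```python
def build_cinder_prompt(system_msg: str, user_msg: str) -> str:
    """Assemble a Zephyr/TinyLlama-style chat prompt.

    Assumption: Cinder follows the TinyLlama chat template; verify
    against the tokenizer's chat template before relying on this.
    """
    return (
        f"<|system|>\n{system_msg}</s>\n"
        f"<|user|>\n{user_msg}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_cinder_prompt(
    "You are Cinder, a friendly STEM tutor.",
    "Why is the sky blue?",
)
print(prompt)
```

The resulting string can be passed to whatever runtime serves the model, e.g. the 8-bit GGUF build mentioned in the update line loaded through llama.cpp, or the base safetensors weights loaded through `transformers`.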