language:
- en
---

# HelpingAI2-6B: Emotionally Intelligent Conversational AI

![logo](https://huggingface.co/OEvortex/HelpingAI-3B/resolve/main/HelpingAI.png)

## Overview

HelpingAI2-6B is a state-of-the-art large language model designed to facilitate emotionally intelligent conversations. It leverages advanced natural language processing capabilities to engage users with empathy, understanding, and supportive dialogue across a variety of topics.

- Engage in meaningful, open-ended dialogue while displaying high emotional intelligence.
- Recognize and validate user emotions and emotional contexts.
- Continuously improve emotional awareness and dialogue skills.
## Methodology

HelpingAI2-6B is part of the HelpingAI series and has been trained using:

- **Supervised Learning**: Utilizing large dialogue datasets with emotional labeling to enhance empathy and emotional recognition.
- **Reinforcement Learning**: Implementing a reward model that favors emotionally supportive responses to ensure beneficial interactions.
- **Constitution Training**: Embedding stable and ethical objectives to guide its conversational behavior.
- **Knowledge Augmentation**: Incorporating psychological resources on emotional intelligence to improve its understanding and response capabilities.
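The Reinforcement Learning step above relies on a reward model that prefers emotionally supportive replies. As a rough illustration of that idea only — the phrase lexicon, weights, and function names below are hypothetical and have nothing to do with HelpingAI's actual learned reward model — a preference between two candidate responses might look like:

```python
# Hypothetical sketch of a preference signal favoring supportive replies.
# The real reward model is a learned network, not a keyword lexicon.
SUPPORTIVE_MARKERS = {"understand": 1.0, "sorry": 0.5, "here for you": 1.5, "valid": 0.8}
DISMISSIVE_MARKERS = {"whatever": -1.5, "calm down": -1.0, "overreacting": -2.0}

def support_score(response: str) -> float:
    """Score a response by weighted supportive vs. dismissive phrases."""
    text = response.lower()
    score = 0.0
    for phrase, weight in {**SUPPORTIVE_MARKERS, **DISMISSIVE_MARKERS}.items():
        if phrase in text:
            score += weight
    return score

def pick_preferred(response_a: str, response_b: str) -> str:
    """Return whichever candidate the sketch reward signal prefers."""
    return response_a if support_score(response_a) >= support_score(response_b) else response_b

preferred = pick_preferred(
    "I understand how hard this is; I'm here for you.",
    "You're overreacting, calm down.",
)
```

In RLHF-style training, such pairwise preferences (from a far richer learned model) steer the policy toward the supportive candidate.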
## Emotional Quotient (EQ)

HelpingAI2-6B has achieved an impressive Emotional Quotient (EQ) of 91.57, making it one of the most emotionally intelligent AI models available. This EQ score reflects its advanced ability to understand and respond to human emotions in a supportive and empathetic manner.
## Usage Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the HelpingAI2-6B model
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI2-6B", trust_remote_code=True)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI2-6B", trust_remote_code=True)

# Define the chat input
# ...
```
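The "Define the chat input" step turns a list of role-tagged messages into a single prompt string; with transformers this is normally done by `tokenizer.apply_chat_template(...)`. The ChatML-style layout below is only an assumed sketch of what such a template produces, not HelpingAI2-6B's confirmed prompt format:

```python
# Sketch of chat-prompt assembly. The ChatML-style tags are an assumption
# for illustration; in practice use tokenizer.apply_chat_template(...).
def build_chat_prompt(messages: list) -> str:
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "\n".join(parts)

prompt = build_chat_prompt([
    {"role": "system", "content": "You are an emotionally intelligent assistant."},
    {"role": "user", "content": "I had a rough day at work."},
])
```

The resulting string can then be tokenized and passed to `model.generate(...)`.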
```python
from webscout.Local.samplers import SamplerSettings
# ...

# Download the model
repo_id = "OEvortex/HelpingAI2-6B"
filename = "HelpingAI2-6B-q4_k_m.gguf"
model_path = download_model(repo_id, filename, token="")

# Load the model
# ...
sampler = SamplerSettings(temp=0.7, top_p=0.9)
thread = Thread(model, helpingai, sampler=sampler)

# Start interacting with the model
thread.interact(header="π HelpingAI2-6B: Emotionally Intelligent Conversational AI π", color=True)
```
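The `SamplerSettings(temp=0.7, top_p=0.9)` line controls decoding: temperature flattens or sharpens the token distribution, and top-p (nucleus) sampling restricts it to the smallest set of tokens covering probability `top_p`. As a rough, self-contained illustration of those two knobs (independent of webscout's actual implementation, with made-up token logits):

```python
import math

def nucleus_filter(logits: dict, temp: float = 0.7, top_p: float = 0.9) -> dict:
    """Temperature-scaled softmax, then keep the smallest token set whose
    cumulative probability reaches top_p, renormalized over that set."""
    scaled = {t: v / temp for t, v in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(v - m) for t, v in scaled.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    kept, cum = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        cum += p
        if cum >= top_p:
            break
    z2 = sum(kept.values())
    return {t: p / z2 for t, p in kept.items()}

# With one dominant logit, the nucleus collapses to a single token.
filtered = nucleus_filter({"warm": 5.0, "neutral": 1.0, "cold": 0.0})
```

Lower `temp` and lower `top_p` both make generation more deterministic; the 0.7 / 0.9 defaults above are a common middle ground for conversational models.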