Update README.md
README.md
|
|
---
datasets:
- AI-MO/NuminaMath-CoT
- AI4Chem/ChemData700K
- medalpaca/medical_meadow_mediqa
- andersonbcdefg/chemistry
---

2024-08-12: The medalpaca/medical_meadow_mediqa dataset was also used, but the model converged on it in less than one epoch; only 1400 steps of training were completed. In future versions and editions I might elect to exclude this dataset, but it is included in this version.
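As a rough sanity check on why a run can converge "in less than one epoch," the relationship between optimizer steps and epochs can be sketched as below. The dataset size and batch figures are illustrative assumptions, not the actual training configuration used here.

```python
def steps_per_epoch(num_examples, batch_size, grad_accum=1):
    """Optimizer steps needed to see every training example once.

    One optimizer step consumes batch_size * grad_accum examples,
    so an epoch is the ceiling of num_examples / effective batch.
    """
    effective_batch = batch_size * grad_accum
    return -(-num_examples // effective_batch)  # ceiling division

# Illustrative only: a 700,000-example dataset with an assumed
# effective batch of 16 would need 43750 steps for one full epoch.
print(steps_per_epoch(700_000, 16))  # 43750
```

By this arithmetic, stopping after a step count well below one epoch's worth means the model saw only a fraction of the dataset before training was halted.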

2024-08-12: The model is being fine-tuned on chemical memory rather than chemistry reasoning, using the AI4Chem/ChemData700K dataset. The model is still hallucinating chemical formulas; I will fine-tune it on a few more datasets to see whether this reduces the hallucinations.

2024-08-09: The model is still being fine-tuned for logical reasoning. The responses received at this time seem to be in line with the training set: the model no longer jumps straight into an answer, but "unpacks" the instruction before performing a task, such as coding.