---
base_model: unsloth/codegemma-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
datasets:
- AI-MO/NuminaMath-CoT
- AI4Chem/ChemData700K
- medalpaca/medical_meadow_mediqa
- andersonbcdefg/chemistry
---
2024-08-15: This is now the base model. The model with Python RDKit training has been created as Gemma_ChemWiz_rdkit_16bit. This model is now frozen; the next model will be created from it with "new" additional skills, such as developing chemistry-specific applications using RDKit.
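For a sense of the RDKit skill the next model is meant to pick up, here is a minimal sketch of the kind of task it should handle. This is plain RDKit usage for illustration, not output from the model:

```python
# Minimal RDKit sketch: parse a SMILES string and report basic properties.
from rdkit import Chem
from rdkit.Chem import Descriptors

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin
mol = Chem.MolFromSmiles(smiles)
if mol is None:
    raise ValueError(f"Invalid SMILES: {smiles}")

print("Canonical SMILES:", Chem.MolToSmiles(mol))
print("Molecular weight:", round(Descriptors.MolWt(mol), 2))
print("Ring count:", mol.GetRingInfo().NumRings())
```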
2024-08-15: Splitting the model today. This model will be the base ChemWiz model. The first vintage I will create today is the RDKit coder, trained on my custom dataset. Once I have that model, I will create a dev critic from it, and will then start a set of tests with Microsoft AutoGen to test whether the addition of a coding critic improves the results. Still toying with the idea of creating a ChemWiz critic to see if it improves the outcomes and reduces hallucinations. But let's see.
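A rough sketch of what the coder-plus-critic AutoGen test could look like, using the pyautogen AssistantAgent/UserProxyAgent API. The endpoint, agent names, and prompts are illustrative assumptions, not the actual test harness:

```python
# Hypothetical AutoGen setup: a coder agent paired with a critic agent.
# The model endpoint and system prompts below are placeholders.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{
    "model": "gemma-chemwiz",                 # assumed local deployment name
    "base_url": "http://localhost:8000/v1",   # assumed OpenAI-compatible server
    "api_key": "not-needed",
}]}

coder = AssistantAgent(
    name="rdkit_coder",
    system_message="Write RDKit code to solve the given chemistry task.",
    llm_config=llm_config,
)
critic = AssistantAgent(
    name="dev_critic",
    system_message="Review the code for bugs and chemistry errors.",
    llm_config=llm_config,
)
user = UserProxyAgent(name="user", human_input_mode="NEVER",
                      code_execution_config=False)

# One round: coder drafts, critic reviews the draft.
result = user.initiate_chat(
    coder,
    message="Compute the molecular weight of caffeine with RDKit.",
    max_turns=1,
)
draft = result.chat_history[-1]["content"]
user.initiate_chat(critic, message=f"Critique this code:\n{draft}", max_turns=1)
```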
2024-08-13: Taking the model through a second round of AI4Chem/ChemData700K. I am amazed at how the model seems to converge and then suddenly does not. I suspect it will converge within the next few days, and I am quite keen to see that happen. The results on chemical SMILES are very poor at this point.
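One simple way to quantify those SMILES results is the fraction of generated strings that RDKit can parse at all. A minimal sketch of such a check (the sample outputs below are invented for illustration):

```python
# Hypothetical evaluation: fraction of model-generated SMILES RDKit can parse.
from rdkit import Chem, RDLogger

RDLogger.DisableLog("rdApp.error")  # silence parse errors for invalid strings

generated = ["CCO", "c1ccccc1", "C1=CC=XX=C1", "N[C@@H](C)C(=O)O"]  # made-up outputs
valid = sum(Chem.MolFromSmiles(s) is not None for s in generated)
print(f"Valid SMILES: {valid}/{len(generated)} ({100 * valid / len(generated):.0f}%)")
```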
2024-08-12: The medalpaca/medical_meadow_mediqa dataset was also used, but the model converged on it in less than one epoch; only 1,400 training steps were completed. In future versions and editions I might elect to exclude this dataset, but it is included in this version.
2024-08-12: The model is being fine-tuned on chemical memory rather than chemistry reasoning, using the AI4Chem/ChemData700K dataset. The model is still hallucinating chemical formulas; I will fine-tune it on a few more datasets to see whether this reduces the hallucinations.
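For reference, these fine-tuning runs follow the usual Unsloth + TRL recipe. A condensed sketch under assumed hyperparameters (the LoRA rank, step count, and the name of the text column are illustrative, not the exact settings used here):

```python
# Condensed Unsloth + TRL SFT sketch; hyperparameters are illustrative.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/codegemma-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

dataset = load_dataset("AI4Chem/ChemData700K", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes a pre-formatted text column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=1000,        # assumed; not the run's actual step count
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```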
2024-08-09: The model is still being fine-tuned for logical reasoning. The responses received at this time seem to be in line with the training set; for instance, the model does not jump straight to an answer, but starts by "unpacking" the instruction before performing a task such as coding.
Nothing this model produces at this time should be used for any production purpose; it is highly experimental.
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model:** unsloth/codegemma-7b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
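A quick way to try the model with plain transformers. This is a hedged sketch: the repo id below is an assumption based on this card's naming, so substitute the actual id of this model:

```python
# Hedged usage sketch; MODEL_ID is assumed, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "dbands/Gemma_ChemWiz_16bit"  # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Explain what a SMILES string is, step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```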