---
license: wtfpl
language:
- en
pipeline_tag: text-generation
tags:
- llama
- w++
- meme++
- tiny
---
# Model Card for memepp-llama
Meme++ generator.
## Model Details
### Model Description
This is a tiny LLaMA model trained from scratch for 31000 steps (253952000 tokens) out of `i forgor :skull:`.
- **Developed by:** mrsteyk
- **Model type:** LLaMA
- **Language(s) (NLP):** English
- **License:** WTFPL
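The step and token counts above divide evenly; a quick sanity check (assuming the counts in this card are exact):

```python
# Sanity check on the numbers above: 253,952,000 tokens over 31,000 steps
# works out to exactly 8192 tokens per optimizer step (e.g. 4 sequences at a
# 2048-token context, though the actual batch shape is an assumption).
steps = 31_000
tokens = 253_952_000

tokens_per_step = tokens // steps
print(tokens_per_step)  # 8192
print(tokens % steps)   # 0 -- the division is exact
```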
### Model Sources [optional]
- **Repository:** maybe someday
## Uses
This model was intended for Meme++ character card generation; this checkpoint was trained as a small demo.
### Direct Use
Random Meme++ card generation.
### Out-of-Scope Use
Anything CSAM-related.
## Bias, Risks, and Limitations
This model was trained on a randomly scraped dataset. I filtered it as much as I could automatically, but it might still try to generate kids because people are fucking weirdos.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
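No official snippet is provided yet. The sketch below is one hypothetical way to load the model with `transformers`, assuming the `lit-llama` checkpoint has been converted to Hugging Face format and published under the repo id `mrsteyk/memepp-llama` (taken from the W&B project name, not confirmed by this card):

```python
def generate_card(prompt: str, max_new_tokens: int = 128) -> str:
    """Sample a Meme++-style card continuation from the model."""
    # Lazy import so this helper can be defined without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mrsteyk/memepp-llama"  # assumed repo id, not confirmed by the card
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.8,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Sampling settings (`temperature`, `max_new_tokens`) are illustrative defaults, not recommendations from the author.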
## Training Details
### Training Data
Meme++ character definitions taken off the internet.
### Training Procedure
This was trained using `lit-llama`-based model code and a `pytorch-lightning` CLI-based trainer.
#### Training Hyperparameters
- **Training regime:** fp32
- **Optimizer and LR:** DeepSpeed FusedAdamW with an LR of 1e-5
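For illustration only, a hypothetical `pytorch-lightning` CLI config consistent with the hyperparameters above (fp32 regime, DeepSpeed FusedAdam in AdamW mode, LR 1e-5); the author's actual config is not published:

```yaml
# Hypothetical LightningCLI config -- NOT the author's actual file.
trainer:
  precision: 32          # fp32 regime
  strategy: deepspeed
optimizer:
  class_path: deepspeed.ops.adam.FusedAdam
  init_args:
    lr: 1.0e-5
    adam_w_mode: true    # FusedAdam behaves as AdamW
```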
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
[W&B run](https://wandb.ai/mrsteyk/memepp-llama/runs/44e3aut4)
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 1050 Ti Mobile
- **Hours used:** ~6
- **Cloud Provider:** Local Machine(C)(TM)
- **Compute Region:** RU
- **Carbon Emitted:** ~~450kg~~
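The struck-through figure above is a joke; a back-of-envelope estimate for ~6 hours on a 1050 Ti Mobile (assuming ~75 W board power and ~0.35 kg CO2eq/kWh grid intensity, both assumptions rather than measurements) lands well under 1 kg:

```python
# All constants below are assumptions, not measured values.
gpu_power_w = 75        # rough board power of a 1050 Ti Mobile
hours = 6               # from the card
grid_kg_per_kwh = 0.35  # assumed carbon intensity of the local grid

energy_kwh = gpu_power_w / 1000 * hours
emissions_kg = energy_kwh * grid_kg_per_kwh
print(f"{emissions_kg:.2f} kg CO2eq")  # well under 1 kg either way
```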
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]