---
language:
- en
tags:
- llama
- privacy policy
- terms of service
- fine-tuned
license: apache-2.0
datasets:
- CodeHima/app350_llama_format
base_model:
- unsloth/Llama-3.2-1B-Instruct
pipeline_tag: text-classification
library_name: adapter-transformers
---
# Llama_TOS: Fine-tuned Llama 3.2 1B for Privacy Policy and Terms of Service Analysis
## Model Description
This model is a fine-tuned version of the Llama 3.2 1B model, specifically trained to analyze privacy policies and terms of service. It can determine if clauses are fair or unfair and identify specific privacy practices mentioned in the text.
## Intended Use
This model is designed for:
- Analyzing privacy policy clauses
- Identifying fairness in terms of service
- Recognizing specific privacy practices in legal documents
## Training Procedure
The model was fine-tuned on the CodeHima/app350_llama_format dataset, which contains annotated conversations about privacy policy clauses. The fine-tuning process used the following parameters:
- Base model: unsloth/Llama-3.2-1B-Instruct
- Training steps: 100
- Learning rate: 2e-4
- Batch size: 2
- Gradient accumulation steps: 4
- Max sequence length: 2048
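For reference, the hyperparameters above can be gathered into a single configuration object. Note that the effective batch size works out to batch size × gradient accumulation steps = 2 × 4 = 8. This is a minimal sketch using a plain dictionary; the key names are illustrative and not tied to any particular trainer API:

```python
# Hyperparameters from the fine-tuning run above (key names are illustrative).
training_config = {
    "base_model": "unsloth/Llama-3.2-1B-Instruct",
    "max_steps": 100,
    "learning_rate": 2e-4,
    "per_device_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "max_seq_length": 2048,
}

# The effective batch size is the per-device batch size
# multiplied by the number of gradient accumulation steps.
effective_batch_size = (
    training_config["per_device_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(effective_batch_size)  # 8
```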
## Limitations
- The model's performance is limited by the size and quality of the fine-tuning dataset.
- It may not generalize well to privacy policies or terms of service that significantly differ from those in the training data.
- The model should not be considered a replacement for legal advice or professional analysis.
## Ethical Considerations
- This model should be used as a tool to assist in understanding privacy policies and terms of service, not as a definitive legal interpreter.
- Users should be aware of potential biases in the model's responses and always verify important information.
## How to Use
You can use this model to analyze privacy policy clauses or terms of service. Here's an example of how to use it:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("CodeHima/Llama_TOS")
model = AutoModelForCausalLM.from_pretrained("CodeHima/Llama_TOS")

# Ask the model to analyze a sample clause
prompt = "Analyze this privacy policy clause: 'We collect your email address for marketing purposes.'"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)

# Decode the generated tokens, dropping special tokens from the output
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
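Since the model generates free-form text rather than a structured label, the output can be post-processed to pull out a fairness verdict. The helper below is an illustrative assumption, not part of the model's API; the model card does not define an output schema, so in practice you should adapt the keywords to the responses you actually observe:

```python
import re


def extract_fairness_label(generated_text: str) -> str:
    """Return 'fair', 'unfair', or 'unknown' based on the first matching
    keyword in the model's generated analysis.

    Illustrative post-processing helper; adjust to the model's actual output.
    'unfair' is listed first in the alternation so it is not shadowed by 'fair'.
    """
    match = re.search(r"\b(unfair|fair)\b", generated_text, flags=re.IGNORECASE)
    return match.group(1).lower() if match else "unknown"


print(extract_fairness_label("This clause is unfair because ..."))  # unfair
```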