---
license: cc-by-sa-4.0
datasets:
- TheSkullery/Aether-Lite-v1.8.1
language:
- en
base_model:
- elinas/Llama-3-15B-Instruct-zeroed
library_name: transformers
---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>L3-Aethora-15B v2 Data Card</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<style>
body, html {
height: 100%;
margin: 0;
padding: 0;
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1128 0%, #1c2541 100%);
color: #e0e1dd;
font-size: 16px;
}
.container {
width: 100%;
height: 100%;
padding: 20px;
margin: 0;
background-color: rgba(255, 255, 255, 0.05);
border-radius: 12px;
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.3);
backdrop-filter: blur(10px);
border: 1px solid rgba(255, 255, 255, 0.1);
}
.header h1 {
font-size: 28px;
color: #4cc9f0;
margin: 0 0 20px 0;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3);
}
.update-section h2 {
font-size: 24px;
color: #7209b7;
}
.update-section p {
font-size: 16px;
line-height: 1.6;
color: #e0e1dd;
}
.info img {
width: 100%;
border-radius: 10px;
margin-bottom: 15px;
}
a {
color: #4cc9f0;
text-decoration: none;
}
a:hover {
color: #f72585;
}
.button {
display: inline-block;
background-color: #3a0ca3;
color: #e0e1dd;
padding: 10px 20px;
border-radius: 5px;
cursor: pointer;
text-decoration: none;
}
.button:hover {
background-color: #7209b7;
}
pre {
background-color: #1c2541;
padding: 10px;
border-radius: 5px;
overflow-x: auto;
}
code {
font-family: 'Courier New', monospace;
color: #e0e1dd;
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>L3-Aethora-15B v2</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/yJpwVd5UTnAVDoEPVVCS1.png" alt="L3-Aethora-15B v2">
<h2>Presented by:</h2>
<p><strong>Creators:</strong> <a href="https://huggingface.co/ZeusLabs" target="_blank">ZeusLabs</a></p>
<ul>
    <li><a href="https://huggingface.co/steelskull" target="_blank">Steelskull</a></li>
    <li><a href="https://huggingface.co/elinas" target="_blank">Elinas</a></li>
</ul>
<p><strong>Dataset:</strong> <a href="https://huggingface.co/datasets/TheSkullery/Aether-Lite-V1.8.1" target="_blank">TheSkullery/Aether-Lite-V1.8.1</a></p>
<p><strong>Trained:</strong> 4 x A100 for 17.5 hours on 125k samples</p>
<p><strong>Sponsored by:</strong> Garg (@g4rg)</p>
<h2>About L3-Aethora-15B v2:</h2>
<pre><code> L3 = Llama3 </code></pre>
<p>L3-Aethora-15B v2 is an advanced language model built upon the Llama 3 architecture. It was fine-tuned with LoRA on the curated Aether-Lite-V1.8.1 dataset to deliver enhanced performance across a wide range of tasks, with a particular focus on creative writing.</p>
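<p>A minimal sketch of loading the model with Hugging Face <code>transformers</code>. The repo id below is assumed from this card's title; check the model page for the exact id:</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ZeusLabs/L3-Aethora-15B-V2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 training precision listed below
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short story about a lighthouse keeper."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
</code></pre>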
<h4>Quants:</h4>
<ul>
<li>@Mradermacher: <a href="https://huggingface.co/mradermacher/L3-Aethora-15B-V2-GGUF" target="_blank">L3-Aethora-15B-V2-GGUF</a> (a usage sketch follows below)</li>
</ul>
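<p>For the GGUF quants linked above, a sketch using <code>llama-cpp-python</code>. The quant filename here is hypothetical; pick an actual file from the linked repo:</p>
<pre><code>from llama_cpp import Llama

llm = Llama(
    model_path="L3-Aethora-15B-V2.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # matches the training sequence length
    n_gpu_layers=-1,  # offload all layers to GPU if available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
</code></pre>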
<h2>Training Process:</h2>
<ul>
<li>Base Model: elinas/Llama-3-15B-Instruct-zeroed</li>
<li>Training Duration: 17.5 hours on 4 x A100 GPUs</li>
<li>Training Method: LoRA (Low-Rank Adaptation); a configuration sketch follows this list</li>
<li>Epochs: 4</li>
<li>Precision: BF16</li>
<li>Sequence Length: 8192 tokens</li>
</ul>
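<p>A configuration sketch of the LoRA setup implied by the list above (BF16, 8192-token sequences, 4 epochs). The rank, alpha, dropout, and target modules are not stated on this card and are illustrative assumptions only:</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "elinas/Llama-3-15B-Instruct-zeroed",
    torch_dtype=torch.bfloat16,  # BF16 precision, as listed above
)
lora = LoraConfig(
    r=32,                 # assumed rank
    lora_alpha=32,        # assumed scaling factor
    lora_dropout=0.05,    # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
</code></pre>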
<h2>Model Capabilities:</h2>
<p>The goal of L3-Aethora-15B v2 is expanded proficiency across a wide spectrum of tasks, with a focus on creative writing:</p>
<ul>
<li><strong>Creative Writing and Storytelling:</strong>
<ul>
<li>Generates engaging narratives, poetry, and creative content</li>
<li>Adapts writing style to various genres and tones</li>
<li>Assists in plot development and character creation</li>
</ul>
</li>
<li><strong>General Intelligence:</strong>
<ul>
<li>Engages in detailed discussions on medical topics and scientific concepts</li>
<li>Explains complex scientific phenomena</li>
<li>Assists in literature review and hypothesis generation</li>
</ul>
</li>
<li><strong>Instructional and Educational Content:</strong>
<ul>
<li>Creates comprehensive tutorials and how-to guides</li>
<li>Explains complex topics with clarity and appropriate depth</li>
<li>Generates educational materials for various skill levels</li>
</ul>
</li>
<li><strong>Reasoning and Problem-Solving:</strong>
<ul>
<li>Analyzes complex scenarios and provides logical solutions</li>
<li>Engages in step-by-step problem-solving across various domains</li>
<li>Offers multiple perspectives on challenging issues</li>
</ul>
</li>
<li><strong>Contextual Understanding and Adaptability:</strong>
<ul>
<li>Maintains coherent, context-aware conversations across extended interactions</li>
<li>Adapts communication style based on the user's preferences and needs</li>
<li>Handles nuanced queries with appropriate depth and sensitivity</li>
</ul>
        </li>
    </ul>
<h2>Dataset Creation Process:</h2>
<p>The Aether-Lite-V1.8.1 dataset used for training L3-Aethora-15B v2 underwent a rigorous creation and curation process:</p>
<ol>
<li><strong>Data Collection:</strong> Aggregated from 12 diverse high-quality datasets, including:
<ul>
<li>jondurbin/airoboros-3.2</li>
<li>jtatman/medical-sci-instruct-100k-sharegpt</li>
<li>Doctor-Shotgun/no-robots-sharegpt</li>
<li>QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT</li>
<li>TheSkullery/WizardLM_evol_instruct_v2_Filtered_Fuzzy_Dedup_ShareGPT</li>
<li>TheSkullery/Gryphe-Opus-WritingPrompts-merged</li>
<li>Alignment-Lab-AI/RPGuild-sharegpt-filtered</li>
<li>And others, providing a rich mix of instruction, creative writing, and specialized knowledge</li>
</ul>
</li>
<li><strong>Data Preprocessing:</strong>
<ul>
<li>Language Detection: Utilized a FastText language model to ensure English-language content</li>
<li>Text Sanitization: Cleaned and normalized text, removing or replacing problematic characters</li>
<li>Phrase Filtering: Removed specific unwanted phrases and content types</li>
</ul>
</li>
<li><strong>Deduplication:</strong>
<ul>
<li>Implemented advanced fuzzy deduplication with a 95% similarity threshold (see the sketch after this list)</li>
<li>Utilized text embeddings and cosine similarity calculations for efficient comparison</li>
<li>Removed 16,250 duplicate entries, ensuring dataset uniqueness</li>
</ul>
</li>
<li><strong>Data Balancing:</strong>
<ul>
<li>Carefully sampled from each source dataset to maintain diversity</li>
<li>Implemented data shuffling to ensure random distribution of samples</li>
</ul>
    </li>
</ol>
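<p>A minimal sketch of the fuzzy-deduplication pass described above: embed each sample, then drop any entry whose cosine similarity to an already-kept entry meets the 95% threshold. The embedding model named here is an assumption, as the card does not specify which one was used:</p>
<pre><code>import numpy as np
from sentence_transformers import SentenceTransformer

def fuzzy_dedup(samples, threshold=0.95):
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedder
    emb = model.encode(samples, normalize_embeddings=True)
    kept_idx, kept_emb = [], []
    for i, e in enumerate(emb):
        # on unit-normalized vectors, cosine similarity is a dot product
        if kept_emb and np.max(np.stack(kept_emb) @ e) >= threshold:
            continue  # near-duplicate of a sample we already kept
        kept_idx.append(i)
        kept_emb.append(e)
    return [samples[i] for i in kept_idx]

print(fuzzy_dedup(["The sky is blue.", "The sky is blue!", "Cats purr."]))
</code></pre>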
<p>The final dataset comprises 125,119 high-quality, diverse samples, striking a balance between creativity, practical knowledge, and intellectual depth.</p>
<p>The full dataset has been released to the public and is available to all (see the dataset link in the "Presented by" section above). Ideas and recommendations for expanding the dataset further are always welcome.</p>
</div>
</div>
</body>
</html>