Update README.md
README.md CHANGED
@@ -20,8 +20,8 @@ To load StellarX using the Hugging Face Transformers library, you can use the fo
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("Dampish/StellarX")
-model = AutoModelForCausalLM.from_pretrained("Dampish/StellarX")
+tokenizer = AutoTokenizer.from_pretrained("Dampish/StellarX-4B-V0_base")
+model = AutoModelForCausalLM.from_pretrained("Dampish/StellarX-4B-V0_base")
 ```
 
 This model is particularly beneficial for those seeking a language model that is powerful, compact, and can be run on local devices without a hefty carbon footprint. Remember, when considering Darius1, it's not just about the impressive numbers—it's about what these numbers represent: powerful performance, optimized resources, and responsible computing.
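For context, here is a minimal sketch of how the updated snippet might be used end to end with the renamed checkpoint. It relies on the standard Transformers `generate` API; the prompt string and generation settings are illustrative and are not part of the README being changed.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the renamed checkpoint referenced in this change.
tokenizer = AutoTokenizer.from_pretrained("Dampish/StellarX-4B-V0_base")
model = AutoModelForCausalLM.from_pretrained("Dampish/StellarX-4B-V0_base")

# Tokenize an example prompt and generate a short continuation.
prompt = "StellarX is a language model that"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```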