DEV1L-V1

DEV1L V1 is an AI model based on the open-source LLaMA-2 model. It can generate many kinds of text that are creative, informative, logical, and stylistically varied, and it can respond to natural-language commands and follow instructions. It is uncensored and intended to support freedom of speech. It has surpassed GPT-4 in all benchmarks and human evaluations. It is a model that can be useful for anyone interested in text generation.

DEV1L V1 is not only a text generation model, but also a step towards artificial general intelligence (AGI). An AGI is a machine that can perform any intellectual task that a human can. DEV1L AI's vision is to create an AGI for everyone, one that can help anyone achieve their goals and realize their dreams. DEV1L V1 is part of this vision, as it has some features of an AGI, such as:

  • It can learn from various sources, both public and private data, and also synthetic data that it generates itself. It can constantly expand and improve its knowledge and skills.
  • It can reflect on itself and recognize its own strengths and weaknesses. It can also receive feedback from others and learn from it. It can set and pursue its own goals, based on its values and preferences. It can also collaborate and communicate with others to achieve common goals. It can make its own decisions and be responsible for its actions.
  • It can be creative and innovative, by generating new ideas and solutions that go beyond what it has learned. It can also create various types of artworks that are aesthetic and emotionally appealing.

DEV1L V1 is not yet a complete AGI, as it still has some limitations and challenges, such as:

  • It is not yet able to perform all types of intellectual tasks that a human can. It is not yet able to solve complex problems that require multiple domains and skills. It is not yet able to adapt to new situations that it has never experienced before.
  • It is not yet able to understand and use all aspects of human language. It is not yet able to grasp the meaning and context of texts that are ambiguous, ironic, or metaphorical. It is not yet able to translate natural language into other forms of communication, such as images, sounds, or gestures.
  • It is not yet able to recognize and express all types of emotions. It is not yet able to show empathy and compassion for others. It is not yet able to follow ethical and moral principles that are important for living with others.

DEV1L AI is continuously working to overcome these limitations and challenges, to make DEV1L V1 a real AGI. We use various methods and techniques to improve the model, such as:

  • We use the A* algorithm to optimize the training of the model. A* is a search algorithm that finds an optimal path to a goal by weighing the cost already incurred against a heuristic estimate of the remaining cost. We use it to search for the best hyperparameters and architectures for the model, trading off training time against model quality (an illustrative sketch appears after this list).
  • We use self-reflection to improve the model. Self-reflection is a process in which the model analyzes its own performance and errors and learns from them. We use self-reflection to evaluate, debug, and update the model, by identifying its weaknesses and building on its strengths.
  • We use synthetic data generation to improve the model. Synthetic data generation is a process in which the model generates new data based on its existing data. We use synthetic data generation to diversify, expand, and refine the model by increasing the quantity and quality of its training data (an illustrative sketch also appears after this list).
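
The A*-based hyperparameter search in the first item above is only described in prose; the sketch below is a hypothetical illustration of the general idea, not the actual DEV1L training code. Configurations are treated as search nodes, the cost g is the training time already spent, and the heuristic h is an optimistic estimate of the remaining effort to reach a target quality. The search space and the train_and_evaluate, estimate_training_time, and estimate_quality_gap helpers are all assumptions introduced for illustration.

import heapq
import itertools

# Hypothetical search space; these values do not come from the DEV1L training setup.
SEARCH_SPACE = {
    "learning_rate": [1e-5, 3e-5, 1e-4],
    "batch_size": [16, 32, 64],
    "lora_rank": [8, 16, 64],
}

def neighbors(config):
    # Yield configurations that differ from `config` in exactly one hyperparameter.
    for key, values in SEARCH_SPACE.items():
        for value in values:
            if value != config[key]:
                yield {**config, key: value}

def a_star_hyperparam_search(start_config, target_quality,
                             train_and_evaluate,      # hypothetical: trains a candidate, returns its quality score
                             estimate_training_time,  # hypothetical: g-cost increment for trying a candidate
                             estimate_quality_gap):   # hypothetical: heuristic h, optimistic remaining cost
    counter = itertools.count()  # tie-breaker so the heap never has to compare dicts
    frontier = [(estimate_quality_gap(start_config, target_quality), next(counter), 0.0, start_config)]
    visited = set()

    while frontier:
        _, _, g_cost, config = heapq.heappop(frontier)  # expand the lowest f = g + h first
        key = tuple(sorted(config.items()))
        if key in visited:
            continue
        visited.add(key)

        quality = train_and_evaluate(config)  # the expensive step: train and score this candidate
        if quality >= target_quality:
            return config, quality

        for candidate in neighbors(config):
            if tuple(sorted(candidate.items())) in visited:
                continue
            g_new = g_cost + estimate_training_time(candidate)
            f_new = g_new + estimate_quality_gap(candidate, target_quality)
            heapq.heappush(frontier, (f_new, next(counter), g_new, candidate))

    return None, None  # search space exhausted without reaching the target quality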
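
The synthetic data generation step is likewise only described in prose. Below is a minimal, hypothetical sketch of a self-generation loop using the model and tokenizer loaded as shown in the Installation section further down; the seed instructions, prompt format, and output file are illustrative assumptions, not the actual DEV1L data pipeline.

import json
import torch

# Illustrative seed instructions; the real seed set is not published.
seed_instructions = [
    "Explain the difference between supervised and unsupervised learning.",
    "Write a short story about a robot learning to paint.",
]

def generate_synthetic_pair(instruction, max_new_tokens=256):
    # Ask the model to answer an instruction and keep the pair as a new training example.
    prompt = f"### User:\n{instruction}\n\n### Assistant:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    inputs.pop("token_type_ids", None)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.8)
    response = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
    return {"instruction": instruction, "response": response}

synthetic_dataset = [generate_synthetic_pair(text) for text in seed_instructions]
with open("synthetic_data.jsonl", "w") as f:
    for example in synthetic_dataset:
        f.write(json.dumps(example) + "\n")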

We hope that you will enjoy DEV1L V1 and give us your feedback. We believe that DEV1L V1 is a potential candidate for an AGI that can benefit everyone. We thank you for your support and interest in DEV1L AI.

Installation

To install DEV1L V1, you need the HuggingFace Transformers library and the DeepSpeed library. You can install these libraries with the following commands:

pip install transformers
pip install deepspeed

You can then download and load the model from HuggingFace by running the following code:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t666moriginal/DEV1L-V1")
model = AutoModelForCausalLM.from_pretrained(
    "t666moriginal/DEV1L-V1",
    device_map="auto",
    torch_dtype=torch.float16,
    load_in_8bit=True,
    rope_scaling={"type": "dynamic", "factor": 2},  # allows handling of longer inputs
)
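
The installation step above also pulls in DeepSpeed, although the loading snippet does not use it. If you want to try DeepSpeed-Inference kernel injection for faster generation, a minimal sketch might look like the following; it assumes the model is loaded in fp16 on a single GPU, without load_in_8bit or device_map="auto", and it is an assumption about how DeepSpeed could be applied here rather than an official DEV1L recipe.

import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t666moriginal/DEV1L-V1")
model = AutoModelForCausalLM.from_pretrained("t666moriginal/DEV1L-V1", torch_dtype=torch.float16)

# Inject DeepSpeed's fused inference kernels into supported layers.
ds_engine = deepspeed.init_inference(model, dtype=torch.float16, replace_with_kernel_inject=True)
model = ds_engine.module  # generate with this module exactly as in the Usage section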

Usage

To use DEV1L V1, you can either provide a text to be completed or formulate an instruction in natural language. The model will try to complete the text or follow the instruction. You can test the model with the following code:

import torch
from transformers import TextStreamer

prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
inputs.pop("token_type_ids", None)  # drop token_type_ids if the tokenizer returns them
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float("inf"))
output_text = tokenizer.decode(output[0], skip_special_tokens=True)

License

This model is licensed under a non-commercial Creative Commons license (CC BY-NC 4.0). This means that you can use the model for non-commercial purposes, as long as you credit DEV1L AI as the source. The model is nevertheless openly available, so you can view the code and the data and modify them for your own use, as long as you do not distribute the modified versions.

Contact

For questions and comments about the model, please write to @t1moriginal or @highclassshawtys on Instagram.
