This model is Awesome

#20
by areumtecnologia - opened

You can say what you want, but among all the most widely used models, I haven't yet found one as effective. Qwen1.5-7B-Chat works well with bitsandbytes (Q4 and Q8) while still maintaining incredible attention. I have tested several models in my projects, but Qwen1.5-7B-Chat is unbeatable. Its best feature is that it understands the user's language from the first interaction, something other latest-generation models only do when explicitly asked. Congratulations and greetings to the Qwen development team. I hope this team keeps improving the model and keeps it open source.
To contribute, here is some code that makes it easier to use different models, including Qwen:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

class LLMQ4:
    # Loads about 6.188 GB of VRAM with 4-bit (NF4) quantization
    def __init__(self, repo_id):
        # Setting this allocator option can help avoid CUDA memory fragmentation
        # os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'expandable_segments:True'
        model_name = repo_id
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.tokenizer, self.model = self.initialize_model(model_name)

    def initialize_model(self, model_name):
        # 4-bit NF4 quantization with double quantization, computing in bfloat16
        bnb_config = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.bfloat16,
        )

        # Tokenizer and model initialization
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(
            model_name,
            device_map=self.device,
            torch_dtype=torch.bfloat16,
            quantization_config=bnb_config,
        )
        return tokenizer, model

    def prompt(self, context):
        # context must be a list of {"role": ..., "content": ...} messages
        text = self.tokenizer.apply_chat_template(
            context,
            tokenize=False,
            add_generation_prompt=True
        )
        model_inputs = self.tokenizer([text], return_tensors="pt").to(self.device)
        generated_ids = self.model.generate(
            model_inputs.input_ids,
            attention_mask=model_inputs['attention_mask'],
            max_new_tokens=38000,  # generous upper bound; generation normally stops at EOS first
            do_sample=True,
            temperature=0.6,
            top_p=0.9,
            repetition_penalty=1.1,
            eos_token_id=[
                self.tokenizer.eos_token_id,
            ],
        )
        # Keep only the newly generated tokens, dropping the prompt
        generated_ids = [
            output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
        ]
        response = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

        return response
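The post also mentions Q8. A minimal sketch of the 8-bit variant (assuming the same transformers/bitsandbytes stack, with the rest of the class unchanged) would only swap the quantization config inside initialize_model:

from transformers import BitsAndBytesConfig

# Hypothetical Q8 counterpart to the NF4 config above: 8-bit weights use
# roughly twice the memory of 4-bit, but can preserve a bit more quality.
bnb_config_q8 = BitsAndBytesConfig(load_in_8bit=True)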

Usage:

llm = LLMQ4('Qwen/Qwen1.5-7B-Chat')
chat_history = [
    # Personality system prompt
    {"role": "system", "content": "Your name is Jarvis."},
    {"role": "user", "content": "What is your name?"},
]
response = llm.prompt(chat_history)

chat_history.append({"role": "assistant", "content": response})
chat_history.append({"role": "user", "content": "Do a search on the internet about..."})
response = llm.prompt(chat_history)

...
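For ongoing conversations, a minimal interactive loop over the same class could look like the sketch below (the loop, input prompt, and exit words are my own additions, not from the original post):

llm = LLMQ4('Qwen/Qwen1.5-7B-Chat')
chat_history = [{"role": "system", "content": "Your name is Jarvis."}]

while True:
    user_input = input("> ")
    if user_input.lower() in ("quit", "exit"):
        break
    # Append the user turn, generate, then append the reply so context accumulates
    chat_history.append({"role": "user", "content": user_input})
    response = llm.prompt(chat_history)
    chat_history.append({"role": "assistant", "content": response})
    print(response)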

Qwen org

Thanks for your appreciation. Yeah, we are going to release new stuff, way better than 1.5, for sure.


Really excited about that, any idea when (a month or more)?

Update: The new Qwen2 is awesome!


7B available?

