Official examples and training code for the Aquila series: https://github.com/FlagAI-Open/FlagAI/tree/master/examples/Aquila
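
Quickstart with Hugging Face transformers (fp16 inference on a single GPU):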

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('qhduan/aquilachat-7b')
model = AutoModelForCausalLM.from_pretrained('qhduan/aquilachat-7b', trust_remote_code=True)
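# Evaluation mode, fp16 weights, run on the GPU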
model = model.eval().half().cuda()

question = '北京为什么是中国的首都?'
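# question: 'Why is Beijing the capital of China?'
# Build the AquilaChat prompt: a system preamble followed by ###Human/###Assistant turns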
prompt = (
    '''A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.'''
    f'''###Human: {question}###Assistant:'''
)
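# Greedy decoding (do_sample=False); gradients are not needed at inference time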
with torch.no_grad():
    ret = model.generate(
        **tokenizer(prompt, return_tensors='pt').to('cuda'),
        do_sample=False,
        max_new_tokens=200,
        use_cache=True
    )
    output_ids = ret[0].tolist()
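    # Truncate at the first end-of-text token (100007) or, failing that, the padding token (0)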
    if 100007 in output_ids:
        output_ids = output_ids[:output_ids.index(100007)]
    elif 0 in output_ids:
        output_ids = output_ids[:output_ids.index(0)]
    # Expected output: 北京之所以成为中国的首都,是因为它在中国历史和文化中的重要地位和政治、经济、文化等方面的影响力。
    # ("Beijing became the capital of China because of its important position in Chinese history
    # and culture, and its influence in politics, economy, culture, and other areas.")
    print(tokenizer.decode(output_ids))
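
For repeated queries, the prompt template and truncation logic above can be wrapped in a small helper. A minimal sketch, reusing the model and tokenizer loaded above; the chat function, its defaults, and the answer-only slicing are our own additions, not part of the model's API:

def chat(question, max_new_tokens=200):
    # Same AquilaChat template as above: system preamble + ###Human/###Assistant turns
    prompt = (
        '''A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.'''
        f'''###Human: {question}###Assistant:'''
    )
    inputs = tokenizer(prompt, return_tensors='pt').to('cuda')
    with torch.no_grad():
        ret = model.generate(
            **inputs,
            do_sample=False,
            max_new_tokens=max_new_tokens,
            use_cache=True
        )
    # Keep only the newly generated tokens, dropping the prompt
    output_ids = ret[0].tolist()[inputs['input_ids'].shape[1]:]
    # Truncate at the first end-of-text (100007) or padding (0) token
    for stop_id in (100007, 0):
        if stop_id in output_ids:
            output_ids = output_ids[:output_ids.index(stop_id)]
            break
    return tokenizer.decode(output_ids)

print(chat('北京为什么是中国的首都?'))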

The open-source Aquila-7B and Aquila-33B models are released under the BAAI Aquila Series Model License Agreement; the original code is based on Apache License 2.0.
