---
license: bigscience-openrail-m
datasets:
- laion/Anh
library_name: transformers
pipeline_tag: text-generation
tags:
- pytorch
- causal-lm
- multilingual
- instruct
- bloomz
---

### Model description

This model is [`bloomz-7b1-mt`](https://huggingface.co/bigscience/bloomz-7b1-mt) finetuned on the instruction dataset `cross_lingual.jsonl` from [`laion/Anh`](https://huggingface.co/datasets/laion/Anh).

### How to use

The `anh-bloomz-7b1-mt-cross-lingual` model can be loaded and used with the following code:

```python
import re

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "laion/anh-bloomz-7b1-mt-cross-lingual"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The model expects whitespace in the prompt to be replaced by placeholder tokens.
# NOTE: the placeholder values were dropped from this copy of the card; '<n>' and '<w>'
# below are assumptions -- check the tokenizer vocabulary if outputs look wrong.
whitespace_tokens_map = {'\n': '<n>', ' ': '<w>'}

# Indonesian prompt: "Will we be able to cure cancer? Answer in Chinese."
text = "User: Apakah kita akan bisa menyembuhkan penyakit kanker? Jawab dalam bahasa China.\n"

# Encode whitespace as placeholder tokens before generation.
for k, v in whitespace_tokens_map.items():
    text = text.replace(k, v)

inputs = tokenizer(text, return_tensors="pt")
tokens = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_k=40, top_p=0.9,
                        temperature=0.2, repetition_penalty=1.2, num_return_sequences=1)
output = tokenizer.decode(tokens[0], skip_special_tokens=True)

# Remove any whitespace generated after the placeholder tokens,
# then map the placeholders back to real whitespace.
for v in whitespace_tokens_map.values():
    output = re.sub(rf"{v}\s+(\S+)", rf"{v}\1", output)
for k, v in whitespace_tokens_map.items():
    output = output.replace(v, k)

print(output)
```
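To see the prompt/response format the model was finetuned on, the `cross_lingual.jsonl` split can be loaded directly from the dataset repository. A minimal sketch, assuming the file sits at the root of the `laion/Anh` repository (the exact layout may differ):

```python
from datasets import load_dataset

# Assumption: cross_lingual.jsonl is stored at the top level of the laion/Anh dataset repo.
dataset = load_dataset("laion/Anh", data_files="cross_lingual.jsonl", split="train")

# Print a few raw examples to inspect the instruction format used for finetuning.
for example in dataset.select(range(3)):
    print(example)
```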