In addition to the 'text-generation' task, can Falcon be used for other tasks like summarization, QA, etc.?
#37
opened by VS9205
In addition to the 'text-generation' task, it seems Falcon can't be used for other tasks like summarization, QA, etc. I tried to change the task in the code below:
```python
pipeline = transformers.pipeline(
    "summarization",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
```
but I get the following error:

```
ValueError: Could not load model tiiuae/falcon-7b-instruct with any of the following classes:
(<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>,
 <class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSeq2SeqLM'>).
```
The way HuggingFace implements these tasks does not allow Falcon to be used for them: they require seq2seq models, while Falcon is a causal decoder-only model.
FalconLLM changed discussion status to closed
You can do summarization by using the text-generation pipeline and providing a prompt like:

```
Document: "{the text to summarize}"
Summary:
```
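A minimal sketch of that approach, reusing the text-generation pipeline from the snippet above; `build_summary_prompt` is a hypothetical helper, not part of `transformers`:

```python
# Summarization via the "text-generation" pipeline: wrap the document
# in the suggested prompt and let the model continue after "Summary:".

def build_summary_prompt(text: str) -> str:
    """Wrap the text to summarize in the prompt template suggested above."""
    return f'Document: "{text}"\nSummary:'

# With the pipeline constructed for "text-generation" (not "summarization"),
# generation would look roughly like:
#
#   sequences = pipeline(
#       build_summary_prompt(long_text),
#       max_new_tokens=128,
#       do_sample=False,
#       return_full_text=False,  # keep only the generated summary
#   )
#   summary = sequences[0]["generated_text"].strip()

print(build_summary_prompt("Falcon is a causal decoder-only model."))
```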
I have to say, from my experience:
- It performs poorly at text summarization, especially in other languages.
- It is very difficult to give it context about what kind of results you want, e.g. "You are an expert {X}; return notes as bullet points." or "Expert {X} Notes:"