
Dataset Card for "dolly_context_enfr"

This is a filtered version of databricks-dolly-15k, translated to French with the DeepL Pro API, which we consider the best translation solution on the market.
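As a rough illustration, the translation step can be sketched against DeepL's v2 REST API using only the standard library. This is a minimal, hypothetical sketch, not the exact script used for this dataset; the endpoint and response shape follow DeepL's public API, and the `DEEPL_AUTH_KEY` environment variable is an assumption.

```python
import json
import os
import urllib.parse
import urllib.request

API_URL = "https://api.deepl.com/v2/translate"  # DeepL Pro endpoint

def build_payload(text: str, target_lang: str = "FR") -> dict:
    """Form fields for one DeepL translate call."""
    return {"text": text, "target_lang": target_lang}

def translate(text: str, auth_key: str, target_lang: str = "FR") -> str:
    """Translate `text` with the DeepL v2 REST API and return the translation."""
    data = urllib.parse.urlencode(build_payload(text, target_lang)).encode()
    req = urllib.request.Request(
        API_URL,
        data=data,
        headers={"Authorization": f"DeepL-Auth-Key {auth_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["translations"][0]["text"]

if __name__ == "__main__":
    key = os.environ.get("DEEPL_AUTH_KEY", "")
    if key:
        print(translate("The quick brown fox.", key))
```

In practice each sample's instruction, context, and response fields would be passed through this call (or batched, since the API accepts multiple `text` parameters per request).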

Our goal is to gather French question-answering data where the answer is grounded in a given context: the model should not introduce information absent from that context. The aim is to limit hallucination. The filtering was done in four steps:

  • We keep only samples with a non-empty context (we are not interested in open-ended chat or unsourced information)
  • We drop samples where the answer is more than 1.5 times longer than the context; our review of the data showed that in those cases the information comes from sources other than the context, and/or the answer consists of a copy-paste of the context
  • For long contexts (>1000 characters), we drop samples where the answer is longer than the context (character-wise)
  • We also filter out around 30 samples with an overly long context (>10k characters), answer (>5k characters), or instruction (>5k characters), as these were shown to be malformed
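The four rules above can be condensed into a single predicate. This is a minimal sketch, not the original filtering script; the `instruction`/`context`/`response` field names follow the databricks-dolly-15k schema.

```python
def keep_sample(instruction: str, context: str, response: str) -> bool:
    """Apply the four filtering rules to one Dolly sample."""
    # 1. Non-empty context only.
    if not context.strip():
        return False
    # 2. Answer must not be more than 1.5x the context length.
    if len(response) > 1.5 * len(context):
        return False
    # 3. For long contexts (>1000 chars), answer must not exceed the context.
    if len(context) > 1000 and len(response) > len(context):
        return False
    # 4. Drop malformed oversized samples.
    if len(context) > 10_000 or len(response) > 5_000 or len(instruction) > 5_000:
        return False
    return True
```

A sample is kept only if it passes all four checks, e.g. `keep_sample("Who wrote X?", "Long passage...", "Short answer.")`.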

Our filtered version of the Dolly dataset contains only 3 of the 7 categories. The annotation guidelines for each of these categories were as follows:

  • Closed QA: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task, include both the text of the question as well as the reference text in the form.
  • Summarization: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task, include both the text of the question as well as the reference text in the form.
  • Information Extraction: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords, etc.) should be included in the passages. To create a question for this task, include both the text of the question as well as the reference text in the form.
| Category | Samples |
|---|---|
| closed_qa | 1711 |
| information_extraction | 1377 |
| summarization | 1064 |

Note that we considered the 'brainstorming' and 'classification' data, but they are not suited to our LLM project and are very subjective (as they are not based on a context), so we decided not to use them.

