---
language:
  - az
  - en
  - ru
license: apache-2.0
size_categories:
  - 10K<n<100K
task_categories:
  - multiple-choice
dataset_info:
  features:
    - name: stem
      dtype: string
    - name: stem_image
      dtype: string
    - name: options
      sequence: string
    - name: options_image
      sequence: string
    - name: true_label
      dtype: int64
    - name: subject
      dtype: string
    - name: main_category
      dtype: string
    - name: sub_category
      dtype: string
  splits:
    - name: train
      num_bytes: 252927826
      num_examples: 36915
  download_size: 240795227
  dataset_size: 252927826
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Closed Book Multiple Choice Questions in Azerbaijani

The Az-MCQ dataset is a comprehensive collection of multiple-choice questions designed to support natural language processing research in Azerbaijani. It spans a wide variety of subjects and difficulty levels, making it suitable for training and evaluating large language models on question-answering tasks.
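
The dataset is distributed as parquet files and can be loaded with the Hugging Face `datasets` library. A minimal loading sketch is shown below; the repository id is a placeholder and should be replaced with the actual Hub path of Az-MCQ.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub path of Az-MCQ.
ds = load_dataset("aLLMA-Lab/Az-MCQ", split="train")

print(ds)             # features: stem, stem_image, options, options_image, true_label, ...
print(ds[0]["stem"])  # stem of the first question
```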

## Dataset Structure

Each question follows a typical multiple-choice format, consisting of a stem, a list of options, and the index of the correct answer (`true_label`). The number of options ranges from 2 to 7. Both the stem and the options can be given as text or as images, with the majority being text-based. Images are stored as base64-encoded strings, which provides a consistent way to embed image data in the records.
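
The sketch below shows how a record might be rendered as a plain-text prompt and how a base64-encoded image field could be decoded. Field names follow the schema above; the sample record and helper names are illustrative, and whether `true_label` is 0- or 1-based should be verified against the data.

```python
import base64
import io

from PIL import Image  # Pillow; only needed for image-based questions


def format_question(example: dict) -> str:
    """Render a text-based record as a plain-text prompt."""
    letters = "ABCDEFG"  # questions have between 2 and 7 options
    lines = [example["stem"]]
    for letter, option in zip(letters, example["options"]):
        lines.append(f"{letter}) {option}")
    return "\n".join(lines)


def decode_image(b64_string: str) -> Image.Image:
    """Decode a base64-encoded image field into a PIL image."""
    return Image.open(io.BytesIO(base64.b64decode(b64_string)))


# Hypothetical record following the schema above.
example = {
    "stem": "Azərbaycanın paytaxtı hansı şəhərdir?",
    "options": ["Bakı", "Gəncə", "Sumqayıt", "Şəki"],
    "true_label": 0,  # assumed 0-based; verify the indexing convention against the data
}
print(format_question(example))
print("Correct answer:", "ABCDEFG"[example["true_label"]])
```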

The dataset covers 22 subjects in total, primarily school subjects. Each question is accompanied by metadata fields, namely `subject`, `main_category`, and `sub_category`, which provide further context and organization.
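
For a quick look at how questions are distributed across these metadata fields, the split can be converted to a pandas DataFrame. A sketch, assuming `ds` was loaded as in the example above:

```python
# Assumes `ds` was loaded as in the loading sketch above.
df = ds.to_pandas()

# Questions per subject.
print(df["subject"].value_counts())

# Ten most frequent (main_category, sub_category) pairs.
print(
    df.groupby(["main_category", "sub_category"])
      .size()
      .sort_values(ascending=False)
      .head(10)
)
```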

## Uses

The Az-MCQ dataset is licensed under the Apache-2.0 license and is intended for educational, research, and commercial purposes. Additionally, users are encouraged to cite the dataset appropriately in any publications or works derived from it. Citation information:

@inproceedings{isbarov-etal-2024-open,
    title = "Open foundation models for {A}zerbaijani language",
    author = "Isbarov, Jafar  and
      Huseynova, Kavsar  and
      Mammadov, Elvin  and
      Hajili, Mammad  and
      Ataman, Duygu",
    editor = {Ataman, Duygu  and
      Derin, Mehmet Oguz  and
      Ivanova, Sardana  and
      K{\"o}ksal, Abdullatif  and
      S{\"a}lev{\"a}, Jonne  and
      Zeyrek, Deniz},
    booktitle = "Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand and Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.sigturk-1.2",
    pages = "18--28",
    abstract = "The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most of the production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systemic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) extensive evaluation that covers all major open-source models with Azerbaijani support.",
}

arXiv preprint: https://arxiv.org/abs/2407.02337

## Recommendations

Be aware of potential data quality issues; for example, text-based stems may fail to preserve tables that appeared in the original questions, a side effect of scraping. Use the dataset ethically and responsibly. Account for technical limitations, such as variation in answer formats and in the number of options per question, when preparing the data for training or evaluation.
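
Because the number of options varies between 2 and 7 and some items rely on images, it can help to restrict experiments to a uniform, text-only subset. A hedged sketch, again assuming `ds` from the loading example above:

```python
def is_text_only_four_option(example: dict) -> bool:
    """Keep records with no images and exactly four options."""
    no_images = not example["stem_image"] and not any(example["options_image"] or [])
    return no_images and len(example["options"]) == 4


# Assumes `ds` was loaded as in the loading sketch above.
subset = ds.filter(is_text_only_four_option)
print(f"kept {len(subset)} of {len(ds)} questions")
```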

## Dataset Description

- Curated by: Kavsar Huseynova, Jafar Isbarov
- Funded by: PRODATA LLC
- Shared by: aLLMA Lab
- Languages: Azerbaijani, English, Russian
- License: apache-2.0