ButaBytes 2.0 - The largest NLP corpus for the Azerbaijani language (43M+ sentences)
ButaBytes is designed to support a wide range of NLP tasks. It was collected from 3 million sources and covers a diverse range of genres and topics, including politics, economics, science, culture, sports, history, and society. The documents mix contemporary and historical texts drawn from newspapers, magazines, academic journals, Wikipedia articles, and books, providing a comprehensive linguistic and cultural resource for Azerbaijani NLP technologies.
Corpus Structure
Data Splits
The ButaBytes corpus comprises four main sources (books, wikipedia, news, and sentences) with the following distribution:
| Source Name | Number of Instances | Size (GB) |
|---|---|---|
| sentences.json | 43,755,942 | 10.1 |
| wikipedia.json | 178,836 | 0.64 |
| news.json | 623,964 | 1.37 |
| books.zip | 434 | 0.12 |
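After downloading, the files can be sanity-checked against the table above. A minimal sketch, assuming the four files sit in the current working directory and that each JSON file holds a single top-level array (an assumption, not something documented here):

import json
import os

for name in ["sentences.json", "wikipedia.json", "news.json", "books.zip"]:
    # On-disk size in GiB, for comparison with the Size (GB) column.
    size_gb = os.path.getsize(name) / 1024 ** 3
    print(f"{name}: {size_gb:.2f} GB")

# For a JSON file, the instance count should match the number of
# top-level records (shown here for the smallest subset):
with open("wikipedia.json", "r", encoding="utf-8") as f:
    print(len(json.load(f)))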
Methodology
The ButaBytes corpus was constructed by scraping a wide array of Azerbaijani content to ensure a comprehensive and diverse dataset. Our sources included Azerbaijani news websites known for their popularity and reliability, public documents, books spanning various genres, and a rich selection of user-generated content such as social media posts and blogs. We implemented specialized cleaning techniques tailored to each content type, enhancing the accuracy and consistency of the data across the corpus. This approach guarantees a robust and versatile resource suited for a multitude of NLP applications.
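The exact cleaning pipeline is not published with the corpus; the sketch below only illustrates the kind of per-document normalization and exact-duplicate removal described above, and is not the authors' actual code:

import re

def normalize(text):
    # Collapse runs of whitespace and strip leading/trailing space.
    return re.sub(r"\s+", " ", text).strip()

def deduplicate(documents):
    # Drop exact duplicates while preserving the original order.
    seen = set()
    unique = []
    for doc in documents:
        doc = normalize(doc)
        if doc and doc not in seen:
            seen.add(doc)
            unique.append(doc)
    return unique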
Usage Instructions
To use ButaBytes, first download the relevant files manually to your device.
Reading JSON Files
To read the JSON files from the dataset, use the following function:
import json

def read_local_json(file_path):
    try:
        # The corpus text is assumed to be UTF-8 encoded.
        with open(file_path, 'r', encoding='utf-8') as file:
            data = json.load(file)
        print(f"Successfully loaded JSON data from {file_path}.")
        return data
    except json.JSONDecodeError:
        print("The file is not a valid JSON.")
        return None
    except FileNotFoundError:
        print("The file was not found. Please ensure the file path is correct.")
        return None

# Example usage
file_path = "sentences.json"
data = read_local_json(file_path)
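Note that sentences.json is roughly 10 GB and json.load reads the whole file into memory at once. If that is impractical, and assuming the file is a single top-level JSON array (again an assumption), the third-party ijson package can stream records one at a time:

import ijson

# 'item' selects each element of a top-level JSON array in turn,
# without loading the whole file into memory.
with open("sentences.json", "rb") as f:
    for i, record in enumerate(ijson.items(f, "item")):
        print(record)
        if i >= 4:  # only peek at the first few records
            break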
Converting JSON Data to DataFrame
The same helper can be reused to load the news and Wikipedia subsets into pandas DataFrames:

import pandas as pd

# Load the news articles into a DataFrame
news_df = pd.DataFrame(read_local_json("news.json"))
print(news_df.head())

# Load the Wikipedia articles into a DataFrame
wikipedia_df = pd.DataFrame(read_local_json("wikipedia.json"))
print(wikipedia_df.head())
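If a single table is more convenient, the per-source frames can be combined with an added source column. This is a hypothetical layout; the column names inside each subset are not documented here and may differ between sources:

news_df["source"] = "news"
wikipedia_df["source"] = "wikipedia"
combined_df = pd.concat([news_df, wikipedia_df], ignore_index=True)
print(combined_df.shape)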
Unzipping and Reading Text Files
import glob
import os
import zipfile

import pandas as pd

def unzip_file(zip_path, extract_to):
    # Create the target folder if it does not already exist.
    if not os.path.exists(extract_to):
        os.makedirs(extract_to)
    with zipfile.ZipFile(zip_path, 'r') as zip_ref:
        zip_ref.extractall(extract_to)
    print(f"Extracted all files from {zip_path} to {extract_to}")

# Example usage
zip_path = "books.zip"
extract_to = "books"
unzip_file(zip_path, extract_to)

def read_text_files_into_dataframe(root_folder):
    # Recursively collect every .txt file under root_folder.
    all_text_files = glob.glob(os.path.join(root_folder, '**/*.txt'), recursive=True)
    data = []
    for file_path in all_text_files:
        with open(file_path, 'r', encoding='utf-8') as file:
            content = file.read()
        data.append({
            'file_path': file_path,
            'content': content
        })
    return pd.DataFrame(data)

# Example usage
root_folder = "books"
df = read_text_files_into_dataframe(root_folder)
print(df.head())
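As a quick usage example, simple per-book statistics can be computed directly from this DataFrame. Note this only counts raw characters and whitespace-separated tokens, not linguistically segmented words:

df["num_chars"] = df["content"].str.len()
df["num_tokens"] = df["content"].str.split().str.len()
print(df[["file_path", "num_chars", "num_tokens"]].head())
print(f"Total books: {len(df)}")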
Considerations for Using the Corpus
Social Impact
ButaBytes contributes significantly to the NLP research community by providing a valuable resource for developing text generation tools in Azerbaijani. It not only supports the advancement of language technologies but also promotes linguistic diversity and cultural preservation.
Biases and Limitations
While efforts were made to minimize bias in the corpus, some limitations remain. Users should be cautious when deploying models trained on this data, as inherent biases may affect the performance and fairness of those models.
Corpus Authors
ButaBytes 2.0 was developed by Tifosi AI (formerly AZNLP), a group of dedicated researchers and data scientists focused on advancing artificial intelligence. The team is committed to ethical sourcing and responsible management of the dataset, ensuring it remains a reliable and valuable resource for the community.