---
dataset_info:
features:
- name: path
dtype: string
- name: concatenated_notebook
dtype: string
splits:
- name: train
num_bytes: 13378216977
num_examples: 781578
download_size: 5447349438
dataset_size: 13378216977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- jupyter
- python
- notebooks
size_categories:
- 100K<n<1M
---
# Merged Jupyter Notebooks Dataset
## Introduction
This dataset is a transformed version of the [Jupyter Code-Text Pairs](https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs) dataset. The original dataset contains markdown, code, and output triples extracted from Jupyter notebooks. This transformation merges those components into a single, cohesive string per notebook that resembles the original notebook, making it easier to analyze and to follow the flow of information.
## Dataset Details
### Source
The original dataset is sourced from the Hugging Face Hub, specifically the [bigcode/jupyter-code-text-pairs](https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs) dataset. It contains pairs of markdown, code, and output from Jupyter notebooks.
### Transformation Process
Using DuckDB, I processed the entire dataset without heavy hardware. DuckDB's ability to handle datasets larger than memory allowed me to concatenate the markdown, code, and output for each notebook path into a single string that simulates the structure of a Jupyter notebook.
The transformation was performed with the following Python snippet, which runs a single DuckDB aggregation query:
```python
import duckdb

# Connect to a new DuckDB database file
new_db = duckdb.connect('merged_notebooks.db')

# Concatenate markdown, code, and output for each notebook path
query = """
SELECT path,
       STRING_AGG(
           CONCAT('###Markdown\n', markdown, '\n###Code\n', code, '\n###Output\n', output),
           '\n'
       ) AS concatenated_notebook
FROM read_parquet('jupyter-code-text-pairs/data/*.parquet')
GROUP BY path
"""

# Execute the query and materialize the result as a new table
new_db.execute(f"CREATE TABLE concatenated_notebooks AS {query}")
```
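To make the resulting format concrete, here is a minimal pure-Python sketch of what the `CONCAT` + `STRING_AGG` combination produces for a single notebook path. The cell contents are made up for illustration; only the `###Markdown`/`###Code`/`###Output` markers and the newline separator come from the query above.

```python
# Hypothetical (markdown, code, output) rows for one notebook path,
# mirroring what the DuckDB query aggregates per GROUP BY path.
rows = [
    ("Load the data", "df = pd.read_csv('data.csv')", ""),
    ("Inspect it", "df.head()", "   a  b\n0  1  2"),
]

# CONCAT builds one marked-up string per row ...
cells = [
    f"###Markdown\n{markdown}\n###Code\n{code}\n###Output\n{output}"
    for markdown, code, output in rows
]

# ... and STRING_AGG joins the rows with '\n' into one notebook string.
concatenated_notebook = "\n".join(cells)
print(concatenated_notebook.splitlines()[0])  # ###Markdown
```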
## Usage
To replicate the transformation or explore the original dataset, you can clone it with the following command (the Parquet files are stored via Git LFS, so make sure it is installed):
```bash
git clone https://huggingface.co/datasets/bigcode/jupyter-code-text-pairs
```
Once downloaded, you can use the provided DuckDB query to process the data as needed.
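Going the other way, a `concatenated_notebook` string can be split back into its cells by the section markers. A minimal sketch, using a hypothetical sample string that follows the same `###Markdown`/`###Code`/`###Output` layout:

```python
import re

# Hypothetical sample in the dataset's merged format
sample = (
    "###Markdown\nLoad the data\n"
    "###Code\ndf = pd.read_csv('data.csv')\n"
    "###Output\n\n"
    "###Markdown\nInspect it\n"
    "###Code\ndf.head()\n"
    "###Output\n   a  b\n0  1  2"
)

# Split on the markers; the capture group keeps each marker name.
parts = re.split(r"(?m)^###(Markdown|Code|Output)\n", sample)
# parts = ['', 'Markdown', 'Load the data\n', 'Code', ...]
sections = list(zip(parts[1::2], parts[2::2]))
print(sections[0])
```

Note that this naive split would misfire on a cell whose own content contains a line starting with `###Markdown`, `###Code`, or `###Output`, since the markers are not escaped in the merged strings.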
## Conclusion
This dataset provides a more integrated view of Jupyter notebooks by merging markdown, code, and output into a single format. DuckDB made it practical to transform the full dataset on modest hardware, which makes it a useful tool for data transformation tasks like this one.