Dataset Card for "kdd210_hourly"


Download the Dataset:

from datasets import load_dataset

dataset = load_dataset("LeoTungAnh/kdd210_hourly")

Dataset Card for Air Quality in KDD Cup 2018

The dataset originates from the KDD Cup 2018 competition, which consists of 270 time series with different start times. This subset comprises 210 hourly time series, all starting at 2017-01-01T14:00:00. The data reflect air quality levels at 59 stations in 2 cities from 01/01/2017 to 31/03/2018.

Preprocessing information:

  • Grouped by hour (frequency: "1H").
  • Applied standardization ("Std") as the preprocessing technique.
  • Preprocessing steps:
    1. Standardizing data.
    2. Replacing NaN values with zeros.
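The two preprocessing steps above can be sketched as follows. This is an illustrative reconstruction, not the maintainers' actual script: the function name `preprocess` and the use of NumPy's NaN-aware statistics are assumptions.

```python
import numpy as np

def preprocess(series):
    """Standardize a raw hourly series, then replace NaNs with zeros.

    Mirrors the card's two preprocessing steps; the exact statistics
    used by the dataset authors are an assumption here.
    """
    series = np.asarray(series, dtype=float)
    mean = np.nanmean(series)          # step 1: standardize,
    std = np.nanstd(series)            # ignoring missing values
    standardized = (series - mean) / std
    return np.nan_to_num(standardized, nan=0.0)  # step 2: NaN -> 0

raw = [50.0, 60.0, np.nan, 70.0]
processed = preprocess(raw)
print(processed)  # the NaN at index 2 becomes 0.0 after standardization
```

Note that replacing NaNs after standardization maps each missing value to the (standardized) series mean, which is why zeros appear inside the `target` arrays of the published data.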

Dataset information:

  • Missing values are converted to zeros.
  • Number of time series: 210
  • Number of training samples: 10802
  • Number of validation samples: 10850 (number_of_training_samples + 48)
  • Number of testing samples: 10898 (number_of_validation_samples + 48)
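The +48 offsets suggest that validation and test each extend the context window by one 48-step (two-day) forecast horizon, a common rolling-evaluation setup for hourly data. A minimal sketch of that assumption (the function name and slicing scheme are illustrative, not taken from the dataset):

```python
import numpy as np

PRED_LEN = 48  # horizon implied by the +48 offsets in the card

def make_splits(series, train_len=10802, horizon=PRED_LEN):
    """Illustrative split: validation and test each append one
    48-step horizon to the training window, matching the counts above."""
    train = series[:train_len]
    val = series[:train_len + horizon]        # 10850 steps
    test = series[:train_len + 2 * horizon]   # 10898 steps
    return train, val, test

series = np.arange(10898, dtype=float)
train, val, test = make_splits(series)
print(len(train), len(val), len(test))  # 10802 10850 10898
```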

Dataset format:

  Dataset({
      features: ['start', 'target', 'feat_static_cat', 'feat_dynamic_real', 'item_id'],
      num_rows: 210
  })

Data format for a sample:

  • 'start': datetime.datetime

  • 'target': list of time series values

  • 'feat_static_cat': time series index

  • 'feat_dynamic_real': None

  • 'item_id': name of time series

Data example:

{'start': datetime.datetime(2017, 1, 1, 14, 0, 0),
 'feat_static_cat': [0],
 'feat_dynamic_real': None,
 'item_id': 'T1',
 'target': [ 1.46812152,  1.31685537,  1.26169969, ...,  0.47487208, 0.80586637,  0.33006964]
}

Usage:

  • The dataset is compatible with the Transformer, Autoformer, and Informer models available in Hugging Face Transformers.
  • Other algorithms can consume the data directly by extracting the 'target' feature.
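For the second case, extracting 'target' into a plain array is a one-liner. The snippet below mocks two short rows with the schema shown above so it runs without downloading anything; with the real dataset you would iterate `dataset["train"]` instead.

```python
import numpy as np

# Mocked rows following the card's schema (values are illustrative).
mock_dataset = [
    {"item_id": "T1", "target": [1.47, 1.32, 1.26]},
    {"item_id": "T2", "target": [1.54, 1.45, 1.64]},
]

# Stack all series into a (num_series, num_timesteps) matrix; this works
# here because every series starts at the same time and has equal length.
targets = np.array([row["target"] for row in mock_dataset])
print(targets.shape)  # (2, 3)
```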