
Dataset Details

This dataset contains time-synchronized pairs of DIGIT images and SE(3) object poses. In our setup, the robot hand is stationary with its palm facing downwards and pressing against the object on a table. The robot hand has DIGIT sensors mounted on the index, middle, and ring fingertips, all of which are in contact with the object. A human manually perturbs the object's pose by translating and rotating it in SE(2). We use tag tracking to obtain the object's pose. We collect data using two objects: a Pringles can and the YCB sugar box, both of which have a tag fixed to their top surfaces. The following image illustrates our setting:

This dataset is part of TacBench for evaluating Sparsh touch representations. For more information, please visit https://sparsh-ssl.github.io/.

Uses

This dataset contains aligned DIGIT tactile data and world-frame object poses. It is designed to evaluate how well Sparsh encoders enhance perception by predicting relative pose changes with respect to the sensor gel of the fingers, denoted as $S_t^{t-H} \triangleq (\Delta x, \Delta y, \Delta \theta) \in \mathbf{SE}(2)$, where $H$ is the time stride.
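The relative pose labels are stored as homogeneous transformation matrices (see Dataset Structure below). As a minimal sketch of recovering the $(\Delta x, \Delta y, \Delta \theta)$ target from such a matrix, assuming the motion is planar in the sensor's x-y plane (the axis convention here is illustrative, not guaranteed to match the sensor frame):

```python
import numpy as np

def se2_from_matrix(T):
    """Extract (dx, dy, dtheta) from a 4x4 homogeneous transform,
    assuming the motion is (approximately) planar in the x-y plane.
    Illustrative sketch; the actual axis convention may differ."""
    dx, dy = T[0, 3], T[1, 3]
    dtheta = np.arctan2(T[1, 0], T[0, 0])  # in-plane rotation angle
    return dx, dy, dtheta

# Example: a 90-degree in-plane rotation plus a 1 cm translation along x
T = np.eye(4)
T[:2, :2] = [[0.0, -1.0], [1.0, 0.0]]
T[0, 3] = 0.01
dx, dy, dth = se2_from_matrix(T)
```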

For more information on how to use this dataset and set up corresponding downstream tasks, please refer to the Sparsh repository.

Dataset Structure

The dataset is a collection of sequences, one per manual perturbation episode, stored separately for the Pringles can and the YCB sugar box. Each sequence corresponds to a pickle file containing the following labeled data:

  • DIGIT tactile images for the index, middle, and ring fingers
  • Object pose tracked from the tag, in the format (x, y, z, qw, qx, qy, qz)
  • Robot hand joint positions
  • object_index_rel_pose_n5: the object's pose change over the last 5 samples, as a transformation matrix with respect to the index finger.
  • object_middle_rel_pose_n5: the object's pose change over the last 5 samples, as a transformation matrix with respect to the middle finger.
  • object_ring_rel_pose_n5: the object's pose change over the last 5 samples, as a transformation matrix with respect to the ring finger.
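For reference, the absolute tag pose format above can be turned into a 4x4 homogeneous transform with a standard quaternion-to-rotation conversion (a sketch; assumes a unit quaternion in (qw, qx, qy, qz) order):

```python
import numpy as np

def pose_to_matrix(pose):
    """Convert (x, y, z, qw, qx, qy, qz) to a 4x4 homogeneous transform.
    Standard unit-quaternion-to-rotation formula."""
    x, y, z, qw, qx, qy, qz = pose
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [x, y, z]
    return T
```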

We also provide reference (no contact) images for each of the DIGITs to facilitate pre-processing such as background subtraction.
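As one example of such pre-processing, a simple signed background subtraction against the no-contact reference frame might look like the following sketch (the re-centering at 128 is one common choice, not a prescribed pipeline):

```python
import numpy as np

def subtract_background(img, bg):
    """Signed difference between a tactile frame and its no-contact
    reference, re-centered at 128 so negative changes are preserved."""
    diff = img.astype(np.int16) - bg.astype(np.int16)
    return np.clip(diff + 128, 0, 255).astype(np.uint8)
```

For example, load the reference with `bg = np.array(Image.open("bgs/digit_index.png"))` and call `subtract_background(frame, bg)` on each decoded DIGIT frame.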

train
β”œβ”€β”€ pringles 
β”‚   β”œβ”€β”€ bag_00.pkl
β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ bag_37.pkl
β”‚   β”œβ”€β”€ bag_38.pkl
β”œβ”€β”€ sugar 
β”‚   β”œβ”€β”€ ...
test
β”œβ”€β”€ pringles 
β”‚   β”œβ”€β”€ bag_00.pkl
β”‚   β”œβ”€β”€ ...
β”‚   β”œβ”€β”€ bag_05.pkl
β”‚   β”œβ”€β”€ bag_06.pkl
β”œβ”€β”€ sugar 
β”‚   β”œβ”€β”€ ...
bgs
β”œβ”€β”€ digit_index.png
β”œβ”€β”€ digit_middle.png
β”œβ”€β”€ digit_ring.png

The following code is an example of how to load the data:

import io
import pickle

import numpy as np
from PIL import Image

def load_bin_image(io_buf):
    """Decode a binary-encoded DIGIT image into a NumPy array."""
    img = Image.open(io.BytesIO(io_buf))
    img = np.array(img)
    return img

def load_dataset_poses(dataset_name, finger_type, t_stride):
    """Load aligned DIGIT images and relative object poses for one finger."""
    path_data = f"{dataset_name}.pkl"

    with open(path_data, "rb") as file:
        data = pickle.load(file)

    # Truncate both streams to a common length so images and poses stay aligned.
    idx_max = np.min(
        [
            len(data[f"digit_{finger_type}"]),
            len(data[f"object_{finger_type}_rel_pose_n{t_stride}"]),
        ]
    )

    dataset_digit = data[f"digit_{finger_type}"][:idx_max]
    dataset_poses = data[f"object_{finger_type}_rel_pose_n{t_stride}"][:idx_max]

    return dataset_digit, dataset_poses

# Note: pass the path without the .pkl extension; it is appended internally.
dataset_digit, dataset_poses = load_dataset_poses("train/pringles/bag_00", "ring", 5)
delta_rel_pose_gt = dataset_poses[0]  # 4x4 relative pose for the first sample
img = load_bin_image(dataset_digit[0])

Please refer to the Sparsh repository for further information about using the pose estimation dataset and training the downstream task.

BibTeX entry and citation info

@inproceedings{higuera2024sparsh,
    title={Sparsh: Self-supervised touch representations for vision-based tactile sensing},
    author={Carolina Higuera and Akash Sharma and Chaithanya Krishna Bodduluri and Taosha Fan and Patrick Lancaster and Mrinal Kalakrishnan and Michael Kaess and Byron Boots and Mike Lambeta and Tingfan Wu and Mustafa Mukadam},
    booktitle={8th Annual Conference on Robot Learning},
    year={2024},
    url={https://openreview.net/forum?id=xYJn2e1uu8}
}
