---
license: mit
datasets:
- bpiyush/sound-of-water
language:
- en
base_model:
- facebook/wav2vec2-base-960h
pipeline_tag: audio-classification
tags:
- physical-property-estimation
- audio-visual
- pouring-water
---

# 🚰 The Sound of Water: Inferring Physical Properties from Pouring Liquids

In this repository, we provide the trained model checkpoints listed in the [Models](#models) section below.
*Key insight*: As water is poured, the fundamental frequency that we hear changes predictably over time as a function of physical properties (e.g., container dimensions).

**TL;DR**: We present a method to infer physical properties of liquids from *just* the sound of pouring. We show in theory how *pitch* can be used to derive various physical properties such as container height, flow rate, etc. Then, we train a pitch detection network (`wav2vec2`) using simulated and real data. The resulting model predicts the physical properties of pouring liquids with high accuracy. The learned latent representations also encode information about liquid mass and container shape.

arXiv link: https://arxiv.org/abs/2411.11222

## Demo

Check out the demo [here](https://huggingface.co/spaces/bpiyush/SoundOfWater). You can upload a video of pouring water and the model estimates the pitch and physical properties.

## 💻 Usage

First, clone the repository from GitHub.

```sh
git clone git@github.com:bpiyush/SoundOfWater.git
cd SoundOfWater
```

Then, install the dependencies.

```sh
conda create -n sow python=3.8
conda activate sow

# Install the desired torch version
# NOTE: change the version if you are using a different CUDA version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121

# Additional packages
pip install lightning==2.1.2
pip install timm==0.9.10
pip install pandas
pip install decord==0.6.0
pip install librosa==0.10.1
pip install einops==0.7.0
pip install ipywidgets jupyterlab seaborn

# If you find a package is missing, please install it with pip
```

Then, use this snippet to download the models:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bpiyush/sound-of-water-models",
    local_dir="/path/to/download/",
)
```

To run our models on examples of pouring sounds, please see the [playground notebook](https://github.com/bpiyush/SoundOfWater/blob/main/playground.ipynb).

If you would like to use our dataset for a different task, please download it from [here](https://huggingface.co/datasets/bpiyush/sound-of-water).

## Models

We provide audio models trained to detect pitch in the sound of pouring water. We train these models in two stages:

1. **Pre-training on synthetic data**: We simulate pouring-water sounds with [DDSP](https://arxiv.org/abs/2001.04643), starting from only 80 real samples, to generate a large set of simulated pouring sounds. We then train `wav2vec2` on this synthetic data.
2. **Fine-tuning on real data**: We fine-tune the pre-trained model on real data. Since real data does not come with ground truth, we use visual co-supervision from the video stream to fine-tune the audio model.

Here, we provide checkpoints for both stages (a minimal loading sketch follows the table below):
| File name | Description | Size |
|---|---|---|
| `dsr9mf13_ep100_step12423_synthetic_pretrained.pth` | Pre-trained on synthetic data | 361M |
| `dsr9mf13_ep100_step12423_real_finetuned_with_cosupervision.pth` | Fine-tuned on real data with visual co-supervision | 361M |
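If you just want to inspect a checkpoint or warm-start a standard `wav2vec2` backbone outside the repository, the sketch below shows one way to do it. This is a minimal, hypothetical example: it assumes the `.pth` file holds a (possibly wrapped) `state_dict`, and it uses the `transformers` package (`pip install transformers`), which is not in the dependency list above. The supported inference path is the [playground notebook](https://github.com/bpiyush/SoundOfWater/blob/main/playground.ipynb).

```python
# Minimal, hypothetical sketch (not the official loading code): download one
# checkpoint, inspect it, and copy any compatible weights into a standard
# wav2vec2 backbone. The checkpoint layout and key names are assumptions.
import torch
from huggingface_hub import hf_hub_download
from transformers import Wav2Vec2Model  # extra dependency: pip install transformers

ckpt_path = hf_hub_download(
    repo_id="bpiyush/sound-of-water-models",
    filename="dsr9mf13_ep100_step12423_synthetic_pretrained.pth",
)

# The .pth file may be a raw state_dict, a wrapper dict, or a serialized module.
ckpt = torch.load(ckpt_path, map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()
print(f"{len(state_dict)} entries; first keys: {list(state_dict)[:3]}")

# Warm-start the base model used in this work (facebook/wav2vec2-base-960h)
# with whatever tensors match by name and shape; task-specific heads or
# prefixed keys (e.g. 'model.') are simply skipped.
backbone = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
ref = backbone.state_dict()
compatible = {
    k: v for k, v in state_dict.items()
    if isinstance(v, torch.Tensor) and k in ref and v.shape == ref[k].shape
}
backbone.load_state_dict(compatible, strict=False)
print(f"Copied {len(compatible)}/{len(ref)} tensors into the wav2vec2 backbone.")
```

For actual pitch and physical-property estimation, use the repository's own model definitions via the playground notebook, which wrap these checkpoints with the correct pre- and post-processing.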