
Bokeh (ボケ, the Japanese word for blur)

The Bokeh model is based on a DenseNet-like architecture trained on Unsplash images at 300x200 resolution. It classifies whether a photo is captured with bokeh, i.e. with a shallow depth of field.
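For illustration, the sketch below shows how such a classifier could be used for inference with TensorFlow/Keras. The model file name ("bokeh.h5"), the image path, the [0, 1] input scaling, and the class order are assumptions for the example, not details taken from this card.

```python
# Hedged inference sketch: assumes the trained weights are available locally as
# a Keras model file ("bokeh.h5" is a placeholder), that inputs are 300x200 RGB
# images scaled to [0, 1], and that the output is a 2-class probability vector.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("bokeh.h5")   # placeholder path

img = tf.keras.utils.load_img("photo.jpg", target_size=(200, 300))  # (height, width)
x = tf.keras.utils.img_to_array(img) / 255.0     # assumed [0, 1] scaling
x = np.expand_dims(x, axis=0)                    # add batch dimension

probs = model.predict(x)[0]
labels = ["no bokeh", "bokeh"]                   # assumed class order
print(labels[int(np.argmax(probs))], probs)
```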

Model description

The Bokeh model is based on a DenseNet-like architecture. It is trained with a mini-batch size of 32 samples using the Adam optimizer and a learning rate of $0.0001$. The network has 3,632 trainable parameters; its input convolution uses 8 filters with a $7\times7$ kernel.
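The following sketch builds a comparably small DenseNet-like network in Keras using the hyperparameters stated above (8 input filters, 7x7 kernel, Adam with a 0.0001 learning rate, two output classes). The dense-block layout and growth rate are assumptions; this is not the exact architecture of the released model.

```python
# Minimal DenseNet-like sketch using the stated hyperparameters; the block
# layout and growth rate are assumptions and will not reproduce the exact
# parameter count described above.
import tensorflow as tf
from tensorflow.keras import layers, models

def dense_block(x, num_layers=2, growth_rate=4):
    # DenseNet connectivity: each layer's output is concatenated with all
    # previous feature maps before being passed on.
    for _ in range(num_layers):
        out = layers.BatchNormalization()(x)
        out = layers.ReLU()(out)
        out = layers.Conv2D(growth_rate, 3, padding="same")(out)
        x = layers.Concatenate()([x, out])
    return x

inputs = layers.Input(shape=(200, 300, 3))           # 300x200 RGB input (assumed height x width order)
x = layers.Conv2D(8, 7, padding="same")(inputs)      # 8 input filters, 7x7 kernel
x = dense_block(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2, activation="softmax")(x)   # bokeh / no bokeh

model = models.Model(inputs, outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # lr = 0.0001
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```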

Training data

The Bokeh model is pretrained on the depth-of-field (DoF) dataset, which consists of 1,200 manually annotated images across 2 classes.

BibTeX entry and citation info

@article{sniafas2021,
  title={DoF: An image dataset for depth of field classification},
  author={Niafas, Stavros},
  doi={10.13140/RG.2.2.17217.89443},
  url={https://www.researchgate.net/publication/355917312_Photography_Style_Analysis_using_Machine_Learning},
  year={2021}
}