THEOStereo-1024-Li

This dataset is a postprocessed version of THEOStereo [1], adapted to the training requirements of Omni-AnyNet [2]. Like the original dataset, it facilitates the training of artificial neural networks for 3D scene reconstruction. The resolution was reduced to w ⨯ h = 1024 ⨯ 1024. Instead of depth maps, this dataset offers omnidirectional disparity maps following the normalized disparity defined by Li et al. [3]. Besides the disparity maps, the dataset contains omnidirectional images for the two sensors of a canonical stereo camera setup with a baseline of 15 cm. Both sensors follow the equiangular camera model and exhibit a field of view F of 180°. The dataset contains 31,250 samples (25,000 for training, 3,125 for validation, and 3,125 for testing). Please refer to [2] for more details. If you use this dataset, please cite [1] and [2].

Hint: As THEOStereo did not provide the exact focal length, we approximated it as f = w / F and calculated the normalized disparity based on this approximation.
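
For illustration, a minimal sketch of this approximation and of the equiangular projection it implies (hypothetical helper function; F is assumed to be given in radians, so that f = w / F yields a focal length in pixels):

import math

w = h = 1024               # image resolution
F = math.pi                # field of view of 180 degrees, assumed in radians
f = w / F                  # approximated focal length, ca. 325.95 px

def project_equiangular(theta, phi):
    # Map a viewing direction (polar angle theta measured from the optical
    # axis, azimuth phi) to pixel coordinates under the equiangular model,
    # where the radial image distance grows linearly with theta: r = f * theta.
    r = f * theta
    u = w / 2 + r * math.cos(phi)
    v = h / 2 + r * math.sin(phi)
    return u, v

# The image border is reached at theta = F / 2 (i.e. 90 degrees):
print(project_equiangular(math.pi / 2, 0.0))   # -> (1024.0, 512.0)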

Fig 1: Left camera image
Fig 2: Right camera image
Fig 3: Normalized disparity map (ground truth)

Dataset Structure

.
├── README.md
├── test
│   ├── disp_occ_0_exr    # (normalized disparity)
│   ├── img_stereo_webp   # (right image)
│   └── img_webp          # (left image)
├── train
│   ├── disp_occ_0_exr    # (normalized disparity)
│   ├── img_stereo_webp   # (right image)
│   └── img_webp          # (left image)
└── valid
    ├── disp_occ_0_exr    # (normalized disparity)
    ├── img_stereo_webp   # (right image)
    └── img_webp          # (left image)
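
A minimal loading sketch for one training sample (hypothetical file name; assumes OpenCV built with OpenEXR support, which must be enabled via an environment variable before cv2 is imported):

import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"   # enable the EXR reader before importing cv2

import cv2

sample = "000000"   # hypothetical sample name; actual file names may differ

left  = cv2.imread(f"train/img_webp/{sample}.webp")          # left camera image
right = cv2.imread(f"train/img_stereo_webp/{sample}.webp")   # right camera image
disp  = cv2.imread(f"train/disp_occ_0_exr/{sample}.exr",
                   cv2.IMREAD_UNCHANGED)                      # normalized disparity, float

print(left.shape, right.shape, disp.shape)   # expected: 1024 x 1024 each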

License

This dataset, as well as the original THEOStereo dataset, is licensed under the Creative Commons Attribution 4.0 International License.

Download

We recommend that Linux users download the dataset with download.sh.
Alternatively, the dataset can be downloaded from our cloud and extracted manually.
The preprint of the paper can be downloaded from here. You can find the code for Omni-AnyNet [2] here.

BibTeX

If you use the dataset in your work, please don't forget to cite [1] and [2]. You might want to use the following BibTeX entries:

@inproceedings{seuffert_study_2021,
  address = {Online Conference},
  title = {A {Study} on the {Influence} of {Omnidirectional} {Distortion} on {CNN}-based {Stereo} {Vision}},
  isbn = {978-989-758-488-6},
  doi = {10.5220/0010324808090816},
  booktitle = {Proceedings of the 16th {International} {Joint} {Conference} on {Computer} {Vision}, {Imaging} and {Computer} {Graphics} {Theory} and {Applications}, {VISIGRAPP} 2021, {Volume} 5: {VISAPP}},
  publisher = {SciTePress},
  author = {Julian Bruno Seuffert and Ana Cecilia Perez Grassi and Tobias Scheck and Gangolf Hirtz},
  year = {2021},
  month = feb,
  pages = {809--816}
}

@article{seuffert_omniglasses_2024,
  title = {{OmniGlasses}: an optical aid for stereo vision {CNNs} to enable omnidirectional image processing},
  volume = {35},
  issn = {0932-8092, 1432-1769},
  shorttitle = {{OmniGlasses}},
  url = {https://link.springer.com/10.1007/s00138-024-01534-2},
  doi = {10.1007/s00138-024-01534-2},
  number = {3},
  urldate = {2024-05-07},
  journal = {Machine Vision and Applications},
  author = {Seuffert, Julian B. and Perez Grassi, Ana C. and Ahmed, Hamza and Seidel, Roman and Hirtz, Gangolf},
  month = apr,
  year = {2024},
  pages = {58--72}
}

References

[1] J. B. Seuffert, A. C. Perez Grassi, T. Scheck, and G. Hirtz, “A Study on the Influence of Omnidirectional Distortion on CNN-based Stereo Vision,” in Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2021, Volume 5: VISAPP, Online Conference, Feb. 2021, pp. 809–816, doi: 10.5220/0010324808090816.

[2] J. B. Seuffert, A. C. Perez Grassi, H. Ahmed, R. Seidel, and G. Hirtz, “OmniGlasses: an optical aid for stereo vision CNNs to enable omnidirectional image processing,” Machine Vision and Applications, vol. 35, no. 3, pp. 58–72, Apr. 2024, doi: 10.1007/s00138-024-01534-2.

[3] S. Li, “Trinocular Spherical Stereo,” in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, Oct. 2006, pp. 4786–4791, doi: 10.1109/IROS.2006.282350.