SpiderCam: Low-Power Snapshot Depth from Differential Defocus

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2026

1Northwestern University, 2Georgia Institute of Technology, 3Purdue University
*Equal contribution.
Figure: Overview of the SpiderCam pipeline. Our camera uses a beam splitter and a pair of differentially defocused low-power sensors to observe the same scene with offset depths of field. Our algorithm processes these images to produce depth maps thresholded by confidence.

Abstract

We introduce SpiderCam, an FPGA-based snapshot depth-from-defocus camera that produces 480×400 sparse depth maps in real time at 32.5 FPS over a 52 cm working range while consuming 611 mW of total power. SpiderCam comprises a custom camera that simultaneously captures two differently focused images of the same scene, processed by a SystemVerilog implementation of depth from differential defocus (DfDD) on a low-power FPGA. To achieve state-of-the-art power consumption, we present algorithmic improvements to DfDD that overcome challenges posed by low-power sensors, and we design a memory-local implementation for streaming depth computation on a device too small to store even a single image pair. We report the first sub-Watt total power measurement for a passive FPGA-based 3D camera in the literature.
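The closed-form, per-pixel nature of DfDD is what makes a streaming FPGA implementation possible: depth at each pixel depends only on a small local neighborhood of the two defocused images, so no full frame ever needs to be buffered. The sketch below illustrates the general DfDD idea in NumPy — depth recovered from the ratio of the defocus difference image to the Laplacian of the mean image, with low-texture pixels rejected by a confidence threshold. The calibration constants `alpha` and `beta` and the function names are hypothetical placeholders, not the paper's actual implementation or calibration.

```python
import numpy as np

def dfdd_depth(i_near, i_far, alpha, beta, conf_thresh=1e-3):
    """Illustrative depth-from-differential-defocus (DfDD) sketch.

    Given two images of the same scene captured with slightly offset
    focus settings, DfDD recovers per-pixel depth in closed form from
    the ratio of the defocus difference to the Laplacian of the mean
    image. `alpha` and `beta` stand in for lens calibration constants
    mapping that ratio to metric depth (values here are assumptions).
    """
    i_mean = 0.5 * (i_near + i_far)
    i_diff = i_near - i_far

    # 5-point discrete Laplacian of the mean image (periodic boundary
    # via np.roll; a real pipeline would handle borders explicitly).
    lap = (np.roll(i_mean, 1, 0) + np.roll(i_mean, -1, 0)
           + np.roll(i_mean, 1, 1) + np.roll(i_mean, -1, 1)
           - 4.0 * i_mean)

    # Confidence: the ratio is only well conditioned where the scene
    # has texture, i.e. where |Laplacian| is large. Everything else is
    # left out of the sparse depth map.
    valid = np.abs(lap) > conf_thresh

    depth = np.full_like(i_mean, np.nan)
    depth[valid] = alpha * i_diff[valid] / lap[valid] + beta
    return depth, valid
```

Because each output pixel touches only a fixed-size window of the inputs, the same computation can be restructured as a line-buffered stream — which is how a sub-frame-memory FPGA design becomes feasible.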

Demo Video

Citation

@inproceedings{ferreira2026spidercam,
    author={Ferreira, Marcos A. and Li, Tianao and Mamish, John and Hester, Josiah and Sangar, Yaman and Guo, Qi and Alexander, Emma},
    title={SpiderCam: Low-Power Snapshot Depth from Differential Defocus},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2026}
}

Acknowledgements

This research was partially supported by the National Science Foundation under award numbers CNS-2430327 and CCF-2431505. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or other supporters. We would also like to thank Junjie Luo and Alan Fu for helpful discussions.