Summary of "SoDaCam: Software-defined Cameras via Single-Photon Imaging" (arxiv.org)
16,254 words - PDF document
One Line
SoDaCam is a software-defined camera built on single-photon detectors: it captures raw photon data that can be processed post-capture to emulate many imaging modalities, while on-chip projections reduce power usage.
Key Points
- SoDaCam is a software-defined camera that uses single-photon detectors to capture photon data.
- The photon data can be processed post-capture to emulate various imaging modalities.
- Computing projections on-chip reduces power consumption and enables new capabilities.
- Ongoing progress in SPAD arrays and near-sensor processors is expected to address the current limitations of single-photon cameras.
- Software-defined cameras allow for the emulation of sensor motion during exposure time without physically moving the sensor.
- Parabolic trajectories can be used for optimal signal-to-noise ratio in blur-free images.
- SoDaCam is a platform that allows for the comparison of different imaging modalities.
- Various research papers and technologies related to programmable imaging and software-defined cameras have been discussed.
Summaries
33 word summary
SoDaCam is a software-defined camera that uses single-photon detectors to capture photon data, which can be processed post-capture to emulate different imaging modalities. Computing projections on-chip reduces power consumption and enables versatile functionality.
37 word summary
The authors introduce SoDaCam, a software-defined camera that uses single-photon detectors to capture photon data. They demonstrate that this data can be processed post-capture to emulate various imaging modalities. Computing projections on-chip reduces power consumption and enables new capabilities.
746 word summary
In this work, the authors introduce SoDaCam, a software-defined camera that uses single-photon detectors to capture photon data. They show that this data can be processed post-capture to emulate various imaging modalities, including exposure bracketing, video compressive sensing, event cameras, and motion projections.
Computing projections on-chip reduces power consumption and enables new capabilities. The SPAD array used in this work has limitations, but progress in single-photon cameras and near-sensor processors should address these shortcomings. A SPAD array captures incident light as a temporal sequence of binary photon detections, which together form a photon-cube.
The proposed photon-cube projections allow software-defined cameras to emulate sensor motion during the exposure time without physically moving the sensor. These projections can be computed efficiently in an online manner, making on-chip implementation possible; the high temporal-sampling rate of single-photon detectors is what enables these emulations.
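The shift-and-add reading of emulated sensor motion can be sketched offline as follows (the paper computes such projections online, on-chip; the function name, array shapes, and wrap-around shift here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def motion_projection(photon_cube, shifts):
    """Accumulate binary photon frames after shifting each one by the
    (integer) displacement of an emulated sensor trajectory."""
    T, H, W = photon_cube.shape
    out = np.zeros((H, W), dtype=np.int32)
    for t in range(T):
        dy, dx = shifts[t]
        # np.roll stands in for a sensor shift; a real implementation
        # would crop rather than wrap at the image borders.
        out += np.roll(photon_cube[t], (dy, dx), axis=(0, 1))
    return out

# Example: a linear trajectory of 1 pixel per 100 binary frames.
rng = np.random.default_rng(0)
cube = (rng.random((400, 32, 32)) < 0.05).astype(np.uint8)
shifts = [(0, t // 100) for t in range(400)]
img = motion_projection(cube, shifts)
```

Because each binary frame is only shifted and summed, the projection is a running accumulation that never needs the full photon-cube in memory, which is what makes an on-chip, online implementation plausible.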
The effects of varying the contrast threshold and exponential decay on event images are demonstrated. Three imaging modalities are explored: video compressive sensing, event cameras, and motion-projection cameras.
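The video compressive sensing modality can be sketched roughly as follows (function name, shapes, and random masks are my assumptions for illustration): per-frame binary masks modulate the photon frames, which are then summed into a single coded snapshot.

```python
import numpy as np

def compressive_capture(photon_cube, masks):
    """A single coded snapshot: per-frame binary masks modulate the
    binary photon frames, which are then summed along time."""
    return np.sum(masks * photon_cube, axis=0)

rng = np.random.default_rng(1)
cube = (rng.random((16, 8, 8)) < 0.1).astype(np.uint8)   # 16 binary frames
masks = rng.integers(0, 2, size=(16, 8, 8))              # random on/off masks
snapshot = compressive_capture(cube, masks)
```

Recovering the original frames from such a snapshot is the decoding step, typically handled by a compressive-sensing reconstruction algorithm.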
SPAD-events can be combined with intensity information, a synergistic pairing that has been exploited in recent event-vision works. Event generation is governed by a deviation-threshold equation. Motion projections can be achieved using linear or parabolic trajectories.
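A minimal sketch of deviation-threshold event generation, assuming the common event-camera model in which a pixel fires when its log-intensity estimate drifts past a threshold (the function name, threshold value, and exact equation are assumptions, not necessarily the paper's):

```python
import numpy as np

def spad_events(frames, threshold=0.2, eps=1e-6):
    """Emit (t, y, x, polarity) events wherever a pixel's log-intensity
    deviates from its last event level by more than `threshold`."""
    ref = np.log(frames[0] + eps)
    events = []
    for t in range(1, len(frames)):
        cur = np.log(frames[t] + eps)
        diff = cur - ref
        pol = np.where(diff > threshold, 1, np.where(diff < -threshold, -1, 0))
        for y, x in zip(*np.nonzero(pol)):
            events.append((t, int(y), int(x), int(pol[y, x])))
        ref = np.where(pol != 0, cur, ref)   # reset only where events fired
    return events

# A pixel that brightens tenfold triggers a single positive event.
frames = np.ones((3, 2, 2))
frames[1:, 0, 0] = 10.0
evs = spad_events(frames)
```

Raising `threshold` suppresses small intensity changes and thins the event stream, which is the contrast-threshold effect the document describes.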
The authors propose a parabolic projection for optimal signal-to-noise ratio in blur-free images when only the direction of velocity is known.
The authors also improve the per-frame signal-to-noise ratio (SNR) of high-speed cameras by randomly sampling linear projections and merging them using optical flow, and compare the result against the Photron Infinicam, a conventional high-speed camera.
SoDaCam is a platform that allows for the comparison of different imaging modalities. It can emulate the imaging models of various sensors, making it easier to compare their performance and determine the advantages of one modality over another. Additionally, being software-defined, the camera's modality can be changed without hardware modifications.
L. Chotard et al. present a back-illuminated stacked temporal contrast event-based vision sensor with 1280x720 resolution and 4.86 μm pixels. The sensor includes a programmable event-rate controller and a compressive data-formatting pipeline.
Live demonstrations of multi-mode event-based sensors and compressive single-photon 3D cameras have been presented at computer vision conferences. Other related research includes the use of histograms of oriented gradients for human detection, and recovering high dynamic range radiance maps from photographs.
The document provides a list of references related to software-defined cameras and single-photon imaging. The references include studies on photographic noise performance measures, high-flux passive imaging with single-photon sensors, quanta burst photography, and passive inter-photon imaging.
Qi, R. Gulve, M. Wei, R. Genov, K. N. Kutulakos, and W. Heidrich presented a study on end-to-end video compressive sensing using Anderson-accelerated unrolled networks.
The document discusses various research papers and technologies related to programmable imaging and software-defined cameras. It includes references to papers on topics such as frame-free image sensors, joint optimization of optics and image processing, coded exposure photography, dynamic and active pixel vision sensors, and event cameras.
Motion-adaptive deblurring with single-photon cameras is discussed in a paper co-authored by Velten. The paper also references works on CMOS image sensors with multi-bucket pixels, coded two-bucket cameras, and differential SPAD arrays.
This supplementary note provides details on video compressive sensing and the emulation of multi-bucket cameras. It includes the pseudo code for emulating multi-bucket captures and describes the mask sequences used for video compressive sensing. The decoding of video compressive captures is also described.
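The supplementary pseudo code is not reproduced in this summary; a simplified sketch of multi-bucket emulation, routing whole frames rather than the per-pixel codes a real multi-bucket sensor would use (all names here are hypothetical), might look like:

```python
import numpy as np

def multi_bucket(photon_cube, bucket_ids, num_buckets):
    """Route each binary photon frame into one of several accumulators
    according to a per-frame mask sequence."""
    _, H, W = photon_cube.shape
    buckets = np.zeros((num_buckets, H, W), dtype=np.int32)
    for frame, b in zip(photon_cube, bucket_ids):
        buckets[b] += frame
    return buckets

# Alternate 4 frames between two buckets.
cube = np.ones((4, 2, 2), dtype=np.uint8)
buckets = multi_bucket(cube, [0, 1, 0, 1], num_buckets=2)
```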
Parabolic trajectories are parameterized by their maximum absolute slope, v_max, which determines the shape of the trajectory. To avoid image artifacts caused by the finite extent of parabolic integration, v_max should be chosen higher than the largest scene velocity expected during the exposure.
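One way to read this parameterization (a sketch under assumed symbols, not the paper's exact formulation): a parabola x(t) = a(t - T/2)^2 whose slope reaches its maximum absolute value, v_max, at the exposure endpoints.

```python
import numpy as np

def parabolic_shifts(num_frames, v_max):
    """Integer pixel shifts along x(t) = a (t - T/2)^2, with `a` chosen
    so the maximum absolute slope (at the exposure endpoints) is v_max."""
    T = num_frames
    a = v_max / T                  # |dx/dt| at t = 0 or t = T equals a*T
    t = np.arange(T)
    x = a * (t - T / 2) ** 2
    return np.round(x - x.min()).astype(int)

shifts = parabolic_shifts(100, v_max=2.0)
```

Since the trajectory's slope sweeps every speed from -v_max to +v_max, any scene motion slower than v_max is matched at some instant, which is the intuition behind choosing v_max above the fastest expected motion.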
This text excerpt provides custom assembly code for implementing video compressive sensing, event cameras, and motion projections on the UltraPhase software-defined camera. The code includes instructions for loading pixel values, applying masks, accumulating pixels, shifting pixels, and strobing external triggers.
This excerpt lists further references: ensemble-learning-priors-driven deep unfolding for scalable video snapshot compressive imaging; a unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation; and end-to-end video compressive sensing using Anderson-accelerated unrolled networks.
This excerpt contains a list of references to various research papers and conference proceedings related to software-defined cameras, event cameras, compressive video recovery, CMOS image sensors, video denoising, optical flow, image inpainting, and SPAD image sensors.
This text excerpt includes a list of references to research papers on software-defined cameras using single-photon imaging. The references cover topics including model-based reconstruction, optical flow, video compressive sensing, and total variation minimization.