NASA End-to-End Data System (NEEDS) Information Adaptive System: performing image processing onboard the spacecraft

Author(s):  
W. Lane Kelly ◽  
William M. Howle ◽  
Barry D. Meredith
Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3691
Author(s):  
Ciprian Orhei ◽  
Silviu Vert ◽  
Muguras Mocofan ◽  
Radu Vasiu

Computer Vision is a cross-disciplinary research field whose main purpose is to understand the surrounding environment as closely as possible to human perception. Image processing systems are continuously growing and expanding into more complex systems, usually tailored to the specific needs or applications they serve. To better serve this purpose, research on the architecture and design of such systems is also important. We present the End-to-End Computer Vision Framework (EECVF), an open-source solution that aims to support researchers and teachers within the vast field of image processing. The framework incorporates Computer Vision features and Machine Learning models that researchers can use. Given the continuous need to add new Computer Vision algorithms in day-to-day research activity, our proposed framework has the advantage of a configurable and scalable architecture. Although the main focus of the framework is the Computer Vision processing pipeline, it also offers solutions for incorporating more complex activities, such as training Machine Learning models. EECVF aims to become a useful tool for learning activities in the Computer Vision field, as it allows the learner and the teacher to handle only the topics at hand, rather than the interconnections necessary for a visual processing flow.
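
As an illustration of the configurable, stage-based processing pipeline the abstract describes, the sketch below builds a small image processing flow with OpenCV. It is only a minimal sketch of the concept: the stage names and structure are hypothetical and do not reproduce the actual EECVF API.

import cv2

# Each stage is a plain function that maps an image to an image, so stages can be
# added, removed, or reordered without touching the rest of the flow.
def to_grayscale(img):
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def gaussian_blur(img):
    return cv2.GaussianBlur(img, (5, 5), 1.0)

def canny_edges(img):
    return cv2.Canny(img, 100, 200)

# Hypothetical pipeline configuration: an ordered list of (name, stage) pairs.
PIPELINE = [
    ("grayscale", to_grayscale),
    ("blur", gaussian_blur),
    ("edges", canny_edges),
]

def run_pipeline(image_path, stages=PIPELINE):
    img = cv2.imread(image_path)
    for name, stage in stages:
        img = stage(img)
        cv2.imwrite(f"out_{name}.png", img)  # keep intermediate results for inspection
    return img

if __name__ == "__main__":
    run_pipeline("input.png")  # "input.png" is a placeholder path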


2018 ◽  
Vol 228 ◽  
pp. 02009
Author(s):  
Chen Yao ◽  
Yan Xia

In video surveillance applications, grayscale imagery often degrades the results of subsequent image processing. To solve the colorization problem for surveillance images, this paper proposes a fully end-to-end approach that produces reasonable colorization results. A CNN learning structure and a gradient prior are used to infer the chromatic space. Finally, our experimental results demonstrate the advantages of our approach.
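
To make the colorization setup concrete, the following is a minimal sketch assuming the common formulation in which a CNN predicts the two chrominance channels of the Lab color space from the luminance channel, with a simple gradient-smoothness term standing in for the gradient prior mentioned above. The architecture and names are illustrative and not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ColorizationCNN(nn.Module):
    """Toy encoder-decoder that predicts a/b chrominance from the L channel."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1), nn.Tanh(),  # two chroma channels
        )

    def forward(self, gray):
        return self.decoder(self.encoder(gray))

def gradient_smoothness(ab):
    # Penalize large chroma gradients; a crude stand-in for a gradient prior.
    dx = ab[:, :, :, 1:] - ab[:, :, :, :-1]
    dy = ab[:, :, 1:, :] - ab[:, :, :-1, :]
    return dx.abs().mean() + dy.abs().mean()

# One training step on a dummy batch: L channel in, a/b channels as target.
model = ColorizationCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
gray = torch.rand(4, 1, 64, 64)               # luminance input
target_ab = torch.rand(4, 2, 64, 64) * 2 - 1  # chrominance target scaled to [-1, 1]
pred_ab = model(gray)
loss = F.l1_loss(pred_ab, target_ab) + 0.1 * gradient_smoothness(pred_ab)
loss.backward()
optimizer.step()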


2019 ◽  
Vol 28 (2) ◽  
pp. 912-923 ◽  
Author(s):  
Eli Schwartz ◽  
Raja Giryes ◽  
Alex M. Bronstein

2021 ◽  
Vol 40 (2) ◽  
pp. 1-19
Author(s):  
Ethan Tseng ◽  
Ali Mosleh ◽  
Fahim Mannan ◽  
Karl St-Arnaud ◽  
Avinash Sharma ◽  
...  

Most modern commodity imaging systems we use directly for photography—or indirectly rely on for downstream applications—employ optical systems of multiple lenses that must balance deviations from perfect optics, manufacturing constraints, tolerances, cost, and footprint. Although optical designs often have complex interactions with downstream image processing or analysis tasks, today’s compound optics are designed in isolation from these interactions. Existing optical design tools aim to minimize optical aberrations, such as deviations from Gauss’ linear model of optics, instead of application-specific losses, precluding joint optimization with hardware image signal processing (ISP) and highly parameterized neural network processing. In this article, we propose an optimization method for compound optics that lifts these limitations. We optimize entire lens systems jointly with hardware and software image processing pipelines, downstream neural network processing, and application-specific end-to-end losses. To this end, we propose a learned, differentiable forward model for compound optics and an alternating proximal optimization method that handles function compositions with highly varying parameter dimensions for optics, hardware ISP, and neural nets. Our method integrates seamlessly atop existing optical design tools, such as Zemax. We can thus assess our method across many camera system designs and end-to-end applications. We validate our approach in an automotive camera optics setting—together with hardware ISP post-processing and detection—outperforming classical optics designs for automotive object detection and traffic light state detection. For human viewing tasks, we optimize optics and processing pipelines for dynamic outdoor scenarios and dynamic low-light imaging. We outperform existing compartmentalized design or fine-tuning methods qualitatively and quantitatively, across all domain-specific applications tested.
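
The sketch below gives a highly simplified picture of the kind of joint, end-to-end optimization described above: a differentiable "optics" stage (here just a learnable Gaussian blur standing in for a full compound-lens forward model) and a small network are updated in alternation against a shared reconstruction loss. The names and structure are illustrative, use plain alternating gradient steps rather than the paper's proximal method, and are not taken from the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyOptics(nn.Module):
    """Differentiable stand-in for a compound-lens model: a learnable Gaussian blur width."""
    def __init__(self):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.tensor(0.0))  # optics parameter to optimize

    def forward(self, scene):
        sigma = torch.exp(self.log_sigma)
        coords = torch.arange(5, dtype=torch.float32) - 2
        g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
        kernel = g[:, None] * g[None, :]
        kernel = (kernel / kernel.sum())[None, None].repeat(scene.shape[1], 1, 1, 1)
        return F.conv2d(scene, kernel, padding=2, groups=scene.shape[1])

class ToyISPNet(nn.Module):
    """Small CNN standing in for the learned ISP / downstream network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

optics, isp = ToyOptics(), ToyISPNet()
opt_optics = torch.optim.Adam(optics.parameters(), lr=1e-2)
opt_isp = torch.optim.Adam(isp.parameters(), lr=1e-3)
scene = torch.rand(2, 3, 32, 32)  # dummy ground-truth scene

# Alternate between stepping the network parameters and the optics parameter,
# both driven by the same end-to-end reconstruction loss.
for step in range(10):
    for optimizer in (opt_isp, opt_optics):
        optimizer.zero_grad()
        loss = F.mse_loss(isp(optics(scene)), scene)
        loss.backward()
        optimizer.step()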

