Deep learning analysis on raw image data – case study on holographic cell analysis

Author(s):  
Gal Gozes ◽  
Shani Ben Baruch ◽  
Noa Rotman-Nativ ◽  
Darina Roitshtain ◽  
Natan T. Shaked ◽  
...  
Author(s):  
Ajay Kumar ◽  
Smita Nivrutti Kolnure ◽  
Kumar Abhishek ◽  
Fadi Al-Turjman ◽  
Pranav Nerurkar ◽  
...  

Background: An infectious disease occurs when an individual is infected by a micro-organism or virus transmitted from another person or an animal. Such diseases cause harm at both the individual and population scales. Case Presentation: The ongoing outbreak of COVID-19, caused by the novel coronavirus first identified in Wuhan, China, and its rapid spread worldwide revived the world's attention to the effects of such epidemics on people's daily lives. We exploit the effectiveness of advanced deep learning algorithms to predict the growth of infectious disease from time-series data and to perform classification from symptom (text) data and X-ray image data. Conclusion: The goal is to identify the nature of the phenomenon represented by the sequence of observations and to forecast its future values.
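The forecasting task described above can be illustrated with a minimal baseline that is much simpler than the deep learning models the abstract refers to: fitting an exponential growth curve to a short case-count series by log-linear least squares. The data and function names below are illustrative, not taken from the study.

```python
import numpy as np

def fit_exponential_growth(cases):
    """Fit log(cases) = a + r*t by least squares; return intercept a and daily growth rate r."""
    t = np.arange(len(cases))
    slope, intercept = np.polyfit(t, np.log(cases), 1)  # polyfit returns highest degree first
    return intercept, slope

def forecast(cases, horizon):
    """Extrapolate the fitted exponential curve `horizon` days past the observed series."""
    a, r = fit_exponential_growth(cases)
    t_future = np.arange(len(cases), len(cases) + horizon)
    return np.exp(a + r * t_future)
```

A series that doubles daily, e.g. `[1, 2, 4, 8, 16]`, yields a growth rate of about ln 2 per day, and the one-day-ahead forecast continues the doubling.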


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Different whiteboard image degradations greatly reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem through various image enhancement techniques. Most state-of-the-art approaches applied common image processing techniques such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To overcome these problems, the authors proposed a deep learning based solution. They contributed a new whiteboard image dataset and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement applications. Their evaluations of the trained models demonstrated superior performance over the conventional methods.
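One of the conventional baselines the abstract mentions, white balancing, can be sketched with the classic gray-world assumption: scale each color channel so its mean matches the global mean. This is a generic illustration of that baseline, not the authors' pipeline.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: rescale each RGB channel so its mean
    equals the mean over all channels. Expects an HxWx3 array."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means
    return np.clip(np.rint(img * gain), 0, 255).astype(np.uint8)
```

On a uniformly tinted image, e.g. every pixel `(100, 50, 150)`, the per-channel gains are `(1.0, 2.0, 2/3)` and the output is neutral gray `(100, 100, 100)`.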


2020 ◽  
Author(s):  
Wei Zhang ◽  
Zixing Huang ◽  
Jian Zhao ◽  
Du He ◽  
Mou Li ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 863
Author(s):  
Vidas Raudonis ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

Background: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains a challenging task due to variations in cell size and shape, the presence of incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm to be developed should process a large volume of image data of varying quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed in the paper, which reduces the amount of data by up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times, compared against the results achieved by two well-known techniques: the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: Comparatively, the image fusion time is substantially improved for different image resolutions, while ensuring high quality of the fused image.
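The idea behind multi-focus fusion can be sketched without a network: for each pixel, keep the value from the focal slice with the strongest local sharpness, measured here by the magnitude of a 4-neighbour Laplacian. This is a minimal classical sketch for intuition only, not the U-Net method of the paper.

```python
import numpy as np

def laplacian_energy(img):
    """|Laplacian| via 4-neighbour finite differences, with zero padding at the border."""
    p = np.pad(img.astype(np.float64), 1)
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4 * p[1:-1, 1:-1])
    return np.abs(lap)

def fuse_multifocus(stack):
    """Fuse a list of equally sized focal slices by picking, per pixel,
    the slice with the strongest Laplacian response."""
    energy = np.stack([laplacian_energy(s) for s in stack])
    best = energy.argmax(axis=0)                     # index of sharpest slice per pixel
    return np.take_along_axis(np.stack(stack), best[None], axis=0)[0]
```

A sharp feature present in only one slice (a bright spot against a flat background) survives fusion because that slice wins the per-pixel sharpness comparison there.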


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2611
Author(s):  
Andrew Shepley ◽  
Greg Falzon ◽  
Christopher Lawson ◽  
Paul Meek ◽  
Paul Kwan

Image data is one of the primary sources of ecological data used in biodiversity conservation and management worldwide. However, classifying and interpreting large numbers of images is time- and resource-intensive, particularly in the context of camera trapping. Deep learning models have been used to achieve this task but are often not suited to specific applications due to their inability to generalise to new environments and their inconsistent performance. Models need to be developed for specific species cohorts and environments, but the technical skills required to achieve this are a key barrier to the accessibility of this technology to ecologists. Thus, there is a strong need to democratize access to deep learning technologies by providing an easy-to-use software application allowing non-technical users to train custom object detectors. U-Infuse addresses this issue by providing ecologists with the ability to train customised models using publicly available images and/or their own images without specific technical expertise. Auto-annotation and annotation editing functionalities minimize the constraints of manually annotating and pre-processing large numbers of images. U-Infuse is a free and open-source software solution that supports both multiclass and single-class training and object detection, allowing ecologists to access deep learning technologies usually only available to computer scientists, on their own device, customised for their application, without sharing intellectual property or sensitive data. It provides ecological practitioners with the ability to (i) easily achieve object detection within a user-friendly GUI, generating a species distribution report and other useful statistics, (ii) custom-train deep learning models using publicly available and custom training data, and (iii) achieve supervised auto-annotation of images for further training, with the benefit of editing annotations to ensure quality datasets.
Broad adoption of U-Infuse by ecological practitioners will improve ecological image analysis and processing by allowing significantly more image data to be processed with minimal expenditure of time and resources, particularly for camera trap images. Ease of training and use of transfer learning means domain-specific models can be trained rapidly, and frequently updated without the need for computer science expertise, or data sharing, protecting intellectual property and privacy.


2021 ◽  
pp. 1063293X2110031
Author(s):  
Maolin Yang ◽  
Auwal H Abubakar ◽  
Pingyu Jiang

Social manufacturing is characterized by its capability to utilize socialized manufacturing resources for value creation. Recently, a new type of social manufacturing pattern has emerged that shows potential for core factories to extend their limited manufacturing capabilities by utilizing resources from outside socialized manufacturing resource communities. However, core factories need to analyze the resource characteristics of these communities before making operation plans, and this is challenging due to the unaffiliated and self-driven nature of the resource providers in socialized resource communities. In this paper, an approach based on deep learning and complex networks is established to address this challenge, using a socialized designer community for demonstration. First, convolutional neural network models are trained to identify the design resource characteristics of each socialized designer in the community from the interaction texts the designer posts on internet platforms. During this process, an iterative dataset labelling method is established to reduce the time cost of training set labelling. Second, complex networks are used to model the design resource characteristics of the community from the resource characteristics of all the socialized designers in the community. Two real communities from the RepRap 3D printer project are used as case studies.
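The second step, modelling the community as a complex network, can be sketched as a simple projection: once each designer has been assigned a set of resource tags (here hard-coded; in the paper they come from the CNN classifiers), link two designers with an edge weighted by how many tags they share. The designer names and tags below are hypothetical.

```python
from collections import defaultdict

def build_resource_network(designer_tags):
    """Project a designer -> resource-tag mapping onto a designer-designer
    network: two designers are linked if they share at least one tag,
    with edge weight equal to the number of shared tags."""
    edges = defaultdict(int)
    designers = list(designer_tags)
    for i, a in enumerate(designers):
        for b in designers[i + 1:]:
            shared = len(set(designer_tags[a]) & set(designer_tags[b]))
            if shared:
                edges[(a, b)] = shared
    return dict(edges)
```

For example, designers tagged `{"d1": ["cad", "3dp"], "d2": ["3dp", "cnc"], "d3": ["laser"]}` yield a single edge `("d1", "d2")` of weight 1; network measures such as degree or clustering can then characterise the community.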


Energies ◽  
2020 ◽  
Vol 14 (1) ◽  
pp. 156
Author(s):  
Paige Wenbin Tien ◽  
Shuangyu Wei ◽  
John Calautit

Because of extensive variations in occupancy patterns around office space environments and in the use of electrical equipment, accurate detection of occupant behaviour is valuable for reducing building energy demand and carbon emissions. Using the collected occupancy information, a building energy management system can automatically adjust the operation of heating, ventilation and air-conditioning (HVAC) systems to meet the actual demand in different conditioned spaces in real time. Existing, commonly used 'fixed' schedules for HVAC systems are not sufficient and cannot adjust to dynamic changes in the building environment. This study proposes a vision-based occupancy and equipment usage detection method based on deep learning for demand-driven control systems. A model based on a region-based convolutional neural network (R-CNN) was developed, trained and deployed to a camera for real-time detection of occupancy activities and equipment usage. Experimental tests within a case study office room suggested overall accuracies of 97.32% and 80.80%. To predict the energy savings attainable with the proposed approach, the case study building was simulated. The simulation results revealed that heat gains could be over- or under-predicted when using static or fixed profiles. Based on the set conditions, the equipment and occupancy gains were 65.75% and 32.74% lower when using the deep learning approach. Overall, the study showed the capability of the proposed approach to detect and recognise multiple occupants' activities and equipment usage, providing an alternative way to estimate internal heat emissions.
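The link from detection to energy modelling can be sketched as a simple heat-gain calculation: multiply detected occupant and active-equipment counts by per-unit gains. The per-unit wattages below are illustrative placeholders, not values from the study.

```python
def internal_heat_gain(n_occupants, equipment_on,
                       occupant_gain_w=75.0, equipment_gain_w=110.0):
    """Estimate internal heat gain (W) from a detected occupant count and a
    list of equipment on/off flags. Per-unit gains are illustrative defaults."""
    equipment_w = sum(equipment_gain_w for on in equipment_on if on)
    return n_occupants * occupant_gain_w + equipment_w
```

With three detected occupants and two of three monitored devices switched on, the estimate is 3 x 75 W + 2 x 110 W = 445 W; a demand-driven controller would feed such real-time estimates to the HVAC system instead of a fixed schedule.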

