Detection of Neurons in the Proteomic and Genomic Image Data

2019 ◽  
Vol 8 (4) ◽  
pp. 11151-11157

Nowadays, a major piece of biomedical data required for diagnosing disease is the neuron in the nerve cell. Shortly after the neuron was recognized as the basic unit of the nervous system, the first attempts were made to estimate the number of neurons in its various parts. Over the past century, a great number of techniques have been used to make such estimates. Although the most widely used and accepted approach is direct counting under the microscope, other techniques, including photographic, projection, homogenate, automatic, and visual methods, have been devised. In this project we take brain tissue as image data and count the neurons that are in an active state, first at 24 hours, again at 48 hours, and finally at 72 hours. In this way we observe how neurons respond after information is given to the body: the information flows through the nerves, reaches the neurons in the brain, the neurons react to it, and we record how many of them respond. By finding the number of neurons responding to the information given to the body, we can estimate which neurons are alive and which are dead, and from this assess a person's mental status. The neurons are counted with a neural-network method implemented in MATLAB software; we also created a MATLAB-based page where an input image can be supplied and the code reports the number of neurons.
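
A minimal Python/OpenCV sketch of the counting step described above, for illustration only: the original work used MATLAB and a neural-network classifier, whereas the threshold-and-count pipeline and the file names below are assumptions, not the authors' method.

# Count candidate active neuron bodies in a stained brain-tissue image.
import cv2
import numpy as np

def count_active_neurons(image_path, min_area=20):
    """Return the number of bright blobs (candidate active neurons) in the image."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu threshold separates stained (active) cell bodies from the background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Connected-component labelling; discard specks smaller than min_area pixels.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    areas = stats[1:, cv2.CC_STAT_AREA]          # skip label 0 (background)
    return int(np.sum(areas >= min_area))

# Example: compare counts at the three observation times (paths are hypothetical).
for hours in (24, 48, 72):
    print(hours, "h:", count_active_neurons(f"tissue_{hours}h.png"))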

SinkrOn ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 314-324
Author(s):  
Mawaddah Harahap ◽  
Valencia Angelina ◽  
Fenny Juliani ◽  
Celvin Celvin ◽  
Oscar Evander

Grapes are a fruit usually used to make grape juice, jelly, grape seed oil, and raisins, or eaten directly. So far, checking for disease in grapes is still done manually, by having experts inspect the leaves of the vines. This method takes a long time given the extent of the vineyards that must be evaluated. To solve this problem, a method for detecting grape disease is needed, so that ordinary people can also detect it. This research uses the Dual-Channel Convolutional Neural Network (DCCNN) method. The detection process begins with extracting the leaves from the input image using the Gabor filter method. After that, the Segmentation Based Fractal Co-Occurrence Texture Analysis method is used to extract the shape, color, and texture features of the extracted leaves. The results show that the size of the dataset affects the accuracy of disease identification with the DCCNN method; however, a larger dataset makes the execution process take longer. Changes in the angle and frequency values of the Gabor method at testing time reduced the accuracy of the test results. The conclusions of this study are that the DCCNN method can be used to detect the type of leaf disease in grapes and that the size of the dataset affects the accuracy of disease identification using the DCCNN method.
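
A brief sketch of the Gabor-filter preprocessing step named above, assuming OpenCV; the kernel size, orientations, and wavelength are illustrative assumptions, not the paper's settings.

# Filter a leaf image with a small Gabor bank and return summary responses.
import cv2
import numpy as np

def gabor_features(leaf_bgr, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), lambd=10.0):
    gray = cv2.cvtColor(leaf_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    features = []
    for theta in thetas:
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=lambd, gamma=0.5, psi=0)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        features.extend([response.mean(), response.std()])
    # These responses would feed the colour/texture descriptors and the DCCNN.
    return np.array(features)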


2020 ◽  
Vol 10 (14) ◽  
pp. 4806 ◽  
Author(s):  
Ho-Hyoung Choi ◽  
Hyun-Soo Kang ◽  
Byoung-Ju Yun

For more than a decade, both academia and industry have focused attention on computer vision and, in particular, computational color constancy (CVCC). CVCC is used as a fundamental preprocessing task in a wide range of computer vision applications. While the human visual system (HVS) has the innate ability to perceive constant surface colors of objects under varying illumination spectra, computer vision faces the color constancy challenge. Accordingly, this article proposes a novel convolutional neural network (CNN) architecture based on the residual neural network, which consists of pre-activation, atrous (dilated) convolution, and batch normalization. The proposed network automatically decides what to learn from input image data and how to pool without supervision. When receiving input image data, the proposed network crops each image into image patches prior to training. Once the network begins learning, local semantic information is automatically extracted from the image patches and fed to its novel pooling layer. As a result of this semantic pooling, a weighted map, or mask, is generated. Simultaneously, the extracted information is estimated and combined to form global information during training. The novel pooling layer enables the proposed network to distinguish useful data from noisy data and thus efficiently remove noisy data during learning and evaluation. The main contribution of the proposed network is taking CVCC to higher accuracy and efficiency by adopting the novel pooling method. The experimental results demonstrate that the proposed network outperforms its conventional counterparts in estimation accuracy.
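
A sketch, in Keras, of a pre-activation residual block with dilated convolution and batch normalization, the three ingredients the abstract names; the filter count, patch size, and dilation rate are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers

def preact_dilated_residual_block(x, filters=64, dilation_rate=2):
    shortcut = x
    # Pre-activation: BN and ReLU come before each convolution.
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation_rate)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation_rate)(y)
    return layers.Add()([shortcut, y])

inputs = tf.keras.Input(shape=(64, 64, 3))          # one cropped image patch
x = layers.Conv2D(64, 3, padding="same")(inputs)
x = preact_dilated_residual_block(x)
# ... further blocks, the weighted-map pooling, and the illuminant estimate follow.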


2020 ◽  
Vol 11 (3) ◽  
pp. 167
Author(s):  
Eko Wahyu Prasetyo ◽  
Nambo Hidetaka ◽  
Dwi Arman Prasetya ◽  
Wahyu Dirgantara ◽  
Hari Fitria Windi

Technology is developing rapidly, and one of the most popular areas among scientists is robotics. Recently, robots have been created to resemble the function of the human brain: they can make decisions without human help, a capability known as AI (Artificial Intelligence). This technology is now being developed for wheeled vehicles, so that these vehicles can run without hitting obstacles. In further research, Nvidia introduced an autonomous vehicle named Nvidia Dave-2, which became popular and showed an accuracy rate of 90%. The CNN (Convolutional Neural Network) method is used in the track-recognition process, with input in the form of a trajectory captured from several angles. The data are trained using a Jupyter notebook, and the training results are then used to automate the movement of the robot on the track from which the data were collected. The robot uses these results to determine the path it will take. The more images taken as data, the more precise the results will be, but the longer it takes to train on the image data. From the data obtained, the highest train loss, in the first epoch, is 1.829455, and the highest test loss, in the third epoch, is 30.90127. This indicates better steering control, which means better stability.
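
A minimal sketch of a Dave-2-style steering CNN in Keras: a track image in, a single steering value out. The layer sizes and input resolution are assumptions for illustration, not the authors' exact network.

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(66, 200, 3)),
    layers.Conv2D(24, 5, strides=2, activation="relu"),
    layers.Conv2D(36, 5, strides=2, activation="relu"),
    layers.Conv2D(48, 5, strides=2, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(100, activation="relu"),
    layers.Dense(1)                      # predicted steering angle
])
model.compile(optimizer="adam", loss="mse")   # the train/test losses reported above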


2020 ◽  
Author(s):  
Soundarya Krishnan ◽  
Rishab Khincha ◽  
Lovekesh Vig ◽  
Tirtharaj Dash ◽  
Ashwin Srinivasan

All organs in the human body are susceptible to cancer, and we now have a growing store of images of lesions in different parts of the body. This, along with the acknowledged ability of neural-network methods to analyse image data, suggests that accurate models for lesions can now be constructed by a deep neural network. However, an important difficulty arises from the lack of annotated images from various parts of the body. Our proposed approach to the issue of scarce training data for a target organ is to apply a form of transfer learning: that is, to adapt a model constructed for one organ to another for which there are minimal or no annotations. After consultation with medical specialists, we note that there are several discriminating visual features between malignant and benign lesions that occur consistently across organs. In principle, these features boost the case for transfer learning on lesion images across organs; however, this has never been previously investigated. In this paper, we investigate whether lesion knowledge can be transferred across organs. Specifically, as a case study, we examine the transfer of a lesion model from the brain to the lungs and from the lungs to the brain. We evaluate the efficacy of transferring a brain-lesion model to the lung, and a lung-lesion model to the brain, by comparing against a model constructed (a) without model transfer (i.e., random weights) and (b) using model transfer from a lesion-agnostic dataset (ImageNet). In all cases, our lesion models perform substantially better. These results point to the potential utility of transferring lesion knowledge across organs other than those considered here.
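
A sketch of the model-transfer setup: reuse the convolutional weights learned on one organ's lesions and fine-tune on the other. The file name, the decision to fine-tune all layers, and the two-class head are assumptions, not details from the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

source = tf.keras.models.load_model("brain_lesion_model.h5")      # hypothetical path
backbone = models.Model(source.input, source.layers[-2].output)   # drop the old head
backbone.trainable = True                                         # fine-tune rather than freeze

lung_head = layers.Dense(2, activation="softmax")(backbone.output)  # benign vs malignant
lung_model = models.Model(backbone.input, lung_head)
lung_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                   loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# Baselines (a) random weights and (b) ImageNet weights are built the same way;
# only the starting weights of the backbone differ.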


2017 ◽  
Vol 2017 ◽  
pp. 1-7 ◽  
Author(s):  
Yulia Tunakova ◽  
Svetlana Novikova ◽  
Aligejdar Ragimov ◽  
Rashat Faizullin ◽  
Vsevolod Valiev

Models that describe how trace element status forms in the human organism are essential for correcting micromineral (trace element) deficiency. Directly assessing trace element retention in the body is difficult because of the many internal mechanisms involved. Retention is determined by the amount and the ratio of incoming and excreted substance: the concentration of trace elements in drinking water characterizes intake, whereas the concentration in urine characterizes excretion. This system can be interpreted as three interrelated elements in equilibrium. Since many relationships in the system are unknown, standard mathematical models are difficult to apply. An artificial neural network is well suited to constructing such a model because it can account for all dependencies in the system implicitly and can process inaccurate and incomplete data. We created several neural network models to describe the retention of trace elements in the human body. On the basis of these models, we can calculate microelement levels in the body from the trace element levels in drinking water and urine. These results can be used in health care to provide the population with safe drinking water.
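
A minimal sketch of such a retention model: concentrations in drinking water and urine go in, an estimated body level comes out. The toy numbers, network size, and scikit-learn regressor are assumptions for illustration, not the authors' models.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: [concentration in drinking water, concentration in urine] per subject
# y: measured body level of the same trace element (toy values)
X = np.array([[0.12, 0.05], [0.30, 0.10], [0.08, 0.04], [0.25, 0.12]])
y = np.array([0.9, 1.8, 0.7, 1.5])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.predict([[0.20, 0.08]]))   # estimated retention for new measurements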


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Min Guo ◽  
Liying Song ◽  
Muhammad Ilyas

In the context of economic globalization and digitization, the financial field is in an unprecedentedly complex situation, and the methods and means for dealing with this complexity are developing toward image intelligence. Taking financial prediction as its starting point, this paper selects the artificial neural network among intelligent algorithms, optimizes the algorithm, forecasts with the improved multilayer neural network, and compares it with a traditional neural network. The comparison shows that the prediction success rate of the improved genetic multilayer neural network increases with the dimension of the input image data. This indicates that, by adding more technical indicators as input to the combined network, the prediction efficiency of the improved genetic multilayer neural network can be further improved while its advantage in computing speed is maintained.
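
A rough sketch of the "genetic multilayer neural network" idea: a genetic algorithm searches the weight vector of a small multilayer network that maps technical indicators to an up/down prediction. The population size, mutation scale, toy data, and fitness definition are all assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 8, 6                      # e.g. 8 technical indicators as input
n_weights = n_inputs * n_hidden + n_hidden     # hidden-layer weights + output weights

def predict(weights, X):
    W1 = weights[:n_inputs * n_hidden].reshape(n_inputs, n_hidden)
    w2 = weights[n_inputs * n_hidden:]
    hidden = np.tanh(X @ W1)
    return (hidden @ w2 > 0).astype(int)       # 1 = price up, 0 = price down

def fitness(weights, X, y):
    return np.mean(predict(weights, X) == y)   # prediction success rate

# Toy data standing in for indicator matrices and observed price movements.
X = rng.normal(size=(200, n_inputs))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

population = rng.normal(size=(40, n_weights))
for generation in range(50):
    scores = np.array([fitness(ind, X, y) for ind in population])
    parents = population[np.argsort(scores)[-10:]]            # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=30)] + rng.normal(scale=0.1,
                                                                  size=(30, n_weights))
    population = np.vstack([parents, children])               # elitism + mutation
print("best success rate:", max(fitness(ind, X, y) for ind in population))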


White blood cells (leukocytes) are produced in the bone marrow and found in the blood and lymph tissue. They are part of the human body's immune system, helping the body fight infection and other diseases. The number of leukocytes in the blood is usually measured as part of a complete blood count (CBC) test, which may be used to check for conditions such as infection, inflammation, allergies, and leukemia. Automated counting of the different leukocyte types offers valuable information to medical pathologists for diagnosing and treating many blood-based diseases. Early characterization and classification of blood samples is a major gap in the medical field, creating many challenges for pathologists trying to adequately predict blood-based disease. Several successful efforts have been made to address these challenges with machine learning in general and Convolutional Neural Networks in particular. However, the processor configuration that can deliver real-time, accurate classification of such high-dimensional patterns remains an open question, and many researchers are not explicit about the system configuration used to obtain the results in their reports; this is the crux of this research. In this research, 12,500 augmented images of blood cells were obtained from the Kaggle repository online. The leukocytes contained in the blood smear images are categorized into five major types: neutrophil, eosinophil, basophil, lymphocyte, and monocyte. Color, geometric, and texture features are used by pathologists to differentiate the leukocytes. The simulation was done using the Python programming language and Python libraries including Keras, pandas, sklearn, numpy, and scipy, with matplotlib for plotting graphs of the results. The simulation was run on both a CPU and a GPU to compare the performance of the processors on CNN-based classification of the data: while a CPU has a faster clock speed, a GPU has more cores. The evaluation metrics used, namely precision, specificity, sensitivity, training accuracy, and validation accuracy, revealed that the GPU outperforms the CPU in terms of the stated metrics. Therefore, a high-configuration processor (GPU), which handles graphics better, is recommended for processing image data with machine learning techniques.
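
A sketch of the CPU-versus-GPU comparison: the same small Keras CNN is trained once on each device and the elapsed time recorded. The architecture, image size, and random stand-in data are assumptions, not the network or dataset handling used in the study.

import time
import tensorflow as tf
from tensorflow.keras import layers, models

def build_wbc_cnn():
    return models.Sequential([
        layers.Input(shape=(120, 160, 3)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(5, activation="softmax"),   # the five leukocyte classes
    ])

x = tf.random.uniform((256, 120, 160, 3))        # stand-in for the blood smear images
y = tf.random.uniform((256,), maxval=5, dtype=tf.int32)

for device in ("/CPU:0", "/GPU:0"):
    with tf.device(device):
        model = build_wbc_cnn()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        start = time.time()
        model.fit(x, y, epochs=1, batch_size=32, verbose=0)
    print(device, "epoch time:", round(time.time() - start, 2), "s")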


2021 ◽  
Vol 5 (2) ◽  
pp. 265-271
Author(s):  
Arum TiaraSari ◽  
Emy Haryatmi

Corn kernel detection can be implemented in industry, for example in the selection and packaging of corn kernels before they are distributed. The technique can be built into selection and packaging machines to detect corn kernels accurately. Corn kernel images were used before the system is implemented in real time. The objective of this research was corn kernel detection using Convolutional Neural Network (CNN) deep learning. The technique consists of three main stages: first, preprocessing or normalizing the input corn kernel image data by warping and cropping; second, modeling and training the system; and third, testing. The experiment used the CNN method to classify images of dry corn kernels and to determine the accuracy value. This research used 20 dry corn kernel images for testing, taken from the 80 dry corn kernel images used in the training dataset. The detection accuracy depended on the size of the image and the position from which the image was taken. The accuracy ranged from 80% to 100% using 7 convolutional layers, and the average accuracy on the testing data was 0.90296. The convolutional layers used in the CNN have the strength to detect features in the input image.
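
A sketch of a seven-convolutional-layer Keras classifier like the one described above; the filter counts, input size, and the two output classes are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(128, 128, 3))])
for filters in (16, 16, 32, 32, 64, 64, 128):         # seven convolutional layers
    model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D())
model.add(layers.Flatten())
model.add(layers.Dense(2, activation="softmax"))       # e.g. good kernel vs defective
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])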


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Xinliang Zhou ◽  
Shantian Wen

Using artificial intelligence technology to analyze human behavior is one of the key research topics in the world. To detect and analyze the characteristics of human body behavior after training, a detection model based on a convolutional neural network (CNN) is proposed. Firstly, a human skeleton suggestion model is established to analyze the driving mode of the human body in motion. Secondly, the number of layers and neurons in the CNN are set according to the skeleton feature map. Then, the output information is classified by fatigue degree according to the body state after exercise. Finally, the model is trained and tested, and the effect of the body-behavior feature detection model in use is analyzed. The results show that the CNN designed in this study achieves high accuracy and a low loss rate in training and testing, and also achieves high accuracy in the practical recognition of fatigue degree after human training. According to the subjective evaluation of volunteers, the overall average evaluation is above 9 points. These results show that the designed CNN-based model for detecting body behavior characteristics after training performs well and is feasible and practical, which has guiding significance for the design of sports training schemes.
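
A minimal sketch of the pipeline above: a skeleton feature map (joint coordinates over time) goes into a small CNN that outputs a fatigue-degree class. The map size, joint count, and three fatigue levels are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, models

n_joints, n_frames = 17, 64          # joints x time steps in one skeleton feature map
model = models.Sequential([
    layers.Input(shape=(n_frames, n_joints, 3)),   # (time, joint, x/y/z)
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),         # low / medium / high fatigue
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])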


2020 ◽  
Vol 2020 (17) ◽  
pp. 2-1-2-6
Author(s):  
Shih-Wei Sun ◽  
Ting-Chen Mou ◽  
Pao-Chi Chang

To improve workout efficiency and to provide body movement suggestions to users in a "smart gym" environment, we propose using a depth camera to capture a user's body parts and mounting multiple inertial sensors on those body parts to generate deadlift behavior models with a recurrent neural network structure. The contribution of this paper is threefold: 1) the multimodal sensing signals obtained from multiple devices are fused to generate the deadlift behavior classifiers, 2) the recurrent neural network structure can analyze the information from the synchronized skeletal and inertial sensing data, and 3) a Vaplab dataset is generated for evaluating the deadlift behavior recognition capability of the proposed method.
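
A sketch of the fusion idea: synchronized skeleton and inertial sequences are concatenated per time step and fed to a recurrent network that classifies the deadlift behaviour. The sequence length, feature sizes, and number of classes are assumptions for illustration.

import tensorflow as tf
from tensorflow.keras import layers, models

skeleton_in = layers.Input(shape=(100, 51))   # 100 frames x (17 joints * 3 coords)
inertial_in = layers.Input(shape=(100, 24))   # 100 frames x (4 IMUs * 6 channels)
fused = layers.Concatenate()([skeleton_in, inertial_in])
x = layers.LSTM(64)(fused)
out = layers.Dense(5, activation="softmax")(x)    # deadlift behaviour classes
model = models.Model([skeleton_in, inertial_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])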

