manual selection
Recently Published Documents

TOTAL DOCUMENTS: 92 (FIVE YEARS 50)
H-INDEX: 10 (FIVE YEARS 4)

2022 ◽  
Vol 2022 ◽  
pp. 1-13
Author(s):  
Zhihe Wang ◽  
Yongbiao Li ◽  
Hui Du ◽  
Xiaofen Wei

To address the fact that density peaks clustering requires manual selection of cluster centers, this paper proposes a fast clustering method that selects cluster centers automatically. Firstly, our method groups the data and marks each group as a core or boundary group according to its density. Secondly, it determines clusters by iteratively merging pairs of core groups whose distance is less than a threshold, and selects each cluster's center at the densest position within the cluster. Finally, it assigns boundary groups to the cluster of the nearest cluster center. Experimental results show that our method eliminates the need for manual selection of cluster centers and improves clustering efficiency.
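The abstract does not include the authors' implementation; the following is only a minimal Python sketch of the grouping-and-merging idea under simplifying assumptions: groups are formed on a regular grid, a group counts as "core" when it holds at least `density_cut` points, and core groups closer than `merge_dist` are merged. The grid size and thresholds are illustrative, not the paper's parameters, and the refinement of placing each center at the densest position inside a cluster is omitted.

```python
import numpy as np

def grouped_clustering(X, cell_size=1.0, density_cut=5, merge_dist=2.0):
    """Toy sketch: grid-group points, split groups into core/boundary by
    density, merge nearby core groups, then attach boundary groups."""
    # 1) group points by grid cell
    keys = np.floor(X / cell_size).astype(int)
    groups = {}
    for i, k in enumerate(map(tuple, keys)):
        groups.setdefault(k, []).append(i)
    centers = {k: X[idx].mean(axis=0) for k, idx in groups.items()}
    core = [k for k, idx in groups.items() if len(idx) >= density_cut]
    boundary = [k for k in groups if k not in core]
    # 2) merge core groups closer than merge_dist (naive union by relabelling)
    labels = {k: i for i, k in enumerate(core)}
    for a in core:
        for b in core:
            if np.linalg.norm(centers[a] - centers[b]) < merge_dist:
                la, lb = labels[a], labels[b]
                for k, l in list(labels.items()):
                    if l == lb:
                        labels[k] = la
    # 3) assign each boundary group to the cluster of the nearest core group
    for k in boundary:
        if core:
            nearest = min(core, key=lambda c: np.linalg.norm(centers[k] - centers[c]))
            labels[k] = labels[nearest]
    # 4) propagate group labels back to individual points
    point_labels = np.full(len(X), -1, dtype=int)
    for k, idx in groups.items():
        point_labels[idx] = labels.get(k, -1)
    return point_labels
```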


2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Hidetsugu Asano ◽  
Eiji Hirakawa ◽  
Hayato Hayashi ◽  
Keisuke Hamada ◽  
Yuto Asayama ◽  
...  

Abstract
Background: Regulation of temperature is clinically important in the care of neonates because it has a significant impact on prognosis. Although probes that make contact with the skin are widely used to monitor temperature and provide spot central and peripheral temperature information, they do not provide details of the temperature distribution around the body. Although it is possible to obtain detailed temperature distributions using multiple probes, this is not clinically practical. Thermographic techniques have been reported for measurement of temperature distribution in infants. However, as these methods require manual selection of the regions of interest (ROIs), they are not suitable for introduction into clinical settings in hospitals. Here, we describe a method for segmentation of thermal images that enables continuous quantitative contactless monitoring of the temperature distribution over the whole body of neonates.
Methods: The semantic segmentation method U-Net was applied to thermal images of infants. The optimal combination of Weight Normalization, Group Normalization, and the Flexible Rectified Linear Unit (FReLU) was evaluated. A U-Net Generative Adversarial Network (U-Net GAN) was applied to the thermal images, and a Self-Attention (SA) module was finally applied to U-Net GAN (U-Net GAN + SA) to improve precision. The semantic segmentation performance of these methods was evaluated.
Results: The optimal semantic segmentation performance was obtained with application of FReLU and Group Normalization to U-Net, showing accuracy of 92.9% and Mean Intersection over Union (mIoU) of 64.5%. U-Net GAN improved the performance, yielding accuracy of 93.3% and mIoU of 66.9%, and U-Net GAN + SA showed further improvement with accuracy of 93.5% and mIoU of 70.4%.
Conclusions: FReLU and Group Normalization are appropriate semantic segmentation methods for application to neonatal thermal images. U-Net GAN and U-Net GAN + SA significantly improved the mIoU of segmentation.
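The paper's exact network is not reproduced here; the PyTorch sketch below only illustrates how a U-Net convolution block can combine Group Normalization with the FReLU activation (funnel activation: max(x, T(x)), where T is a depthwise 3x3 convolution). Channel counts, group size, and the input resolution are assumptions made for the example.

```python
import torch
import torch.nn as nn

class FReLU(nn.Module):
    """Funnel activation: max(x, T(x)), T = depthwise 3x3 conv + BatchNorm."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels, bias=False)
        self.norm = nn.BatchNorm2d(channels)

    def forward(self, x):
        return torch.max(x, self.norm(self.spatial(x)))

class ConvGNFReLU(nn.Module):
    """One U-Net encoder/decoder block: (Conv -> GroupNorm -> FReLU) x 2."""
    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.GroupNorm(groups, out_ch),
            FReLU(out_ch),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.GroupNorm(groups, out_ch),
            FReLU(out_ch),
        )

    def forward(self, x):
        return self.block(x)

# e.g. a first encoder stage applied to a single-channel thermal image
stage1 = ConvGNFReLU(in_ch=1, out_ch=32)
features = stage1(torch.randn(1, 1, 128, 160))   # -> (1, 32, 128, 160)
```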


2022 ◽  
pp. 1756-1775
Author(s):  
Mukta Goyal ◽  
Chetna Gupta

For the successful completion of any software project, an efficient team is needed. This task becomes more challenging when the project is to be completed under the global software development umbrella. Manual selection of team members based on expert judgment may lead to inappropriate selection. In reality, there are hundreds of employees in an organization, and a single expert may be biased towards any member. Thus, there is a need for methods that consider multiple selection criteria and multiple expert views when making the selection. This article uses an intuitionistic fuzzy approach to handle uncertainty in the experts' decisions in a multicriteria group decision-making process and to rank the finite set of team members. An intuitionistic fuzzy Muirhead Mean (IFMM) is used to aggregate the intuitionistic criteria. To gain confidence in the relationship between criteria and expert scores, an ANOVA test is performed. The results are promising, with a p value as small as 0.02 and a one-tailed t-test score of 0.0000002.
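The IFMM itself operates on intuitionistic fuzzy numbers (membership/non-membership pairs); as an illustration only, the snippet below computes the underlying crisp Muirhead mean that the IFMM generalizes. The expert scores and the parameter vector P are invented for the example.

```python
from itertools import permutations
import math

def muirhead_mean(values, p):
    """Crisp Muirhead mean:
    MM^P(a) = ((1/n!) * sum over permutations s of prod_j a[s(j)]**p[j]) ** (1/sum(p))."""
    n = len(values)
    total = sum(
        math.prod(values[s[j]] ** p[j] for j in range(n))
        for s in permutations(range(n))
    )
    return (total / math.factorial(n)) ** (1.0 / sum(p))

# Aggregate four expert scores; P = (1, 1, 0, 0) makes the mean capture
# pairwise interactions between criteria rather than treating them independently.
print(muirhead_mean([0.7, 0.5, 0.9, 0.6], [1, 1, 0, 0]))
```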


2021 ◽  
Vol 20 ◽  
pp. 41-53
Author(s):  
Grzegorz Lenda ◽  
Dominika Spytkowska

The shape of the surface of shell structures, measured by laser scanning, can be modelled using approximating spline functions. These functions approximate shapes of continuous curvature, such as shells, well, while losing accuracy in places where that continuity is broken. Since the 1990s, several modelling techniques have been developed: based on points, on meshes, on areas outlined on meshes, and on regions grouping areas with a similar structure. The most effective of them have been implemented in modern software, but their implementations differ significantly. The most important differences concern the accuracy of modelling, especially in places with rapid shape changes, including edges. The differences also affect the mathematical complexity of the created model (the number of unknowns) and the time needed to develop it. These factors contribute to the effectiveness of modelling. Some methods work fully automatically, others allow manual selection of certain parameters, and there are also methods that require full manual control. The user's intuition and knowledge of creating such surfaces greatly affect their selection and application. This study tested the influence of the above factors on modelling efficiency. A total of six methods of creating spline surfaces were analysed in three software packages of different classes: Geomagic Design X, Solidworks and RhinoResurf. The analyses were carried out on a shell structure of complex shape, consisting of seven patches separated by edges. The structure was measured by laser scanning, and the merged point cloud formed the basis for spline modelling. The created models were assessed in terms of their accuracy of fit to the point cloud, using plots of point deviations from the surface as well as mean and maximum deviations. Additionally, the complexity of each model, expressed as the number of control points, and the time of its development were determined. The results confirmed the validity of four of the methods in terms of model fitting accuracy. The best results were achieved using the semi-automatic method in the most advanced software package and the manual method in the simplest package. This confirms the great importance of user experience with the theoretical properties of spline functions. However, complexity and development time did not show a direct relationship with the accuracy of the created models.
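None of the packages above expose their fitting algorithms, so the SciPy sketch below only illustrates the underlying idea: fitting a single smoothing spline patch to scattered points, then reporting point-to-surface deviations and the number of spline coefficients, i.e. the accuracy and complexity measures discussed above. The surface, the noise level, and the smoothing behaviour are synthetic assumptions, not the study's data.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Scattered "point cloud" of one smooth shell-like patch z = f(x, y).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 2000)
y = rng.uniform(-1, 1, 2000)
z = 0.5 * (x**2 - y**2) + rng.normal(scale=0.01, size=x.size)  # noisy saddle

# Fit one cubic approximating-spline patch; the smoothing factor s can be
# passed to trade fit accuracy against smoothness (default depends on the
# number of points).
patch = SmoothBivariateSpline(x, y, z, kx=3, ky=3)

# Fitting accuracy: deviations of the measured points from the fitted surface.
residuals = z - patch.ev(x, y)
print("mean |dev|:", np.abs(residuals).mean(), "max |dev|:", np.abs(residuals).max())
# Model complexity: number of spline coefficients (control values).
print("coefficients:", patch.get_coeffs().size)
```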


2021 ◽  
Author(s):  
Marian Hruska-Plochan ◽  
Katharina M Hembach ◽  
Silvia Ronchi ◽  
Vera I Wiersma ◽  
Zuzanna Maniecka ◽  
...  

Human cellular models of neurodegeneration require reproducibility and longevity, which is necessary for simulating these age-dependent diseases. Such systems are particularly needed for TDP-43 proteinopathies, which involve human-specific mechanisms that cannot be directly studied in animal models. To explore the emergence and consequences of TDP-43 pathologies, we generated iPSC-derived, colony morphology neural stem cells (iCoMoNSCs) via manual selection of neural precursors. Single-cell transcriptomics (scRNA-seq) and comparison to independent NSCs showed that iCoMoNSCs are uniquely homogenous and self-renewing. Differentiated iCoMoNSCs formed a self-organized multicellular system consisting of synaptically connected and electrophysiologically active neurons, which matured into long-lived functional networks. Neuronal and glial maturation in iCoMoNSC-derived cultures was similar to that of cortical organoids. Overexpression of wild-type TDP-43 in a minority of iCoMoNSC-derived neurons led to progressive fragmentation and aggregation, resulting in loss of function and neurotoxicity. scRNA-seq revealed a novel set of misregulated RNA targets coinciding in both TDP-43-overexpressing neurons and patient brains exhibiting loss of nuclear TDP-43. The strongest misregulated target encoded the synaptic protein NPTX2, which was consistently misaccumulated in ALS and FTLD patient neurons with TDP-43 pathology. Our work directly links TDP-43 misregulation and NPTX2 accumulation, thereby highlighting a new pathway of neurotoxicity.


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Qunjing Ji

With the rapid development of image recognition technology, freehand sketch recognition has attracted more and more attention. Achieving good recognition performance in the absence of color and texture information is key to the development of freehand sketch recognition. Traditional non-learning classical models depend heavily on manually selected features. To solve this problem, a neural network sketch recognition method based on the DSCN structure is proposed in this paper. Firstly, the stroke sequence of the sketch is drawn; then, features are extracted from the stroke sequence with a neural network, and the extracted image features are used as the model input to capture the temporal relationships between different image features. In a controlled experiment on the TU-Berlin dataset, the recognition accuracy of the DSCN network is higher than that of the traditional non-learning methods HOG-SVM, SIFT-Fisher Vector, MKL-SVM, and FV-SP by 15.8%, 10.3%, 6.0%, and 2.9%, respectively. Compared with the classical deep learning model AlexNet, the recognition accuracy is improved by 5.6%. These results show that the DSCN network proposed in this paper has a strong capacity for feature extraction and nonlinear expression and can effectively improve the recognition accuracy of freehand sketches by introducing the stroke order.
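The abstract does not specify the DSCN architecture, so the PyTorch snippet below is only an assumption-laden sketch of one common way to combine per-frame CNN features with a recurrent layer so that the stroke order enters the prediction. The layer sizes and the frame representation (one image per partial sketch after each stroke) are illustrative choices.

```python
import torch
import torch.nn as nn

class StrokeSequenceNet(nn.Module):
    """Illustrative CNN + LSTM: each partial-sketch frame (one more stroke added)
    is encoded by a small CNN, and an LSTM models the order of the strokes."""
    def __init__(self, n_classes=250):   # the TU-Berlin benchmark has 250 categories
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 64-d per frame
        )
        self.rnn = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, frames):            # frames: (batch, T, 1, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)       # final hidden state summarizes the sequence
        return self.head(h[-1])           # class logits

logits = StrokeSequenceNet()(torch.randn(2, 8, 1, 64, 64))   # 2 sketches, 8 strokes each
```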


2021 ◽  
Vol 911 (1) ◽  
pp. 012045
Author(s):  
Bunyamin Zainuddin ◽  
Muhammad Aqil

Abstract Assessment of the nutrient content of maize leaves is particularly important in achieving higher grain yield. Characterization of leaf chlorophyll involves routine Soil Plant Analyzer Development (SPAD) readings, particularly at critical stages of growth development. The objective of the study was to assess the color spectrum of maize leaves in relation to chlorophyll content using Random-forest modelling. The research was conducted at IP2TP Bajeng in 2021 by planting maize varieties at various fertilizer levels. RGB data of maize leaves were recorded using a Hamamatsu sensor (Hamamatsu, Japan) and converted to the HSI, HSV, and LAB color spaces. The results indicated that the Random-forest model with 20-fold validation achieved the highest accuracy compared with the other fold ranges. Among the tested models, integration of the Random-forest model with the LAB (lightness, red/green coordinate, and yellow/blue coordinate) color space provided the best model performance, with RMSE of 4.77, MSE of 22.76, MAE of 3.80, and R2 of 0.853. This indicates that using the Hamamatsu color sensor with conversion to the LAB color space provides SPAD readings with high accuracy and consistency of results. Thus, the digital model can be integrated with manual selection for fast and precise nutrient monitoring.
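As a sketch only, the snippet below reproduces the general workflow with scikit-learn: random forest regression of SPAD values from LAB color features, evaluated with 20-fold cross-validation and the same error measures. The data are synthetic and the relationship between the color coordinates and SPAD is an assumption made for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict, KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Hypothetical data: one row per leaf reading, LAB color features -> SPAD value.
rng = np.random.default_rng(1)
X = rng.uniform([20, -30, -30], [90, 30, 60], size=(300, 3))       # L*, a*, b*
spad = 60 - 0.4 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 2, 300)  # synthetic target

model = RandomForestRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, spad,
                         cv=KFold(n_splits=20, shuffle=True, random_state=0))

mse = mean_squared_error(spad, pred)
print(f"RMSE={np.sqrt(mse):.2f}  MSE={mse:.2f}  "
      f"MAE={mean_absolute_error(spad, pred):.2f}  R2={r2_score(spad, pred):.3f}")
```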


2021 ◽  
Vol 911 (1) ◽  
pp. 012017
Author(s):  
M. Arief Subchan ◽  
N.N. Andayani

Abstract Characterization of maize plants is among the documentation required prior to release to the public in Indonesia. Characterization of a maize genotype involves various parameters, including agronomic traits, yield, and yield components. Characterization is generally carried out by professionals because it requires special skill in identifying genotypes based on their specific characters. The objective of the study was to classify the genotypes of corn plants based on the characters of the ear and kernel using a logistic regression model. The research was conducted at IP2TP Bajeng in 2020 by planting 4 genotypes, namely DYM-15, N 79, Mal 03 and G102612. A total of 100 plants per genotype were planted for cob characterization. Data analysis was done using the open-source Orange software. The results indicated that the logistic regression model had very good performance in classifying maize genotypes, with an accuracy of > 98%. The values of the five parameters used to assess the accuracy of the model are AUC = 1.0, CA = 0.99, F1 = 0.99, precision = 0.99, and recall = 0.99. These values indicate that IT-based tools can correctly classify genotypes with high accuracy and consistency of results. Thus, the digital model can be integrated with manual selection for fast and precise grading of maize genotypes to maintain seed quality.
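A hedged sketch of the same kind of analysis using scikit-learn (the study itself used Orange): multinomial logistic regression over hypothetical ear/kernel characters for the four genotype names given above, evaluated with cross-validated precision, recall, and F1. The trait values and their meanings are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

# Hypothetical ear/kernel characters for 4 genotypes (names from the abstract).
rng = np.random.default_rng(2)
genotypes = ["DYM-15", "N 79", "Mal 03", "G102612"]
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 3))       # e.g. ear length,
               for m in ([15, 4.0, 30], [17, 4.5, 32],           # ear diameter,
                         [14, 3.8, 28], [16, 4.2, 34])])         # kernel rows
y = np.repeat(genotypes, 100)

clf = LogisticRegression(max_iter=1000)
pred = cross_val_predict(clf, X, y, cv=10)
print(classification_report(y, pred))   # precision / recall / F1 per genotype
```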


Author(s):  
Janak TRIVEDI ◽  
Mandalapu Sarada DEVI ◽  
Dave DHARA

We present vehicle detection and classification using a Convolutional Neural Network (CNN), a deep learning approach. Automatic vehicle classification in traffic surveillance video systems is a challenge for the Intelligent Transportation System (ITS) in building a smart city. In this article, classification of three vehicle types is considered: bike, car, and truck, using around 3,000 bike, 6,000 car, and 2,000 truck images. The CNN automatically learns and extracts the different features of the vehicle dataset without manual feature selection. The accuracy of the CNN is measured in terms of the confidence values of the detected objects. The highest confidence value is about 0.99, for the bike category. Automatic vehicle classification supports building electronic toll collection systems and identifying emergency vehicles in traffic.
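The article's network is not described in detail; the toy PyTorch snippet below only illustrates the idea that a CNN classifier over the three vehicle classes yields a softmax probability that can be read as the confidence value of the predicted class. The architecture and input size are assumptions.

```python
import torch
import torch.nn as nn

# Minimal 3-class vehicle classifier (bike / car / truck); the reported
# "confidence value" corresponds to the softmax probability of the prediction.
classes = ["bike", "car", "truck"]
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(classes)),
)
probs = torch.softmax(net(torch.randn(1, 3, 64, 64)), dim=1)
conf, idx = probs.max(dim=1)
print(classes[idx.item()], float(conf))   # predicted class and its confidence
```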


2021 ◽  
Vol 6 (2) ◽  
pp. 82
Author(s):  
Bayhaqqi Bayhaqqi ◽  
Saiful Bukhori ◽  
Gayatri Dwi Santika

A Temporary Waste Disposal Site (TPSS) is a place where waste from various community activities is collected before being transported to the final disposal site by garbage trucks. There are many considerations in choosing a TPSS location, so site selection is very important in supporting the collection of waste to be transported to final disposal. The Jember Regency Environmental Service is the agency in charge of waste management, including the selection of TPSS locations. To date, TPSS locations have been chosen manually; manual selection is prone to human error, and an inaccurately chosen TPSS location can cause new problems in the community. In addition, there is no standardized assessment system in the TPSS selection process, so a decision support system is needed to assist in recommending the best TPSS locations. In this research, we implemented a hybrid of the AHP and TOPSIS methods, where AHP is used to determine the criteria weights and TOPSIS is used to rank the candidate TPSS locations.
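As an illustrative sketch under assumed criteria and numbers (the paper's actual criteria, pairwise judgements, and candidate sites are not given in the abstract), the snippet below derives AHP weights from a pairwise comparison matrix and then ranks candidate locations with TOPSIS closeness coefficients.

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP criteria weights from the principal eigenvector of the pairwise matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

def topsis_rank(decision, weights, benefit):
    """TOPSIS closeness coefficients (higher = better) for candidate sites (rows)."""
    norm = decision / np.linalg.norm(decision, axis=0)      # vector normalisation
    v = norm * weights                                      # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti,  axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical example: 3 criteria (distance to housing, access road width, land area)
pairwise = np.array([[1,   3,   5],
                     [1/3, 1,   2],
                     [1/5, 1/2, 1]])
weights = ahp_weights(pairwise)
sites = np.array([[120, 6, 400],     # candidate TPSS locations, one per row
                  [300, 4, 250],
                  [200, 8, 500]])
scores = topsis_rank(sites, weights, benefit=np.array([True, True, True]))
print(weights.round(3), scores.round(3))   # criteria weights and site ranking scores
```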

